Requires a GPU, a Docker container, and a local LLM for best performance ...
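
Since the note does not name specific tools, here is a minimal, hedged sketch of how you might verify those three prerequisites before running anything. It assumes NVIDIA hardware (checked via `nvidia-smi`), the Docker CLI, and an Ollama-style local LLM server on port 11434; the URL and checks are illustrative assumptions, not the project's actual setup, so adjust them for whatever runtime you use.

```python
import shutil
import subprocess
import urllib.request
import urllib.error

# Assumed default: an Ollama-style local LLM server on port 11434.
# Change this URL to match the runtime you actually run.
LLM_URL = "http://localhost:11434"


def has_gpu() -> bool:
    """Check for an NVIDIA GPU by calling nvidia-smi (assumes NVIDIA hardware)."""
    if shutil.which("nvidia-smi") is None:
        return False
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0


def has_docker() -> bool:
    """Check that the Docker CLI is installed and the daemon is reachable."""
    if shutil.which("docker") is None:
        return False
    return subprocess.run(["docker", "info"], capture_output=True).returncode == 0


def llm_reachable(url: str = LLM_URL) -> bool:
    """Check that a local LLM server answers at the assumed URL."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print(f"GPU available:       {has_gpu()}")
    print(f"Docker available:    {has_docker()}")
    print(f"Local LLM reachable: {llm_reachable()}")
```

If any check prints `False`, install or start the missing component (GPU drivers, the Docker daemon, or your local LLM server) before expecting best performance.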