Local autonomous AI agent setup using OpenHands (Docker) and LM Studio (LLM Provider).
- LM Studio:
  - Load a model (Recommended: `qwen2.5-coder-7b-instruct` or similar).
  - Start the Local Server on port 1234.
- Start OpenHands: `docker-compose up -d` (see the sanity checks after this list).
- Access: Open http://localhost:3000.
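Once both are up, two quick host-side checks can confirm the wiring. This is a minimal sketch that assumes `curl` is available on the host; LM Studio's local server exposes an OpenAI-compatible API on the configured port.

```bash
# The model loaded in LM Studio should appear in its model list:
curl http://localhost:1234/v1/models

# The OpenHands container should be listed as "Up":
docker-compose ps
```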
Settings are in `.env`:
- `LLM_MODEL`: Must match the model ID loaded in LM Studio (e.g., `openai/qwen/qwen3-coder-30b`).
- `LLM_BASE_URL`: `http://host.docker.internal:1234/v1` (required for Docker -> Host communication).
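For reference, the corresponding lines in `.env` might look like the sketch below (the model ID is the example from above; use whatever ID LM Studio actually reports):

```bash
# .env (fragment)
LLM_MODEL=openai/qwen/qwen3-coder-30b
LLM_BASE_URL=http://host.docker.internal:1234/v1
```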
To switch models:
- Edit `.env` (uncomment the desired model).
- Restart: `docker-compose restart`.
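A concrete sketch of that workflow (the commented-out model ID below is hypothetical; keep exactly one `LLM_MODEL` line active):

```bash
# .env - uncomment the model to use, comment out the rest
# LLM_MODEL=openai/qwen2.5-coder-7b-instruct
LLM_MODEL=openai/qwen/qwen3-coder-30b
```

Then apply the change with `docker-compose restart` and, if desired, follow startup with `docker-compose logs -f --tail=50`.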
Run the pre-flight check to verify connectivity: `./preflight-check.sh`.
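The script itself is not reproduced here; the sketch below only illustrates the kind of check it might run, exercising the Docker -> Host path directly. The service name `openhands` is a placeholder for whatever `docker-compose.yml` defines, and it assumes `curl` exists inside the container.

```bash
# Hypothetical container-to-host connectivity test (placeholder service name):
docker-compose exec openhands curl -sf -o /dev/null http://host.docker.internal:1234/v1/models \
  && echo "OK: container can reach LM Studio" \
  || echo "FAIL: container cannot reach host.docker.internal:1234"
```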
Files:
- `docker-compose.yml`: Container definition.
- `.env`: Configuration variables.
- `workspace/`: Shared directory for agent files.