
Commit e70c89d

docs: add missing local-openai provider documentation and configuration

1 parent d618b2f

3 files changed: +132 -1 lines changed


packages/agent/src/core/llm/provider.ts

Lines changed: 6 additions & 0 deletions

```diff
@@ -63,6 +63,12 @@ export const providerConfig: Record<string, ProviderConfig> = {
     baseUrl: 'http://localhost:11434',
     factory: (model, options) => new OllamaProvider(model, options),
   },
+  'local-openai': {
+    docsUrl: 'https://mycoder.ai/docs/provider/local-openai',
+    model: 'llama3.2',
+    baseUrl: 'http://localhost:80',
+    factory: (model, options) => new OpenAIProvider(model, options),
+  },
   xai: {
     keyName: 'XAI_API_KEY',
     docsUrl: 'https://mycoder.ai/docs/provider/xai',
```
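
For context, here is a minimal sketch of how such a registry entry might be consumed at runtime. The `createProvider` helper and its option handling are assumptions for illustration, not code from this commit; only the entry shape (`model`, `baseUrl`, `factory`, optional `keyName`) comes from the diff above.

```javascript
// Hedged sketch: resolving a providerConfig entry. Note that 'local-openai'
// defines no keyName, so unlike keyed providers such as xai (XAI_API_KEY),
// no API key lookup is needed for local servers.
function createProvider(name, model, options = {}) {
  const config = providerConfig[name];
  if (!config) throw new Error(`Unknown provider: ${name}`);
  return config.factory(model ?? config.model, {
    ...options,
    baseUrl: options.baseUrl ?? config.baseUrl,
  });
}

// Example: falls back to the defaults registered above ('llama3.2', port 80).
const provider = createProvider('local-openai');
```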

packages/docs/docs/providers/index.mdx

Lines changed: 3 additions & 1 deletion

```diff
@@ -11,8 +11,9 @@ MyCoder supports multiple Language Model (LLM) providers, giving you flexibility
 MyCoder currently supports the following LLM providers:
 
 - [**Anthropic**](./anthropic.md) - Claude models from Anthropic
-- [**OpenAI**](./openai.md) - GPT models from OpenAI (and OpenAI compatible providers)
+- [**OpenAI**](./openai.md) - GPT models from OpenAI
 - [**Ollama**](./ollama.md) - Self-hosted open-source models via Ollama
+- [**Local OpenAI Compatible**](./local-openai.md) - GPUStack and other OpenAI-compatible servers
 - [**xAI**](./xai.md) - Grok models from xAI
 
 ## Configuring Providers
@@ -53,4 +54,5 @@ For detailed instructions on setting up each provider, see the provider-specific
 - [Anthropic Configuration](./anthropic.md)
 - [OpenAI Configuration](./openai.md)
 - [Ollama Configuration](./ollama.md)
+- [Local OpenAI Compatible Configuration](./local-openai.md)
 - [xAI Configuration](./xai.md)
```
packages/docs/docs/providers/local-openai.md (new file)

Lines changed: 123 additions & 0 deletions

---
sidebar_position: 5
---

# Local OpenAI Compatible Servers

MyCoder supports connecting to local or self-hosted OpenAI-compatible API servers, including solutions like [GPUStack](https://gpustack.ai/), [LM Studio](https://lmstudio.ai/), [Ollama OpenAI compatibility mode](https://github.com/ollama/ollama/blob/main/docs/openai.md), and [LocalAI](https://localai.io/).

## Setup

To use a local OpenAI-compatible server with MyCoder:

1. Install and set up your preferred OpenAI-compatible server
2. Start the server according to its documentation
3. Configure MyCoder to connect to your local server

### Configuration

Configure MyCoder to use your local OpenAI-compatible server in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection - use local-openai for any OpenAI-compatible server
  provider: 'local-openai',
  model: 'llama3.2', // Use the model name available on your server

  // The base URL for your local server
  baseUrl: 'http://localhost:80', // Default for GPUStack, adjust as needed

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```
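
Because `mycoder.config.js` is ordinary JavaScript, the base URL can also be computed at load time. A minimal variant, assuming you want one config file shared across machines; `LOCAL_OPENAI_URL` is a name invented for this example, not a variable MyCoder itself reads:

```javascript
// Hedged sketch: fall back to the GPUStack default when no override is set.
// LOCAL_OPENAI_URL is an example variable name, not a MyCoder convention.
export default {
  provider: 'local-openai',
  model: 'llama3.2',
  baseUrl: process.env.LOCAL_OPENAI_URL ?? 'http://localhost:80',
};
```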

## GPUStack

[GPUStack](https://gpustack.ai/) is a solution for running AI models on your own hardware. It provides an OpenAI-compatible API server that works seamlessly with MyCoder.

### Setting up GPUStack

1. Install GPUStack following the instructions on their website
2. Start the GPUStack server
3. Configure MyCoder to use the `local-openai` provider

```javascript
export default {
  provider: 'local-openai',
  model: 'llama3.2', // Choose a model available on your GPUStack instance
  baseUrl: 'http://localhost:80', // Default GPUStack URL
};
```

## Other OpenAI-Compatible Servers

You can use MyCoder with any OpenAI-compatible server by setting the appropriate `baseUrl`:

### LM Studio

```javascript
export default {
  provider: 'local-openai',
  model: 'llama3', // Use the model name as configured in LM Studio
  baseUrl: 'http://localhost:1234', // Default LM Studio server URL
};
```

### LocalAI

```javascript
export default {
  provider: 'local-openai',
  model: 'gpt-3.5-turbo', // Use the model name as configured in LocalAI
  baseUrl: 'http://localhost:8080', // Default LocalAI server URL
};
```

### Ollama (OpenAI Compatibility Mode)

```javascript
export default {
  provider: 'local-openai',
  model: 'llama3', // Use the model name as configured in Ollama
  baseUrl: 'http://localhost:11434/v1', // Ollama OpenAI compatibility endpoint
};
```

## Hardware Requirements

Running LLMs locally requires significant hardware resources:

- Minimum 16GB RAM (32GB+ recommended)
- GPU with at least 8GB VRAM for optimal performance
- SSD storage for model files (models can be 5-20GB each)

## Best Practices

- Ensure your local server and the selected model support tool calling/function calling
- Use models optimized for coding tasks when available
- Monitor your system resources when running large models locally
- Consider using a dedicated machine for hosting your local server

## Troubleshooting

If you encounter issues with local OpenAI-compatible servers:

- Verify the server is running and accessible at the configured base URL
- Check that the model name exactly matches what's available on your server
- Ensure the model supports tool/function calling, which MyCoder requires; a probe sketch follows the curl example below
- Check server logs for specific error messages
- Test the server with a simple curl command to verify API compatibility:

```bash
curl http://localhost:80/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
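
If basic chat works but MyCoder still fails, the gap is often tool calling. A hedged probe using Node's built-in `fetch` and the standard OpenAI `tools` schema; the model name, port, and `get_weather` function are placeholders for this example:

```javascript
// Hedged probe: ask the server for a tool call. Requires Node 18+ (built-in
// fetch); run as an ES module for top-level await. Adjust model and port.
const res = await fetch('http://localhost:80/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather', // placeholder function for this probe
          description: 'Get the current weather for a city',
          parameters: {
            type: 'object',
            properties: { city: { type: 'string' } },
            required: ['city'],
          },
        },
      },
    ],
  }),
});

const data = await res.json();
// A tool-capable server should respond with `tool_calls` rather than plain text.
console.log(JSON.stringify(data.choices?.[0]?.message, null, 2));
```
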
For more information, refer to the documentation for your specific OpenAI-compatible server.
