docs/features/shell-integration.mdx (1 addition, 1 deletion)

@@ -426,7 +426,7 @@ This setup works reliably on Windows systems using Cygwin, Fish, and the Starship prompt.

**Issue**: Commands that span multiple lines can confuse Roo and may show output from previous commands mixed in with current output.

**Workaround**: Instead of multi-line commands, use command chaining with `&&` to keep everything on one line (e.g., `echo a && echo b` instead of typing each command on a separate line).
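
For instance, a hypothetical three-step build sequence (illustrative commands only, not from the original docs) collapses onto one chained line:

```bash
# Typed as three separate lines, this can confuse the terminal integration:
#   mkdir -p build
#   cd build
#   cmake ..
# Chained with &&, the shell receives a single command:
mkdir -p build && cd build && cmake ..
```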

#### PowerShell-Specific Issues

docs/providers/ollama.md (6 additions, 6 deletions)

@@ -135,20 +135,20 @@ If no model instance is running, Ollama spins one up on demand. During that cold start…
**Fixes**
1. **Preload the model**
```bash
ollama run <model-name>
```
Keep it running, then issue the request from Roo.
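
If an interactive session is inconvenient, the model can also be preloaded through Ollama's REST API by sending a request with no prompt and a `keep_alive` duration (a minimal sketch, assuming the default `localhost:11434` endpoint):

```bash
# Loads the model into memory and keeps it resident for an hour
curl http://localhost:11434/api/generate \
  -d '{"model": "<model-name>", "keep_alive": "1h"}'
```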

2. **Pin the context window (`num_ctx`)**
- Option A — interactive session, then save:
```bash
# inside `ollama run <base-model>`
/set parameter num_ctx 32768
/save <your_model_name>
```
- Option B — Modelfile (recommended for reproducibility):
```dockerfile
FROM <base-model>
PARAMETER num_ctx 32768
# Adjust based on your available memory:
# 16384 for ~8GB VRAM
@@ -157,7 +157,7 @@
```
Then create the model:
```bash
ollama create <your_model_name> -f Modelfile
```
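Either way, it is worth confirming that the parameter was actually saved; recent Ollama CLI builds can print a model's parameters (a sketch, assuming the `--parameters` flag is available in your version):
```bash
# Should list num_ctx 32768 among the saved parameters
ollama show <your_model_name> --parameters
```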

3. **Ensure the model's context window is pinned**
@@ -169,7 +169,7 @@
5. **Restart after an OOM**
```bash
ollama ps
ollama stop <model-name>
```
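If the CLI is unresponsive, the REST API can also evict a model by setting `keep_alive` to zero (again a sketch, assuming the default endpoint):
```bash
# Unloads the model immediately and frees its memory
curl http://localhost:11434/api/generate \
  -d '{"model": "<model-name>", "keep_alive": 0}'
```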

**Quick checklist**