Description
Problem (one or two sentences)
When condensing a chat, the user expects some sort of feedback if the model they selected for the task fails. This does not happen; the UI is just stuck on "Condensing Chat". This could be fixed in a few ways:
- Warn the user that the selected model's context window is too small and that they should use a different model.
- Multi-stage condensing, where the model makes summaries of the context split into chunks that fit the model's window. This could come with a warning and a button to trigger the multi-stage method (both options are sketched below).
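A rough TypeScript sketch of what both options could look like, assuming a token estimator and a single-call summarizer are available; `ModelInfo`, `estimateTokens`, and `summarize` are hypothetical names for illustration, not Roo Code's actual internals:

```typescript
// Minimal sketch of both proposed fixes; all names here are illustrative.
interface ModelInfo {
	contextWindow: number // maximum tokens the endpoint accepts
}

async function condenseWithFallback(
	messages: string[],
	model: ModelInfo,
	estimateTokens: (text: string) => number,
	summarize: (text: string) => Promise<string>,
): Promise<string> {
	// Reserve part of the window for the summary the model will produce.
	const budget = Math.floor(model.contextWindow * 0.8)
	const fullText = messages.join("\n")

	// Fix 1: detect the overflow up front (and warn the user), instead of
	// letting the request fail with a 400 while the UI spins forever.
	if (estimateTokens(fullText) <= budget) {
		return summarize(fullText)
	}

	// Fix 2: multi-stage condensing. Split the history into chunks that fit
	// the budget, summarize each chunk, then condense the partial summaries.
	const chunks: string[] = []
	let current = ""
	for (const message of messages) {
		if (current && estimateTokens(current + message) > budget) {
			chunks.push(current)
			current = ""
		}
		current += message + "\n"
	}
	if (current) chunks.push(current)

	const partials = await Promise.all(chunks.map(summarize))
	// Recurse: summaries are much shorter than their inputs, so this
	// terminates once the combined summaries fit within the window.
	return condenseWithFallback(partials, model, estimateTokens, summarize)
}
```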
Context (who is affected and when)
All users who have a model with a small context window selected for their condensing task.
Reproduction steps
Create a chat that is relatively long and has a large amount of content. Then select a model with a small context window for condensing and attempt to condense. The user sees nothing except "Condensing the conversation", and that stays forever.
Expected result
Either a warning or a secondary prompt should appear when this happens.
Actual result
No user feedback is given, and the UI appears to still be working when it is not.
Variations tried (optional)
No response
App Version
3.39.3 (85d253c)
API Provider (optional)
None
Model Used (optional)
N/A
Roo Code Task Links (optional)
No response
Relevant logs or errors (optional)
Error logged when trying to condense a chat with a model whose context window is too small:
```
[error] [Extension Host] [OpenRouter] API error: {"message":"400 This endpoint's maximum context length is 131072 tokens. However, you requested about 145406 tokens (119191 of text input, 26215 in the output). Please reduce the length of either one, or use the \"middle-out\" transform to compress your prompt automatically.","name":"Error","stack":"Error: 400 This endpoint's maximum context length is 131072 tokens. However, you requested about 145406 tokens (119191 of text input, 26215 in the output). Please reduce the length of either one, or use the \"middle-out\" transform to compress your prompt automatically.\n at Function.generate (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/node_modules/.pnpm/openai@5.12.2_ws@8.18.3_zod@3.25.61/node_modules/openai/src/core/error.ts:72:14)\n at ei.makeStatusError (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/node_modules/.pnpm/openai@5.12.2_ws@8.18.3_zod@3.25.61/node_modules/openai/src/client.ts:445:28)\n at ei.makeRequest (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/node_modules/.pnpm/openai@5.12.2_ws@8.18.3_zod@3.25.61/node_modules/openai/src/client.ts:668:24)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at d$.chunk [as createMessage] (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/rooveterinaryinc.roo-cline-3.39.3/api/providers/openrouter.ts:325:13)\n at cNt (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/rooveterinaryinc.roo-cline-3.39.3/core/condense/index.ts:328:19)\n at t.condenseContext (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/rooveterinaryinc.roo-cline-3.39.3/core/task/Task.ts:1651:7)\n at t.condenseTaskContext (/Users/darkeden/wpilib/2025/vscode/code-portable-data/extensions/rooveterinaryinc.roo-cline-3.39.3/core/webview/ClineProvider.ts:1735:3)","status":400}
```
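The error above is already caught and logged by the extension host, so a wrapper around the condense call could surface it to the user rather than leaving the spinner stuck. A minimal sketch, assuming a `postWarning` helper that sends a message to the webview; both `condense` and `postWarning` are hypothetical parameters, not the extension's real functions:

```typescript
// Hedged sketch: surface the condensing failure instead of swallowing it.
async function condenseTaskContextSafely(
	condense: () => Promise<void>,
	postWarning: (text: string) => void,
): Promise<void> {
	try {
		await condense()
	} catch (error) {
		const message = error instanceof Error ? error.message : String(error)
		// A 400 "maximum context length" response means the selected model
		// cannot hold the conversation in a single request.
		if (message.includes("maximum context length")) {
			postWarning(
				"Condensing failed: the selected model's context window is too small. " +
					"Choose a larger-context model or condense in stages.",
			)
		} else {
			postWarning(`Condensing failed: ${message}`)
		}
		// Whatever the cause, clear the "Condensing Chat" state here so the
		// UI does not appear to be working when it is not.
	}
}
```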