Bug Description
There are actually three issues going on here:
First, starting a session with a ChatGPT Model (OAuth) creates ChatGPT-specific item ids/resources that can only be used by ChatGPT models, i.e. attempting to use an OpenAI API Model in the same session results in an error: Item with id 'rs_08b18df330eb3b96016944d15cf92081a0be6ec95a36d4f337' not found.
Second, if you start OpenCode using an OpenAI API Model, enter a session containing ChatGPT messages, and then use /model to switch to a ChatGPT Model, any message you send throws an error: The requested model 'gpt-5.2-medium' does not exist.
Third, if you start OpenCode using a ChatGPT Model and enter the "bugged" session, things are fine. But if you then switch to an OpenAI API Model using /model, it continues operating as if the selected model were a ChatGPT Model, i.e. you "think" you're using an API model, and you aren't. It's still ChatGPT.
Steps to Reproduce
NOTE: Make sure your primary agent.md is configured to start with a ChatGPT Model (OAuth)
This is how you get it into a broken state:
- Start a new session using any ChatGPT Model (OAuth)
- Type a couple messages and exit OpenCode.
- Edit your agent.md file and switch the model over to an OpenAI API Model and relaunch OpenCode.
- Enter the "bugged" session and type a message: error - Item with id .... not found.
- Using /model, switch back to a ChatGPT Model and type a message: error - The requested model ... does not exist.
- Edit your agent.md file and switch the model back over to a ChatGPT Model (OAuth) and relaunch OpenCode.
- Re-enter the bugged session and type a message: everything is OK, no errors.
- Using the /model option, select an OpenAI API Model and start asking questions. You'll see the "Reasoning Bubbles" you normally only see when using ChatGPT Models.
Expected Behavior
You should seamlessly be able to switch back and forth between ChatGPT and OpenAI Models without issue.
Actual Behavior
And this, my friends, is the smoking gun. Remember those "Reasoning Bubbles" you commonly see in ChatGPT? Those are unique item ids that are embedded in and referenced from OpenCode's internal storage files. If you search for one of the IDs contained in one of the error messages, you will see:
```
error: {
  message: Item with id 'rs_012e955bc1943987016944b500056481a2b01bf65cfc99be51' not found.,
  type: invalid_request_error,
  param: input,
  code: null
}
```
And these are the references that basically prove my point:
Edit: Oh yeah, this one is the "compaction" one, which is odd, because it did that automatically; normally you have to manually select "compact session". Just weird how ChatGPT's "compaction" is classified as a "reasoning". At any rate, there are other log files that have these same blocks but aren't labeled with compaction.
- type: "reasoning"
- text: a natural-language internal summary of the session
- metadata.openai.itemId: rs_012e955bc1943987016944b500056481a2b01bf65cfc99be51
- metadata.openai.reasoningEncryptedContent: a long encrypted blob
- role: "assistant"
- agent: "compaction"
- mode: "compaction"
- modelID: "gpt-5.2-medium"
- providerID: "openai"
Edit: Here it is... I found one.
```json
{
  "id": "prt_b34d1cd1e00138hutnPaWH9dFP",
  "sessionID": "ses_4cb2e3939ffeX5rAaFny0Z58g9",
  "messageID": "msg_b34d1c6ea001NTzTh1iadVOZWi",
  "type": "reasoning",
  "text": "**Clarifying OAuth testing**\n\nThe user's request to \"Testing OAuth\" seems a bit vague. I should clarify if they're testing an OAuth flow in a specific project. Since we're in the opencode-architecture repository, I need to ask some questions to get more details. I want to know what provider they're using, what platform it relates to, what specific testing they want to do (like authorization code or PKCE), and the environment they're working in, such as local development.",
  "metadata": {
    "openai": {
      "itemId": "rs_08b18df330eb3b96016944d15cf92081a0be6ec95a36d4f337",
      "reasoningEncryptedContent": "gAAAAABpRNFhyxvLP08dCapgETPi3lJEqqeX22cEmzcznV_UFPLbUbKlGg-17ftOirmyYLfQ_fvoBWn9AFuLzqhQdxpMRA01wxriDa0t9HaeUsvGL0YiqeUqHuTXn6oEQTpFXhQ10qNgBV19Ij7pIWH4c69NV3spcm3Y8mjTvb2nCAPvOwMU35_GkSZ5a7CHuCc89hAnDxVBxXVN8UAHrumC5wD38wF4hIMR1WF0WkGIZXVa-8GmUKKNezr9v3m4HwdVRHNR78MEooA5yRf_kGJwPAtg3BLobibSQV5BN3st3Nyuk9kN9lVyeKSfqT1VKLKucnxCFaQkVF7FsBWuHT788YiUD4yfHiSP84MNAsrQmdJGPMLOAv3QM8lbgZ3GqrWwu1CwjKNjbviml2Q16pYoNIPzPWr4RDf5KP6iyKcIDBYyTJqABI7ZF6rhXmNEFMpVVd_e3JUxDMpoe-RGn4khK3I9sr8AifXaCIcSgNFFiL2P6mj3TqtO00CQj7FUdL_APqweMXnhWIa5sBC2O9ve5S22uKK2TWZ6-E2JjL67pAcyvWPDXSq9soogfIMGW88Nnn2KavB9psJ7l8ciZ8UCuQ83nbTBt2v_hdwCEQeNKb4K7ESMyq3FyoSRm63MEys8FUvq5maHgIDKRAEiAfOVKQsMcaTWqHcI8bANMg_o0kxTkYSk3SDUmdOjOFJoCM3PSZxNzdFwBW00FI0sVT6upvK4Kd5Bp056Ec1mBaxNMyrT8y_iW-lKmD3J1QiSe5wHXW_XczPOzA93R4MeAB025Fc2r1c7HvGIWG_D5GzhPyoEvOg0U2jrbJyEPa0Os5E5Oq6pZEJ--0c3fBkp5mKlWIvxXLVRa7Os8ys8jfZZGlTlmSn2S8BwL7hkp36SUOpADM6KsGphzv-65xZLYrIIGAA_woJeh5SpuGH4R-Yn5Wy4SllbR5pgrnaS57TT0NQZ9Tu18gesMw5oyI5e15VaZu2JW8kDSJs1QqxOW7bYmhqPUav4aipdXSDzujVRK7AjUKqYyP8lw_u_U-d9psqSCRGMCV-7nEAx-9ANCLRAIGlBmVVI_hYJp39vLkzJZSYmp0Wwc3LO9nJs-HUuYno8zFV3vDZrMqPNrrpiQ5-u3YFjxNYq2994y-E4iD8RTDBIUnqJ_UDJF3WMtlKK21v2tZ1ngoXzAyvAvvzjwsS_mA_wSdykDSw164acZrUwjmHyJwbV2-0Szu6TN_H3SKd1XhXDY7dJrZAbIx2sAMA01hDgoVbY1WI189SgxwPgxysB-IWlW4Kiykyd1vPAAgXCHiSJqmmrOpoteuYgtGsGz5C0tafplvN60jJYg8OnG55qs2hAwtAvHdWT2Fae_5o85y3C3sVV4g=="
    }
  },
  "time": {
    "start": 1766117723422,
    "end": 1766117729575
  }
}
```
Possible Fix?
EDIT: OK, I went back to my "broken" session and snagged the item id, then searched through all my OpenCode internal storage files: ~/.local/share/opencode/storage. I found the referenced item id and deleted it. I tried sending a message and, sure enough, there was another error; I knew there would be, because there were two ChatGPT message exchanges present in that session. This time it pointed at a msg_*.json file. With those 4 entries gone, the session isn't broken anymore.
Long story short, there is 1 root msg_*.json file and a matching rs_* hash string associated with every ChatGPT message produced. The good news is that they are easy to find. The search strings are:
```
"itemId": "msg_*"
"itemId": "rs_*"
```
If these could be blocked from being created, or pruned/deleted, you could enter that session with an OpenAI API Model and use it again without any errors.
I'm sure that if you were to delete all references to this item id, then OpenAI models would have no reason to try to connect to ChatGPT to retrieve the contents of these payloads, and you could happily pick up the session where you left off. (There are context pruners out there, but as far as I know none of them remove "Reasoning Bubbles", because the contents aren't stored in OpenCode's local memory, only a web reference on where to go and retrieve them.) But unless there is some kind of built-in pruning method to trim those references when one changes the model, any further messages that ChatGPT models produce in that session will result in more item id references.
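As a rough sketch of what such a pruner could look like (a hypothetical helper, not part of OpenCode or the plugin; the storage path and the `metadata.openai.itemId` field are taken from the files shown above, and the walk-every-JSON-file approach is an assumption about the storage layout):

```typescript
// Hypothetical pruning sketch: walk the OpenCode storage directory and
// report any part files whose metadata.openai.itemId carries one of the
// ChatGPT-specific prefixes seen in the errors above (rs_* / msg_*).
import * as fs from "node:fs";
import * as path from "node:path";

// True if a parsed part object references a ChatGPT-created item id.
export function hasChatGptItemId(part: unknown): boolean {
  if (typeof part !== "object" || part === null) return false;
  const itemId = (part as any)?.metadata?.openai?.itemId;
  return (
    typeof itemId === "string" &&
    (itemId.startsWith("rs_") || itemId.startsWith("msg_"))
  );
}

// Recursively collect every *.json file under storageDir that is tainted.
export function findTaintedFiles(storageDir: string): string[] {
  const tainted: string[] = [];
  const walk = (dir: string) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else if (entry.name.endsWith(".json")) {
        try {
          const parsed = JSON.parse(fs.readFileSync(full, "utf8"));
          if (hasChatGptItemId(parsed)) tainted.push(full);
        } catch {
          // Skip files that aren't valid JSON.
        }
      }
    }
  };
  walk(storageDir);
  return tainted;
}
```

You could then review the returned list and delete (or back up and delete) those files, which is effectively what the manual cleanup above did.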
Or, better yet: if a user switches from a ChatGPT Model to an OpenAI API Model in the same session, or starts OpenCode with one configured and enters said session, have your plugin intercept those item id requests and return them to the client as empty HTTP 200s. I haven't tested that scenario, but I'm sure it would work.
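A minimal sketch of that interception idea, assuming the plugin can wrap the outgoing fetch call (the hook shape here is entirely hypothetical; and rather than faking empty HTTP 200s, this variant takes the simpler route of stripping the rs_*/msg_* input items from the request body before it ever leaves the client):

```typescript
// Hypothetical fetch wrapper: before a request goes out, drop any
// Responses-API input items that reference ChatGPT-created rs_*/msg_* ids.
// Only the id prefixes come from this report; the body shape ({ input: [...] })
// is an assumption about what the provider sends.
type InputItem = { id?: string; [k: string]: unknown };

export function stripChatGptItems(input: InputItem[]): InputItem[] {
  return input.filter((item) => {
    const id = typeof item.id === "string" ? item.id : "";
    return !(id.startsWith("rs_") || id.startsWith("msg_"));
  });
}

export function wrapFetch(realFetch: typeof fetch): typeof fetch {
  return (async (url: any, init?: RequestInit) => {
    if (init?.body && typeof init.body === "string") {
      try {
        const body = JSON.parse(init.body);
        if (Array.isArray(body.input)) {
          body.input = stripChatGptItems(body.input);
          init = { ...init, body: JSON.stringify(body) };
        }
      } catch {
        // Body isn't JSON; pass it through untouched.
      }
    }
    return realFetch(url, init);
  }) as typeof fetch;
}
```

The upside over faking 200s is that the upstream API never sees the dangling references at all, so there's nothing to intercept on the way back.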
Or give us an option to block item ids from being created in the first place. I hate 'em... they are annoying anyway.
Environment
- opencode version: 1.0.169
- Plugin version: 4.1.1
- Operating System: Ubuntu
- Node.js version: 24.11.1
Logs
If applicable, attach logs from ~/.opencode/logs/codex-plugin/ (enable with ENABLE_PLUGIN_REQUEST_LOGGING=1)
Compliance Checklist
Please confirm:
- I'm using this plugin for personal development only
- I have an active ChatGPT Plus/Pro subscription
- This issue is not related to attempting commercial use or TOS violations
- I've reviewed the FAQ and Troubleshooting sections
Additional Context