[Bug]: OpenAI Responses API - Tool Call After Thinking Fails with 400 Error #20333

@RiccardoRubini

Description

Bug Description

Hello,
I'm having issues with the OpenAI Responses API when combining thinking with tool calls.

Setup:

from llama_index.llms.openai.responses import OpenAIResponses

llm = OpenAIResponses(
    model='gpt-5',
    api_base=AZURE_AI_FOUNDRY_ENDPOINT_OPENAILIKE,
    api_key=AZURE_AI_FOUNDRY_KEY,
    max_tokens=max_tokens,
    timeout=timeout,
    reasoning_options={"effort": "high", "summary": "auto"}
)
Settings.llm = llm

Problem:
The model works fine when it only thinks OR only calls a tool. But when a tool call follows a thinking step, I get this error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Item with id 'rs_03d84ac1087bdc8c01693036340724819783adfd5c3224f826' not found. Items are not persisted when `store` is set to false. Try again with `store` set to true, or remove this item from your input.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

Attempted fix:
Setting store=True (even though I do not want to store any messages) gives a different error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Item 'rs_0cf94f2fa7f56d54006930382e84b081909b6464fc7cddd4b7' of type 'reasoning' was provided without its required following item.", 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
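The second error message suggests the underlying constraint: the Responses API appears to require that each 'reasoning' input item be immediately followed by the item it belongs to (e.g. the function call it produced). A minimal sketch of that ordering check, using made-up item ids and a hypothetical tool name (not from the actual traceback):

```python
# Hedged sketch: based on the wording of the second 400 error, a 'reasoning'
# input item must be immediately followed by its associated item (such as a
# function_call). All ids and names below are invented for illustration.

def find_orphan_reasoning_items(items):
    """Return ids of reasoning items that lack a required following item."""
    orphans = []
    for i, item in enumerate(items):
        if item.get("type") == "reasoning":
            nxt = items[i + 1] if i + 1 < len(items) else None
            # A reasoning item followed by nothing (or by another reasoning
            # item) matches the shape the API seems to reject.
            if nxt is None or nxt.get("type") == "reasoning":
                orphans.append(item["id"])
    return orphans

# Replaying a conversation while dropping the function_call that followed
# the reasoning item reproduces the rejected shape:
bad_input = [
    {"type": "reasoning", "id": "rs_fake1"},  # no following item
]
good_input = [
    {"type": "reasoning", "id": "rs_fake1"},
    {"type": "function_call", "call_id": "call_fake1",
     "name": "my_tool", "arguments": "{}"},
]
```

If this reading is right, the fix would be for the client library to keep each reasoning item paired with its tool call when rebuilding the input history, rather than something the user can work around from the setup shown above.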

I've found some related issues in other projects: this and this

Regards,
RR

Version

core: 0.14.8 - llm-openai: 0.6.9

Steps to Reproduce

See the bug description above.

Relevant Logs/Tracebacks
