fix(prompt-placeholders): don't strip tool calls from placeholder messages #1346
**Important**

Fixes bug in `ChatPromptClient.compile()` to preserve tool calls in message placeholders, with a new test added for verification.

- Fixes `ChatPromptClient.compile()` to preserve tool calls in message placeholders instead of reducing each message to `role` and `content`.
- Adds `test_tool_calls_preservation_in_message_placeholder()` in `test_prompt_compilation.py` to verify tool calls are preserved during compilation.

This description was created automatically for ab40d06.
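To make the failure mode concrete, here is a minimal, self-contained illustration (the values are hypothetical and not taken from the new test): an OpenAI-style assistant message supplied as a placeholder value carries `tool_calls`, which the old compilation step dropped when it rebuilt each message from `role` and `content` alone.

```python
# Hypothetical OpenAI-style messages supplied as a placeholder value.
placeholder_messages = [
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_abc123", "content": '{"temp_c": 21}'},
]

# Before the fix, compilation rebuilt each message with only role and content,
# silently dropping tool_calls and tool_call_id:
stripped = [
    {"role": m.get("role", ""), "content": m.get("content", "")}
    for m in placeholder_messages
]

# After the fix, the compiled messages keep all original fields.
```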
Disclaimer: Experimental PR review
Greptile Summary
Updated On: 2025-09-17 12:24:14 UTC
This PR fixes a bug in the Langfuse prompt system where message placeholders were losing important OpenAI message fields like `tool_calls` during compilation. The change affects the prompt compilation logic in `langfuse/model.py` and adds a corresponding test in `tests/test_prompt_compilation.py`.

**The Core Issue:** The previous implementation of placeholder message compilation only preserved the `role` and `content` fields by creating a restrictive `ChatMessageDict` structure. This caused data loss for complex chat messages that contain additional metadata like `tool_calls`, `function_call`, or other OpenAI-specific properties.

**The Solution:** The fix modifies the compilation logic to preserve all fields from the original message dictionary while ensuring backward compatibility. Instead of creating a new `ChatMessageDict` with only role and content, the code now:

- preserves all fields from the original message dictionary, and
- ensures the `role` and `content` fields are always present with sensible defaults (see the sketch below).
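A minimal sketch of that behavior, assuming plain dict messages; the function name `_resolve_placeholder_messages` and the empty-string defaults are illustrative, not the actual code in `langfuse/model.py`:

```python
from typing import Any, Dict, List


def _resolve_placeholder_messages(messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Illustrative: compile placeholder messages without dropping extra fields."""
    resolved: List[Dict[str, Any]] = []
    for message in messages:
        compiled = dict(message)            # keep tool_calls, function_call, and any other fields
        compiled.setdefault("role", "")     # guarantee role is always present
        compiled.setdefault("content", "")  # guarantee content is always present
        resolved.append(compiled)
    return resolved


# The extra field survives compilation:
assert _resolve_placeholder_messages(
    [{"role": "assistant", "content": None, "tool_calls": []}]
)[0]["tool_calls"] == []
```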
**Integration with Codebase:** This change fits into Langfuse's prompt management system, which supports message placeholders for conversation history injection. The `BasePromptClient` and `ChatPromptClient` classes are part of the framework's template compilation pipeline, where prompts can contain variable placeholders (`{{variable_name}}`) that get resolved at runtime. The fix ensures that when message objects are used as placeholder values, their complete structure is maintained.
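For context on how a placeholder sits next to ordinary `{{variable}}` substitutions, here is a hedged usage sketch. `Langfuse()`, `get_prompt()`, and `compile()` are the SDK's public entry points, but the prompt name, the variable names, and the convention of passing placeholder values as keyword arguments named after the placeholder are assumptions for illustration; check the Langfuse docs for the exact call shape.

```python
from langfuse import Langfuse  # assumes a configured Langfuse client (API keys via env vars)

langfuse = Langfuse()
chat_prompt = langfuse.get_prompt("support-agent", type="chat")  # hypothetical prompt name

# Hypothetical conversation history, including an assistant message with tool calls.
chat_history = [
    {"role": "user", "content": "What's the weather in Berlin?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
            }
        ],
    },
]

# Assumption: placeholder values are passed by name alongside {{variable}} values.
messages = chat_prompt.compile(product_name="Acme", chat_history=chat_history)
# With this PR, the tool_calls entry survives in the compiled message list.
```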
**Type System Updates:** The return type annotations for the `compile` method in both the abstract and concrete classes were updated to include `Dict[str, Any]` alongside the existing types, reflecting that compiled messages can now contain arbitrary fields beyond the basic `ChatMessageDict` structure.
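As a rough illustration of the widened annotation (a sketch only; the exact signature lives in `langfuse/model.py`), the compiled chat output can now be typed as a list whose items are either the narrow `ChatMessageDict` or an open `Dict[str, Any]`:

```python
from typing import Any, Dict, List, TypedDict, Union


class ChatMessageDict(TypedDict):
    # Stand-in for the SDK's ChatMessageDict, included only to keep this sketch self-contained.
    role: str
    content: str


# Hypothetical alias mirroring the widened return type described above.
CompiledChatMessages = List[Union[ChatMessageDict, Dict[str, Any]]]
```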
**Confidence score:** 4/5

Context used:
Rule - Prefer simpler solutions when fixing bugs - if the root cause can be addressed by extending an existing condition rather than adding complex state tracking, choose the simpler approach. (link)