fix(openai): handle pydantic BaseModel as metadata #1449
Merged
Important

Handle `pydantic.BaseModel` as `metadata` in `langfuse/openai.py` by converting it to a dictionary, and update tests accordingly.

- In `_get_langfuse_data_from_kwargs()` in `langfuse/openai.py`, handle `metadata` passed as a `pydantic.BaseModel` by converting it to a dictionary using `model_dump()` (illustrated below). If `metadata` is neither a `BaseModel` nor a `dict`, set it to an empty dictionary.
- Removed `test_fails_wrong_metadata()` from `tests/test_openai.py`, as `metadata` no longer raises `TypeError` for non-dict inputs.

This description was created for fdc8a2b and will automatically update as commits are pushed.
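For context, `model_dump()` is how pydantic v2 serializes a model into a plain dictionary; a minimal illustration (the model class and fields below are made up for the example):

```python
from pydantic import BaseModel

class TraceMetadata(BaseModel):
    user_id: str
    experiment: str

meta = TraceMetadata(user_id="u-42", experiment="baseline")
print(meta.model_dump())  # {'user_id': 'u-42', 'experiment': 'baseline'}
```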
Disclaimer: Experimental PR review
Greptile Overview
Greptile Summary
Added support for Pydantic `BaseModel` as metadata by converting it to a dict using `model_dump()`. However, the implementation has a critical flaw: invalid metadata types (strings, numbers, etc.) are now silently converted to an empty dict `{}` instead of raising a `TypeError`.

Key Issues:
- The `else` branch in the metadata validation (line 406) catches all non-dict, non-`BaseModel` types and silently converts them to `{}` (see the sketch after this list)
- The previous behavior of raising `TypeError` for invalid metadata is lost
- `test_fails_wrong_metadata` verified this error behavior, but no test was added to verify that `BaseModel` handling works correctly
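A minimal sketch of the concern, assuming the validation is factored into a small helper (the helper names are hypothetical; the PR changes `_get_langfuse_data_from_kwargs()` inline). The first function mirrors the silent fallback the review flags, the second keeps the old `TypeError` contract while still accepting a `BaseModel`:

```python
from pydantic import BaseModel

def _coerce_metadata(metadata):
    # Mirrors the PR's new behavior: unsupported types silently become {}.
    if isinstance(metadata, BaseModel):
        return metadata.model_dump()
    if isinstance(metadata, dict):
        return metadata
    return {}  # e.g. _coerce_metadata("oops") -> {} instead of raising

def _coerce_metadata_strict(metadata):
    # Variant the review argues for: keep raising TypeError for invalid input.
    if metadata is None:
        return None
    if isinstance(metadata, BaseModel):
        return metadata.model_dump()
    if isinstance(metadata, dict):
        return metadata
    raise TypeError("metadata must be a dict or a pydantic BaseModel")
```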
Confidence Score: 1/5
- `langfuse/openai.py` requires immediate attention to fix the metadata validation logic

Important Files Changed

File Analysis
- `langfuse/openai.py`: Adds `BaseModel` handling for metadata, but the `else` branch silently converts invalid types to an empty dict instead of raising an error
- `tests/test_openai.py`: No test added verifying that `BaseModel` metadata handling works

Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant OpenAI Client
    participant _get_langfuse_data_from_kwargs
    participant Langfuse
    User->>OpenAI Client: create(metadata=BaseModel())
    OpenAI Client->>_get_langfuse_data_from_kwargs: kwargs with metadata
    _get_langfuse_data_from_kwargs->>_get_langfuse_data_from_kwargs: Check if metadata is dict
    alt metadata is BaseModel
        _get_langfuse_data_from_kwargs->>_get_langfuse_data_from_kwargs: metadata.model_dump()
    else metadata is invalid type
        _get_langfuse_data_from_kwargs->>_get_langfuse_data_from_kwargs: metadata = {} (silent conversion)
    end
    _get_langfuse_data_from_kwargs->>Langfuse: return langfuse_data with metadata
    Langfuse-->>User: tracked generation
```
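A hedged usage sketch of the flow in the diagram, using the Langfuse drop-in OpenAI wrapper (`from langfuse.openai import openai`); the model class, field values, and model name are illustrative assumptions, not taken from this PR:

```python
from pydantic import BaseModel
from langfuse.openai import openai  # Langfuse drop-in replacement for the OpenAI SDK

class RequestMeta(BaseModel):
    customer_id: str
    plan: str

# With this change, a pydantic model passed as metadata is converted to a
# dict via model_dump() before being attached to the traced generation.
completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    metadata=RequestMeta(customer_id="c-123", plan="pro"),
)
```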