fix: process Vertex AI response properly for usage #1173
Conversation
LGTM
2 file(s) reviewed, no comment(s)
LGTM
1 file(s) reviewed, no comment(s)
Sorry, I'm not used to Poetry and updated the lock file, but I've now restored it.

I've fixed a test for VertexAI and verified that it works.

Thanks a lot for your contribution! Closing this in favor of #1181
Fixed the handling of VertexAI responses so that usage is reported correctly.
I encountered the following error while using VertexAI with LangChain.
This seems to occur with newer versions of langchain-google-vertexai, and is caused by the UsageMetadata in VertexAI API responses containing fields that are not integers.
Example response (`langchain_core.outputs.LLMResult`) as JSON:

```json
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": {
          "is_blocked": false,
          "safety_ratings": [],
          "usage_metadata": {
            "prompt_token_count": 4464,
            "candidates_token_count": 18,
            "total_token_count": 4482,
            "prompt_tokens_details": [{ "modality": 1, "token_count": 4464 }],
            "candidates_tokens_details": [{ "modality": 1, "token_count": 18 }],
            "cached_content_token_count": 0,
            "cache_tokens_details": []
          },
          "finish_reason": "STOP",
          "avg_logprobs": -0.000507740666055017
        },
        "type": "ChatGeneration",
        "message": {
          "content": "",
          "additional_kwargs": {
            "function_call": { "name": "xxxxx", "arguments": "{\"aaaaa\": \"bbbbb\"}" }
          },
          "response_metadata": {
            "is_blocked": false,
            "safety_ratings": [],
            "usage_metadata": {
              "prompt_token_count": 4464,
              "candidates_token_count": 18,
              "total_token_count": 4482,
              "prompt_tokens_details": [{ "modality": 1, "token_count": 4464 }],
              "candidates_tokens_details": [{ "modality": 1, "token_count": 18 }],
              "cached_content_token_count": 0,
              "cache_tokens_details": []
            },
            "finish_reason": "STOP",
            "avg_logprobs": -0.000507740666055017,
            "model_name": "gemini-2.0-flash-001"
          },
          "type": "ai",
          "name": null,
          "id": "run-3a8c2d0d-9f67-4e04-a58e-6dafeaeb9202-0"
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null,
  "type": "LLMResult"
}
```

Since these fields are not necessary for usage, I added processing to remove them.
my dependencies
Important

Fixes VertexAI response processing in the LangChain integration by removing non-integer fields from usage data.

- Updates `langfuse/callback/langchain.py` to remove non-integer fields from usage data: `prompt_tokens_details`, `candidates_tokens_details`, and `cache_tokens_details`.
- Removes the `langchain-google-vertexai` dependency from `pyproject.toml`.

This description was automatically generated for commit 7270a42.
Greptile Summary
Disclaimer: Experimental PR review
This PR addresses validation errors when processing Vertex AI responses in the Langfuse Python SDK by modifying how usage metadata is handled:

- Updates `GenerationBody` to properly validate Vertex AI's non-integer usage metadata fields
- Updates `langchain.py` to filter out non-standard fields from Vertex AI responses before validation
- Removes `langchain-google-vertexai` from dev dependencies to avoid version conflicts
- Handles the `prompt_tokens_details` and `candidates_tokens_details` fields

The changes focus on maintaining compatibility with newer versions of langchain-google-vertexai while ensuring proper validation of usage data structures.
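To show why the filtering step matters before validation, here is a minimal sketch of mapping the filtered Vertex AI counts onto a generic input/output/total usage shape. The function names and the target key names are assumptions for illustration, not the SDK's actual API:

```python
def filter_usage(usage_metadata: dict) -> dict:
    # Drop list-valued "details" fields; keep plain integer counts.
    return {k: v for k, v in usage_metadata.items() if isinstance(v, int)}


def to_generic_usage(usage_metadata: dict) -> dict:
    # Illustrative mapping of Vertex AI key names onto a generic
    # input/output/total usage shape (key names are assumptions).
    counts = filter_usage(usage_metadata)
    return {
        "input": counts.get("prompt_token_count"),
        "output": counts.get("candidates_token_count"),
        "total": counts.get("total_token_count"),
    }


vertex_usage = {
    "prompt_token_count": 4464,
    "candidates_token_count": 18,
    "total_token_count": 4482,
    "prompt_tokens_details": [{"modality": 1, "token_count": 4464}],
    "cache_tokens_details": [],
}

print(to_generic_usage(vertex_usage))
# {'input': 4464, 'output': 18, 'total': 4482}
```

Without the filtering step, a schema that expects every usage value to be an integer would reject the list-valued `*_details` fields, which is the validation error this PR fixes.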