Conversation


@jpaodev jpaodev commented Apr 25, 2025

After getting a massive headache while working on our lovely application, I have decided to finally fix this problem in the Langfuse Python SDK.

Basically, using ChatVertexAI together with Langfuse ends up causing you a huge amount of pain in the form of pydantic error messages spamming your terminal logs.
On top of that, the token usage information won't be displayed in Langfuse. Hooray.

Here is a minimal example to reproduce the issue with the latest langfuse PyPI package (version 2.60.3):

from dotenv import load_dotenv
load_dotenv()

from langchain_google_vertexai import ChatVertexAI
from langfuse.callback.langchain import LangchainCallbackHandler

cb_handler = LangchainCallbackHandler()

llm = ChatVertexAI(
    model_name="gemini-2.0-flash-001",
    callbacks=[cb_handler],
)

response = llm.invoke("Hello")
print(response)

The terminal output:

python test_vertex_langfuse.py
10 validation errors for UpdateGenerationBody
usageDetails -> prompt_tokens_details
  value is not a valid integer (type=type_error.integer)
usageDetails -> candidates_tokens_details
  value is not a valid integer (type=type_error.integer)
usageDetails -> cache_tokens_details
  value is not a valid integer (type=type_error.integer)
usageDetails -> prompt_tokens
  field required (type=value_error.missing)
usageDetails -> completion_tokens
  field required (type=value_error.missing)
usageDetails -> total_tokens
  field required (type=value_error.missing)
usageDetails -> prompt_tokens_details -> modality
  value is not a valid integer (type=type_error.integer)
usageDetails -> input_tokens
  field required (type=value_error.missing)
usageDetails -> output_tokens
  field required (type=value_error.missing)
usageDetails -> total_tokens
  field required (type=value_error.missing)
Traceback (most recent call last):
  File "/Users/pao/Documents/Code/Testing/.venv/lib/python3.11/site-packages/langfuse/client.py", line 2733, in update
    request = UpdateGenerationBody(**generation_body)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pao/Documents/Code/Testing/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 347, in __init__
    raise validation_error
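
For context, the validation messages above suggest that the usage payload handed to Langfuse contains nested per-modality token-detail entries instead of plain integers. A rough illustration of the kind of dict involved (the keys and values are assumptions inferred from the errors, not captured output):

# Hypothetical shape inferred from the validation errors above -- not captured output:
usage_details = {
    "prompt_tokens_details": [{"modality": "TEXT", "token_count": 2}],      # not an int
    "candidates_tokens_details": [{"modality": "TEXT", "token_count": 11}], # not an int
    "cache_tokens_details": [],                                             # not an int
    # ...while the plain integer fields Langfuse expects (prompt_tokens,
    # completion_tokens, total_tokens / input_tokens, output_tokens) are missing.
}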

The fix has been tested with both the ChatVertexAI config above and an AzureChatOpenAI config. Please retest.

Y'all need to use some more Google models! (that's the secret sauce :D)

Best regards,
Pao


Important

Fixes pydantic validation errors and improves model name extraction for ChatVertexAI in Langfuse by updating _parse_usage() and _parse_model() functions.

  • Behavior:
    • Fixes pydantic validation errors in langfuse/callback/langchain.py by ensuring token usage details are integers in _parse_usage().
    • Updates _parse_usage() to handle Vertex AI's usage_metadata and log warnings if conversion to integer fails.
    • Enhances _parse_model() to extract model names from response_metadata for Vertex AI.
  • Functions:
    • Modifies _parse_usage() to extract and convert token counts from usage_metadata (a rough sketch follows below).
    • Updates _parse_model() to check response_metadata for model names.
  • Misc:
    • Adds logging for conversion failures in _parse_usage().

This description was created by Ellipsis for 3e3460b.
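
To make the _parse_usage() bullet above concrete, here is a minimal sketch of the kind of integer coercion described; the helper name _coerce_usage_to_ints and the exact key handling are illustrative assumptions, not the PR's actual code:

import logging

logger = logging.getLogger(__name__)

def _coerce_usage_to_ints(usage_metadata: dict) -> dict:
    # Illustrative helper (not the PR's actual code): keep only values that can be
    # represented as plain integers, which is what Langfuse's UpdateGenerationBody
    # expects for its usage fields.
    parsed = {}
    for key, value in usage_metadata.items():
        if isinstance(value, (list, dict)):
            # Nested per-modality detail entries are skipped -- these are the
            # values that triggered the "value is not a valid integer" errors.
            continue
        try:
            parsed[key] = int(value)
        except (TypeError, ValueError):
            # Mirrors the PR's stated approach of logging a warning instead of raising.
            logger.warning("Skipping non-integer usage value for %s: %r", key, value)
    return parsed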

Greptile Summary

Disclaimer: Experimental PR review

Added support for Google's Vertex AI (Gemini) model in the Langfuse Python SDK, fixing token usage parsing and error handling issues when using ChatVertexAI with Langfuse callbacks.

  • Modified langfuse/callback/langchain.py to properly parse Vertex AI's unique response structure for token usage metrics
  • Added type conversion and validation for token counts to prevent Pydantic validation errors
  • Enhanced model name extraction to properly identify Vertex AI/Gemini models (see the sketch after this list)
  • Fixed missing token usage information display in Langfuse UI for Vertex AI responses
  • Added error handling to prevent terminal log spam from Pydantic validation errors
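
As a companion to the model-name bullet above, a hedged sketch of how a message's response_metadata could be probed for the model name; the "model_name" and "model" keys are assumptions about what Vertex AI responses carry, not the exact logic of the PR:

from typing import Any, Optional

def _extract_model_name(message: Any) -> Optional[str]:
    # Illustrative sketch only: look for a model identifier in the message's
    # response_metadata, returning None if nothing usable is found.
    metadata = getattr(message, "response_metadata", None) or {}
    for key in ("model_name", "model"):
        value = metadata.get(key)
        if value:
            return value
    return None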



CLAassistant commented Apr 25, 2025

CLA assistant check
All committers have signed the CLA.


@greptile-apps greptile-apps bot left a comment


LGTM

1 file(s) reviewed, no comment(s)


@greptile-apps greptile-apps bot left a comment


LGTM

No file(s) reviewed, no comment(s)


jpaodev commented Apr 25, 2025

Nice, and once again I should've checked the open PRs, yippie 😄
#1173 seems much cleaner, though I haven't tested it.


hassiebp commented May 6, 2025

@jpaodev Thanks a lot for your contribution! Closing this PR in favor of #1181, which should fix this issue 👍🏾

@hassiebp hassiebp closed this May 6, 2025