Create Custom LiteLLM Callback Handler Maintained by AgentOps #1182
Closed
devin-ai-integration[bot] wants to merge 8 commits into main from …
Conversation
Contributor
Author
🤖 Devin AI Engineer: I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:
Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:
Codecov Report: ✅ All modified and coverable lines are covered by tests. 📢 Thoughts on this report? Let us know!
- Fix TypeError when tool_calls is None in instrumentor.py
- Fix import ordering in test_litellm_instrumentation.py
- Fix mock object attribute assignment in tests
- Add litellm dependency to pyproject.toml

Co-Authored-By: Pratyush Shukla <pratyush@agentops.ai>
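The TypeError fix presumably reduces to a None guard before iterating tool calls. A minimal sketch of that pattern, assuming a helper shape and span attribute keys that are illustrative rather than the PR's actual instrumentor.py code:

```python
# Hypothetical sketch of the None-guard pattern; the real helper name and
# span attribute keys in instrumentor.py may differ.
def _set_tool_call_attributes(span, message):
    """Record tool-call metadata on a span, tolerating messages without tool calls."""
    tool_calls = getattr(message, "tool_calls", None)
    if not tool_calls:  # covers both None (the old TypeError) and an empty list
        return
    for i, call in enumerate(tool_calls):
        span.set_attribute(f"gen_ai.completion.tool_calls.{i}.name", call.function.name)
        span.set_attribute(f"gen_ai.completion.tool_calls.{i}.arguments", call.function.arguments)
```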
- Add proper span context attachment/detachment in StreamWrapper and AsyncStreamWrapper
- Import required OpenTelemetry context management modules
- Ensure spans are properly transmitted to the AgentOps backend
- Fix streaming span validation issues by following the OpenAI instrumentation pattern

Co-Authored-By: Pratyush Shukla <pratyush@agentops.ai>
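For context, the attach/detach pattern borrowed from the OpenAI instrumentation looks roughly like the sketch below. The class internals here are assumptions, not the PR's exact StreamWrapper:

```python
# Minimal sketch, assuming the OpenTelemetry context API; the PR's actual
# StreamWrapper/AsyncStreamWrapper implementations may differ.
from opentelemetry import context as context_api
from opentelemetry import trace


class StreamWrapper:
    def __init__(self, stream, span):
        self._stream = stream
        self._span = span
        # Make the streaming span current so nested telemetry parents correctly.
        self._token = context_api.attach(trace.set_span_in_context(span))

    def __iter__(self):
        try:
            for chunk in self._stream:
                yield chunk
        finally:
            # End the span and restore the prior context even if iteration fails,
            # so the span is transmitted to the backend exactly once.
            self._span.end()
            context_api.detach(self._token)
```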
- Add litellm_streaming_example.py: demonstrates streaming responses with time-to-first-token metrics
- Add litellm_async_example.py: showcases async operations and concurrent completions
- Add litellm_multi_provider_example.py: tests multiple LLM providers (OpenAI, Anthropic, etc.)
- Add litellm_advanced_features_example.py: covers function calling and advanced features
- All examples include AgentOps validation and session tracking
- Examples follow established patterns from other provider integrations

Co-Authored-By: Pratyush Shukla <pratyush@agentops.ai>
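A rough sketch of what the streaming example measures, using the public litellm.completion streaming API with an illustrative model name and prompt; the shipped litellm_streaming_example.py may structure this differently:

```python
# Sketch of a time-to-first-token measurement over a LiteLLM stream.
# Assumes AGENTOPS_API_KEY and the provider key are set in the environment.
import time

import agentops
import litellm

agentops.init()

start = time.monotonic()
first_token_at = None
response = litellm.completion(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Write a haiku about telemetry."}],
    stream=True,
)
for chunk in response:
    content = chunk.choices[0].delta.content
    if first_token_at is None and content:
        first_token_at = time.monotonic()
        print(f"Time to first token: {first_token_at - start:.3f}s")
    print(content or "", end="")
```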
Contributor
This pull request has been automatically marked as stale because it has not had any activity in the last 14 days. If no updates are made within 7 days, this PR will be automatically closed.
Contributor
This pull request has been automatically closed because it has been stale for 7 days with no activity. Feel free to reopen this PR if you'd like to continue working on it.
Create Custom LiteLLM Callback Handler Maintained by AgentOps
Summary
This PR adds a comprehensive LiteLLM instrumentation system to AgentOps, addressing issue #1180. It takes a hybrid approach, combining LiteLLM's callback system with wrapt-based instrumentation for complete telemetry coverage.
Key Components:
The implementation supports all major LLM providers (OpenAI, Anthropic, Cohere, etc.) and is compatible with LiteLLM versions from 1.68.0 to the latest 1.74.9.rc.1.
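To illustrate the two halves of the hybrid approach, here is a hedged sketch combining a LiteLLM CustomLogger with a wrapt function wrapper. The class and function names are illustrative, not the actual contents of callback_handler.py or instrumentor.py:

```python
# Sketch only: one callback handler for LiteLLM's native hooks, plus a wrapt
# wrapper around litellm.completion for span lifecycle control.
import litellm
import wrapt
from litellm.integrations.custom_logger import CustomLogger


class AgentOpsCallbackHandler(CustomLogger):  # hypothetical name
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Forward model, usage, and latency details to AgentOps telemetry here.
        print(f"success: {kwargs.get('model')} in {(end_time - start_time).total_seconds():.2f}s")

    def log_failure_event(self, kwargs, response_obj, start_time, end_time):
        print(f"failure: {kwargs.get('model')}")


def _completion_wrapper(wrapped, instance, args, kwargs):
    # A real instrumentor would open a span before the call and close it after,
    # covering providers whose callbacks fire inconsistently.
    return wrapped(*args, **kwargs)


litellm.callbacks = [AgentOpsCallbackHandler()]
wrapt.wrap_function_wrapper("litellm", "completion", _completion_wrapper)
```

The callback half captures provider-reported metadata (usage, cost, timing), while the wrapt half guarantees a span exists for every call regardless of which callbacks fire.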
Review & Testing Checklist for Human
Recommended Test Plan:
- examples/litellm/litellm_example.py with different providers

Diagram
```mermaid
%%{ init : { "theme" : "default" }}%%
graph TB
    subgraph "LiteLLM Integration"
        callback["agentops/instrumentation/providers/<br/>litellm/callback_handler.py"]:::major-edit
        instrumentor["agentops/instrumentation/providers/<br/>litellm/instrumentor.py"]:::major-edit
        stream["agentops/instrumentation/providers/<br/>litellm/stream_wrapper.py"]:::major-edit
        utils["agentops/instrumentation/providers/<br/>litellm/utils.py"]:::major-edit
    end
    subgraph "Core Integration"
        init["agentops/instrumentation/__init__.py"]:::minor-edit
        main_init["agentops/__init__.py"]:::context
    end
    subgraph "Testing"
        tests["tests/test_litellm_instrumentation.py"]:::major-edit
        example["examples/litellm/litellm_example.py"]:::context
    end
    callback --> instrumentor
    instrumentor --> stream
    instrumentor --> utils
    init --> instrumentor
    main_init --> init
    tests --> callback
    tests --> instrumentor
    subgraph Legend
        L1[Major Edit]:::major-edit
        L2[Minor Edit]:::minor-edit
        L3[Context/No Edit]:::context
    end
    classDef major-edit fill:#90EE90
    classDef minor-edit fill:#87CEEB
    classDef context fill:#FFFFFF
```

Notes
- callback_handler.py lines 254, 256, 258: Exception objects are accessed without proper type checking (see the hedged sketch below)

Link to Devin run: https://app.devin.ai/sessions/c572fc6b318948c4bc61b0b8841d6ca1
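A sketch of the kind of guard that note calls for, assuming an illustrative function name and attribute keys rather than the actual callback_handler.py code:

```python
# Hypothetical defensive check before treating a failure payload as an Exception.
def _record_error(span, err):
    if isinstance(err, Exception):
        span.record_exception(err)
        span.set_attribute("error.type", type(err).__name__)
    else:
        # LiteLLM can surface failures as strings or dicts, so coerce instead
        # of assuming Exception attributes exist.
        span.set_attribute("error.message", str(err))
```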
Requested by: Pratyush Shukla (pratyush@agentops.ai)
Fixes #1180