Aj/tool use agent chat completion support #248

Merged

amanjaiswal73892 merged 3 commits into tlsdc/tool_use_agent from aj/tool_use_agent_chat_completion_support on May 21, 2025
Conversation

@amanjaiswal73892 (Collaborator) commented May 21, 2025

Adding Chat Completion Support

Description by Korbit AI

What change is being made?

Add support for chat completion functionality to the ToolUseAgent by integrating additional model configurations, message builders, retry logic, tool usage facilitation, and model response handling.

Why are these changes being made?

These changes enable the ToolUseAgent to better handle conversations and calls to various LLM service providers such as OpenAI, OpenRouter, and VLLM, enhancing its capability to interact and respond effectively. The approach improves flexibility by supporting different configurations for chat-based models and implements mechanisms to manage retries and failures, ensuring robust operations across supported APIs.


@korbit-ai bot left a comment:

Review by Korbit AI

Korbit automatically attempts to detect when you fix issues in new commits.
Issues found:
- Documentation: Missing client_args parameter description
- Error Handling: Redundant Exception Logging
- Functionality: Provider Name Typo
- Readability: Inconsistent Constant Naming
- Performance: Missing Exponential Backoff in Retries
- Logging: Missing token usage logging
Files scanned:
- src/agentlab/agents/tool_use_agent/agent.py
- src/agentlab/llm/response_api.py


Comment on lines 515 to 517:

    except Exception as e:
        logging.exception(f"[Attempt {attempt}] Unexpected exception occurred.")
        raise
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Redundant Exception Logging (category: Error Handling)

What is the issue?

The error handler both interpolates details into an f-string and calls logging.exception(), which is redundant: logging.exception() already records the exception details and traceback.

Why this matters

Using f-string with logging.exception() is redundant as it automatically includes exception info. This creates duplicate exception information in logs.

Suggested change:

Remove the f-string and simplify to just the message:

    except Exception:
        logging.exception("Unexpected exception occurred")
        raise
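For illustration, a minimal runnable sketch (not from the PR) showing that logging.exception() already appends the traceback, so interpolating the exception into the message only duplicates information:

```python
import io
import logging

# Route root logging to a buffer so we can inspect the output
# (basicConfig(force=True) requires Python 3.8+).
buf = io.StringIO()
logging.basicConfig(stream=buf, level=logging.INFO, force=True)

try:
    1 / 0
except Exception:
    # No f-string needed: logging.exception() logs at ERROR level and
    # appends the full traceback automatically.
    logging.exception("Unexpected exception occurred")

record = buf.getvalue()
```

Both the plain message and the ZeroDivisionError traceback appear in `record`, with nothing interpolated by hand.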

Comment on lines +310 to +315:

    PROVIDER_FACTORY_MAP = {
        "openai": {"chatcompletion": OpenAIChatModelArgs, "response": OpenAIResponseModelArgs},
        "openrouter": OpenRouterModelArgs,
        "vllm": VLLMModelArgs,
        "antrophic": ClaudeResponseModelArgs,
    }

Provider Name Typo (category: Functionality)

What is the issue?

The 'antrophic' key in PROVIDER_FACTORY_MAP contains a typo (should be 'anthropic').

Why this matters

This typo will cause runtime errors when trying to use Anthropic models as the provider key won't match.

Suggested change:

Correct the spelling in the dictionary:

PROVIDER_FACTORY_MAP = {
    "openai": {"chatcompletion": OpenAIChatModelArgs, "response": OpenAIResponseModelArgs},
    "openrouter": OpenRouterModelArgs,
    "vllm": VLLMModelArgs,
    "anthropic": ClaudeResponseModelArgs,
}
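To see why the typo matters at runtime, here is a tiny sketch (with a stub class standing in for the real ClaudeResponseModelArgs) showing that a correctly spelled lookup misses the misspelled entry:

```python
class ClaudeResponseModelArgs:
    """Stub standing in for the real model-args class."""

PROVIDER_FACTORY_MAP = {"antrophic": ClaudeResponseModelArgs}  # typo as merged

# A caller using the correct spelling gets a KeyError, not a factory.
try:
    factory = PROVIDER_FACTORY_MAP["anthropic"]
except KeyError:
    factory = None
```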

Comment on lines +305 to +307:

    OAI_CHAT_TOOl_AGENT = ToolUseAgentArgs(
        model_args=OpenAIChatModelArgs(model_name="gpt-4o-2024-08-06")
    )

Inconsistent Constant Naming (category: Readability)

What is the issue?

Variable name contains a typo ('TOOl' instead of 'TOOL') and is inconsistent with other constant naming patterns.

Why this matters

Inconsistent naming makes the code harder to search for and understand, especially when the inconsistency is due to a typo.

Suggested change:
OAI_CHAT_TOOL_AGENT = ToolUseAgentArgs(
    model_args=OpenAIChatModelArgs(model_name="gpt-4o-2024-08-06")
)

Comment on lines +374 to +381:

    def __init__(
        self,
        model_name: str,
        client_args: Optional[Dict[str, Any]] = {},
        temperature: float = 0.5,
        max_tokens: int = 100,
        extra_kwargs: Optional[Dict[str, Any]] = None,
    ):

Missing client_args parameter description (category: Documentation)

What is the issue?

The docstring for OpenAIChatCompletionModel.__init__() should explain what client_args is used for, since it is a non-standard parameter.

Why this matters

Without clarifying the client_args parameter, developers may not understand how to properly configure custom OpenAI API clients.

Suggested change:
def __init__(
    self,
    model_name: str,
    client_args: Optional[Dict[str, Any]] = {},  # Custom args for OpenAI client (e.g. base_url, api_key)
    temperature: float = 0.5,
    max_tokens: int = 100,
    extra_kwargs: Optional[Dict[str, Any]] = None,
):
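Beyond the missing description, note that the mutable default `{}` is created once and shared across calls. A hedged sketch of the conventional `None` default (the class name here is hypothetical, standing in for OpenAIChatCompletionModel):

```python
from typing import Any, Dict, Optional

class ChatModelSketch:
    """Hypothetical stand-in for OpenAIChatCompletionModel.__init__."""

    def __init__(self, model_name: str, client_args: Optional[Dict[str, Any]] = None):
        self.model_name = model_name
        # Copy so each instance owns its own dict; client_args would hold
        # custom OpenAI client settings such as base_url or api_key.
        self.client_args = dict(client_args or {})

a = ChatModelSketch("gpt-4o-2024-08-06")
a.client_args["base_url"] = "http://localhost:8000/v1"  # illustrative value
b = ChatModelSketch("gpt-4o-2024-08-06")  # unaffected by a's mutation
```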

Comment on lines 471 to 472:

    @staticmethod
    def call_with_retries(client_function, api_params, max_retries=5):

Missing Exponential Backoff in Retries (category: Performance)

What is the issue?

The retry mechanism does not implement exponential backoff or jittering, which can lead to suboptimal retry patterns under high load.

Why this matters

Without exponential backoff and jitter, retries can create thundering herd problems and unnecessary API load during recovery from failures.

Suggested change:

Add exponential backoff with jitter to the retry mechanism:

import random
import time

@staticmethod
def call_with_retries(client_function, api_params, max_retries=5):
    for attempt in range(max_retries):
        try:
            # ... existing try block code ...
        except openai.APIError as e:
            if attempt == max_retries - 1:
                raise
            # Add exponential backoff with jitter
            sleep_time = (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(sleep_time)
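A self-contained version of the suggested pattern can be sketched as follows; `RuntimeError` stands in for `openai.APIError`, and the delay constants are illustrative:

```python
import random
import time

def call_with_retries(client_function, api_params, max_retries=5, base_delay=0.01):
    """Retry with exponential backoff plus jitter (sketch)."""
    for attempt in range(max_retries):
        try:
            return client_function(**api_params)
        except RuntimeError:  # stand-in for openai.APIError
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt; jitter spreads out concurrent retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))

# Usage: a flaky stub that fails twice, then succeeds on the third call.
calls = {"n": 0}

def flaky_call(**kwargs):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retries(flaky_call, {})
```

With `max_retries=5` the stub's two transient failures are absorbed and the third attempt returns normally.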

Comment on lines +415 to +423:

    if response.usage:
        input_tokens = response.usage.prompt_tokens
        output_tokens = response.usage.completion_tokens
        # Cost calculation would require pricing data
        # cost = ...
        # if hasattr(tracking.TRACKER, "instance") and isinstance(
        #     tracking.TRACKER.instance, tracking.LLMTracker
        # ):
        #     tracking.TRACKER.instance(input_tokens, output_tokens, cost)  # Placeholder for cost

Missing token usage logging (category: Logging)

What is the issue?

Token usage information is captured but not logged in OpenAIChatCompletionModel._call_api()

Why this matters

Without logging token usage metrics, it becomes impossible to monitor API consumption patterns and costs over time.

Suggested change:
if response.usage:
    input_tokens = response.usage.prompt_tokens
    output_tokens = response.usage.completion_tokens
    logging.info(
        "API call token usage",
        extra={
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
            "model": self.model_name
        }
    )
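One caveat worth noting about the suggestion: fields passed via `extra` become attributes on the LogRecord but are only rendered if the formatter references them. A minimal runnable sketch:

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
# The formatter must name the extra fields explicitly for them to appear.
handler.setFormatter(
    logging.Formatter("%(message)s input=%(input_tokens)s output=%(output_tokens)s")
)
logger = logging.getLogger("token_usage_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info(
    "API call token usage",
    extra={"input_tokens": 123, "output_tokens": 45},
)

line = buf.getvalue().strip()
```

With the default `"%(message)s"` formatter the token counts would be silently dropped, so wiring the formatter (or emitting the counts in the message itself) is part of making this logging useful.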

    return OpenAIChatCompletionAPIMessageBuilder

    ## Some Tests for VLLM server in the works!
    def test_vllm_server_reachability(self):
Collaborator:

Tests should be in the tests directory.


    self.llm = model_args.make_model(extra_kwargs={"tools": self.tools})

    self.msg_builder = model_args.get_message_builder()
Collaborator:

Maybe the static method from the message builder could live on the llm object instead; that way we wouldn't need to implement this extra thing.

e.g. message = self.llm.tool().add_image

Collaborator:

But we can merge it like this and move it afterwards.

@amanjaiswal73892 amanjaiswal73892 merged commit ffd5c5e into tlsdc/tool_use_agent May 21, 2025
0 of 3 checks passed
@amanjaiswal73892 amanjaiswal73892 deleted the aj/tool_use_agent_chat_completion_support branch May 21, 2025 17:44
@amanjaiswal73892 amanjaiswal73892 restored the aj/tool_use_agent_chat_completion_support branch May 21, 2025 17:45