Add up-to-date Azure OpenAI Agents #284
Conversation
Review by Korbit AI
Korbit automatically attempts to detect when you fix issues in new commits.
| Issue | Status |
|---|---|
| Unexplained API Version Hardcoding | |
| Invalid Future Model Versions | |
| Mismatched Model Flags | |
| Inconsistent Token Limits for GPT-5 | ✅ Fix detected |
Files scanned
| File Path | Reviewed |
|---|---|
| src/agentlab/agents/generic_agent/__init__.py | ✅ |
| src/agentlab/agents/generic_agent/agent_configs.py | ✅ |
| src/agentlab/llm/llm_configs.py | ✅ |
| src/agentlab/llm/chat_api.py | ✅ |
| "openai/gpt-5-2025-08-07": OpenAIModelArgs( | ||
| model_name="gpt-5-2025-08-07", | ||
| max_total_tokens=400_000, | ||
| max_input_tokens=272_000, | ||
| max_new_tokens=16_384, | ||
| temperature=1, # gpt-5 supports temperature of 1 only | ||
| vision_support=True, | ||
| ), |
```python
AGENT_AZURE_4o_MINI = GenericAgentArgs(
    chat_model_args=CHAT_MODEL_ARGS_DICT["azure/gpt-4o-mini-2024-07-18"],
    flags=FLAGS_GPT_4o,
)
```
Invalid Future Model Versions 
What is the issue?
The Azure agents are created using future dates (2024-2025) in their model versions, which are not yet available and would cause runtime errors when attempting to use these models.
Why this matters
Using non-existent model versions will cause the application to fail when trying to make API calls to Azure OpenAI services.
Suggested change ∙ Feature Preview
Update the model versions to use currently available Azure OpenAI models. For example:
```python
AGENT_AZURE_4o_MINI = GenericAgentArgs(
    chat_model_args=CHAT_MODEL_ARGS_DICT["azure/gpt-4-1106-preview"],
    flags=FLAGS_GPT_4o,
)
```
```python
AGENT_AZURE_5 = GenericAgentArgs(
    chat_model_args=CHAT_MODEL_ARGS_DICT["azure/gpt-5-2025-08-07"],
    flags=FLAGS_GPT_4o,
)
```
Mismatched Model Flags 
What is the issue?
Using GPT-4 flags (FLAGS_GPT_4o) for GPT-5 models may not be appropriate as GPT-5 could have different capabilities and requirements.
Why this matters
Mismatched flags could lead to suboptimal performance or errors if GPT-5 has different token limits, capabilities, or API requirements compared to GPT-4.
Suggested change ∙ Feature Preview
Create specific flags for GPT-5 models that account for their unique capabilities:
```python
FLAGS_GPT_5 = GenericPromptFlags(
    # GPT-5 specific configurations
    max_prompt_tokens=100_000,  # Adjust based on actual GPT-5 limits
    # ... other GPT-5 specific settings
)

AGENT_AZURE_5 = GenericAgentArgs(
    chat_model_args=CHAT_MODEL_ARGS_DICT["azure/gpt-5-2025-08-07"],
    flags=FLAGS_GPT_5,
)
```
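If GPT-5 ends up sharing most settings with GPT-4o, a lighter-weight variant of the same idea is to derive the new flags from the existing ones. This is only a sketch: it assumes `GenericPromptFlags` is a Python dataclass and that a `max_prompt_tokens` field exists, neither of which is confirmed in this thread.

```python
import dataclasses

# Sketch: copy the existing GPT-4o flags and override only what differs for GPT-5.
# Field names are assumptions; align them with the actual GenericPromptFlags definition.
FLAGS_GPT_5 = dataclasses.replace(
    FLAGS_GPT_4o,
    max_prompt_tokens=100_000,  # placeholder; align with the Azure GPT-5 limits
)

AGENT_AZURE_5 = GenericAgentArgs(
    chat_model_args=CHAT_MODEL_ARGS_DICT["azure/gpt-5-2025-08-07"],
    flags=FLAGS_GPT_5,
)
```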
```diff
 client_args = {
-    "azure_deployment": deployment_name,
-    "azure_endpoint": endpoint,
-    "api_version": "2024-02-01",
+    "base_url": endpoint,
+    "default_query": {"api-version": "preview"},
 }
```
Unexplained API Version Hardcoding 
What is the issue?
The API version 'preview' is hardcoded without explanation of why this specific version is used or what features it enables.
Why this matters
Future developers may hesitate to update the API version without understanding its implications, potentially missing out on important features or bug fixes.
Suggested change ∙ Feature Preview
Add a constant at the module level with a comment explaining the choice:
```python
# Latest stable version supporting vision features and updated token limits
AZURE_API_VERSION = "preview"

client_args = {
    "base_url": endpoint,
    "default_query": {"api-version": AZURE_API_VERSION},
}
```
migration guide from Azure: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?tabs=key#code-changes
Thanks for sharing the migration guide. That explains the use of "preview" for API version. Since it's documented in official Azure guidance, I'll withdraw my suggestion about adding explanatory comments.
```python
    max_new_tokens=64_000,
    temperature=1e-1,
),
"openai/gpt-5-nano-2025-08-07": OpenAIModelArgs(
```
Why are these being deleted?
This is a duplicate entry. The same key is already populated above.
```python
    self,
    model_name,
    api_key=None,
    deployment_name=None,
```
deployment_name is no longer needed?
No, we switched from the AzureOpenAI client to the plain OpenAI client, as it lets us drop one of the required env variables.
Here's the migration guide from Azure: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?tabs=key#code-changes
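For reference, a minimal sketch of what the post-migration client setup looks like with the plain OpenAI client, assuming the endpoint and key come from environment variables (the variable names below are illustrative, not necessarily the ones this PR reads):

```python
import os

from openai import OpenAI

# Illustrative env variable names; the PR may read different ones.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource>.openai.azure.com/openai/v1/
api_key = os.environ["AZURE_OPENAI_API_KEY"]

client = OpenAI(
    api_key=api_key,
    base_url=endpoint,
    default_query={"api-version": "preview"},  # matches the value used in this PR
)

# With the OpenAI client, the Azure deployment is typically addressed through the
# `model` argument rather than a client-level deployment parameter.
response = client.chat.completions.create(
    model="gpt-5-2025-08-07",  # assumed to match the Azure deployment name
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```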
Currently using token limits provided by Azure: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-5
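As a quick illustration of how the documented limits relate to the values configured above (the numbers are taken from the diff in this PR; the check itself is illustrative, not code from the PR):

```python
# Values from the "openai/gpt-5-2025-08-07" entry added in this PR.
max_total_tokens = 400_000   # total context window
max_input_tokens = 272_000   # documented input limit
max_new_tokens = 16_384      # generation cap chosen in this PR

# The input budget plus the generation cap should fit within the total context window.
assert max_input_tokens + max_new_tokens <= max_total_tokens
```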
amanjaiswal73892 left a comment
LGTM, thank you!
This PR adds/updates the Azure OpenAI agents with the latest models (GPT-4o, GPT-4.1, and GPT-5).
Description by Korbit AI
What change is being made?
Add a comprehensive, up-to-date set of Azure OpenAI agents, including configurations for gpt-4o, gpt-4.1, and gpt-5 (plus their mini and nano variants), alongside adjustments to how the Azure model API is called.
Why are these changes being made?
These changes integrate the latest Azure OpenAI models into the existing system, ensuring it supports a broader range of models with vision capabilities and up-to-date token limits. They also move to a more streamlined API interaction that relies on fewer environment variables.