Commit 4f92f04

fix: adapt OpenAI client for AI Foundry compatibility
Modify API parameter handling in OpenAILLM to properly support AI Foundry endpoints:

- Add a special case for AI Foundry endpoints
- Remove unsupported parameters for o-series models
- Update URL detection for Azure endpoints
- Improve parameter cleaning for different API variants
1 parent e98d371 commit 4f92f04

File tree

1 file changed: +8 −0


openevolve/llm/openai.py

Lines changed: 8 additions & 0 deletions
@@ -72,6 +72,14 @@ async def generate_with_context(
             "messages": formatted_messages,
             "max_completion_tokens": kwargs.get("max_tokens", self.max_tokens),
         }
+        # If we use AI Foundry we need to get rid of max_completion_tokens
+        elif self.api_base.startswith('https://aispocuksouth'):
+            params = {
+                "model": self.model,
+                "messages": formatted_messages,
+                "temperature": kwargs.get("temperature", self.temperature),
+                "top_p": kwargs.get("top_p", self.top_p),
+            }
         else:
             params = {
                 "model": self.model,
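The branching the diff introduces can be sketched as a standalone function. This is a hypothetical, simplified reconstruction for illustration only: `build_params`, its parameters, and the `is_o_series` flag are not part of OpenAILLM; the endpoint prefix mirrors the hardcoded check in the diff, and the default values are assumptions.

```python
# Hypothetical standalone sketch of the parameter-selection logic this
# commit adds; the real code lives inside OpenAILLM.generate_with_context.

# Mirrors the hardcoded endpoint check from the diff.
AI_FOUNDRY_PREFIX = "https://aispocuksouth"

def build_params(api_base, model, messages, *, max_tokens=4096,
                 temperature=0.7, top_p=1.0, is_o_series=False):
    """Build chat-completion kwargs appropriate for the target endpoint."""
    if is_o_series:
        # o-series models reject sampling parameters and use
        # max_completion_tokens instead of max_tokens.
        return {
            "model": model,
            "messages": messages,
            "max_completion_tokens": max_tokens,
        }
    if api_base.startswith(AI_FOUNDRY_PREFIX):
        # AI Foundry endpoints reject max_completion_tokens, so omit
        # token limits entirely and send only sampling parameters.
        return {
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "top_p": top_p,
        }
    # Standard OpenAI-compatible endpoints accept the full set.
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }
```

Centralizing the per-endpoint differences in one place keeps the request-sending code free of URL checks; a more robust variant might detect the endpoint type from configuration rather than a URL prefix.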

0 commit comments
