fix(copilot): fixes from testing and integration #131

Open

theblazehen wants to merge 9 commits into Mirrowel:feature/copilot-provider from theblazehen:copilot-fixes

Conversation

theblazehen commented on Feb 15, 2026

Summary

Fixes found while testing and integrating the Copilot provider:

  • Credential setup flow: Added setup_credential() method with email-based deduplication and env export support
  • Email null fallback: the GitHub API sometimes returns a null email — fall back to the login username
  • Provider-prefixed models: get_models() now returns copilot/gpt-5 instead of bare gpt-5, so the IDs are directly usable in /v1/models (see the sketch after this list)
  • Expanded model catalog: Updated DEFAULT_COPILOT_MODELS from 8 → 20 models (GPT-5.x, Claude 4.5/4.6, Gemini 3, Grok)
  • credential_identifier fix: acompletion() now reads credential_identifier kwarg (used by the rotator) instead of only api_key
  • Await streaming handler: _handle_streaming_response was not awaited in the completion path
  • Stream delta compatibility: Use plain dicts instead of litellm.Choices/litellm.Delta objects for streaming chunks (avoids serialization issues)
  • ResponseNotRead fix: Explicitly aread() response body before logging HTTP errors in streaming mode
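
A minimal sketch of how the email fallback and the provider prefixing behave (the helper names and the abbreviated model list are illustrative, not the exact code in this PR):

from typing import Any

DEFAULT_COPILOT_MODELS = ["gpt-5", "claude-4.5-sonnet"]  # abbreviated for illustration

def resolve_account_id(user: dict[str, Any]) -> str:
    # GitHub's /user endpoint can return "email": null for accounts with
    # a private email, so fall back to the login username.
    return user.get("email") or user["login"]

def get_models() -> list[str]:
    # Prefix each model ID with the provider name so the IDs listed by
    # /v1/models can be passed straight back as the `model` parameter.
    return [f"copilot/{m}" for m in DEFAULT_COPILOT_MODELS]

assert resolve_account_id({"email": None, "login": "octocat"}) == "octocat"
assert get_models()[0] == "copilot/gpt-5"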

Test plan

  • Non-streaming chat completions via proxy
  • Streaming chat completions via proxy (see the client sketch after this list)
  • Credential rotation between multiple Copilot accounts
  • Token refresh (Copilot API tokens expire ~30min)
  • 401 re-auth flow
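
The streaming case can be exercised with a short client script (a sketch only; the proxy URL, port, and API key below are placeholder assumptions):

from openai import OpenAI

# Point the standard OpenAI client at the local proxy.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="proxy-key")

stream = client.chat.completions.create(
    model="copilot/gpt-5",  # provider-prefixed ID as returned by /v1/models
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)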

🤖 Generated with Claude Code

mirrobot-agent bot (Contributor) commented:

I'm starting my review of the Copilot provider fixes. I'll be looking closely at the new credential setup flow and the streaming response handling. Back soon with the full report!

theblazehen changed the title from "fix(copilot): battle-tested fixes from production use" to "fix(copilot): fixes from testing and integration" on Feb 15, 2026
mirrobot-agent bot (Contributor) left a comment

Overall Assessment

This PR provides a solid set of improvements for the Copilot provider, particularly around production stability and credential management. The fixes for streaming response handling (awaiting the handler and handling ResponseNotRead errors) are critical and well-implemented. The expansion of the model catalog and the improved model prefixing logic significantly enhance the provider's usability.

Architectural Feedback

  • Credential Storage: The introduction of setup_credential and associated management methods is a great step toward standardized credential handling. Using the email for deduplication is a robust approach.
  • Streaming Compatibility: The switch to plain dictionaries for streaming chunks is an interesting choice to avoid serialization issues. While practical, it's important to ensure this doesn't break expectations for downstream consumers that might rely on LiteLLM's object structure.

Key Suggestions

  • Locking Mechanism: The in-memory lock key using id(creds) might be slightly fragile if objects are cloned.
  • Persistence Path: Consider using a more global configuration directory for OAuth credentials to avoid "orphaned" credential files when running from different directories.

Verified Fixes

  • Streaming Await: Correctly added await to _handle_streaming_response call.
  • Error Handling: Implemented aread() for error responses in streaming mode, preventing ResponseNotRead exceptions.
  • Email Fallback: Added reliable fallback from email to login username for GitHub identity.

Questions for the Author

  1. Was the change from litellm.Choices to plain dictionaries tested with all intended downstream consumers?
  2. Is the oauth_creds directory intended to be local to the project root, or would a global config directory be more appropriate for the CLI use case?

This review was generated by an AI assistant.


def _get_oauth_base_dir(self) -> Path:
    """Return the OAuth credentials base directory."""
    return Path.cwd() / "oauth_creds"
mirrobot-agent bot (Contributor):

Using Path.cwd() / "oauth_creds" might cause credentials to be scattered if the application is executed from different directories. Consider using a standardized user configuration directory (e.g., Path.home() / ".config/llm-proxy/oauth_creds") to ensure persistence across sessions.

theblazehen (Author):
Good catch — fixed in 229257e. Now uses the centralized get_oauth_dir() from utils/paths.py which handles PyInstaller mode and overrides.
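
For readers unfamiliar with the pattern, a centralized path helper of this shape is typical (an illustrative sketch, not the actual utils/paths.py from this repo; the override env var is hypothetical):

import os
import sys
from pathlib import Path

def get_oauth_dir() -> Path:
    # An explicit override wins (hypothetical env var, for illustration).
    override = os.environ.get("LLM_PROXY_OAUTH_DIR")
    if override:
        return Path(override)
    # Under PyInstaller, sys.frozen is set; anchor next to the binary
    # instead of the one-shot extraction directory.
    if getattr(sys, "frozen", False):
        return Path(sys.executable).parent / "oauth_creds"
    # Default: a stable per-user config directory, so credentials do not
    # scatter across working directories.
    return Path.home() / ".config" / "llm-proxy" / "oauth_creds"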

        The Copilot API token (access_token) expires after ~30 minutes.
        """
-       async with await self._get_lock(path):
+       lock_key = path or f"in-memory://copilot/{id(creds)}"
mirrobot-agent bot (Contributor):

Using id(creds) for an in-memory lock key is clever, but be aware that it might lead to redundant locks if the creds dictionary is copied or recreated (e.g., during serialization/deserialization cycles). If the credentials have a unique identifier like an email, that might be a more stable key.

theblazehen (Author):

Fair point — id() can be recycled if the dict gets garbage collected and recreated. In practice the in-memory path only triggers for env-based credentials which are long-lived singletons, but using the email from _proxy_metadata would be more robust. Open to changing if preferred.
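
A sketch of the email-keyed variant discussed above (illustrative; the _proxy_metadata shape is taken from this thread, not quoted from the code):

def _lock_key(self, path: str | None, creds: dict) -> str:
    if path:
        return path
    # Prefer a stable identity over id(creds): an id() value can be
    # recycled after the original dict is garbage collected.
    email = (creds.get("_proxy_metadata") or {}).get("email")
    if email:
        return f"in-memory://copilot/{email}"
    return f"in-memory://copilot/{id(creds)}"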

        }
    )

    return litellm.ModelResponse(
mirrobot-agent bot (Contributor):

While using plain dictionaries for choices avoids serialization issues, it might break compatibility with LiteLLM utilities or downstream consumers that expect litellm.Choices objects. Have you verified that this change doesn’t affect any of the proxy’s core functionality or existing integrations?

theblazehen (Author):

Yes — the litellm.Choices/litellm.Delta objects were causing serialization issues during streaming. ModelResponse() accepts plain dicts for choices and handles the wrapping internally, so downstream consumers still get proper objects. Tested with both streaming and non-streaming through the full proxy pipeline.
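
The wrapping behavior described here looks roughly like this (a sketch; exact field handling depends on the installed litellm version):

import litellm

# Plain dicts go in ...
resp = litellm.ModelResponse(
    choices=[{
        "index": 0,
        "finish_reason": "stop",
        "message": {"role": "assistant", "content": "hello"},
    }]
)

# ... and litellm wraps them into its own objects internally, so
# downstream consumers still see proper Choices/Message instances.
print(type(resp.choices[0]))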

        4. Parses response into LiteLLM format
        """
-       credential_path = kwargs.get("api_key", "")
+       credential_path = kwargs.pop("credential_identifier", kwargs.get("api_key", ""))
mirrobot-agent bot (Contributor):

Good addition of credential_identifier. However, kwargs.get("api_key", "") is used as a fallback. In this provider, api_key is repurposed to store the credential path. It might be worth adding a comment to clarify this for future maintainers.

theblazehen (Author):

Makes sense. For context: all OAuth-based providers in this project pass credential file paths (or env:// virtual paths) through the api_key/credential_identifier field since the rotator assigns credentials per-request. The credential_identifier kwarg was added in a later refactor as the canonical way to pass this, with api_key kept as fallback for backward compat.
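
The clarifying comment the reviewer asked for could sit directly on the lookup (a sketch of the convention described above, not the exact code):

# NOTE: for OAuth providers, "api_key" is repurposed to carry a credential
# file path (or an env:// virtual path) assigned per-request by the rotator.
# "credential_identifier" is the canonical kwarg; the api_key fallback is
# kept for backward compatibility.
credential_path = kwargs.pop(
    "credential_identifier", kwargs.get("api_key", "")
)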

…th.cwd()

Avoids credentials scattering if run from different directories and
respects PyInstaller/override modes via utils.paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Mirrowel added the enhancement (New feature or request), Agent Monitored (Monitored for AI Agent to review PRs and commits), and Priority labels on Feb 15, 2026