Changes from 2 commits
1 change: 1 addition & 0 deletions bun.lock
@@ -1,5 +1,6 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "copilot-api",
44 changes: 44 additions & 0 deletions src/lib/model-consumption.json
@@ -0,0 +1,44 @@
{
"models": [
{
"name": "GPT-4.1",
"consumption": "0x"
},
{
"name": "GPT-4o",
"consumption": "0x"
},
{
"name": "GPT-5 mini",
"consumption": "0x"
},
{
"name": "Grok Code Fast 1",
"consumption": "0x"
},
{
"name": "Claude Haiku 4.5",
"consumption": "0.33x"
},
{
"name": "Claude Sonnet 4",
"consumption": "1x"
},
{
"name": "Claude Sonnet 4.5",
"consumption": "1x"
},
{
"name": "Gemini 2.5 Pro",
"consumption": "1x"
},
{
"name": "GPT-5",
"consumption": "1x"
},
{
"name": "GPT-5-Codex (Preview)",
"consumption": "1x"
}
]
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify model names match actual GitHub Copilot models.

The consumption data structure looks good, but some model names appear unusual (e.g., "GPT-5", "GPT-5 mini", "GPT-5-Codex (Preview)", "Grok Code Fast 1"). These may be placeholders or future models.

Please verify that these model names match actual GitHub Copilot model IDs by searching the web for the latest Copilot model documentation:

Minor note: The AI summary mentions 11 models, but the file contains 10 models.


🌐 Web query:

What are the current GitHub Copilot model names and IDs available as of November 2024?

💡 Result:

Here are the GitHub Copilot models announced/available around November 2024 (names as published during the GitHub Universe Oct 2024 multi‑model announcement). Sources below.

  • Anthropic: Claude 3.5 Sonnet. [1][2]
  • Google: Gemini 1.5 Pro. [1][2]
  • OpenAI: GPT‑4o (and OpenAI “o1‑preview” / o1‑mini variants mentioned for Copilot). [1][2]

Sources:
[1] The Verge — “GitHub Copilot will support models from Anthropic, Google, and OpenAI” (Oct 29, 2024).
[2] GitHub Universe coverage / press summaries reporting the Oct 2024 Copilot multi‑model announcement.


Critical: Update model names to match actual GitHub Copilot models available as of November 2024.

The current GitHub Copilot models announced are: Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google), and GPT-4o (OpenAI), with o1-preview and o1-mini variants also available.

Most names in this file are inaccurate:

  • Remove: GPT-4.1, GPT-5, GPT-5 mini, GPT-5-Codex, Grok Code Fast 1, Claude Haiku 4.5, Claude Sonnet 4, Claude Sonnet 4.5
  • Fix: Gemini 2.5 Pro → Gemini 1.5 Pro
  • Keep: GPT-4o (correct)
  • Add missing: o1-preview, o1-mini

139 changes: 139 additions & 0 deletions src/lib/model-matcher.ts
@@ -0,0 +1,139 @@
import consola from "consola"

import { state } from "./state"

/**
* Find a matching model from available models
* If exact match exists, return it
* If no exact match, try to find by prefix (e.g., claude-haiku-4-5-xxx -> claude-haiku-4.5)
*/
export function findMatchingModel(requestedModel: string): string | null {
const availableModels = state.models?.data.filter(
(m) => typeof m.capabilities?.limits?.max_context_window_tokens === "number",
)

if (!availableModels || availableModels.length === 0) {
return null
}

const availableModelIds = availableModels.map((m) => m.id)

consola.debug(`Looking for match for: ${requestedModel}`)
consola.debug(`Available models: ${availableModelIds.join(", ")}`)

// Try exact match first
if (availableModelIds.includes(requestedModel)) {
return requestedModel
}

// Normalize the requested model
// 1. Replace underscores with hyphens
// 2. Remove date suffix (8 digits at the end)
// 3. Replace version numbers: 4-5 -> 4.5
let normalizedRequested = requestedModel
.toLowerCase()
.replace(/_/g, "-")
.replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
.replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
Copilot AI Nov 12, 2025


The regex pattern /(\d)-(\d)/g transforms the hyphen between any pair of digits, not just single-digit version fragments. For example, "gpt-4-1106-preview" would become "gpt-4.1106-preview" and could no longer match the real model ID. Consider adding a negative lookahead so the replacement only fires when the second digit is not part of a longer number.

Suggested change
.replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
.replace(/(\d)-(\d)(?!\d)/g, "$1.$2") // Replace 4-5 with 4.5, but leave 4-1106 in gpt-4-1106-preview alone

Copilot uses AI. Check for mistakes.
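The failure mode is easy to reproduce in isolation. A quick sketch contrasting the pattern from the diff with a lookahead-constrained variant (the model IDs are illustrative only, not taken from the PR's model list):

```typescript
// Pattern from the diff: any digit-hyphen-digit pair is rewritten.
const broad = (s: string): string => s.replace(/(\d)-(\d)/g, "$1.$2")

// Constrained variant: only fires when the second digit is not part of a
// longer number, so multi-digit suffixes survive.
const scoped = (s: string): string => s.replace(/(\d)-(\d)(?!\d)/g, "$1.$2")

console.log(broad("claude-haiku-4-5"))    // "claude-haiku-4.5" (intended)
console.log(broad("gpt-4-1106-preview"))  // "gpt-4.1106-preview" (corrupted)
console.log(scoped("gpt-4-1106-preview")) // "gpt-4-1106-preview" (untouched)
console.log(scoped("claude-haiku-4-5"))   // "claude-haiku-4.5" (still converted)
```

Either the lookahead or the word-boundary form works here; the essential point is that the second captured digit must not be followed by another digit.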

Comment on lines 132 to 137

⚠️ Potential issue | 🔴 Critical

Normalization breaks valid IDs like gpt-4-1106-preview

replace(/(\d)-(\d)/g, "$1.$2") also fires on multi-digit suffixes, so a request for gpt-4-1106-preview becomes gpt-4.1106-preview and can no longer match the real model ID. This makes validateAndReplaceModel reject legitimate models. Please constrain the normalization to single-digit version fragments only, e.g.:

-    .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
+    .replace(/\b(\d)-(\d)\b/g, (_match, major, minor) => `${major}.${minor}`) // Replace 4-5 with 4.5
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let normalizedRequested = requestedModel
.toLowerCase()
.replace(/_/g, "-")
.replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
.replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
let normalizedRequested = requestedModel
.toLowerCase()
.replace(/_/g, "-")
.replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
.replace(/\b(\d)-(\d)\b/g, (_match, major, minor) => `${major}.${minor}`) // Replace 4-5 with 4.5
🧰 Tools
🪛 ESLint

[error] 33-33: 'normalizedRequested' is never reassigned. Use 'const' instead.

(prefer-const)


[error] 35-35: Prefer String#replaceAll() over String#replace().

(unicorn/prefer-string-replace-all)


[error] 36-36: Capturing group number 1 is defined but never used.

(regexp/no-unused-capturing-group)


[error] 37-37: Prefer String#replaceAll() over String#replace().

(unicorn/prefer-string-replace-all)

🤖 Prompt for AI Agents
In src/lib/model-matcher.ts around lines 33 to 38, the normalization step's
pattern that replaces digit-dash-digit sequences also matches multi-digit
fragments (e.g. transforms "gpt-4-1106-preview" to "gpt-4.1106-preview");
restrict that replacement so it only converts single-digit version fragments (a
single digit, a dash, a single digit) and does not fire when the digit after the
dash is followed by additional digits (i.e. ensure the second digit is not part
of a multi-digit sequence or use a word boundary), so multi-digit suffixes
remain unchanged.

consola.debug(`Normalized requested: ${normalizedRequested}`)

// Try exact match after normalization
for (const availableId of availableModelIds) {
if (availableId.toLowerCase() === normalizedRequested) {
consola.info(
`🔄 Model normalized match: '${requestedModel}' -> '${availableId}'`,
)
return availableId
}
}

// Try prefix matching
for (const availableId of availableModelIds) {
const normalizedAvailable = availableId.toLowerCase()

// Check if they start with each other
if (
normalizedAvailable.startsWith(normalizedRequested) ||
normalizedRequested.startsWith(normalizedAvailable)
Comment on lines +158 to +159
Copilot AI Nov 12, 2025


Bidirectional prefix matching could produce ambiguous results when multiple models share prefixes. For example, if "gpt-4" is requested and both "gpt-4" and "gpt-4o" are available, this could match either one depending on iteration order. Consider matching only one direction or adding explicit preference logic.

Suggested change
normalizedAvailable.startsWith(normalizedRequested) ||
normalizedRequested.startsWith(normalizedAvailable)
normalizedAvailable.startsWith(normalizedRequested)

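A minimal illustration of the ambiguity this comment describes (the model list is hypothetical):

```typescript
const available = ["gpt-4", "gpt-4o"]
const requested = "gpt-4"

// Bidirectional check: BOTH candidates qualify, so the winner depends on
// iteration order rather than on which model was actually asked for.
const bidirectional = available.filter(
  (id) => id.startsWith(requested) || requested.startsWith(id),
)

// One-directional check plus an explicit exact-match preference resolves it.
const oneWay = available.filter((id) => id.startsWith(requested))
const best = oneWay.includes(requested) ? requested : oneWay[0]

console.log(bidirectional) // ["gpt-4", "gpt-4o"]
console.log(best)          // "gpt-4"
```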
) {
consola.info(
`🔄 Model prefix match: '${requestedModel}' -> '${availableId}'`,
)
return availableId
}
}

// Try fuzzy matching by comparing main parts
const requestedParts = normalizedRequested.split("-")
for (const availableId of availableModelIds) {
const normalizedAvailable = availableId.toLowerCase()
const availableParts = normalizedAvailable.split("-")

// Match by comparing first N-1 parts (everything except version)
if (requestedParts.length >= 3 && availableParts.length >= 3) {
const requestedBase = requestedParts.slice(0, -1).join("-")
const availableBase = availableParts.slice(0, -1).join("-")

if (requestedBase === availableBase) {
consola.info(
`🔄 Model base match: '${requestedModel}' -> '${availableId}'`,
)
return availableId
}
}
}

consola.debug(`No match found for: ${requestedModel}`)
return null
}

/**
* Validate and potentially replace the requested model
* Returns the validated model ID or throws/returns error info
*/
export function validateAndReplaceModel(requestedModel: string): {
success: boolean
model?: string
error?: {
message: string
code: string
param: string
type: string
}
} {
const availableModels = state.models?.data.filter(
(m) => typeof m.capabilities?.limits?.max_context_window_tokens === "number",
)
const availableModelIds = availableModels?.map((m) => m.id) || []

const matchedModel = findMatchingModel(requestedModel)

if (!matchedModel) {
consola.error(`❌ Model not available: ${requestedModel}`)
consola.error(`Available models: ${availableModelIds.join(", ")}`)

return {
success: false,
error: {
message: `The requested model '${requestedModel}' is not supported. Available models: ${availableModelIds.join(", ")}`,
code: "model_not_supported",
param: "model",
type: "invalid_request_error",
},
}
}

if (matchedModel !== requestedModel) {
consola.success(
`✓ Model matched and replaced: ${requestedModel} -> ${matchedModel}`,
)
} else {
consola.success(`✓ Model validated: ${matchedModel}`)
}

return {
success: true,
model: matchedModel,
}
}
14 changes: 14 additions & 0 deletions src/routes/chat-completions/handler.ts
@@ -4,6 +4,7 @@ import consola from "consola"
import { streamSSE, type SSEMessage } from "hono/streaming"

import { awaitApproval } from "~/lib/approval"
import { validateAndReplaceModel } from "~/lib/model-matcher"
import { checkRateLimit } from "~/lib/rate-limit"
import { state } from "~/lib/state"
import { getTokenCount } from "~/lib/tokenizer"
@@ -20,6 +21,19 @@ export async function handleCompletion(c: Context) {
let payload = await c.req.json<ChatCompletionsPayload>()
consola.debug("Request payload:", JSON.stringify(payload).slice(-400))

// Log the requested model
consola.info(`Requested model: ${payload.model}`)

// Validate and potentially replace model
const validation = validateAndReplaceModel(payload.model)

if (!validation.success) {
return c.json({ error: validation.error }, 400)
}

// Replace model if a match was found
payload.model = validation.model!

// Find the selected model
const selectedModel = state.models?.data.find(
(model) => model.id === payload.model,
14 changes: 14 additions & 0 deletions src/routes/messages/handler.ts
@@ -4,6 +4,7 @@ import consola from "consola"
import { streamSSE } from "hono/streaming"

import { awaitApproval } from "~/lib/approval"
import { validateAndReplaceModel } from "~/lib/model-matcher"
import { checkRateLimit } from "~/lib/rate-limit"
import { state } from "~/lib/state"
import {
@@ -34,6 +35,19 @@ export async function handleCompletion(c: Context) {
JSON.stringify(openAIPayload),
)

// Log the requested model
consola.info(`Requested model: ${openAIPayload.model}`)

// Validate and potentially replace model
const validation = validateAndReplaceModel(openAIPayload.model)

if (!validation.success) {
return c.json({ error: validation.error }, 400)
}

// Replace model if a match was found
openAIPayload.model = validation.model!

if (state.manualApprove) {
await awaitApproval()
}
43 changes: 34 additions & 9 deletions src/routes/models/route.ts
@@ -3,6 +3,7 @@ import { Hono } from "hono"
import { forwardError } from "~/lib/error"
import { state } from "~/lib/state"
import { cacheModels } from "~/lib/utils"
import modelConsumptionData from "~/lib/model-consumption.json"

export const modelRoutes = new Hono()

@@ -13,15 +14,39 @@ modelRoutes.get("/", async (c) => {
await cacheModels()
}

const models = state.models?.data.map((model) => ({
id: model.id,
object: "model",
type: "model",
created: 0, // No date available from source
created_at: new Date(0).toISOString(), // No date available from source
owned_by: model.vendor,
display_name: model.name,
}))
// Create a map for quick consumption lookup
const consumptionMap = new Map(
modelConsumptionData.models.map((m) => [m.name, m.consumption]),
)

// Helper function to convert consumption string to number for sorting
const consumptionToNumber = (consumption: string): number => {
if (consumption === "N/A") return 999 // Put N/A at the end
const match = consumption.match(/^([\d.]+)x$/)
return match ? Number.parseFloat(match[1]) : 999
}
Comment on lines +17 to +27
Copilot AI Nov 12, 2025


The consumption map creation and consumptionToNumber helper function are duplicated in both src/routes/models/route.ts (lines 17-27) and src/start.ts (lines 84-93). This duplicate logic should be extracted to a shared utility function to maintain consistency and reduce maintenance burden.

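One way to extract the duplicated logic, as a sketch (the module path and names are assumptions, not part of the PR):

```typescript
// Hypothetical shared module, e.g. src/lib/model-consumption.ts, imported by
// both routes/models/route.ts and start.ts instead of each keeping a copy.
interface ConsumptionEntry {
  name: string
  consumption: string
}

// Build a name -> consumption lookup from the JSON data.
function buildConsumptionMap(
  entries: Array<ConsumptionEntry>,
): Map<string, string> {
  return new Map(entries.map((m) => [m.name, m.consumption]))
}

// "0.33x" -> 0.33; anything unparseable (including "N/A") sorts last.
function consumptionToNumber(consumption: string): number {
  const match = /^([\d.]+)x$/.exec(consumption)
  return match ? Number.parseFloat(match[1]) : 999
}
```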

// Filter to only include models with context window information (Available models)
const models = state.models?.data
.filter((model) => {
const maxTokens = model.capabilities?.limits?.max_context_window_tokens
return typeof maxTokens === "number"
})
.map((model) => ({
model,
consumption: consumptionMap.get(model.name) || "N/A",
}))
.sort((a, b) => consumptionToNumber(a.consumption) - consumptionToNumber(b.consumption))
.map((item) => ({
id: item.model.id,
object: "model",
type: "model",
created: 0, // No date available from source
created_at: new Date(0).toISOString(), // No date available from source
owned_by: item.model.vendor,
display_name: item.model.name,
max_context_length: item.model.capabilities?.limits?.max_context_window_tokens,
Copilot AI Nov 12, 2025


Adding max_context_length field to the models API response is a breaking change that may not be expected by API consumers. The field name also differs from the internal naming (max_context_window_tokens). Consider versioning this API change or documenting it clearly for consumers.

Suggested change
max_context_length: item.model.capabilities?.limits?.max_context_window_tokens,
max_context_window_tokens: item.model.capabilities?.limits?.max_context_window_tokens,

}))

return c.json({
object: "list",
5 changes: 5 additions & 0 deletions src/server.ts
@@ -27,5 +27,10 @@ server.route("/v1/chat/completions", completionRoutes)
server.route("/v1/models", modelRoutes)
server.route("/v1/embeddings", embeddingRoutes)

// Compatibility with tools that expect api/v0/ prefix
server.route("/api/v0/models", modelRoutes)
server.route("/api/v0/chat/completions", completionRoutes)
server.route("/api/v0/embeddings", embeddingRoutes)

// Anthropic compatible endpoints
server.route("/v1/messages", messageRoutes)
24 changes: 23 additions & 1 deletion src/services/copilot/create-chat-completions.ts
@@ -35,7 +35,29 @@ export const createChatCompletions = async (
})

if (!response.ok) {
consola.error("Failed to create chat completions", response)
const errorBody = await response.text()
consola.error(`Failed to create chat completions for model: ${payload.model}`)
consola.error(`Response status: ${response.status} ${response.statusText}`)
consola.error(`Response body: ${errorBody}`)

// Try to parse error details
try {
const errorJson = JSON.parse(errorBody)
if (errorJson.error?.message) {
consola.error(`Error message: ${errorJson.error.message}`)

// If model not supported, list available models
if (errorJson.error.code === "model_not_supported") {
const availableModels = state.models?.data
.filter((m) => typeof m.capabilities?.limits?.max_context_window_tokens === "number")
.map((m) => m.id)
consola.error(`Available models: ${availableModels?.join(", ")}`)
}
}
} catch {
// If parsing fails, we already logged the raw body
}

throw new HTTPError("Failed to create chat completions", response)
}

8 changes: 7 additions & 1 deletion src/services/copilot/get-models.ts
@@ -9,7 +9,12 @@ export const getModels = async () => {

if (!response.ok) throw new HTTPError("Failed to get models", response)

return (await response.json()) as ModelsResponse
const result = await response.json() as ModelsResponse
result.data = result.data.filter(
(model: any) =>
Copilot AI Nov 12, 2025


Using any type annotation defeats TypeScript's type safety. Since the Model interface has been updated to include model_picker_category (line 56), the filter should use the typed Model interface instead of any.

Suggested change
(model: any) =>
(model: Model) =>

model.model_picker_category !== undefined && model.model_picker_enabled === true
)
return result
}

export interface ModelsResponse {
@@ -48,6 +53,7 @@ export interface Model {
preview: boolean
vendor: string
version: string
model_picker_category?: string
policy?: {
state: string
terms: string