27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,33 @@

All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [4.1.1] - 2025-12-11

**Patch release**: adds "none" reasoning effort support for GPT-5.2 and GPT-5.1.

### Added
- **"none" reasoning effort support**: GPT-5.1 and GPT-5.2 support `reasoning_effort: "none"` which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed.
- `gpt-5.2-none` - GPT-5.2 with reasoning disabled
- `gpt-5.1-none` - GPT-5.1 with reasoning disabled
- **4 new unit tests** for "none" reasoning behavior (now 197 total unit tests).

### Technical Details
- `getReasoningConfig()` now detects GPT-5.1 general purpose models (not Codex variants) and allows "none" to pass through.
- GPT-5.2, being the newer general-purpose model, also supports "none".
- Codex variants (gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini) do NOT support "none":
- Codex and Codex Max: "none" auto-converts to "low"
- Codex Mini: "none" auto-converts to "medium" (as before)
- Documentation updated with complete reasoning effort support matrix per model family.
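The detection and pass-through described above can be sketched as follows (an illustrative reconstruction for this changelog, not the exact plugin code; the real implementation lives in `lib/request/request-transformer.ts`):

```typescript
// Illustrative sketch of the "none" handling described above; names mirror
// the diff in lib/request/request-transformer.ts but this is not that code.
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function resolveNoneEffort(model: string, requested: Effort): Effort {
  const name = model.toLowerCase();
  const isCodex = name.includes("codex");
  const isCodexMini = name.includes("codex-mini");
  const isGpt52 = name.includes("gpt-5.2");
  // General-purpose GPT-5.1 only; Codex variants are excluded.
  const isGpt51General = name.includes("gpt-5.1") && !isCodex;
  const supportsNone = isGpt52 || isGpt51General;

  if (requested === "none" && !supportsNone) {
    // Codex Mini clamps "none" to "medium"; other Codex variants to "low".
    return isCodexMini ? "medium" : "low";
  }
  return requested;
}
```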

### References
- **OpenAI API docs** (`platform.openai.com/docs/api-reference/chat/create`): "gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high."
- **Codex CLI** (`codex-rs/protocol/src/openai_models.rs`): `ReasoningEffort` enum includes `None` variant with `#[serde(rename_all = "lowercase")]` serialization to `"none"`.
- **Codex CLI** (`codex-rs/core/src/client.rs`): Request builder passes `ReasoningEffort::None` through to API without validation/rejection.
- **Codex CLI** (`docs/config.md`): Documents `model_reasoning_effort = "none"` as valid config option.

### Notes
- This plugin defaults to "medium" for better coding assistance; users must explicitly set "none" if desired.
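For example, opting in explicitly might look like this (the `global.reasoningEffort` shape is assumed from this PR's test fixtures, not from published plugin docs):

```json
{
  "global": {
    "reasoningEffort": "none"
  }
}
```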

## [4.1.0] - 2025-12-11

**Feature release**: GPT 5.2 model support and image input capabilities.
29 changes: 18 additions & 11 deletions docs/configuration.md
@@ -46,28 +46,35 @@ Complete reference for configuring the OpenCode OpenAI Codex Auth Plugin.

Controls computational effort for reasoning.

**GPT-5 Values:**
- `minimal` - Fastest, least reasoning
**GPT-5.2 Values** (per OpenAI API docs and Codex CLI `ReasoningEffort` enum):
- `none` - No dedicated reasoning phase (disables reasoning)
- `low` - Light reasoning
- `medium` - Balanced (default)
- `high` - Deep reasoning
- `xhigh` - Extra depth for long-horizon tasks

**GPT-5.1 Values** (per OpenAI API docs and Codex CLI `ReasoningEffort` enum):
- `none` - No dedicated reasoning phase (disables reasoning)
- `low` - Light reasoning
- `medium` - Balanced (default)
- `high` - Deep reasoning

**GPT-5-Codex Values:**
**GPT-5.1-Codex / GPT-5.1-Codex-Max Values:**
- `low` - Fastest for code
- `medium` - Balanced (default)
- `high` - Maximum code quality
- `xhigh` - Extra depth (Codex Max only)

**GPT-5.1-Codex-Max Values:**
- `none` - No dedicated reasoning phase
- `low` - Light reasoning
- `medium` - Balanced
- `high` - Deep reasoning (default for this family)
- `xhigh` - Extra depth for long-horizon tasks
**GPT-5.1-Codex-Mini Values:**
- `medium` - Balanced (default)
- `high` - Maximum code quality

**Notes**:
- `none` is supported for GPT-5.2 and GPT-5.1 (general purpose) per OpenAI API documentation
- `none` is NOT supported for Codex variants - it auto-converts to `low` for Codex or `medium` for Codex Mini
- `minimal` auto-converts to `low` for Codex models
- `gpt-5-codex-mini*` and `gpt-5.1-codex-mini*` only support `medium` or `high`; lower settings are clamped to `medium` and `xhigh` downgrades to `high`
- Codex Max supports `none` and `xhigh` and defaults to `high` when not specified
- `xhigh` is only supported for GPT-5.2 and GPT-5.1-Codex-Max; other models downgrade to `high`
- Codex Mini only supports `medium` or `high`; lower settings clamp to `medium`
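The notes above can be condensed into a single lookup sketch (a hypothetical helper written for this doc, not part of the plugin's API):

```typescript
// Hypothetical summary of the clamping rules above; not the plugin's code.
type Family = "gpt-5.2" | "gpt-5.1" | "codex" | "codex-max" | "codex-mini";

function clampEffort(family: Family, effort: string): string {
  // "none" passes through only for general-purpose GPT-5.2 / GPT-5.1.
  if (effort === "none") {
    if (family === "gpt-5.2" || family === "gpt-5.1") return "none";
    return family === "codex-mini" ? "medium" : "low";
  }
  // Codex families map the legacy "minimal" value to "low".
  if (effort === "minimal" && family.startsWith("codex")) effort = "low";
  // "xhigh" exists only for GPT-5.2 and Codex Max; others downgrade.
  if (effort === "xhigh" && family !== "gpt-5.2" && family !== "codex-max") {
    effort = "high";
  }
  // Codex Mini never goes below "medium".
  if (family === "codex-mini" && effort === "low") effort = "medium";
  return effort;
}
```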

**Example:**
```json
6 changes: 4 additions & 2 deletions lib/request/helpers/model-map.ts
@@ -30,9 +30,10 @@ export const MODEL_MAP: Record<string, string> = {
"gpt-5.1-codex-max-xhigh": "gpt-5.1-codex-max",

// ============================================================================
// GPT-5.2 Models (same reasoning support as Codex Max: low/medium/high/xhigh)
// GPT-5.2 Models (supports none/low/medium/high/xhigh per OpenAI API docs)
// ============================================================================
"gpt-5.2": "gpt-5.2",
"gpt-5.2-none": "gpt-5.2",
"gpt-5.2-low": "gpt-5.2",
"gpt-5.2-medium": "gpt-5.2",
"gpt-5.2-high": "gpt-5.2",
@@ -46,9 +47,10 @@ export const MODEL_MAP: Record<string, string> = {
"gpt-5.1-codex-mini-high": "gpt-5.1-codex-mini",

// ============================================================================
// GPT-5.1 General Purpose Models
// GPT-5.1 General Purpose Models (supports none/low/medium/high per OpenAI API docs)
// ============================================================================
"gpt-5.1": "gpt-5.1",
"gpt-5.1-none": "gpt-5.1",
"gpt-5.1-low": "gpt-5.1",
"gpt-5.1-medium": "gpt-5.1",
"gpt-5.1-high": "gpt-5.1",
22 changes: 22 additions & 0 deletions lib/request/request-transformer.ts
@@ -146,10 +146,26 @@ export function getReasoningConfig(
(normalizedName.includes("nano") ||
normalizedName.includes("mini"));

// GPT-5.1 general purpose (not codex variants) - supports "none" per OpenAI API docs
const isGpt51General =
(normalizedName.includes("gpt-5.1") || normalizedName.includes("gpt 5.1")) &&
!isCodex &&
!isCodexMax &&
!isCodexMini;

// GPT 5.2 and Codex Max support xhigh reasoning
const supportsXhigh = isGpt52 || isCodexMax;

// GPT 5.1 and GPT 5.2 support "none" reasoning per:
// - OpenAI API docs: "gpt-5.1 defaults to none, supports: none, low, medium, high"
// - Codex CLI: ReasoningEffort enum includes None variant (codex-rs/protocol/src/openai_models.rs)
// - Codex CLI: docs/config.md lists "none" as valid for model_reasoning_effort
// - gpt-5.2 (being newer) also supports: none, low, medium, high, xhigh
const supportsNone = isGpt52 || isGpt51General;

// Default based on model type (Codex CLI defaults)
// Note: OpenAI docs say gpt-5.1 defaults to "none", but we default to "medium"
// for better coding assistance unless user explicitly requests "none"
const defaultEffort: ReasoningConfig["effort"] = isCodexMini
? "medium"
: supportsXhigh
@@ -178,6 +194,12 @@
effort = "high";
}

// For models that don't support "none", upgrade to "low"
// (Codex models don't support "none" - only GPT-5.1 and GPT-5.2 general purpose do)
if (!supportsNone && effort === "none") {
effort = "low";
}

// Normalize "minimal" to "low" for Codex families
// Codex CLI presets are low/medium/high (or xhigh for Codex Max)
if (isCodex && effort === "minimal") {
17 changes: 9 additions & 8 deletions package-lock.json


2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "opencode-openai-codex-auth",
"version": "4.1.0",
"version": "4.1.1",
"description": "OpenAI ChatGPT (Codex backend) OAuth auth plugin for opencode - use your ChatGPT Plus/Pro subscription instead of API credits",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
56 changes: 56 additions & 0 deletions test/request-transformer.test.ts
@@ -825,6 +825,62 @@ describe('Request Transformer Module', () => {
expect(result.reasoning?.effort).toBe('high');
});

it('should preserve none for GPT-5.2', async () => {
const body: RequestBody = {
model: 'gpt-5.2-none',
input: [],
};
const userConfig: UserConfig = {
global: { reasoningEffort: 'none' },
models: {},
};
const result = await transformRequestBody(body, codexInstructions, userConfig);
expect(result.model).toBe('gpt-5.2');
expect(result.reasoning?.effort).toBe('none');
});

it('should preserve none for GPT-5.1 general purpose', async () => {
const body: RequestBody = {
model: 'gpt-5.1-none',
input: [],
};
const userConfig: UserConfig = {
global: { reasoningEffort: 'none' },
models: {},
};
const result = await transformRequestBody(body, codexInstructions, userConfig);
expect(result.model).toBe('gpt-5.1');
expect(result.reasoning?.effort).toBe('none');
});

it('should upgrade none to low for GPT-5.1-codex (codex does not support none)', async () => {
const body: RequestBody = {
model: 'gpt-5.1-codex',
input: [],
};
const userConfig: UserConfig = {
global: { reasoningEffort: 'none' },
models: {},
};
const result = await transformRequestBody(body, codexInstructions, userConfig);
expect(result.model).toBe('gpt-5.1-codex');
expect(result.reasoning?.effort).toBe('low');
});

it('should upgrade none to low for GPT-5.1-codex-max (codex max does not support none)', async () => {
const body: RequestBody = {
model: 'gpt-5.1-codex-max',
input: [],
};
const userConfig: UserConfig = {
global: { reasoningEffort: 'none' },
models: {},
};
const result = await transformRequestBody(body, codexInstructions, userConfig);
expect(result.model).toBe('gpt-5.1-codex-max');
expect(result.reasoning?.effort).toBe('low');
});

it('should preserve minimal for non-codex models', async () => {
const body: RequestBody = {
model: 'gpt-5',