
Commit 909adb7

feat: v4.1.1 - Combined PRs #62, #63, #64
2 parents: 78d10cd + 81e9236

10 files changed: +348 -79 lines changed


CHANGELOG.md

Lines changed: 31 additions & 0 deletions
@@ -2,6 +2,37 @@
All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [4.1.1] - 2025-12-17

**Minor release**: "none" reasoning effort support, orphaned function_call_output fix, and HTML version update.

### Added
- **"none" reasoning effort support**: GPT-5.1 and GPT-5.2 support `reasoning_effort: "none"`, which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed (see the example after this list).
  - `gpt-5.2-none` - GPT-5.2 with reasoning disabled
  - `gpt-5.1-none` - GPT-5.1 with reasoning disabled
- **4 new unit tests** for "none" reasoning behavior (now 197 total unit tests).
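For illustration, a minimal opencode.json sketch that selects one of the new presets as the default model. This assumes the `gpt-5.2-none` variant id from `config/full-opencode.json` is exposed under the `openai` provider, as in the README examples:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth@4.1.1"],
  "model": "openai/gpt-5.2-none"
}
```

The `gpt-5.1-none` preset works the same way for GPT-5.1.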

### Fixed

- **Orphaned function_call_output 400 errors**: Fixed API errors when conversation history contains `item_reference` items pointing to stored function calls. Previously, orphaned `function_call_output` items were only filtered when `!body.tools`; the plugin now always handles orphans regardless of tools presence and converts them to assistant messages, preserving context while avoiding API errors (see the sketch after this list).
- **OAuth HTML version display**: Updated the version shown in oauth-success.html from 1.0.4 to 4.1.0.
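To make the orphan case concrete, here is a hedged sketch of the kind of Responses API input that previously produced the 400 error: the `function_call_output` refers to a stored function call that is not present among the submitted items (the ids and output below are illustrative only):

```json
{
  "input": [
    { "type": "item_reference", "id": "rs_example123" },
    { "type": "function_call_output", "call_id": "call_example456", "output": "{\"result\": \"ok\"}" }
  ]
}
```

As of this release the plugin rewrites such an orphaned `function_call_output` into an assistant message, so the tool result stays in the conversation instead of being rejected by the API; the exact wording of the rewritten message is an implementation detail not shown here.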

### Technical Details

- `getReasoningConfig()` now detects GPT-5.1 general purpose models (not Codex variants) and allows "none" to pass through (see the configuration sketch after this list).
- GPT-5.2 inherits "none" support as it is newer than GPT-5.1.
- Codex variants (gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini) do NOT support "none":
  - Codex and Codex Max: "none" auto-converts to "low"
  - Codex Mini: "none" auto-converts to "medium" (as before)
- Documentation updated with a complete reasoning effort support matrix per model family.
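A hedged configuration sketch of that behavior, using the provider-level `options` block shown in the README; the "none"-to-"low"/"medium" conversion happens inside the plugin, not in this file:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth@4.1.1"],
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "none"
      }
    }
  }
}
```

With this global setting, GPT-5.1 and GPT-5.2 requests go out with `reasoning_effort: "none"`, while gpt-5.1-codex and gpt-5.1-codex-max are auto-converted to `low` and gpt-5.1-codex-mini to `medium`, as listed above.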

### References

- **OpenAI API docs** (`platform.openai.com/docs/api-reference/chat/create`): "gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high."
- **Codex CLI** (`codex-rs/protocol/src/openai_models.rs`): `ReasoningEffort` enum includes `None` variant with `#[serde(rename_all = "lowercase")]` serialization to `"none"`.
- **Codex CLI** (`codex-rs/core/src/client.rs`): Request builder passes `ReasoningEffort::None` through to API without validation/rejection.
- **Codex CLI** (`docs/config.md`): Documents `model_reasoning_effort = "none"` as valid config option.

### Notes

- This plugin defaults to "medium" for better coding assistance; users must explicitly set "none" if desired.

## [4.1.0] - 2025-12-11

**Feature release**: GPT 5.2 model support and image input capabilities.

README.md

Lines changed: 17 additions & 16 deletions
@@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
## Features

- **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription
- - **16 pre-configured model variants** - GPT 5.2, GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for all reasoning levels
+ - **18 pre-configured model variants** - GPT 5.2, GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for all reasoning levels
- **GPT 5.2 support** - Latest model with `low/medium/high/xhigh` reasoning levels
- **Full image input support** - All models configured with multimodal capabilities for reading screenshots, diagrams, and images
- ⚠️ **GPT 5.1+ only** - Older GPT 5.0 models are deprecated and may not work reliably
@@ -62,7 +62,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
#### Recommended: Pin the Version

```json
- "plugin": ["opencode-openai-codex-auth@4.1.0"]
+ "plugin": ["opencode-openai-codex-auth@4.1.1"]
```

**Why pin versions?** OpenCode uses Bun's lockfile which pins resolved versions. If you use `"opencode-openai-codex-auth"` without a version, it resolves to "latest" once and **never updates** even when new versions are published.
@@ -76,7 +76,7 @@ Simply change the version in your config and restart OpenCode:
"plugin": ["opencode-openai-codex-auth@3.3.0"]

// To:
- "plugin": ["opencode-openai-codex-auth@4.1.0"]
+ "plugin": ["opencode-openai-codex-auth@4.1.1"]
```

OpenCode will detect the version mismatch and install the new version automatically.
@@ -107,12 +107,12 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
1. **Copy the full configuration** from [`config/full-opencode.json`](./config/full-opencode.json) to your opencode config file.

- The config includes 16 models with image input support. Here's a condensed example showing the structure:
+ The config includes 18 models with image input support. Here's a condensed example showing the structure:

```json
{
  "$schema": "https://opencode.ai/config.json",
- "plugin": ["opencode-openai-codex-auth@4.1.0"],
+ "plugin": ["opencode-openai-codex-auth@4.1.1"],
  "provider": {
    "openai": {
      "options": {
@@ -159,12 +159,12 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
**Global config**: `~/.config/opencode/opencode.json`
**Project config**: `<project>/.opencode.json`

- This gives you 16 model variants with different reasoning levels:
- - **gpt-5.2** (low/medium/high/xhigh) - Latest GPT 5.2 model with full reasoning support
+ This gives you 18 model variants with different reasoning levels:
+ - **gpt-5.2** (none/low/medium/high/xhigh) - Latest GPT 5.2 model with full reasoning support
- **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets
- **gpt-5.1-codex** (low/medium/high) - Codex model presets
- **gpt-5.1-codex-mini** (medium/high) - Codex mini tier presets
- - **gpt-5.1** (low/medium/high) - General-purpose reasoning presets
+ - **gpt-5.1** (none/low/medium/high) - General-purpose reasoning presets

All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5.1 High (OAuth)", etc.
@@ -305,7 +305,7 @@ These defaults match the official Codex CLI behavior and can be customized (see
### ⚠️ REQUIRED: Use Pre-Configured File

**YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration:
- - 16 pre-configured model variants (GPT 5.2, GPT 5.1, Codex, Codex Max, Codex Mini)
+ - 18 pre-configured model variants (GPT 5.2, GPT 5.1, Codex, Codex Max, Codex Mini)
- Image input support enabled for all models
- Optimal configuration for each reasoning level
- All variants visible in the opencode model selector
@@ -323,18 +323,19 @@ If you want to customize settings yourself, you can configure options at provide
⚠️ **Important**: Families have different supported values.

- | Setting | GPT-5.2 Values | GPT-5 / GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default |
- |---------|---------------|----------------------|----------------------|---------------------------|----------------|
- | `reasoningEffort` | `low`, `medium`, `high`, `xhigh` | `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `none`, `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` for Codex Max/5.2 |
+ | Setting | GPT-5.2 Values | GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default |
+ |---------|---------------|----------------|----------------------|---------------------------|----------------|
+ | `reasoningEffort` | `none`, `low`, `medium`, `high`, `xhigh` | `none`, `low`, `medium`, `high` | `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` for Codex Max/5.2 |
| `reasoningSummary` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed`, `off`, `on` | `auto` |
| `textVerbosity` | `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` or `high` | `medium` or `high` | `medium` |
| `include` | Array of strings | Array of strings | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |

> **Notes**:
- > - GPT 5.2 supports `xhigh` reasoning like Codex Max.
+ > - GPT 5.2 and GPT 5.1 (general purpose) support `none` reasoning per OpenAI API docs.
+ > - `none` is NOT supported for Codex variants - auto-converts to `low` for Codex/Codex Max, or `medium` for Codex Mini.
+ > - GPT 5.2 and Codex Max support `xhigh` reasoning.
> - `minimal` effort is auto-normalized to `low` for Codex models.
> - Codex Mini clamps to `medium`/`high`; `xhigh` downgrades to `high`.
- > - Codex Max supports `none`/`xhigh` plus extended reasoning options while keeping the same 272k context / 128k output limits.
> - All models have `modalities.input: ["text", "image"]` enabled for multimodal support.
#### Global Configuration Example
@@ -344,7 +345,7 @@ Apply settings to all models:
```json
{
  "$schema": "https://opencode.ai/config.json",
- "plugin": ["opencode-openai-codex-auth@4.1.0"],
+ "plugin": ["opencode-openai-codex-auth@4.1.1"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
@@ -364,7 +365,7 @@ Create your own named variants in the model selector:
```json
{
  "$schema": "https://opencode.ai/config.json",
- "plugin": ["opencode-openai-codex-auth@4.1.0"],
+ "plugin": ["opencode-openai-codex-auth@4.1.1"],
  "provider": {
    "openai": {
      "models": {
