
Commit 1ae0377

🤖 feat: persist per-workspace model + thinking (#1203)
Persist per-workspace model and thinking/reasoning level on the backend so that opening the same workspace from another browser/device restores the same AI configuration.

### Changes

- **New shared schema** `WorkspaceAISettings` (`model`, `thinkingLevel`) added to workspace config and metadata schemas.
- **New ORPC endpoint** `workspace.updateAISettings` to explicitly persist settings changes.
- **Backend safety net**: `sendMessage` and `resumeStream` also persist "last used" settings for older/CLI clients.
- **Frontend thinking is now workspace-scoped** (`thinkingLevel:{workspaceId}`) with migration from legacy per-model keys.
- **LocalStorage seeding from backend metadata** ensures cross-device convergence.
- **UI changes back-persist** model and thinking to the backend immediately.
- **Creation flow** copies project-scoped preferences and best-effort persists them to the backend.

### Testing

- Updated `ThinkingContext.test.tsx` for workspace-scoped thinking + migration.
- Added `WorkspaceContext.test.tsx` test for backend metadata seeding localStorage.
- Added IPC test `tests/ipc/workspaceAISettings.test.ts` verifying persistence + list/getInfo.

---

<details>
<summary>📋 Implementation Plan</summary>

# Persist per-workspace model + reasoning (thinking) on the backend

## Goal

Make **model selection** and **thinking/reasoning level**:

- **Workspace-scoped** (not per-model, not global)
- **Persisted server-side** so opening the same workspace from another browser/device restores the same configuration

Non-goals (for this change): persisting provider options (e.g. truncation), global default model, or draft input.

---

## Recommended approach (net +250–400 LoC product)

### 1) Define a single workspace AI settings shape (shared types)

**Why:** avoid ad-hoc keys and keep the IPC/API boundary strongly typed.

- Add a reusable schema/type (common):
  - `WorkspaceAISettings`:
    - `model: string` (canonical `provider:model`, *not* `mux-gateway:provider/model`)
    - `thinkingLevel: ThinkingLevel` (`off|low|medium|high|xhigh`)
- Extend persisted config shape:
  - `WorkspaceConfigSchema` (in `src/common/orpc/schemas/project.ts`): add optional `aiSettings?: WorkspaceAISettings`
- Extend workspace metadata returned to clients:
  - `WorkspaceMetadataSchema` (in `src/common/orpc/schemas/workspace.ts`): add optional `aiSettings?: WorkspaceAISettings`
  - (Frontend schema automatically inherits via `FrontendWorkspaceMetadataSchema.extend(...)`)

### 2) Backend: persist + serve workspace AI settings

**Persistence location:** `~/.mux/config.json` under each workspace entry (alongside `runtimeConfig`, `mcp`, etc.)

#### 2.1 Add an API to update settings explicitly

- Add a new ORPC endpoint:
  - `workspace.updateAISettings` (name bikesheddable)
  - Input: `{ workspaceId: string, aiSettings: WorkspaceAISettings }`
  - Output: `Result<void, string>`
- Node implementation (`WorkspaceService`):
  - Validate the workspace exists via `config.findWorkspace(workspaceId)`.
  - `config.editConfig(...)` to locate the workspace entry and set `workspaceEntry.aiSettings = normalizedSettings`.
  - **Normalize defensively**:
    - `model = normalizeGatewayModel(model)` (from `src/common/utils/ai/models.ts`)
    - `thinkingLevel = enforceThinkingPolicy(model, thinkingLevel)` (single source of truth)
  - After save, re-fetch via `config.getAllWorkspaceMetadata()` and emit an `onMetadata` update *only if the value changed* (avoid spam).
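To make this concrete, here is a minimal sketch of the shared schema from §1 and the explicit update handler from §2.1. It is illustrative only: the `declare`d helpers (`config`, `normalizeGatewayModel`, `enforceThinkingPolicy`) carry the names used in this plan, but their signatures, the config layout, and the result shape are assumptions rather than the actual implementation.

```ts
import { z } from "zod";

// Shared shape (per §1): a minimal sketch. The real schema lives next to the
// existing workspace/project schemas and reuses the project's ThinkingLevel type.
export const WorkspaceAISettingsSchema = z.object({
  // Canonical "provider:model", never the "mux-gateway:provider/model" alias.
  model: z.string(),
  thinkingLevel: z.enum(["off", "low", "medium", "high", "xhigh"]),
});
export type WorkspaceAISettings = z.infer<typeof WorkspaceAISettingsSchema>;

// Helpers named in this plan; the signatures below are placeholders, not the real ones.
declare function normalizeGatewayModel(model: string): string;
declare function enforceThinkingPolicy(
  model: string,
  level: WorkspaceAISettings["thinkingLevel"]
): WorkspaceAISettings["thinkingLevel"];
declare const config: {
  findWorkspace(workspaceId: string): { id: string } | undefined;
  editConfig(
    mutate: (cfg: { workspaces: Record<string, { aiSettings?: WorkspaceAISettings }> }) => void
  ): Promise<void>;
};

// Handler sketch for workspace.updateAISettings (§2.1): validate, normalize, persist.
export async function updateAISettings(input: {
  workspaceId: string;
  aiSettings: WorkspaceAISettings;
}): Promise<{ success: true } | { success: false; error: string }> {
  if (!config.findWorkspace(input.workspaceId)) {
    return { success: false, error: `Unknown workspace: ${input.workspaceId}` };
  }

  // Normalize defensively so the stored value is always canonical and policy-compliant.
  const model = normalizeGatewayModel(input.aiSettings.model);
  const thinkingLevel = enforceThinkingPolicy(model, input.aiSettings.thinkingLevel);

  await config.editConfig((cfg) => {
    cfg.workspaces[input.workspaceId].aiSettings = { model, thinkingLevel };
  });

  return { success: true };
}
```

Normalizing on write keeps the stored value canonical regardless of which client called the endpoint, which is what lets `sendMessage`/`resumeStream` reuse the same path as a safety net (§2.2 below).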
#### 2.2 Also persist “last used” settings on message send (safety net)

Even if a client forgets to call `updateAISettings` (CLI/extension/old client), the backend should learn the last used values.

- In `WorkspaceService.sendMessage(...)` and `resumeStream(...)`:
  - Extract `options.model` + `options.thinkingLevel` and write to `workspaceEntry.aiSettings` (same normalization as above).
  - Only write when different.

> This makes “chatted with it earlier” reliably populate the backend state.

### 3) Frontend: treat backend as source of truth, but keep localStorage as a fast cache

#### 3.1 Change thinking persistence to be **workspace-scoped**

- Add a new storage helper:
  - `getThinkingLevelKey(scopeId: string): string` → `thinkingLevel:${scopeId}`
  - Update `PERSISTENT_WORKSPACE_KEY_FUNCTIONS` to include it (so fork copies it, delete removes it).
- Update `ThinkingProvider` to use **scope-based keying**:
  - Scope priority similar to `ModeProvider`:
    - workspace: `thinkingLevel:{workspaceId}`
    - creation: `thinkingLevel:__project__/{projectPath}`
  - Remove the current “key depends on selected model” behavior.
- Update non-React send option reader:
  - `getSendOptionsFromStorage(...)` should read thinking via `getThinkingLevelKey(scopeId)`.
- Update UI copy:
  - Thinking slider tooltip text from “Saved per model” → “Saved per workspace”.

#### 3.2 Seed localStorage from backend workspace metadata

**Where:** `WorkspaceContext.loadWorkspaceMetadata()` + the `workspace.onMetadata` subscription handler.

- When metadata arrives for a workspace:
  - If `metadata.aiSettings?.model` exists → write it to `localStorage` key `model:{workspaceId}`.
  - If `metadata.aiSettings?.thinkingLevel` exists → write it to `thinkingLevel:{workspaceId}`.
  - Only write when the value differs (avoid unnecessary re-renders from `updatePersistedState`).

This ensures:

- a new device with empty localStorage adopts backend settings
- an existing device with stale localStorage is corrected to the backend values
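Below is a hedged sketch of this seeding step, folded together with the legacy per-model migration that §4 later suggests living in the same helper. The key formats match this PR (`model:{workspaceId}`, `thinkingLevel:{workspaceId}`, legacy `thinkingLevel:model:{model}`), but the helper name and the raw localStorage plumbing are illustrative; the real code goes through `readPersistedState`/`updatePersistedState`.

```ts
type ThinkingLevel = "off" | "low" | "medium" | "high" | "xhigh";

// Storage-key builders, matching the formats used elsewhere in this PR.
const getModelKey = (scopeId: string) => `model:${scopeId}`;
const getThinkingLevelKey = (scopeId: string) => `thinkingLevel:${scopeId}`;
const getThinkingLevelByModelKey = (model: string) => `thinkingLevel:model:${model}`;

function readJSON<T>(key: string): T | undefined {
  const raw = window.localStorage.getItem(key);
  return raw === null ? undefined : (JSON.parse(raw) as T);
}

function writeJSON(key: string, value: unknown): void {
  const next = JSON.stringify(value);
  // Only write when the value actually differs, to avoid redundant updates/re-renders.
  if (window.localStorage.getItem(key) !== next) {
    window.localStorage.setItem(key, next);
  }
}

// Illustrative helper: run whenever workspace metadata arrives
// (loadWorkspaceMetadata and the onMetadata subscription).
export function seedWorkspaceAISettings(
  workspaceId: string,
  aiSettings?: { model?: string; thinkingLevel?: ThinkingLevel }
): void {
  // 1) Backend wins: copy persisted settings into the local cache.
  if (aiSettings?.model) writeJSON(getModelKey(workspaceId), aiSettings.model);
  if (aiSettings?.thinkingLevel) writeJSON(getThinkingLevelKey(workspaceId), aiSettings.thinkingLevel);

  // 2) Legacy migration: if the workspace-scoped key is still missing, adopt the
  //    old per-model value for the currently selected model.
  if (readJSON<ThinkingLevel>(getThinkingLevelKey(workspaceId)) === undefined) {
    const model = readJSON<string>(getModelKey(workspaceId));
    const legacy = model ? readJSON<ThinkingLevel>(getThinkingLevelByModelKey(model)) : undefined;
    if (legacy !== undefined) writeJSON(getThinkingLevelKey(workspaceId), legacy);
  }
}
```

Because the backend value is written first and the migration only fills a missing key, the migration runs at most once per workspace and never overrides a value the backend already provided.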
#### 3.3 Persist changes back to the backend when the user changes the UI

- Model changes:
  - In `ChatInput`’s `setPreferredModel(...)` (workspace variant only):
    - Update localStorage as today
    - Call `api.workspace.updateAISettings({ workspaceId, aiSettings: { model, thinkingLevel: currentThinking } })`
- Thinking changes:
  - Wrap `setThinkingLevel` in `ThinkingProvider` (workspace variant only):
    - Update localStorage
    - Call `api.workspace.updateAISettings(...)` with `{ model: currentModel, thinkingLevel: newLevel }`

Notes:

- Use `enforceThinkingPolicy` client-side too (keep UI/BE consistent), but the backend remains the final authority.
- If `api` is unavailable, keep the localStorage update (offline-friendly) and rely on sendMessage persistence later.

#### 3.4 Creation flow

- Keep project-scoped model + thinking in localStorage while creating.
- When a workspace is created:
  - Continue copying project-scoped values into the new workspace-scoped keys (update `syncCreationPreferences` to include thinking).
  - Optionally call `workspace.updateAISettings(...)` right after creation (best effort) so the workspace is immediately portable even before the first message sends.

### 4) Migration + compatibility

- **Config.json:** new `aiSettings` fields are optional → old configs load fine; old mux versions should ignore unknown fields.
- **localStorage:** migrate legacy per-model thinking into the new per-workspace key:
  - On workspace open, if `thinkingLevel:{workspaceId}` is missing:
    - read `model:{workspaceId}` (or the default)
    - read the old `thinkingLevel:model:{model}` value
    - set `thinkingLevel:{workspaceId}` to that value

Put this migration in a single place (e.g., the same “seed from metadata” helper) so it runs once.

---

## Validation / tests

- Update unit tests that assert per-model thinking:
  - `src/browser/contexts/ThinkingContext.test.tsx` should instead verify thinking is stable across model changes (except clamping).
- Add a backend test that `workspace.updateAISettings`:
  - persists into config
  - is returned by `workspace.list`/`getInfo`
- Add a lightweight frontend test that WorkspaceContext seeding writes the expected localStorage keys when metadata contains `aiSettings`.

---

<details>
<summary>Alternatives considered</summary>

### A) Persist only on sendMessage (no new endpoint) (net +120–200 LoC)

- Backend writes `aiSettings` from `sendMessage`/`resumeStream` options.
- Frontend only seeds localStorage from metadata when local keys are missing.

Pros: less surface area.
Cons: doesn’t sync if the user changes model/thinking but hasn’t sent yet; stale localStorage on an existing device may never converge.

### B) Remove localStorage for model/thinking entirely (net +500–900 LoC)

- Replace `usePersistedState(getModelKey(...))` usage with a workspace settings store sourced from the backend.

Pros: true single source of truth.
Cons: much bigger refactor; riskier.

</details>
</details>

---

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `high`_

Signed-off-by: Thomas Kosiewski <tk@coder.com>
1 parent 00940b3 commit 1ae0377


20 files changed: +594 / -79 lines changed


src/browser/App.tsx

Lines changed: 39 additions & 8 deletions
@@ -34,8 +34,13 @@ import { buildCoreSources, type BuildSourcesParams } from "./utils/commands/sour
 import type { ThinkingLevel } from "@/common/types/thinking";
 import { CUSTOM_EVENTS } from "@/common/constants/events";
 import { isWorkspaceForkSwitchEvent } from "./utils/workspaceEvents";
-import { getThinkingLevelByModelKey, getModelKey } from "@/common/constants/storage";
+import {
+  getThinkingLevelByModelKey,
+  getThinkingLevelKey,
+  getModelKey,
+} from "@/common/constants/storage";
 import { migrateGatewayModel } from "@/browser/hooks/useGatewayModels";
+import { enforceThinkingPolicy } from "@/browser/utils/thinking/policy";
 import { getDefaultModel } from "@/browser/hooks/useModelsFromSettings";
 import type { BranchListResult } from "@/common/orpc/types";
 import { useTelemetry } from "./hooks/useTelemetry";
@@ -52,7 +57,7 @@ import { TooltipProvider } from "./components/ui/tooltip";
 import { ExperimentsProvider } from "./contexts/ExperimentsContext";
 import { getWorkspaceSidebarKey } from "./utils/workspace";
 
-const THINKING_LEVELS: ThinkingLevel[] = ["off", "low", "medium", "high"];
+const THINKING_LEVELS: ThinkingLevel[] = ["off", "low", "medium", "high", "xhigh"];
 
 function isStorybookIframe(): boolean {
   return typeof window !== "undefined" && window.location.pathname.endsWith("iframe.html");
@@ -293,9 +298,25 @@ function AppInner() {
       if (!workspaceId) {
         return "off";
       }
+
+      const scopedKey = getThinkingLevelKey(workspaceId);
+      const scoped = readPersistedState<ThinkingLevel | undefined>(scopedKey, undefined);
+      if (scoped !== undefined) {
+        return THINKING_LEVELS.includes(scoped) ? scoped : "off";
+      }
+
+      // Migration: fall back to legacy per-model thinking and seed the workspace-scoped key.
       const model = getModelForWorkspace(workspaceId);
-      const level = readPersistedState<ThinkingLevel>(getThinkingLevelByModelKey(model), "off");
-      return THINKING_LEVELS.includes(level) ? level : "off";
+      const legacy = readPersistedState<ThinkingLevel | undefined>(
+        getThinkingLevelByModelKey(model),
+        undefined
+      );
+      if (legacy !== undefined && THINKING_LEVELS.includes(legacy)) {
+        updatePersistedState(scopedKey, legacy);
+        return legacy;
+      }
+
+      return "off";
     },
     [getModelForWorkspace]
   );
@@ -308,22 +329,32 @@
 
       const normalized = THINKING_LEVELS.includes(level) ? level : "off";
      const model = getModelForWorkspace(workspaceId);
-      const key = getThinkingLevelByModelKey(model);
+      const effective = enforceThinkingPolicy(model, normalized);
+      const key = getThinkingLevelKey(workspaceId);
 
      // Use the utility function which handles localStorage and event dispatch
      // ThinkingProvider will pick this up via its listener
-      updatePersistedState(key, normalized);
+      updatePersistedState(key, effective);
+
+      // Persist to backend so the palette change follows the workspace across devices.
+      if (api) {
+        api.workspace
+          .updateAISettings({ workspaceId, aiSettings: { model, thinkingLevel: effective } })
+          .catch(() => {
+            // Best-effort only.
+          });
+      }
 
      // Dispatch toast notification event for UI feedback
      if (typeof window !== "undefined") {
        window.dispatchEvent(
          new CustomEvent(CUSTOM_EVENTS.THINKING_LEVEL_TOAST, {
-            detail: { workspaceId, level: normalized },
+            detail: { workspaceId, level: effective },
          })
        );
      }
    },
-    [getModelForWorkspace]
+    [api, getModelForWorkspace]
  );
 
  const registerParamsRef = useRef<BuildSourcesParams | null>(null);

src/browser/components/ChatInput/index.tsx

Lines changed: 24 additions & 3 deletions
@@ -20,6 +20,9 @@ import { useMode } from "@/browser/contexts/ModeContext";
 import { ThinkingSliderComponent } from "../ThinkingSlider";
 import { ModelSettings } from "../ModelSettings";
 import { useAPI } from "@/browser/contexts/API";
+import { useThinkingLevel } from "@/browser/hooks/useThinkingLevel";
+import { migrateGatewayModel } from "@/browser/hooks/useGatewayModels";
+import { enforceThinkingPolicy } from "@/browser/utils/thinking/policy";
 import { useSendMessageOptions } from "@/browser/hooks/useSendMessageOptions";
 import {
   getModelKey,
@@ -133,6 +136,8 @@ export type { ChatInputProps, ChatInputAPI };
 const ChatInputInner: React.FC<ChatInputProps> = (props) => {
   const { api } = useAPI();
   const { variant } = props;
+  const [thinkingLevel] = useThinkingLevel();
+  const workspaceId = variant === "workspace" ? props.workspaceId : null;
 
   // Extract workspace-specific props with defaults
   const disabled = props.disabled ?? false;
@@ -333,10 +338,26 @@
 
   const setPreferredModel = useCallback(
     (model: string) => {
-      ensureModelInSettings(model); // Ensure model exists in Settings
-      updatePersistedState(storageKeys.modelKey, model); // Update workspace or project-specific
+      const canonicalModel = migrateGatewayModel(model);
+      ensureModelInSettings(canonicalModel); // Ensure model exists in Settings
+      updatePersistedState(storageKeys.modelKey, canonicalModel); // Update workspace or project-specific
+
+      // Workspace variant: persist to backend for cross-device consistency.
+      if (!api || variant !== "workspace" || !workspaceId) {
+        return;
+      }
+
+      const effectiveThinkingLevel = enforceThinkingPolicy(canonicalModel, thinkingLevel);
+      api.workspace
+        .updateAISettings({
+          workspaceId,
+          aiSettings: { model: canonicalModel, thinkingLevel: effectiveThinkingLevel },
+        })
+        .catch(() => {
+          // Best-effort only. If offline or backend is old, sendMessage will persist.
+        });
     },
-    [storageKeys.modelKey, ensureModelInSettings]
+    [api, storageKeys.modelKey, ensureModelInSettings, thinkingLevel, variant, workspaceId]
   );
   const deferredModel = useDeferredValue(preferredModel);
   const deferredInput = useDeferredValue(input);

src/browser/components/ChatInput/useCreationWorkspace.test.tsx

Lines changed: 34 additions & 4 deletions
@@ -7,6 +7,7 @@ import {
   getModeKey,
   getPendingScopeId,
   getProjectScopeId,
+  getThinkingLevelKey,
 } from "@/common/constants/storage";
 import type { SendMessageError as _SendMessageError } from "@/common/types/errors";
 import type { WorkspaceChatMessage } from "@/common/orpc/types";
@@ -83,11 +84,18 @@ type ListBranchesArgs = Parameters<APIClient["projects"]["listBranches"]>[0];
 type WorkspaceSendMessageArgs = Parameters<APIClient["workspace"]["sendMessage"]>[0];
 type WorkspaceSendMessageResult = Awaited<ReturnType<APIClient["workspace"]["sendMessage"]>>;
 type WorkspaceCreateArgs = Parameters<APIClient["workspace"]["create"]>[0];
+type WorkspaceUpdateAISettingsArgs = Parameters<APIClient["workspace"]["updateAISettings"]>[0];
+type WorkspaceUpdateAISettingsResult = Awaited<
+  ReturnType<APIClient["workspace"]["updateAISettings"]>
+>;
 type WorkspaceCreateResult = Awaited<ReturnType<APIClient["workspace"]["create"]>>;
 type NameGenerationArgs = Parameters<APIClient["nameGeneration"]["generate"]>[0];
 type NameGenerationResult = Awaited<ReturnType<APIClient["nameGeneration"]["generate"]>>;
 type MockOrpcProjectsClient = Pick<APIClient["projects"], "listBranches">;
-type MockOrpcWorkspaceClient = Pick<APIClient["workspace"], "sendMessage" | "create">;
+type MockOrpcWorkspaceClient = Pick<
+  APIClient["workspace"],
+  "sendMessage" | "create" | "updateAISettings"
+>;
 type MockOrpcNameGenerationClient = Pick<APIClient["nameGeneration"], "generate">;
 type WindowWithApi = Window & typeof globalThis;
 type WindowApi = WindowWithApi["api"];
@@ -114,6 +122,9 @@ interface SetupWindowOptions {
   sendMessage?: ReturnType<
     typeof mock<(args: WorkspaceSendMessageArgs) => Promise<WorkspaceSendMessageResult>>
   >;
+  updateAISettings?: ReturnType<
+    typeof mock<(args: WorkspaceUpdateAISettingsArgs) => Promise<WorkspaceUpdateAISettingsResult>>
+  >;
   create?: ReturnType<typeof mock<(args: WorkspaceCreateArgs) => Promise<WorkspaceCreateResult>>>;
   nameGeneration?: ReturnType<
     typeof mock<(args: NameGenerationArgs) => Promise<NameGenerationResult>>
@@ -124,6 +135,7 @@ const setupWindow = ({
   listBranches,
   sendMessage,
   create,
+  updateAISettings,
   nameGeneration,
 }: SetupWindowOptions = {}) => {
   const listBranchesMock =
@@ -157,6 +169,15 @@
       } as WorkspaceCreateResult);
     });
 
+  const updateAISettingsMock =
+    updateAISettings ??
+    mock<(args: WorkspaceUpdateAISettingsArgs) => Promise<WorkspaceUpdateAISettingsResult>>(() => {
+      return Promise.resolve({
+        success: true,
+        data: undefined,
+      } as WorkspaceUpdateAISettingsResult);
+    });
+
   const nameGenerationMock =
     nameGeneration ??
     mock<(args: NameGenerationArgs) => Promise<NameGenerationResult>>(() => {
@@ -176,6 +197,7 @@
     workspace: {
       sendMessage: (input: WorkspaceSendMessageArgs) => sendMessageMock(input),
       create: (input: WorkspaceCreateArgs) => createMock(input),
+      updateAISettings: (input: WorkspaceUpdateAISettingsArgs) => updateAISettingsMock(input),
     },
     nameGeneration: {
       generate: (input: NameGenerationArgs) => nameGenerationMock(input),
@@ -213,6 +235,7 @@
     workspace: {
       list: rejectNotImplemented("workspace.list"),
      create: (args: WorkspaceCreateArgs) => createMock(args),
+      updateAISettings: (args: WorkspaceUpdateAISettingsArgs) => updateAISettingsMock(args),
      remove: rejectNotImplemented("workspace.remove"),
      rename: rejectNotImplemented("workspace.rename"),
      fork: rejectNotImplemented("workspace.fork"),
@@ -278,7 +301,11 @@
 
   return {
     projectsApi: { listBranches: listBranchesMock },
-    workspaceApi: { sendMessage: sendMessageMock, create: createMock },
+    workspaceApi: {
+      sendMessage: sendMessageMock,
+      create: createMock,
+      updateAISettings: updateAISettingsMock,
+    },
     nameGenerationApi: { generate: nameGenerationMock },
   };
 };
@@ -466,7 +493,7 @@ describe("useCreationWorkspace", () => {
     const pendingInputKey = getInputKey(pendingScopeId);
     const pendingImagesKey = getInputImagesKey(pendingScopeId);
     expect(updatePersistedStateCalls).toContainEqual([modeKey, "plan"]);
-    // Note: thinking level is no longer synced per-workspace, it's stored per-model globally
+    // Thinking is workspace-scoped, but this test doesn't set a project-scoped thinking preference.
     expect(updatePersistedStateCalls).toContainEqual([pendingInputKey, ""]);
    expect(updatePersistedStateCalls).toContainEqual([pendingImagesKey, undefined]);
  });
@@ -510,7 +537,10 @@
     expect(onWorkspaceCreated.mock.calls.length).toBe(0);
     await waitFor(() => expect(getHook().toast?.message).toBe("backend exploded"));
     await waitFor(() => expect(getHook().isSending).toBe(false));
-    expect(updatePersistedStateCalls).toEqual([]);
+
+    // Side effect: send-options reader may migrate thinking level into the project scope.
+    const thinkingKey = getThinkingLevelKey(getProjectScopeId(TEST_PROJECT_PATH));
+    expect(updatePersistedStateCalls).toEqual([[thinkingKey, "off"]]);
   });
 });

src/browser/components/ChatInput/useCreationWorkspace.ts

Lines changed: 24 additions & 2 deletions
@@ -1,6 +1,7 @@
 import { useState, useEffect, useCallback } from "react";
 import type { FrontendWorkspaceMetadata } from "@/common/types/workspace";
 import type { RuntimeConfig, RuntimeMode } from "@/common/types/runtime";
+import type { ThinkingLevel } from "@/common/types/thinking";
 import type { UIMode } from "@/common/types/mode";
 import { parseRuntimeString } from "@/browser/utils/chatCommands";
 import { useDraftWorkspaceSettings } from "@/browser/hooks/useDraftWorkspaceSettings";
@@ -11,6 +12,7 @@ import {
   getInputImagesKey,
   getModelKey,
   getModeKey,
+  getThinkingLevelKey,
   getPendingScopeId,
   getProjectScopeId,
 } from "@/common/constants/storage";
@@ -45,8 +47,13 @@ function syncCreationPreferences(projectPath: string, workspaceId: string): void
     updatePersistedState(getModeKey(workspaceId), projectMode);
   }
 
-  // Note: thinking level is stored per-model globally, not per-workspace,
-  // so no sync is needed here
+  const projectThinkingLevel = readPersistedState<ThinkingLevel | null>(
+    getThinkingLevelKey(projectScopeId),
+    null
+  );
+  if (projectThinkingLevel !== null) {
+    updatePersistedState(getThinkingLevelKey(workspaceId), projectThinkingLevel);
+  }
 }
 
 interface UseCreationWorkspaceReturn {
@@ -196,6 +203,19 @@ export function useCreationWorkspace({
 
       const { metadata } = createResult;
 
+      // Best-effort: persist the initial AI settings to the backend immediately so this workspace
+      // is portable across devices even before the first stream starts.
+      api.workspace
+        .updateAISettings({
+          workspaceId: metadata.id,
+          aiSettings: {
+            model: settings.model,
+            thinkingLevel: settings.thinkingLevel,
+          },
+        })
+        .catch(() => {
+          // Ignore (offline / older backend). sendMessage will persist as a fallback.
+        });
       // Sync preferences immediately (before switching)
       syncCreationPreferences(projectPath, metadata.id);
       if (projectPath) {
@@ -239,6 +259,8 @@
       projectScopeId,
       onWorkspaceCreated,
       getRuntimeString,
+      settings.model,
+      settings.thinkingLevel,
       settings.trunkBranch,
       waitForGeneration,
     ]

src/browser/components/ThinkingSlider.tsx

Lines changed: 1 addition & 1 deletion
@@ -199,7 +199,7 @@ export const ThinkingSliderComponent: React.FC<ThinkingControlProps> = ({ modelS
         </div>
       </TooltipTrigger>
       <TooltipContent align="center">
-        Thinking: {formatKeybind(KEYBINDS.TOGGLE_THINKING)} to cycle. Saved per model.
+        Thinking: {formatKeybind(KEYBINDS.TOGGLE_THINKING)} to cycle. Saved per workspace.
       </TooltipContent>
     </Tooltip>
   );

src/browser/contexts/ThinkingContext.test.tsx

Lines changed: 17 additions & 12 deletions
@@ -4,7 +4,11 @@ import { act, cleanup, render, waitFor } from "@testing-library/react";
 import React from "react";
 import { ThinkingProvider } from "./ThinkingContext";
 import { useThinkingLevel } from "@/browser/hooks/useThinkingLevel";
-import { getModelKey, getThinkingLevelByModelKey } from "@/common/constants/storage";
+import {
+  getModelKey,
+  getThinkingLevelByModelKey,
+  getThinkingLevelKey,
+} from "@/common/constants/storage";
 import { updatePersistedState } from "@/browser/hooks/usePersistedState";
 
 // Setup basic DOM environment for testing-library
@@ -49,8 +53,7 @@ describe("ThinkingContext", () => {
     const workspaceId = "ws-1";
 
     updatePersistedState(getModelKey(workspaceId), "openai:gpt-5.2");
-    updatePersistedState(getThinkingLevelByModelKey("openai:gpt-5.2"), "high");
-    updatePersistedState(getThinkingLevelByModelKey("anthropic:claude-3.5"), "low");
+    updatePersistedState(getThinkingLevelKey(workspaceId), "high");
 
     let unmounts = 0;
 
@@ -79,21 +82,18 @@
       updatePersistedState(getModelKey(workspaceId), "anthropic:claude-3.5");
     });
 
+    // Thinking is workspace-scoped (not per-model), so switching models should not change it.
     await waitFor(() => {
-      expect(view.getByTestId("child").textContent).toBe("low");
+      expect(view.getByTestId("child").textContent).toBe("high");
     });
 
     expect(unmounts).toBe(0);
   });
-  test("switching models restores the per-model thinking level", async () => {
+  test("migrates legacy per-model thinking to the workspace-scoped key", async () => {
     const workspaceId = "ws-1";
 
-    // Model A
     updatePersistedState(getModelKey(workspaceId), "openai:gpt-5.2");
-    updatePersistedState(getThinkingLevelByModelKey("openai:gpt-5.2"), "high");
-
-    // Model B
-    updatePersistedState(getThinkingLevelByModelKey("anthropic:claude-3.5"), "low");
+    updatePersistedState(getThinkingLevelByModelKey("openai:gpt-5.2"), "low");
 
     const view = render(
       <ThinkingProvider workspaceId={workspaceId}>
@@ -102,10 +102,15 @@
     );
 
     await waitFor(() => {
-      expect(view.getByTestId("thinking").textContent).toBe("high:ws-1");
+      expect(view.getByTestId("thinking").textContent).toBe("low:ws-1");
     });
 
-    // Change model -> should restore that model's stored thinking level
+    // Migration should have populated the new workspace-scoped key.
+    const persisted = window.localStorage.getItem(getThinkingLevelKey(workspaceId));
+    expect(persisted).toBeTruthy();
+    expect(JSON.parse(persisted!)).toBe("low");
+
+    // Switching models should not change the workspace-scoped value.
     act(() => {
       updatePersistedState(getModelKey(workspaceId), "anthropic:claude-3.5");
     });
