Commit 833d4f7

🤖 fix: correct totalTokens to use last entry, not sum
The previous calculation summed all usageHistory entries, but each entry already contains cumulative prompt tokens (the full context at that turn). This caused massive over-counting in multi-turn conversations.

Example:
- Turn 1: 1,000 tokens
- Turn 2: 2,500 tokens (includes turn 1)
- Turn 3: 4,200 tokens (includes turns 1-2)

Before: 1,000 + 2,500 + 4,200 = 7,700 tokens (183% inflated)
After: 4,200 tokens (correct: just use the last entry)

This fix ensures auto-compaction triggers at the actual 70% usage instead of firing far earlier due to double/triple counting.
1 parent 0ef46ab commit 833d4f7
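The over-counting described above can be sketched in TypeScript. This is a minimal model, not the store's actual code: the `UsageEntry` field names mirror those in the diff, but the `entryTotal` helper and the split of each turn's total into input/output tokens are assumptions made for illustration.

```typescript
// Hypothetical shape mirroring the usageHistory entries; the exact split of
// each turn's total across fields is invented for this example.
interface Usage { tokens: number }
interface UsageEntry {
  input: Usage;
  cached: Usage;
  cacheCreate: Usage;
  output: Usage;
  reasoning: Usage;
}

// Assumed helper: sum one entry's token categories.
const entryTotal = (u: UsageEntry): number =>
  u.input.tokens +
  u.cached.tokens +
  u.cacheCreate.tokens +
  u.output.tokens +
  u.reasoning.tokens;

const zero: Usage = { tokens: 0 };

// Each entry already holds the CUMULATIVE context for that turn
// (turn 1: 1,000; turn 2: 2,500; turn 3: 4,200).
const history: UsageEntry[] = [
  { input: { tokens: 900 },  cached: zero, cacheCreate: zero, output: { tokens: 100 }, reasoning: zero },
  { input: { tokens: 2300 }, cached: zero, cacheCreate: zero, output: { tokens: 200 }, reasoning: zero },
  { input: { tokens: 3900 }, cached: zero, cacheCreate: zero, output: { tokens: 300 }, reasoning: zero },
];

// Before: summing cumulative entries over-counts.
const summed = history.reduce((sum, u) => sum + entryTotal(u), 0); // 7,700

// After: the last entry alone is the true running total.
const last = history[history.length - 1];
const totalTokens = last ? entryTotal(last) : 0; // 4,200
```

The ternary guard matters: on an empty history there is no last entry, so the total falls back to 0 instead of reading a property off `undefined`.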

File tree

1 file changed: 11 additions, 11 deletions


src/browser/stores/WorkspaceStore.ts

Lines changed: 11 additions & 11 deletions
@@ -437,17 +437,17 @@ export class WorkspaceStore {
   const model = aggregator.getCurrentModel();
   const usageHistory = cumUsageHistory(messages, model);

-  // Calculate total from usage history (now includes historical)
-  const totalTokens = usageHistory.reduce(
-    (sum, u) =>
-      sum +
-      u.input.tokens +
-      u.cached.tokens +
-      u.cacheCreate.tokens +
-      u.output.tokens +
-      u.reasoning.tokens,
-    0
-  );
+  // Use the last entry's total (each entry is cumulative, not a delta).
+  // Each usageHistory entry contains the FULL prompt tokens for that turn,
+  // so we only need the most recent value, not a sum.
+  const lastEntry = usageHistory[usageHistory.length - 1];
+  const totalTokens = lastEntry
+    ? lastEntry.input.tokens +
+      lastEntry.cached.tokens +
+      lastEntry.cacheCreate.tokens +
+      lastEntry.output.tokens +
+      lastEntry.reasoning.tokens
+    : 0;

   return { usageHistory, totalTokens };
});
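The commit message says the fix restores auto-compaction to firing at the actual 70% threshold. A minimal sketch of how such a check might consume `totalTokens` follows; the `shouldCompact` function, the threshold constant, and the 10,000-token context window are all assumptions for illustration, not the repository's real compaction logic.

```typescript
// Assumed threshold from the commit message (70% of the context window).
const COMPACTION_THRESHOLD = 0.7;

// Hypothetical check: compact once usage reaches the threshold fraction
// of the model's context window.
function shouldCompact(totalTokens: number, contextWindow: number): boolean {
  return totalTokens / contextWindow >= COMPACTION_THRESHOLD;
}

// With a hypothetical 10,000-token window, the inflated pre-fix total
// (7,700 → 77%) would trigger compaction, while the correct post-fix
// total (4,200 → 42%) would not.
const firesBeforeFix = shouldCompact(7700, 10000); // true
const firesAfterFix = shouldCompact(4200, 10000);  // false
```

This illustrates the bug's user-visible effect: with summed cumulative entries, conversations would hit the apparent 70% mark and compact long before the context was actually that full.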
