🤖 fix: enable message queueing during stream-starting phase #1897
Summary
Fixes a UX bug where the chat input was disabled during the "stream-starting" phase (the window between when a user sends a message and when the AI stream actually starts). This prevented users from queueing follow-up messages before streaming began. The fix generalizes the existing "compaction-starting" concept to all streams.
Background
When a user sends a message, there is a brief window (typically under a second, but sometimes longer) between the moment the message is accepted and the moment the AI stream actually starts.
During this window, the frontend was disabling the input and the backend wasn't queueing messages. This was confusing because users couldn't add follow-up thoughts before the AI started responding.
Implementation
Frontend changes:
- WorkspaceStore: Added `isStreamStarting` state, derived from `pendingStreamStartTime !== null && !canInterrupt` (see the sketch below)
- ChatInput: Uses the `isStreamStarting` prop to keep the input enabled during the starting phase (it only blocks when actually streaming)
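A minimal sketch of how the frontend pieces could fit together; only `isStreamStarting`, `pendingStreamStartTime`, and `canInterrupt` are taken from this PR, and the surrounding state shape is illustrative:

```typescript
// Sketch only: how the derived flag and the input-blocking decision relate.
interface WorkspaceState {
  pendingStreamStartTime: number | null; // set when a message is sent, cleared once the stream starts
  canInterrupt: boolean;                 // true once there is an active stream that can be interrupted
  isStreaming: boolean;                  // true while the model is actively producing output
}

// Derived flag: we are in the "stream starting" window.
function isStreamStarting(state: WorkspaceState): boolean {
  return state.pendingStreamStartTime !== null && !state.canInterrupt;
}

// ChatInput-style decision: block the input only while actually streaming,
// never during the starting phase.
function isChatInputBlocked(state: WorkspaceState): boolean {
  return state.isStreaming && !isStreamStarting(state);
}
```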
Backend changes:
- AgentSession: Renamed `compactionStarting` to `streamStarting`; it now applies to all `sendMessage()` and `resumeStream()` calls
- WorkspaceService: The queue condition now checks `session.isStreamStarting()` in addition to `aiService.isStreaming()` (see the sketch below)
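A hedged sketch of the new queue condition in the service layer; `isStreamStarting()` and `isStreaming()` come from the PR description, while the surrounding interfaces and queue are illustrative:

```typescript
// Sketch only: messages now queue during the starting phase as well as during
// active streaming; previously only isStreaming() was checked.
interface AgentSessionLike {
  isStreamStarting(): boolean; // true between sendMessage()/resumeStream() and the first stream event
}

interface AiServiceLike {
  isStreaming(): boolean; // true while a stream is producing output
}

class QueueingServiceSketch {
  private queue: string[] = [];

  constructor(
    private session: AgentSessionLike,
    private aiService: AiServiceLike,
  ) {}

  sendMessage(text: string): void {
    if (this.session.isStreamStarting() || this.aiService.isStreaming()) {
      this.queue.push(text); // delivered once the current turn finishes
      return;
    }
    this.startNewTurn(text);
  }

  private startNewTurn(text: string): void {
    // Illustrative placeholder for actually starting a stream.
    console.log("starting stream for:", text);
  }
}
```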
Test infrastructure:
- The mock stream can be held in its starting phase (via a `[mock:wait-start]` marker) for deterministic testing (see the sketch below)
- The last prompt sent to the mock model can be inspected (`debugGetLastMockPrompt`) to verify both messages reach the model
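A rough sketch of what the mock-model side could look like; only the `[mock:wait-start]` marker and `debugGetLastMockPrompt` are taken from the PR, the rest is assumed:

```typescript
// Sketch only: a mock model that honours [mock:wait-start] and records the
// last prompt it received, so tests can pause the stream-starting phase.
let lastMockPrompt: string | null = null;
let releaseWaitingStream: (() => void) | null = null;

export function debugGetLastMockPrompt(): string | null {
  return lastMockPrompt;
}

export function releaseMockStream(): void {
  releaseWaitingStream?.();
  releaseWaitingStream = null;
}

export async function startMockStream(prompt: string): Promise<string> {
  lastMockPrompt = prompt;
  if (prompt.includes("[mock:wait-start]")) {
    // Hold the stream in its "starting" phase until the test releases it.
    await new Promise<void>((resolve) => {
      releaseWaitingStream = resolve;
    });
  }
  return "mock response";
}
```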
Validation

New test `tests/ipc/queuedMessages.starting.test.ts` proves that a follow-up message sent during the stream-starting phase is queued rather than rejected, and that both messages reach the model (sketched below).
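The shape of that test, sketched against the illustrative mock helpers above; the real test drives the app through the IPC layer and the project's own test runner:

```typescript
// Sketch only: the assertions the new test is after, not its actual harness.
import assert from "node:assert/strict";
import {
  startMockStream,
  releaseMockStream,
  debugGetLastMockPrompt,
} from "./mockModel"; // hypothetical module from the sketch above

export async function queuesMessagesDuringStartingPhase(): Promise<void> {
  // Hold the first turn in its "starting" phase.
  const pending = startMockStream("first message [mock:wait-start]");

  // In the real test, a follow-up message is sent here and must be accepted
  // and queued rather than rejected, because the stream has not started yet.

  // Let the stream start and finish, then check what the model actually saw.
  releaseMockStream();
  await pending;

  const prompt = debugGetLastMockPrompt();
  assert.ok(prompt !== null && prompt.includes("first message"));
  // The real test additionally asserts the queued follow-up appears in the
  // prompt, proving both messages reached the model.
}
```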
Risks

Low risk: the change makes the system more permissive (allows queueing where it previously didn't) rather than adding new restrictions. The backend queue logic is additive and uses the same code path as the existing streaming queue.
Generated with mux • Model: anthropic:claude-opus-4-5 • Thinking: high • Cost: $25.80