feat: improve the event system to emit more data and conventionalize … #239
Conversation
📝 Walkthrough

Introduces a structured observability/event system: renames and types events (`text:`, `tools:`, `image:`, `speech:`, `transcription:`, `video:`, `client:`), adds `streamId`/context to emissions, emits lifecycle events for media generation, expands the devtools store/UI to show activity events, and updates tests and docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client App
    participant Chat as Chat Activity (ai package)
    participant Events as AiEventClient
    participant Devtools as DevTools Listener / UI

    Client->>Chat: start text request (messages, modelOptions)
    Chat->>Events: emit 'text:request:started' (requestId, streamId, context)
    Chat->>Events: emit multiple 'text:chunk:*' events (content/tool-call/thinking)
    Events->>Devtools: deliver events (clientId, source, timestamp, streamId)
    Devtools->>Devtools: record/display ActivityEvent entries (image/speech/transcription/video as applicable)
    Chat->>Events: emit 'text:request:completed' (final content, finishReason, usage)
    Devtools->>Client: UI updates / recording saved
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
View your CI Pipeline Execution ↗ for commit 3d46c84
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx (1)
107-110: Non-reactive early return may hide tabs that appear later.

The `if (visibleTabCount() <= 1) return null` check runs only during initial component execution. In SolidJS, if the conversation's events load asynchronously, this check won't re-evaluate, and the tabs will remain hidden even when they should appear.

🐛 Wrap in Show for reactive rendering

```diff
-  // Don't render tabs if only one tab would be visible
-  if (visibleTabCount() <= 1) {
-    return null
-  }
-
-  return (
+  return (
+    <Show when={visibleTabCount() > 1}>
       <div class={styles().conversationDetails.tabsContainer}>
         {/* ... tab buttons ... */}
       </div>
+    </Show>
   )
```

testing/panel/src/lib/recording.ts (1)
77-77: Shared `chunkIndex` causes incorrect indices when multiple streams are active.

`chunkIndex` is shared across all active streams. If multiple streams are being recorded simultaneously, chunk indices will be interleaved incorrectly, resulting in non-sequential indices within each stream's recording. Move the chunk index into the per-stream state:

🐛 Proposed fix

Add `chunkIndex` to the stream state (around line 68):

```diff
     toolCalls: Map<string, RecordedToolCall>
     finishReason: string | null
     traceId?: string
+    chunkIndex: number
   }
 >()
-
-let chunkIndex = 0
```

Initialize it when creating stream state (around line 224):

```diff
   toolCalls: new Map<string, RecordedToolCall>(),
   finishReason: null,
   traceId: undefined,
+  chunkIndex: 0,
 })
```

Then update all usages to reference `stream.chunkIndex` instead of the shared variable, incrementing after use.

packages/typescript/ai/src/event-client.ts (1)
8-22: Remove duplicate type definitions and import from `./types` instead.

`ToolCallState` and `ToolResultState` are duplicated between `event-client.ts` and `types.ts`. Import and re-export them from `./types` to maintain a single source of truth:

Suggested change

```diff
-export type ToolCallState =
-  | 'awaiting-input' // Received start but no arguments yet
-  | 'input-streaming' // Partial arguments received
-  | 'input-complete' // All arguments received
-  | 'approval-requested' // Waiting for user approval
-  | 'approval-responded' // User has approved/denied
-
-export type ToolResultState =
-  | 'streaming' // Placeholder for future streamed output
-  | 'complete' // Result is complete
-  | 'error' // Error occurred
+import type { ToolCallState, ToolResultState } from './types'
+export type { ToolCallState, ToolResultState }
```

packages/typescript/ai/src/activities/generateVideo/index.ts (1)
214-286: Emit a completion event when `getVideoStatus` throws.

If `adapter.getVideoStatus` rejects, you emit `video:request:started` without a matching completion event, which leaves traces incomplete and skews observability.

🔧 Proposed fix

```diff
-    const statusResult = await adapter.getVideoStatus(jobId)
+    let statusResult: VideoStatusResult
+    try {
+      statusResult = await adapter.getVideoStatus(jobId)
+    } catch (error) {
+      aiEventClient.emit('video:request:completed', {
+        requestId,
+        provider: adapter.name,
+        model: adapter.model,
+        requestType: 'status',
+        jobId,
+        status: 'failed',
+        error: error instanceof Error ? error.message : 'Failed to get video status',
+        duration: Date.now() - startTime,
+        timestamp: Date.now(),
+      })
+      throw error
+    }
```
🤖 Fix all issues with AI agents
In `@docs/guides/observability.md`:
- Line 30: Docs list an event (`transcription:usage`) that isn't emitted; update
either the code or the docs. Option A: remove `transcription:usage` from the
docs so the listed events match actual emissions
(`transcription:request:started` and `transcription:request:completed`). Option
B: implement emission of TranscriptionUsageEvent inside the
generateTranscription function (emit `transcription:usage` with the defined
TranscriptionUsageEvent payload at the point you calculate usage/cost after
transcription completes) and ensure any listeners/tests are updated to expect
this new event.
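If Option B is chosen, the emission could look roughly like the sketch below. The stand-in event client and the `usage` fields are illustrative assumptions; the real `aiEventClient` and `TranscriptionUsageEvent` payload shape live in the `@tanstack/ai` package and are not reproduced here.

```typescript
// Sketch only: a minimal stand-in emitter so the example is self-contained.
// The real aiEventClient and TranscriptionUsageEvent are defined in
// `@tanstack/ai`; the `usage` fields below are hypothetical.
type Handler = (event: { payload: Record<string, unknown> }) => void

const listeners = new Map<string, Array<Handler>>()
const eventClient = {
  on(name: string, handler: Handler): void {
    listeners.set(name, [...(listeners.get(name) ?? []), handler])
  },
  emit(name: string, payload: Record<string, unknown>): void {
    for (const handler of listeners.get(name) ?? []) handler({ payload })
  },
}

// Would be called from generateTranscription once usage/cost is calculated.
function emitTranscriptionUsage(
  requestId: string,
  model: string,
  durationSeconds: number,
): void {
  eventClient.emit('transcription:usage', {
    requestId,
    model,
    usage: { durationSeconds },
    timestamp: Date.now(),
  })
}
```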
In `@packages/typescript/ai-client/src/events.ts`:
- Line 219: The implementation signature of emitEvent uses data?: Record<string,
any> which mismatches the abstract declaration data?: Record<string, unknown>;
update the concrete method emitEvent(eventName: string, data?: Record<string,
unknown>): void to use Record<string, unknown> so the types align and preserve
type safety (ensure any other references to emitEvent in this class use the same
signature).
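The aligned signatures can be illustrated with a minimal sketch; the class names here are hypothetical stand-ins, not the actual ai-client classes:

```typescript
// The concrete override must match the abstract declaration's
// `Record<string, unknown>` so the types align.
abstract class BaseEventEmitter {
  abstract emitEvent(eventName: string, data?: Record<string, unknown>): void
}

class ChatClientEventEmitter extends BaseEventEmitter {
  readonly log: Array<{ eventName: string; data?: Record<string, unknown> }> = []

  // `unknown` (rather than `any`) forces consumers to narrow before
  // reading arbitrary properties, preserving type safety.
  emitEvent(eventName: string, data?: Record<string, unknown>): void {
    this.log.push({ eventName, data })
  }
}
```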
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 929-932: The getContentString function currently stringifies null
into "null"; change it to explicitly handle null by returning an empty string
(or appropriate sentinel) when ModelMessage['content'] === null, keeping the
existing behavior for strings (return as-is) and for Array<ContentPart>
(JSON.stringify). Update the getContentString implementation to first check for
content === null and return '' before the typeof/string and JSON.stringify
branches, and adjust any callers/tests if they relied on the former "null"
string.
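A minimal sketch of the suggested null handling, with `ContentPart` simplified for illustration:

```typescript
// Simplified stand-ins for the real types in `@tanstack/ai`.
type ContentPart = { type: string; text?: string }
type ModelContent = string | Array<ContentPart> | null

function getContentString(content: ModelContent): string {
  if (content === null) return '' // previously stringified to the literal "null"
  if (typeof content === 'string') return content
  return JSON.stringify(content)
}
```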
In `@packages/typescript/ai/src/activities/generateSpeech/index.ts`:
- Around line 111-133: The speech event payloads currently emit sensitive fields
unconditionally; change aiEventClient.emit for 'speech:request:started' and
'speech:request:completed' to gate inclusion of raw content behind an explicit
opt-in flag (e.g., emitSensitiveSpeechPayloads or allowTelemetrySensitiveData)
and otherwise redact/truncate: replace rest.text with a redacted or short
textSnippet (e.g., first N chars + "[REDACTED]") and replace
result.audio/result.audio (base64) with either audioLength or
truncatedBase64/placeholder; ensure modelOptions and other potentially sensitive
fields are also sanitized prior to emitting and that the opt-in flag is checked
(read from config/context) before including full fields.
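A hedged sketch of the gating described above; the flag name `emitSensitiveSpeechPayloads`, the snippet length, and the payload shape are illustrative assumptions, not the package's actual API:

```typescript
// Hypothetical payload shape for the started event.
interface SpeechStartedPayload {
  requestId: string
  text?: string // full text, only with explicit opt-in
  textSnippet?: string // redacted preview otherwise
}

function buildSpeechStartedPayload(
  requestId: string,
  text: string,
  emitSensitiveSpeechPayloads: boolean, // assumed opt-in flag from config/context
): SpeechStartedPayload {
  if (emitSensitiveSpeechPayloads) {
    return { requestId, text }
  }
  const snippetLength = 20
  return {
    requestId,
    textSnippet:
      text.length > snippetLength
        ? `${text.slice(0, snippetLength)}… [REDACTED]`
        : text,
  }
}
```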
In `@packages/typescript/ai/src/event-client.ts`:
- Around line 391-429: The client event interfaces are missing the
BaseEventContext/source property which other events use; update
ClientLoadingChangedEvent, ClientErrorChangedEvent, ClientMessagesClearedEvent,
ClientReloadedEvent, and ClientStoppedEvent to extend BaseEventContext (or add
source?: 'client' | 'server') so their shapes match
DefaultChatClientEventEmitter emissions and tests; modify the interface
declarations (e.g., "export interface ClientLoadingChangedEvent extends
BaseEventContext { ... }") for each listed interface and keep existing fields
unchanged.
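An illustrative sketch of the suggested shape; the fields besides `source` are placeholders, not the actual event definitions:

```typescript
// Base context shared by all events, per the suggestion above.
interface BaseEventContext {
  source?: 'client' | 'server'
}

// Client events now extend BaseEventContext; existing fields stay unchanged.
interface ClientLoadingChangedEvent extends BaseEventContext {
  isLoading: boolean
  timestamp: number
}

const event: ClientLoadingChangedEvent = {
  source: 'client',
  isLoading: true,
  timestamp: Date.now(),
}
```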
In `@testing/panel/src/lib/recording.ts`:
- Around line 134-146: normalizeFinishReason currently maps any unrecognized
value (including null) to 'stop', losing the null semantic; update
normalizeFinishReason so that if the incoming finishReason parameter is null it
returns null (preserving FinishReason|null), otherwise validate against the
allowed literals ('stop'|'length'|'content_filter'|'tool_calls') and return the
matching string or fallback to 'stop' for unknown strings; ensure the function
signature and the FinishReason type remain consistent and reference the
normalizeFinishReason and FinishReason symbols when making the change.
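The last fix above can be sketched as a standalone illustration (not the exact recording.ts code):

```typescript
type FinishReason = 'stop' | 'length' | 'content_filter' | 'tool_calls'

function normalizeFinishReason(
  finishReason: string | null,
): FinishReason | null {
  // Preserve the "not finished / unknown" semantic instead of coercing to 'stop'.
  if (finishReason === null) return null
  const allowed: ReadonlyArray<FinishReason> = [
    'stop',
    'length',
    'content_filter',
    'tool_calls',
  ]
  // Unknown non-null strings still fall back to 'stop'.
  return allowed.includes(finishReason as FinishReason)
    ? (finishReason as FinishReason)
    : 'stop'
}
```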
🧹 Nitpick comments (19)
packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx (2)
47-53: Minor simplification possible.

The `|| 0` default followed by filtering `> 0` is slightly redundant. Consider simplifying:

♻️ Optional simplification

```diff
 const toolDuration = () => {
-  const durations = msg()
-    .toolCalls?.map((tool) => tool.duration || 0)
-    .filter((duration) => duration > 0)
-  if (!durations || durations.length === 0) return 0
-  return durations.reduce((total, duration) => total + duration, 0)
+  const durations = msg().toolCalls?.map((tool) => tool.duration).filter(Boolean) as number[] | undefined
+  if (!durations?.length) return 0
+  return durations.reduce((total, duration) => total + duration, 0)
 }
```
143-154: Memoize `parseJsonContent` to avoid parsing JSON twice.

In SolidJS, `parseJsonContent()` is computed twice per render cycle: once in the `when` condition (line 144) and again in the render body (line 152). For large JSON payloads, this is inefficient. Additionally, the non-null assertion `!` could be avoided with memoization.

♻️ Memoize with createMemo

Update the function definition to use `createMemo`:

```diff
 import { For, Show, createMemo } from 'solid-js'

-const parseJsonContent = () => {
+const parsedJsonContent = createMemo(() => {
   const content = msg().content
   if (typeof content !== 'string') return null
   const trimmed = content.trim()
   if (!trimmed.startsWith('{') && !trimmed.startsWith('[')) return null
   try {
     return JSON.parse(trimmed) as Record<string, unknown> | Array<unknown>
   } catch {
     return null
   }
-}
+})
```

Then update the usage:

```diff
 <Show
-  when={msg().role === 'tool' && parseJsonContent() !== null}
+  when={msg().role === 'tool' && parsedJsonContent() !== null}
   fallback={
     <div class={styles().conversationDetails.messageContent}>
       {msg().content}
     </div>
   }
 >
   <div class={styles().conversationDetails.toolJsonContainer}>
-    <JsonTree value={parseJsonContent()!} />
+    <JsonTree value={parsedJsonContent()!} />
   </div>
 </Show>
```

packages/typescript/ai-devtools/src/styles/use-styles.ts (1)
742-756: Respect reduced-motion preferences for the pulsing animation.

Consider disabling the pulse when users request reduced motion.

♿ Suggested tweak

```diff
   tabButtonPulse: css`
     position: relative;
     animation: activityPulse 1.4s ease-in-out infinite;
+    @media (prefers-reduced-motion: reduce) {
+      animation: none;
+    }

     @keyframes activityPulse {
       0% {
         box-shadow: 0 0 0 0 ${colors.pink[400]}55;
       }
       70% {
         box-shadow: 0 0 0 8px ${colors.pink[400]}00;
       }
       100% {
         box-shadow: 0 0 0 0 ${colors.pink[400]}00;
       }
     }
   `,
```

packages/typescript/ai-devtools/src/store/ai-context.tsx (1)
1617-1899: Consider extracting a helper for activity event handlers.

The image, speech, transcription, and video event handlers follow an identical pattern: resolve conversation ID, create conversation if needed, then add activity event. This could be simplified with a shared helper function.
♻️ Suggested helper extraction
```tsx
function handleActivityEvent(
  e: { payload: { requestId: string; clientId?: string; timestamp: number } },
  activityType: 'image' | 'speech' | 'transcription' | 'video',
  eventName: string,
) {
  const { requestId, clientId, timestamp } = e.payload
  const eventKey = `${activityType}Events` as const
  let conversationId = clientId
  if (!conversationId || !state.conversations[conversationId]) {
    conversationId = `${activityType}-${requestId}`
    getOrCreateConversation(
      conversationId,
      'server',
      `${activityType.charAt(0).toUpperCase() + activityType.slice(1)} (${requestId.substring(0, 8)})`,
    )
  }
  addActivityEvent(conversationId, eventKey, {
    id: requestId,
    name: eventName,
    timestamp,
    payload: e.payload,
  })
}
```

Then each handler becomes:

```tsx
aiEventClient.on('image:request:started', (e) => {
  handleActivityEvent(e, 'image', 'image:request:started')
})
```

packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx (1)
45-80: Previous count trackers may lose state on component re-render.

The `previousImageCount`, `previousSpeechCount`, etc. variables are declared with `let` outside the component's reactive scope. If the component re-renders (e.g., due to parent re-render), these will reset to 0, potentially causing unexpected pulse animations. Consider using `createSignal` or a ref pattern for these trackers if re-render behavior needs to be preserved.

♻️ Alternative using refs

```diff
-  let previousImageCount = 0
-  let previousSpeechCount = 0
-  let previousTranscriptionCount = 0
-  let previousVideoCount = 0
+  const previousCounts = { image: 0, speech: 0, transcription: 0, video: 0 }

   createEffect(() => {
     const count = imageCount()
-    if (count > 0 && previousImageCount === 0) {
+    if (count > 0 && previousCounts.image === 0) {
       triggerPulse(setImagePulse)
     }
-    previousImageCount = count
+    previousCounts.image = count
   })
```

packages/typescript/ai-devtools/src/components/conversation/ActivityEventsTab.tsx (1)
47-49: Consider handling large payloads in JSON preview.

`JSON.stringify(event.payload, null, 2)` will render the entire payload regardless of size. For events with large payloads, this could impact rendering performance or create an unwieldy UI. Consider truncating large payloads or adding a "show more" pattern for payloads exceeding a certain size.

packages/typescript/ai-devtools/src/components/ConversationDetails.tsx (1)
27-91: Tab selection logic has redundant checks.

Conditions like `conv.hasImage || (conv.imageEvents && conv.imageEvents.length > 0)` duplicate logic already present in `ConversationTabs.tsx`'s `hasImage()` function. Consider extracting these checks to a shared utility or the Conversation type to maintain consistency.

testing/panel/src/lib/recording.ts (3)
197-208: Type casting workaround should be replaced with proper package exports.

The local `DevtoolsEventClient` type definition and cast bypass the actual type of `baseAiEventClient`. This appears to be a workaround for the missing type exports flagged earlier. Once `AIDevtoolsEventMap` is properly exported from the package, consider using the native types from the package rather than casting.
423-436: Consolidate duplicate `text:request:completed` subscriptions.

Two separate handlers subscribe to `text:request:completed`:

- Lines 424-436: Updates `accumulatedContent` and `finishReason`
- Lines 465-510: Saves the recording to file

While JavaScript's event loop ordering should ensure the first handler completes its synchronous work before the second handler runs, having two subscriptions for the same event is confusing and fragile. Consolidate into a single handler:

♻️ Proposed consolidation

```diff
-  // Subscribe to text:request:completed to get final tool calls
-  const unsubscribeChatCompleted = aiEventClient.on(
-    'text:request:completed',
-    (event) => {
-      const { streamId, content, finishReason } = event.payload
-      if (!shouldRecord(streamId)) return
-      const stream = activeStreams.get(streamId)
-      if (stream) {
-        stream.accumulatedContent = content
-        stream.finishReason = finishReason || null
-      }
-    },
-    { withEventTarget: false },
-  )
-
-  // Subscribe to text:request:completed to save recording
-  const unsubscribeStreamEnded = aiEventClient.on(
+  // Subscribe to text:request:completed to finalize and save recording
+  const unsubscribeStreamEnded = aiEventClient.on(
     'text:request:completed',
     async (event) => {
-      const { streamId } = event.payload
+      const { streamId, content, finishReason } = event.payload
       if (!shouldRecord(streamId)) return
       const stream = activeStreams.get(streamId)
       if (!stream) {
         return
       }
+
+      // Update final content and finishReason
+      stream.accumulatedContent = content
+      stream.finishReason = finishReason || null

       try {
         // ... rest of save logic
```

Also remove `unsubscribeChatCompleted()` from the `stop()` cleanup.

Also applies to: 464-510
3-5: Import `AIDevtoolsEventMap` from the public package API instead of the internal source path.

Line 4 imports from an internal source path (`../../../../packages/typescript/ai/src/event-client`) while line 3 correctly uses the published package path (`@tanstack/ai/event-client`). This inconsistency creates unnecessary coupling to internal file structure when the type is available through the public API. Change the import to:

```diff
 import { aiEventClient as baseAiEventClient } from '@tanstack/ai/event-client'
-import type { AIDevtoolsEventMap } from '../../../../packages/typescript/ai/src/event-client'
+import type { AIDevtoolsEventMap } from '@tanstack/ai/event-client'
 import type { StreamChunk } from '@tanstack/ai'
```

packages/typescript/ai/src/activities/generateTranscription/index.ts (2)
65-67: Consider extracting `createId` to a shared utility.

This helper is duplicated verbatim in `chat/index.ts` (lines 941-943) and `generateImage/index.ts` (lines 86-88). Extracting it to a shared module (e.g., `../../utils.js`) would reduce duplication and ensure consistent ID generation across all activities.
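A shared helper might look like the sketch below. The actual `createId` bodies aren't reproduced in this review, so the ID format here is an assumption for illustration:

```typescript
// Hypothetical shared helper; in practice this would live in a module like
// `packages/typescript/ai/src/utils.ts` and be imported by chat,
// generateImage, and generateTranscription.
function createId(prefix: string): string {
  // Timestamp plus random suffix keeps IDs sortable and collision-resistant.
  return `${prefix}_${Date.now().toString(36)}_${Math.random().toString(36).slice(2, 10)}`
}
```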
126-140: Consider emitting a failure event on error for observability completeness.

If `adapter.transcribe()` throws, a `transcription:request:started` event will be orphaned without a corresponding completion or failure event. For robust telemetry, wrap in try/catch and emit `transcription:request:failed` on error.

Suggested pattern

```diff
+  try {
     const result = await adapter.transcribe({ ...rest, model })
     const duration = Date.now() - startTime

     aiEventClient.emit('transcription:request:completed', {
       requestId,
       provider: adapter.name,
       model,
       text: result.text,
       language: result.language,
       duration,
       modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
       timestamp: Date.now(),
     })

     return result
+  } catch (error) {
+    aiEventClient.emit('transcription:request:failed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      error: error instanceof Error ? error.message : String(error),
+      duration: Date.now() - startTime,
+      timestamp: Date.now(),
+    })
+    throw error
+  }
```

packages/typescript/ai/src/activities/generateImage/index.ts (1)
158-185: Consider using async/await for consistency and easier error handling.

This function uses `.then()` chaining while `generateTranscription` uses `async`/`await`. Using `async`/`await` would be more consistent across activities and make it easier to add error handling (e.g., emitting `image:request:failed` on rejection).

Suggested refactor

```diff
-  return adapter.generateImages({ ...rest, model }).then((result) => {
-    const duration = Date.now() - startTime
-
-    aiEventClient.emit('image:request:completed', {
-      requestId,
-      provider: adapter.name,
-      model,
-      images: result.images.map((image) => ({
-        url: image.url,
-        b64Json: image.b64Json,
-      })),
-      duration,
-      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-      timestamp: Date.now(),
-    })
-
-    if (result.usage) {
-      aiEventClient.emit('image:usage', {
-        requestId,
-        model,
-        usage: result.usage,
-        modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-        timestamp: Date.now(),
-      })
-    }
-
-    return result
-  })
+  const result = await adapter.generateImages({ ...rest, model })
+  const duration = Date.now() - startTime
+
+  aiEventClient.emit('image:request:completed', {
+    requestId,
+    provider: adapter.name,
+    model,
+    images: result.images.map((image) => ({
+      url: image.url,
+      b64Json: image.b64Json,
+    })),
+    duration,
+    modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+    timestamp: Date.now(),
+  })
+
+  if (result.usage) {
+    aiEventClient.emit('image:usage', {
+      requestId,
+      model,
+      usage: result.usage,
+      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+      timestamp: Date.now(),
+    })
+  }
+
+  return result
```

packages/typescript/ai-client/src/chat-client.ts (1)
163-172: Consider the empty string fallback for messageId.

When `this.currentMessageId` is null, the code passes an empty string to `approvalRequested`. If the event consumer expects a valid message ID for correlation, this could cause issues. Consider whether the guard should also check for `currentMessageId`:

```diff
-      if (this.currentStreamId) {
+      if (this.currentStreamId && this.currentMessageId) {
         this.events.approvalRequested(
           this.currentStreamId,
-          this.currentMessageId || '',
+          this.currentMessageId,
           args.toolCallId,
```

Alternatively, if an empty messageId is intentional, document this behavior.
packages/typescript/ai/src/activities/summarize/index.ts (1)
222-265: Consider emitting events for streaming summarization.

The streaming path (`runStreamingSummarize`) doesn't emit `summarize:request:started` or `summarize:request:completed` events, while the non-streaming path does. This inconsistency could make it harder to track streaming summarization requests in devtools. If intentional (e.g., events are emitted differently for streams), a comment clarifying this would be helpful.

packages/typescript/ai/src/activities/generateVideo/index.ts (1)
40-42: Consider centralizing request ID generation.

`createId` duplicates the helper used in `activities/chat/index.ts`; extracting a shared utility would prevent drift and keep IDs consistent across activities.

packages/typescript/ai/tests/ai-text.test.ts (1)
145-145: Differentiate duplicate test titles for clearer failures.

Both tests share the same name, which makes failure output ambiguous.

💡 Possible rename

```diff
-  it('should emit text:request:started event with correct data', async () => {
+  it('should emit text:request:started event with correct data (messages/tools)', async () => {
@@
-  it('should emit text:request:started event with correct data', async () => {
+  it('should emit text:request:started event with provider/model', async () => {
```

Also applies to: 175-175
packages/typescript/ai/src/activities/generateSpeech/index.ts (2)
65-67: Consider centralizing request-id generation.

There's already similar `createId` logic in other activities (e.g., chat). Extracting a shared helper avoids drift in format/collision behavior across activities.
111-139: Emit a failure event to complete the lifecycle.

If `adapter.generateSpeech` rejects, `speech:request:started` is emitted but no terminal event is produced, leaving incomplete traces in observability tooling. Consider emitting a failure event in a `catch` block (naming aligned with your event conventions).

✅ Suggested fix (add failure emission)

```diff
-  return adapter.generateSpeech({ ...rest, model }).then((result) => {
-    const duration = Date.now() - startTime
-
-    aiEventClient.emit('speech:request:completed', {
-      requestId,
-      provider: adapter.name,
-      model,
-      audio: result.audio,
-      format: result.format,
-      audioDuration: result.duration,
-      contentType: result.contentType,
-      duration,
-      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-      timestamp: Date.now(),
-    })
-
-    return result
-  })
+  try {
+    const result = await adapter.generateSpeech({ ...rest, model })
+    const duration = Date.now() - startTime
+
+    aiEventClient.emit('speech:request:completed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      audio: result.audio,
+      format: result.format,
+      audioDuration: result.duration,
+      contentType: result.contentType,
+      duration,
+      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+      timestamp: Date.now(),
+    })
+
+    return result
+  } catch (error) {
+    const duration = Date.now() - startTime
+    aiEventClient.emit('speech:request:failed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      duration,
+      error: error instanceof Error ? error.message : String(error),
+      timestamp: Date.now(),
+    })
+    throw error
+  }
```
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai/src/activities/chat/index.ts (1)
372-384: Avoid duplicate `text:message:created` for the same assistant `messageId`.

Line 377 emits a created event, and line 685 emits another created event for the same `messageId` when tool calls are added. If consumers assume `message:created` is unique per message, this will double-count. Consider emitting an update event instead, or removing the later created emission.

🛠️ Minimal diff to avoid duplicate "created" emission

```diff
-        aiEventClient.emit('text:message:created', {
-          ...this.buildTextEventContext(),
-          messageId,
-          role: 'assistant',
-          content: this.accumulatedContent || '',
-          toolCalls,
-          timestamp: Date.now(),
-        })
```

Also applies to: 675-692
🤖 Fix all issues with AI agents
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 929-935: getContentString currently strips non-text parts and can
return an empty string for multi-modal ModelMessage['content']; update
getContentString to first collect text parts as it does, but if the resulting
text is empty and content is non-string, return a JSON-serialized representation
of the full content (or a clear fallback string containing non-text part
metadata) so non-text data is preserved for consumers; update the logic inside
getContentString (referenced symbol: getContentString and
ModelMessage['content']) to perform the text-join-then-fallback-to-JSON
behavior.
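The text-join-then-fallback-to-JSON behavior described above can be sketched as follows, with `ContentPart` simplified for illustration:

```typescript
// Simplified stand-in for the real ContentPart union in `@tanstack/ai`.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image'; url: string }

function getContentString(content: string | Array<ContentPart>): string {
  if (typeof content === 'string') return content
  const text = content
    .filter((part): part is { type: 'text'; text: string } => part.type === 'text')
    .map((part) => part.text)
    .join('')
  // If no text parts exist, serialize the full content so non-text data survives.
  return text !== '' ? text : JSON.stringify(content)
}
```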
Improves the event system and adds new devtools features for better observability!
🎯 Changes
✅ Checklist
`pnpm run test:pr`

🚀 Release Impact
Summary by CodeRabbit
New Features
Documentation