
Conversation

@AlemTuzlak (Contributor) commented Jan 20, 2026

Improves the event system and adds new devtools features for better observability!

🎯 Changes

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features

    • Expanded observability: structured events for text, tools, image, speech, transcription, and video
    • New Activity Events tab and image/speech/transcription/video tabs with pulse indicators
    • Conversation UI: JSON-rendered assistant content, model options and system prompts collapsibles, and per-message tool duration display
  • Documentation

    • Added observability guide with event naming, examples, and chat-replay instructions
    • README updated to highlight structured, typed event support


coderabbitai bot (Contributor) commented Jan 20, 2026

📝 Walkthrough

Introduces a structured observability/event system: renames and types events (text:, tools:, image:, speech:, transcription:, video:, client:), adds streamId/context to emissions, emits lifecycle events for media generation, expands devtools store/UI to show activity events, and updates tests and docs.
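The namespaced, typed event scheme the walkthrough describes can be sketched roughly as follows. This is an illustrative assumption, not the actual @tanstack/ai API: the payload shapes and the TypedEmitter class are made up to show how namespaced event names (text:, image:, etc.) pair with per-event payload types.

```typescript
// Hypothetical event map: names follow the PR's namespacing convention,
// payload shapes are illustrative only.
type EventMap = {
  'text:request:started': { requestId: string; streamId: string }
  'text:request:completed': { requestId: string; streamId: string; finishReason: string }
  'image:request:started': { requestId: string }
}

// Minimal typed emitter: each event name is keyed to its payload type,
// so listeners get full type inference on the payload.
class TypedEmitter<M> {
  private listeners: { [K in keyof M]?: Array<(payload: M[K]) => void> } = {}

  on<K extends keyof M>(event: K, fn: (payload: M[K]) => void): () => void {
    const arr = (this.listeners[event] ??= [])
    arr.push(fn)
    // Unsubscribe function filters the current listener list.
    return () => {
      const current = this.listeners[event] ?? []
      this.listeners[event] = current.filter((f) => f !== fn)
    }
  }

  emit<K extends keyof M>(event: K, payload: M[K]): void {
    for (const fn of this.listeners[event] ?? []) fn(payload)
  }
}

const events = new TypedEmitter<EventMap>()
const seen: Array<string> = []
const off = events.on('text:request:started', (p) => seen.push(`${p.requestId}/${p.streamId}`))
events.emit('text:request:started', { requestId: 'req-1', streamId: 's-1' })
off()
// After unsubscribing, this emission is not recorded.
events.emit('text:request:started', { requestId: 'req-2', streamId: 's-2' })
```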

Changes

  • Documentation (README.md, docs/guides/observability.md) — Added observability feature note, event naming scheme, examples, and "Reconstructing chat" guidance; updated sample listener names.
  • Event type system & client (packages/typescript/ai/src/event-client.ts, packages/typescript/ai/src/index.ts, packages/typescript/ai-client/src/events.ts, packages/typescript/ai-client/src/chat-client.ts) — Added comprehensive typed event interfaces and AIDevtoolsEventMap; changed the export surface to export * from event-client; renamed many events to new namespaces; updated emitter signatures to use Record<string, unknown> and added/propagated streamId, source, and clientId with stricter types.
  • Runtime event emissions (activities) (packages/typescript/ai/src/activities/chat/index.ts, .../generateImage/index.ts, .../generateSpeech/index.ts, .../generateTranscription/index.ts, .../generateVideo/index.ts, .../summarize/index.ts) — Emit lifecycle events (started/completed/usage) for text, image, speech, transcription, and video; added per-event context augmentation and requestId/duration metadata.
  • DevTools store & types (packages/typescript/ai-devtools/src/store/ai-context.tsx) — Extended Conversation/Chunk/Message types: added ActivityEvent, a thinking message part, modelOptions, systemPrompts, hasImage/hasSpeech/hasTranscription/hasVideo flags, and event arrays; added isClientTool on chunks.
  • DevTools UI components (packages/typescript/ai-devtools/src/components/conversation/ActivityEventsTab.tsx, ConversationDetails.tsx, ConversationTabs.tsx, ConversationHeader.tsx, MessageCard.tsx, .../index.ts) — New ActivityEventsTab component; added image/speech/transcription/video tabs with a pulse animation; render activity tabs and JSON/system-prompts/modelOptions UI; MessageCard JSON/duration improvements; exported ActivityEventsTab.
  • Styling (packages/typescript/ai-devtools/src/styles/use-styles.ts) — Added collapsible section styles, system prompt styles, a tab pulse animation, and content-area padding adjustments.
  • Tests (packages/typescript/ai-client/tests/*.ts, packages/typescript/ai/tests/ai-text.test.ts) — Updated tests to expect the renamed events, adjusted payload assertions for new fields (streamId, clientId, source), and consolidated emits.
  • Recording tooling / panel (testing/panel/src/lib/recording.ts) — Added a RecordedToolCall type, richer chunk metadata (id/model/timestamp), per-stream recording control, renamed subscribed events to the new names, and improved final serialization.
  • Example config (examples/ts-react-chat/src/routes/api.tanchat.ts) — Switched the provider adapter from openrouter/auto to openai/gpt-5.1 and added reasoning: { effort: 'medium' } to modelOptions.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client App
  participant Chat as Chat Activity (ai package)
  participant Events as AiEventClient
  participant Devtools as DevTools Listener / UI

  Client->>Chat: start text request (messages, modelOptions)
  Chat->>Events: emit 'text:request:started' (requestId, streamId, context)
  Chat->>Events: emit multiple 'text:chunk:*' events (content/tool-call/thinking)
  Events->>Devtools: deliver events (clientId, source, timestamp, streamId)
  Devtools->>Devtools: record/display ActivityEvent entries (image/speech/transcription/video as applicable)
  Chat->>Events: emit 'text:request:completed' (final content, finishReason, usage)
  Devtools->>Client: UI updates / recording saved
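The chunk-then-complete lifecycle above also enables the "Reconstructing chat" guidance mentioned in the docs cohort: fold the streamed chunk payloads back into per-stream content. A minimal sketch, assuming a { streamId, delta } payload shape (the package's actual chunk event types may differ):

```typescript
// Illustrative chunk payload; not the package's exact event type.
interface ChunkEvent {
  streamId: string
  delta: string
}

// Replay recorded chunk events into a map of streamId -> accumulated content.
function replayContent(chunks: Array<ChunkEvent>): Map<string, string> {
  const byStream = new Map<string, string>()
  for (const chunk of chunks) {
    byStream.set(chunk.streamId, (byStream.get(chunk.streamId) ?? '') + chunk.delta)
  }
  return byStream
}

const rebuilt = replayContent([
  { streamId: 's-1', delta: 'Hel' },
  { streamId: 's-1', delta: 'lo' },
  { streamId: 's-2', delta: 'Hi' },
])
// rebuilt holds the reconstructed content per stream.
```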

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • jherr

Poem

🐰 I hopped through logs and named each beat,

Streams now speak with tidy, typed heartbeat.
From text to image, speech to video bright,
I nibble bugs and watch the events take flight. 🥕✨

🚥 Pre-merge checks (1 passed, 2 not passed)

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 53.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Description check — ❓ Inconclusive: The description template is mostly followed but lacks critical detail. It mentions "Improves the event system and adds new features", but the "Changes" section is empty, and all checklist items remain unchecked, including testing confirmation and changeset generation. Resolution: complete the "Changes" section with specifics about the event system improvements and new devtools features, mark the completed checklist items, and confirm changeset generation for published code changes.

✅ Passed checks (1 passed)

  • Title check — The title accurately describes the main focus of the changeset: improving the event system with conventionalized naming and enhanced data emission. It reflects the primary objective evident across the changes.


@AlemTuzlak AlemTuzlak requested a review from jherr January 20, 2026 12:48
nx-cloud bot commented Jan 20, 2026

View your CI Pipeline Execution ↗ for commit 3d46c84

  • nx affected --targets=test:sherif,test:knip,tes... — ✅ Succeeded in 2m 54s
  • nx run-many --targets=build --exclude=examples/** — ✅ Succeeded in 1m 9s

☁️ Nx Cloud last updated this comment at 2026-01-20 13:06:17 UTC

pkg-pr-new bot commented Jan 20, 2026

  • @tanstack/ai — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@239
  • @tanstack/ai-anthropic — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@239
  • @tanstack/ai-client — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@239
  • @tanstack/ai-devtools-core — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@239
  • @tanstack/ai-gemini — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@239
  • @tanstack/ai-grok — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-grok@239
  • @tanstack/ai-ollama — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@239
  • @tanstack/ai-openai — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@239
  • @tanstack/ai-openrouter — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openrouter@239
  • @tanstack/ai-preact — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-preact@239
  • @tanstack/ai-react — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@239
  • @tanstack/ai-react-ui — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@239
  • @tanstack/ai-solid — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@239
  • @tanstack/ai-solid-ui — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@239
  • @tanstack/ai-svelte — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@239
  • @tanstack/ai-vue — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@239
  • @tanstack/ai-vue-ui — npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@239
  • @tanstack/preact-ai-devtools — npm i https://pkg.pr.new/TanStack/ai/@tanstack/preact-ai-devtools@239
  • @tanstack/react-ai-devtools — npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@239
  • @tanstack/solid-ai-devtools — npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@239

commit: 3d46c84

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx (1)

107-110: Non-reactive early return may hide tabs that appear later.

The if (visibleTabCount() <= 1) return null check runs only during initial component execution. In SolidJS, if the conversation's events load asynchronously, this check won't re-evaluate, and the tabs will remain hidden even when they should appear.

🐛 Wrap in Show for reactive rendering
-  // Don't render tabs if only one tab would be visible
-  if (visibleTabCount() <= 1) {
-    return null
-  }
-
-  return (
+  return (
+    <Show when={visibleTabCount() > 1}>
     <div class={styles().conversationDetails.tabsContainer}>
       {/* ... tab buttons ... */}
     </div>
+    </Show>
   )
testing/panel/src/lib/recording.ts (1)

77-77: Shared chunkIndex causes incorrect indices when multiple streams are active.

chunkIndex is shared across all active streams. If multiple streams are being recorded simultaneously, chunk indices will be interleaved incorrectly, resulting in non-sequential indices within each stream's recording.

Move the chunk index into the per-stream state:

🐛 Proposed fix

Add chunkIndex to the stream state (around line 68):

       toolCalls: Map<string, RecordedToolCall>
       finishReason: string | null
       traceId?: string
+      chunkIndex: number
     }
   >()
-
-  let chunkIndex = 0

Initialize it when creating stream state (around line 224):

       toolCalls: new Map<string, RecordedToolCall>(),
       finishReason: null,
       traceId: undefined,
+      chunkIndex: 0,
     })

Then update all usages to reference stream.chunkIndex instead of the shared variable, incrementing after use.

packages/typescript/ai/src/event-client.ts (1)

8-22: Remove duplicate type definitions and import from ./types instead.

ToolCallState and ToolResultState are duplicated between event-client.ts and types.ts. Import and re-export them from ./types to maintain a single source of truth:

Suggested change
-export type ToolCallState =
-  | 'awaiting-input' // Received start but no arguments yet
-  | 'input-streaming' // Partial arguments received
-  | 'input-complete' // All arguments received
-  | 'approval-requested' // Waiting for user approval
-  | 'approval-responded' // User has approved/denied
-
-export type ToolResultState =
-  | 'streaming' // Placeholder for future streamed output
-  | 'complete' // Result is complete
-  | 'error' // Error occurred
+import type { ToolCallState, ToolResultState } from './types'
+export type { ToolCallState, ToolResultState }
packages/typescript/ai/src/activities/generateVideo/index.ts (1)

214-286: Emit a completion event when getVideoStatus throws.
If adapter.getVideoStatus rejects, you emit video:request:started without a matching completion event, which leaves traces incomplete and skews observability.

🔧 Proposed fix
-  const statusResult = await adapter.getVideoStatus(jobId)
+  let statusResult: VideoStatusResult
+  try {
+    statusResult = await adapter.getVideoStatus(jobId)
+  } catch (error) {
+    aiEventClient.emit('video:request:completed', {
+      requestId,
+      provider: adapter.name,
+      model: adapter.model,
+      requestType: 'status',
+      jobId,
+      status: 'failed',
+      error: error instanceof Error ? error.message : 'Failed to get video status',
+      duration: Date.now() - startTime,
+      timestamp: Date.now(),
+    })
+    throw error
+  }
🤖 Fix all issues with AI agents
In `@docs/guides/observability.md`:
- Line 30: Docs list an event (`transcription:usage`) that isn't emitted; update
either the code or the docs. Option A: remove `transcription:usage` from the
docs so the listed events match actual emissions
(`transcription:request:started` and `transcription:request:completed`). Option
B: implement emission of TranscriptionUsageEvent inside the
generateTranscription function (emit `transcription:usage` with the defined
TranscriptionUsageEvent payload at the point you calculate usage/cost after
transcription completes) and ensure any listeners/tests are updated to expect
this new event.

In `@packages/typescript/ai-client/src/events.ts`:
- Line 219: The implementation signature of emitEvent uses data?: Record<string,
any> which mismatches the abstract declaration data?: Record<string, unknown>;
update the concrete method emitEvent(eventName: string, data?: Record<string,
unknown>): void to use Record<string, unknown> so the types align and preserve
type safety (ensure any other references to emitEvent in this class use the same
signature).

In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 929-932: The getContentString function currently stringifies null
into "null"; change it to explicitly handle null by returning an empty string
(or appropriate sentinel) when ModelMessage['content'] === null, keeping the
existing behavior for strings (return as-is) and for Array<ContentPart>
(JSON.stringify). Update the getContentString implementation to first check for
content === null and return '' before the typeof/string and JSON.stringify
branches, and adjust any callers/tests if they relied on the former "null"
string.

In `@packages/typescript/ai/src/activities/generateSpeech/index.ts`:
- Around line 111-133: The speech event payloads currently emit sensitive fields
unconditionally; change aiEventClient.emit for 'speech:request:started' and
'speech:request:completed' to gate inclusion of raw content behind an explicit
opt-in flag (e.g., emitSensitiveSpeechPayloads or allowTelemetrySensitiveData)
and otherwise redact/truncate: replace rest.text with a redacted or short
textSnippet (e.g., first N chars + "[REDACTED]") and replace
result.audio/result.audio (base64) with either audioLength or
truncatedBase64/placeholder; ensure modelOptions and other potentially sensitive
fields are also sanitized prior to emitting and that the opt-in flag is checked
(read from config/context) before including full fields.

In `@packages/typescript/ai/src/event-client.ts`:
- Around line 391-429: The client event interfaces are missing the
BaseEventContext/source property which other events use; update
ClientLoadingChangedEvent, ClientErrorChangedEvent, ClientMessagesClearedEvent,
ClientReloadedEvent, and ClientStoppedEvent to extend BaseEventContext (or add
source?: 'client' | 'server') so their shapes match
DefaultChatClientEventEmitter emissions and tests; modify the interface
declarations (e.g., "export interface ClientLoadingChangedEvent extends
BaseEventContext { ... }") for each listed interface and keep existing fields
unchanged.

In `@testing/panel/src/lib/recording.ts`:
- Around line 134-146: normalizeFinishReason currently maps any unrecognized
value (including null) to 'stop', losing the null semantic; update
normalizeFinishReason so that if the incoming finishReason parameter is null it
returns null (preserving FinishReason|null), otherwise validate against the
allowed literals ('stop'|'length'|'content_filter'|'tool_calls') and return the
matching string or fallback to 'stop' for unknown strings; ensure the function
signature and the FinishReason type remain consistent and reference the
normalizeFinishReason and FinishReason symbols when making the change.
🧹 Nitpick comments (19)
packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx (2)

47-53: Minor simplification possible.

The || 0 default followed by filtering > 0 is slightly redundant. Consider simplifying:

♻️ Optional simplification
 const toolDuration = () => {
-  const durations = msg()
-    .toolCalls?.map((tool) => tool.duration || 0)
-    .filter((duration) => duration > 0)
-  if (!durations || durations.length === 0) return 0
-  return durations.reduce((total, duration) => total + duration, 0)
+  const durations = msg().toolCalls?.map((tool) => tool.duration).filter(Boolean) as number[] | undefined
+  if (!durations?.length) return 0
+  return durations.reduce((total, duration) => total + duration, 0)
 }

143-154: Memoize parseJsonContent to avoid parsing JSON twice.

In SolidJS, parseJsonContent() is computed twice per render cycle—once in the when condition (line 144) and again in the render body (line 152). For large JSON payloads, this is inefficient. Additionally, the non-null assertion ! could be avoided with memoization.

♻️ Memoize with createMemo

Update the function definition to use createMemo:

import { For, Show, createMemo } from 'solid-js'
-const parseJsonContent = () => {
+const parsedJsonContent = createMemo(() => {
   const content = msg().content
   if (typeof content !== 'string') return null
   const trimmed = content.trim()
   if (!trimmed.startsWith('{') && !trimmed.startsWith('[')) return null
   try {
     return JSON.parse(trimmed) as Record<string, unknown> | Array<unknown>
   } catch {
     return null
   }
-}
+})

Then update the usage:

 <Show
-  when={msg().role === 'tool' && parseJsonContent() !== null}
+  when={msg().role === 'tool' && parsedJsonContent() !== null}
   fallback={
     <div class={styles().conversationDetails.messageContent}>
       {msg().content}
     </div>
   }
 >
   <div class={styles().conversationDetails.toolJsonContainer}>
-    <JsonTree value={parseJsonContent()!} />
+    <JsonTree value={parsedJsonContent()!} />
   </div>
 </Show>
packages/typescript/ai-devtools/src/styles/use-styles.ts (1)

742-756: Respect reduced-motion preferences for the pulsing animation.
Consider disabling the pulse when users request reduced motion.

♿ Suggested tweak
      tabButtonPulse: css`
        position: relative;
        animation: activityPulse 1.4s ease-in-out infinite;
+        @media (prefers-reduced-motion: reduce) {
+          animation: none;
+        }
        @keyframes activityPulse {
          0% {
            box-shadow: 0 0 0 0 ${colors.pink[400]}55;
          }
          70% {
            box-shadow: 0 0 0 8px ${colors.pink[400]}00;
          }
          100% {
            box-shadow: 0 0 0 0 ${colors.pink[400]}00;
          }
        }
      `,
packages/typescript/ai-devtools/src/store/ai-context.tsx (1)

1617-1899: Consider extracting a helper for activity event handlers.

The image, speech, transcription, and video event handlers follow an identical pattern: resolve conversation ID, create conversation if needed, then add activity event. This could be simplified with a shared helper function.

♻️ Suggested helper extraction
function handleActivityEvent(
  e: { payload: { requestId: string; clientId?: string; timestamp: number } },
  activityType: 'image' | 'speech' | 'transcription' | 'video',
  eventName: string,
) {
  const { requestId, clientId, timestamp } = e.payload
  const eventKey = `${activityType}Events` as const

  let conversationId = clientId
  if (!conversationId || !state.conversations[conversationId]) {
    conversationId = `${activityType}-${requestId}`
    getOrCreateConversation(
      conversationId,
      'server',
      `${activityType.charAt(0).toUpperCase() + activityType.slice(1)} (${requestId.substring(0, 8)})`,
    )
  }

  addActivityEvent(conversationId, eventKey, {
    id: requestId,
    name: eventName,
    timestamp,
    payload: e.payload,
  })
}

Then each handler becomes:

aiEventClient.on('image:request:started', (e) => {
  handleActivityEvent(e, 'image', 'image:request:started')
})
packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx (1)

45-80: Previous count trackers may lose state on component re-render.

The previousImageCount, previousSpeechCount, etc. variables are declared with let outside the component's reactive scope. If the component re-renders (e.g., due to parent re-render), these will reset to 0, potentially causing unexpected pulse animations.

Consider using createSignal or a ref pattern for these trackers if re-render behavior needs to be preserved.

♻️ Alternative using refs
-  let previousImageCount = 0
-  let previousSpeechCount = 0
-  let previousTranscriptionCount = 0
-  let previousVideoCount = 0
+  const previousCounts = { image: 0, speech: 0, transcription: 0, video: 0 }

   createEffect(() => {
     const count = imageCount()
-    if (count > 0 && previousImageCount === 0) {
+    if (count > 0 && previousCounts.image === 0) {
       triggerPulse(setImagePulse)
     }
-    previousImageCount = count
+    previousCounts.image = count
   })
packages/typescript/ai-devtools/src/components/conversation/ActivityEventsTab.tsx (1)

47-49: Consider handling large payloads in JSON preview.

JSON.stringify(event.payload, null, 2) will render the entire payload regardless of size. For events with large payloads, this could impact rendering performance or create an unwieldy UI.

Consider truncating large payloads or adding a "show more" pattern for payloads exceeding a certain size.
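As one possible shape for that suggestion, the preview could be capped at a character budget and flag when a "show more" affordance is needed. A minimal sketch; MAX_PREVIEW_CHARS and previewPayload are made-up names, not devtools code:

```typescript
// Illustrative budget; tune for the panel's rendering cost.
const MAX_PREVIEW_CHARS = 2000

// Serialize a payload for display, truncating oversized output and
// reporting whether truncation happened (so the UI can offer "show more").
function previewPayload(payload: unknown): { text: string; truncated: boolean } {
  const full = JSON.stringify(payload, null, 2) ?? 'undefined'
  if (full.length <= MAX_PREVIEW_CHARS) {
    return { text: full, truncated: false }
  }
  return { text: `${full.slice(0, MAX_PREVIEW_CHARS)}\n… (truncated)`, truncated: true }
}

const small = previewPayload({ a: 1 })
const big = previewPayload({ data: 'x'.repeat(10000) })
```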

packages/typescript/ai-devtools/src/components/ConversationDetails.tsx (1)

27-91: Tab selection logic has redundant checks.

The conditions like conv.hasImage || (conv.imageEvents && conv.imageEvents.length > 0) duplicate logic already present in ConversationTabs.tsx's hasImage() function. Consider extracting these checks to a shared utility or the Conversation type to maintain consistency.

testing/panel/src/lib/recording.ts (3)

197-208: Type casting workaround should be replaced with proper package exports.

The local DevtoolsEventClient type definition and cast bypass the actual type of baseAiEventClient. This appears to be a workaround for the missing type exports flagged earlier.

Once AIDevtoolsEventMap is properly exported from the package, consider using the native types from the package rather than casting.


423-436: Consolidate duplicate text:request:completed subscriptions.

Two separate handlers subscribe to text:request:completed:

  1. Lines 424-436: Updates accumulatedContent and finishReason
  2. Lines 465-510: Saves the recording to file

While JavaScript's event loop ordering should ensure the first handler completes its synchronous work before the second handler runs, having two subscriptions for the same event is confusing and fragile. Consolidate into a single handler:

♻️ Proposed consolidation
-  // Subscribe to text:request:completed to get final tool calls
-  const unsubscribeChatCompleted = aiEventClient.on(
-    'text:request:completed',
-    (event) => {
-      const { streamId, content, finishReason } = event.payload
-      if (!shouldRecord(streamId)) return
-      const stream = activeStreams.get(streamId)
-      if (stream) {
-        stream.accumulatedContent = content
-        stream.finishReason = finishReason || null
-      }
-    },
-    { withEventTarget: false },
-  )
-
-  // Subscribe to text:request:completed to save recording
-  const unsubscribeStreamEnded = aiEventClient.on(
+  // Subscribe to text:request:completed to finalize and save recording
+  const unsubscribeStreamEnded = aiEventClient.on(
     'text:request:completed',
     async (event) => {
-      const { streamId } = event.payload
+      const { streamId, content, finishReason } = event.payload
       if (!shouldRecord(streamId)) return
       const stream = activeStreams.get(streamId)
       if (!stream) {
         return
       }
+      
+      // Update final content and finishReason
+      stream.accumulatedContent = content
+      stream.finishReason = finishReason || null

       try {
         // ... rest of save logic

Also remove unsubscribeChatCompleted() from the stop() cleanup.

Also applies to: 464-510


3-5: Import AIDevtoolsEventMap from the public package API instead of the internal source path.

Line 4 imports from an internal source path (../../../../packages/typescript/ai/src/event-client) while line 3 correctly uses the published package path (@tanstack/ai/event-client). This inconsistency creates unnecessary coupling to internal file structure when the type is available through the public API.

Change the import to:

 import { aiEventClient as baseAiEventClient } from '@tanstack/ai/event-client'
-import type { AIDevtoolsEventMap } from '../../../../packages/typescript/ai/src/event-client'
+import type { AIDevtoolsEventMap } from '@tanstack/ai/event-client'
 import type { StreamChunk } from '@tanstack/ai'
packages/typescript/ai/src/activities/generateTranscription/index.ts (2)

65-67: Consider extracting createId to a shared utility.

This helper is duplicated verbatim in chat/index.ts (line 941-943) and generateImage/index.ts (line 86-88). Extracting it to a shared module (e.g., ../../utils.js) would reduce duplication and ensure consistent ID generation across all activities.
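A shared helper along these lines could replace the three copies. This is a hypothetical sketch of such a utility, and the optional prefix parameter is an illustrative addition, not part of the existing signature:

```typescript
// Hypothetical shared ID helper (e.g. in a common utils module):
// timestamp component for rough ordering, random suffix for uniqueness.
function createId(prefix = 'req'): string {
  return `${prefix}_${Date.now().toString(36)}_${Math.random().toString(36).slice(2, 10)}`
}

const a = createId()
const b = createId('img')
```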


126-140: Consider emitting a failure event on error for observability completeness.

If adapter.transcribe() throws, a transcription:request:started event will be orphaned without a corresponding completion or failure event. For robust telemetry, wrap in try/catch and emit transcription:request:failed on error.

Suggested pattern
+  try {
     const result = await adapter.transcribe({ ...rest, model })
     const duration = Date.now() - startTime

     aiEventClient.emit('transcription:request:completed', {
       requestId,
       provider: adapter.name,
       model,
       text: result.text,
       language: result.language,
       duration,
       modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
       timestamp: Date.now(),
     })

     return result
+  } catch (error) {
+    aiEventClient.emit('transcription:request:failed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      error: error instanceof Error ? error.message : String(error),
+      duration: Date.now() - startTime,
+      timestamp: Date.now(),
+    })
+    throw error
+  }
packages/typescript/ai/src/activities/generateImage/index.ts (1)

158-185: Consider using async/await for consistency and easier error handling.

This function uses .then() chaining while generateTranscription uses async/await. Using async/await would be more consistent across activities and make it easier to add error handling (e.g., emitting image:request:failed on rejection).

Suggested refactor
-  return adapter.generateImages({ ...rest, model }).then((result) => {
-    const duration = Date.now() - startTime
-
-    aiEventClient.emit('image:request:completed', {
-      requestId,
-      provider: adapter.name,
-      model,
-      images: result.images.map((image) => ({
-        url: image.url,
-        b64Json: image.b64Json,
-      })),
-      duration,
-      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-      timestamp: Date.now(),
-    })
-
-    if (result.usage) {
-      aiEventClient.emit('image:usage', {
-        requestId,
-        model,
-        usage: result.usage,
-        modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-        timestamp: Date.now(),
-      })
-    }
-
-    return result
-  })
+  const result = await adapter.generateImages({ ...rest, model })
+  const duration = Date.now() - startTime
+
+  aiEventClient.emit('image:request:completed', {
+    requestId,
+    provider: adapter.name,
+    model,
+    images: result.images.map((image) => ({
+      url: image.url,
+      b64Json: image.b64Json,
+    })),
+    duration,
+    modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+    timestamp: Date.now(),
+  })
+
+  if (result.usage) {
+    aiEventClient.emit('image:usage', {
+      requestId,
+      model,
+      usage: result.usage,
+      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+      timestamp: Date.now(),
+    })
+  }
+
+  return result
packages/typescript/ai-client/src/chat-client.ts (1)

163-172: Consider the empty string fallback for messageId.

When this.currentMessageId is null, the code passes an empty string to approvalRequested. If the event consumer expects a valid message ID for correlation, this could cause issues. Consider whether the guard should also check for currentMessageId:

-          if (this.currentStreamId) {
+          if (this.currentStreamId && this.currentMessageId) {
             this.events.approvalRequested(
               this.currentStreamId,
-              this.currentMessageId || '',
+              this.currentMessageId,
               args.toolCallId,

Alternatively, if an empty messageId is intentional, document this behavior.

packages/typescript/ai/src/activities/summarize/index.ts (1)

222-265: Consider emitting events for streaming summarization.

The streaming path (runStreamingSummarize) doesn't emit summarize:request:started or summarize:request:completed events, while the non-streaming path does. This inconsistency could make it harder to track streaming summarization requests in devtools.

If intentional (e.g., events are emitted differently for streams), a comment clarifying this would be helpful.

packages/typescript/ai/src/activities/generateVideo/index.ts (1)

40-42: Consider centralizing request ID generation.
createId duplicates the helper used in activities/chat/index.ts; extracting a shared utility would prevent drift and keep IDs consistent across activities.

packages/typescript/ai/tests/ai-text.test.ts (1)

145-145: Differentiate duplicate test titles for clearer failures.
Both tests share the same name, which makes failure output ambiguous.

💡 Possible rename
-    it('should emit text:request:started event with correct data', async () => {
+    it('should emit text:request:started event with correct data (messages/tools)', async () => {
@@
-    it('should emit text:request:started event with correct data', async () => {
+    it('should emit text:request:started event with provider/model', async () => {

Also applies to: 175-175

packages/typescript/ai/src/activities/generateSpeech/index.ts (2)

65-67: Consider centralizing request-id generation.

There’s already similar createId logic in other activities (e.g., chat). Extracting a shared helper avoids drift in format/collision behavior across activities.


111-139: Emit a failure event to complete the lifecycle.

If adapter.generateSpeech rejects, speech:request:started is emitted but no terminal event is produced, leaving incomplete traces in observability tooling. Consider emitting a failure event in a catch block (naming aligned with your event conventions).

✅ Suggested fix (add failure emission)
-  return adapter.generateSpeech({ ...rest, model }).then((result) => {
-    const duration = Date.now() - startTime
-
-    aiEventClient.emit('speech:request:completed', {
-      requestId,
-      provider: adapter.name,
-      model,
-      audio: result.audio,
-      format: result.format,
-      audioDuration: result.duration,
-      contentType: result.contentType,
-      duration,
-      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-      timestamp: Date.now(),
-    })
-
-    return result
-  })
+  try {
+    const result = await adapter.generateSpeech({ ...rest, model })
+    const duration = Date.now() - startTime
+
+    aiEventClient.emit('speech:request:completed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      audio: result.audio,
+      format: result.format,
+      audioDuration: result.duration,
+      contentType: result.contentType,
+      duration,
+      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+      timestamp: Date.now(),
+    })
+
+    return result
+  } catch (error) {
+    const duration = Date.now() - startTime
+    aiEventClient.emit('speech:request:failed', {
+      requestId,
+      provider: adapter.name,
+      model,
+      duration,
+      error: error instanceof Error ? error.message : String(error),
+      timestamp: Date.now(),
+    })
+    throw error
+  }

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai/src/activities/chat/index.ts (1)

372-384: Avoid duplicate text:message:created for the same assistant messageId.
Line 377 emits a created event, and Line 685 emits another created event for the same messageId when tool calls are added. If consumers assume message:created is unique per message, this will double-count. Consider emitting an update event instead, or removing the later created emission.

🛠️ Minimal diff to avoid duplicate “created” emission
-    aiEventClient.emit('text:message:created', {
-      ...this.buildTextEventContext(),
-      messageId,
-      role: 'assistant',
-      content: this.accumulatedContent || '',
-      toolCalls,
-      timestamp: Date.now(),
-    })

Also applies to: 675-692

🤖 Fix all issues with AI agents
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 929-935: getContentString currently strips non-text parts and can
return an empty string for multi-modal ModelMessage['content']; update
getContentString to first collect text parts as it does, but if the resulting
text is empty and content is non-string, return a JSON-serialized representation
of the full content (or a clear fallback string containing non-text part
metadata) so non-text data is preserved for consumers; update the logic inside
getContentString (referenced symbol: getContentString and
ModelMessage['content']) to perform the text-join-then-fallback-to-JSON
behavior.

