diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 48902674..b592ffa5 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -94,6 +94,7 @@ ** xref:ai-agents:observability/index.adoc[Transcripts] *** xref:ai-agents:observability/concepts.adoc[Concepts] *** xref:ai-agents:observability/view-transcripts.adoc[View Transcripts] +*** xref:ai-agents:observability/ingest-custom-traces.adoc[Ingest Traces from Custom Agents] * xref:develop:connect/about.adoc[Redpanda Connect] ** xref:develop:connect/connect-quickstart.adoc[Quickstart] diff --git a/modules/ai-agents/pages/observability/concepts.adoc b/modules/ai-agents/pages/observability/concepts.adoc index 5cf30a1b..1d891dd9 100644 --- a/modules/ai-agents/pages/observability/concepts.adoc +++ b/modules/ai-agents/pages/observability/concepts.adoc @@ -316,7 +316,7 @@ The `events` array captures what happened and when. Use `timeUnixNano` to see ex [[opentelemetry-traces-topic]] == How Redpanda stores trace data -The `redpanda.otel_traces` topic stores OpenTelemetry spans using Redpanda's Schema Registry wire format with a custom Protobuf schema named `redpanda.otel_traces-value` that closely follows the https://opentelemetry.io/docs/specs/otel/protocol/[OpenTelemetry Protocol (OTLP)^] specification. This schema is automatically registered in the Schema Registry with the topic, enabling clients to deserialize trace data correctly. +The `redpanda.otel_traces` topic stores OpenTelemetry spans using Redpanda's Schema Registry wire format, with a custom Protobuf schema named `redpanda.otel_traces-value` that follows the https://opentelemetry.io/docs/specs/otel/protocol/[OpenTelemetry Protocol (OTLP)^] specification. Spans include attributes following OpenTelemetry https://opentelemetry.io/docs/specs/semconv/gen-ai/[semantic conventions for generative AI^], such as `gen_ai.operation.name` and `gen_ai.conversation.id`. 
The schema is automatically registered in the Schema Registry with the topic, so Kafka clients can consume and deserialize trace data correctly. Redpanda manages both the `redpanda.otel_traces` topic and its schema automatically. If you delete either the topic or the schema, they are recreated automatically. However, deleting the topic permanently deletes all trace data, and the topic comes back empty. Do not produce your own data to this topic. It is reserved for OpenTelemetry traces. diff --git a/modules/ai-agents/pages/observability/ingest-custom-traces.adoc b/modules/ai-agents/pages/observability/ingest-custom-traces.adoc new file mode 100644 index 00000000..96a8656b --- /dev/null +++ b/modules/ai-agents/pages/observability/ingest-custom-traces.adoc @@ -0,0 +1,457 @@ += Ingest OpenTelemetry Traces from Custom Agents +:description: Configure a Redpanda Connect pipeline to ingest OTEL traces from custom agents into Redpanda for unified observability. +:page-topic-type: how-to +:learning-objective-1: Configure a Redpanda Connect pipeline to receive OpenTelemetry traces from custom agents via HTTP and publish them to redpanda.otel_traces +:learning-objective-2: Validate trace data format and compatibility with existing MCP server traces +:learning-objective-3: Secure the ingestion endpoint using authentication mechanisms + +When you build custom agents or instrument applications outside of Remote MCP servers and declarative agents, you can send OpenTelemetry (OTEL) traces to Redpanda for centralized observability. Deploy a Redpanda Connect pipeline as an HTTP ingestion endpoint to collect and publish traces to the `redpanda.otel_traces` topic. 
+ +After reading this page, you will be able to: + +* [ ] {learning-objective-1} +* [ ] {learning-objective-2} +* [ ] {learning-objective-3} + +== Prerequisites + +* A BYOC cluster +* Ability to manage secrets in Redpanda Cloud +* The latest version of `rpk` installed +* Custom agent or application instrumented with OpenTelemetry SDK +* Basic understanding of the https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/[OpenTelemetry span format^] and https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP)^] + +== Quickstart for LangChain users + +If you're using LangChain with OpenTelemetry tracing, you can send traces to Redpanda's `redpanda.otel_traces` glossterm:topic[] to view them in the Transcripts view. + +. Configure LangChain's OpenTelemetry integration by following the https://docs.langchain.com/langsmith/trace-with-opentelemetry[LangChain documentation^]. + +. Deploy a Redpanda Connect pipeline using the `otlp_http` input to receive OTLP traces over HTTP. Create the pipeline in the *Connect* page of your cluster, or use the sample pipeline configuration later on this page. + +. Configure your OTEL exporter to send traces to your Redpanda Connect pipeline using environment variables: + +[,bash] +---- +# Configure LangChain OTEL integration +export LANGSMITH_OTEL_ENABLED=true +export LANGSMITH_TRACING=true + +# Send traces to Redpanda Connect pipeline +export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-pipeline-endpoint>:4318" +export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-token>" +---- + +By default, traces are sent to both LangSmith and your Redpanda Connect pipeline. If you want to send traces only to Redpanda (not LangSmith), set: + +[,bash] +---- +export LANGSMITH_OTEL_ONLY="true" +---- + +Your LangChain application sends traces to the `redpanda.otel_traces` topic, making them visible in the Transcripts view in your cluster alongside Remote MCP server and declarative agent traces.
+ +For non-LangChain applications or custom instrumentation, continue with the sections below. + +== About custom trace ingestion + +Custom agents include applications you build with OpenTelemetry instrumentation that operate independently of Redpanda's Remote MCP servers or declarative agents. Examples include: + +* Custom AI agents built with LangChain, CrewAI, or other frameworks +* Applications with manual OpenTelemetry instrumentation +* Services that integrate with third-party AI platforms + +When these applications send traces to Redpanda's `redpanda.otel_traces` glossterm:topic[], you gain unified observability across all agentic components in your system. Custom agent transcripts appear alongside Remote MCP server and declarative agent transcripts in the Transcripts view, creating xref:ai-agents:observability/concepts.adoc#cross-service-transcripts[cross-service transcripts] that allow you to correlate operations and analyze end-to-end request flows. + +=== Trace format requirements + +Custom agents must emit traces in OTLP format. The `otlp_http` input accepts both OTLP Protobuf (`application/x-protobuf`) and JSON (`application/json`) payloads. For <<use-grpc,gRPC transport>>, use the `otlp_grpc` input.
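To make the payload shape concrete, the following Python sketch assembles a minimal OTLP/JSON export request and posts it to an ingestion endpoint over HTTP. This is an illustrative sketch only: the endpoint URL and bearer token are placeholders, and the helper names (`build_minimal_otlp_payload`, `post_traces`) are not part of any Redpanda or OpenTelemetry API.

```python
import json
import os
import time
import urllib.request


def build_minimal_otlp_payload(trace_id: str, span_id: str) -> dict:
    """Build a minimal OTLP/JSON trace export request containing one span."""
    now = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": "my-custom-agent"},
            }]},
            "scopeSpans": [{
                "scope": {"name": "example-instrumentation", "version": "0.1.0"},
                "spans": [{
                    "traceId": trace_id,    # 32 hex characters (16 bytes)
                    "spanId": span_id,      # 16 hex characters (8 bytes)
                    "name": "invoke_agent my-assistant",
                    # OTLP/JSON encodes 64-bit integers as strings
                    "startTimeUnixNano": str(now),
                    "endTimeUnixNano": str(now + 5_000_000),
                    "status": {"code": 1},  # 1 = OK in the OTLP status enum
                }],
            }],
        }],
    }


def post_traces(endpoint: str, token: str, payload: dict) -> int:
    """POST the payload to the endpoint's /v1/traces path; return HTTP status."""
    req = urllib.request.Request(
        endpoint.rstrip("/") + "/v1/traces",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = build_minimal_otlp_payload(os.urandom(16).hex(), os.urandom(8).hex())
# Replace the placeholders with your pipeline's endpoint and token before running:
# status = post_traces("https://<your-pipeline-endpoint>:4318", "<your-token>", payload)
```

Serializing `payload` with `json.dumps` and saving it to a file also gives you a known-good request body for ad-hoc testing with `curl` or similar tools.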
+ +Each trace must follow the OTLP specification with these required fields: + +[cols="1,3", options="header"] +|=== +| Field | Description + +| `traceId` +| Hex-encoded unique identifier for the entire trace + +| `spanId` +| Hex-encoded unique identifier for this span + +| `name` +| Descriptive operation name + +| `startTimeUnixNano` and `endTimeUnixNano` +| Timing information in nanoseconds + +| `instrumentationScope` +| Identifies the library that created the span + +| `status` +| Operation status with code (0 = Unset, 1 = OK, 2 = Error) +|=== + +Optional but recommended fields: + +* `parentSpanId` for hierarchical traces +* `attributes` for contextual information + +For complete trace structure details, see xref:ai-agents:observability/concepts.adoc#understand-the-transcript-structure[Understand the transcript structure]. + +== Configure the ingestion pipeline + +Create a Redpanda Connect pipeline that receives HTTP requests containing OTLP traces and publishes them to the `redpanda.otel_traces` topic. The pipeline uses the `otlp_http` input component, which is specifically designed to receive OpenTelemetry Protocol data. + +=== Create the pipeline configuration + +Create a pipeline configuration file that defines the OTLP HTTP ingestion endpoint.
+ +The `otlp_http` input component: + +* Exposes an OpenTelemetry Collector HTTP receiver +* Accepts traces at the standard `/v1/traces` endpoint +* Listens on port 4318 by default (standard OTLP/HTTP port) +* Converts incoming OTLP data into individual Redpanda OTEL v1 Protobuf messages and publishes them to the `redpanda.otel_traces` topic + +Create a file named `trace-ingestion.yaml`: + +[,yaml] +---- +input: + otlp_http: + address: "0.0.0.0:4318" + auth_token: "${secrets.TRACE_AUTH_TOKEN}" + max_body_size: 4194304 # 4MB default + read_timeout: "10s" + write_timeout: "10s" + +output: + redpanda: + seed_brokers: ["${REDPANDA_BROKERS}"] + topic: "redpanda.otel_traces" + compression: snappy + max_in_flight: 10 +---- + +The `otlp_http` input automatically handles format conversion, so no processors are needed for basic trace ingestion. Each span becomes a separate message in the `redpanda.otel_traces` topic. + +[[use-grpc]] +==== Alternative: Use gRPC instead of HTTP + +If your custom agent requires gRPC transport, use the `otlp_grpc` input instead: + +[,yaml] +---- +input: + otlp_grpc: + address: "0.0.0.0:4317" # Standard OTLP/gRPC port + auth_token: "${secrets.TRACE_AUTH_TOKEN}" + max_recv_msg_size: 4194304 + +output: + redpanda: + seed_brokers: ["${REDPANDA_BROKERS}"] + topic: "redpanda.otel_traces" + compression: snappy + max_in_flight: 10 +---- + +The gRPC input works identically to HTTP but uses Protobuf encoding over gRPC. Clients must include the authentication token in gRPC metadata as `authorization: Bearer <token>`. + +=== Deploy the pipeline in Redpanda Cloud + +. In the *Connect* page of your Redpanda Cloud cluster, click *Create Pipeline*. +. For the input, select the *otlp_http* (or *otlp_grpc*) component. +. Skip to *Add a topic* and select `redpanda.otel_traces` from the list of existing topics. Leave the default advanced settings. +. In the *Add permissions* step, you can create a service account with write access to the `redpanda.otel_traces` topic. +.
In the *Create pipeline* step, enter a name for your ingestion pipeline and paste your `trace-ingestion.yaml` configuration. Ensure that you've created the `TRACE_AUTH_TOKEN` secret that the configuration references. + +== Send traces from your custom agent + +Configure your custom agent to send OpenTelemetry traces to the ingestion endpoint. The endpoint accepts traces in OTLP format via HTTP on port 4318 at the `/v1/traces` path. + +=== Configure your OTEL exporter + +Install the OpenTelemetry SDK for your language and configure the OTLP exporter to target your Redpanda Connect pipeline endpoint. + +The exporter configuration requires: + +* **Endpoint**: Your pipeline's URL including the `/v1/traces` path +* **Headers**: Authorization header with your bearer token +* **Protocol**: HTTP to match the `otlp_http` input (or gRPC for `otlp_grpc`) + +.Python example for OTLP HTTP exporter +[,python] +---- +from opentelemetry import trace +from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter +from opentelemetry.sdk.trace import TracerProvider +from opentelemetry.sdk.trace.export import BatchSpanProcessor +from opentelemetry.sdk.resources import Resource + +# Configure resource attributes to identify your agent +resource = Resource(attributes={ + "service.name": "my-custom-agent", + "service.version": "1.0.0" +}) + +# Configure the OTLP HTTP exporter +exporter = OTLPSpanExporter( + endpoint="https://<your-pipeline-endpoint>:4318/v1/traces", + headers={"Authorization": "Bearer YOUR_TOKEN"} +) + +# Set up tracing with batch processing +provider = TracerProvider(resource=resource) +processor = BatchSpanProcessor(exporter) +provider.add_span_processor(processor) +trace.set_tracer_provider(provider) + +# Use the tracer with GenAI semantic conventions +tracer = trace.get_tracer(__name__) +with tracer.start_as_current_span( + "invoke_agent my-assistant", + kind=trace.SpanKind.INTERNAL +) as span: + # Set GenAI semantic convention attributes +
span.set_attribute("gen_ai.operation.name", "invoke_agent") + span.set_attribute("gen_ai.agent.name", "my-assistant") + span.set_attribute("gen_ai.provider.name", "openai") + span.set_attribute("gen_ai.request.model", "gpt-4") + + # Your agent logic here + result = process_request() + + # Set token usage if available + span.set_attribute("gen_ai.usage.input_tokens", 150) + span.set_attribute("gen_ai.usage.output_tokens", 75) +---- + +.Node.js example for OTLP HTTP exporter +[,javascript] +---- +const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node'); +const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http'); +const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); +const { Resource } = require('@opentelemetry/resources'); +const { trace, SpanKind } = require('@opentelemetry/api'); + +// Configure resource +const resource = new Resource({ + 'service.name': 'my-custom-agent', + 'service.version': '1.0.0' +}); + +// Configure OTLP HTTP exporter +const exporter = new OTLPTraceExporter({ + url: 'https://your-pipeline-endpoint.redpanda.cloud:4318/v1/traces', + headers: { + 'Authorization': 'Bearer YOUR_TOKEN' + } +}); + +// Set up provider +const provider = new NodeTracerProvider({ resource }); +provider.addSpanProcessor(new BatchSpanProcessor(exporter)); +provider.register(); + +// Use the tracer with GenAI semantic conventions +const tracer = trace.getTracer('my-agent'); +const span = tracer.startSpan('invoke_agent my-assistant', { + kind: SpanKind.INTERNAL +}); + +// Set GenAI semantic convention attributes +span.setAttribute('gen_ai.operation.name', 'invoke_agent'); +span.setAttribute('gen_ai.agent.name', 'my-assistant'); +span.setAttribute('gen_ai.provider.name', 'openai'); +span.setAttribute('gen_ai.request.model', 'gpt-4'); + +// Your agent logic +processRequest().then(result => { + // Set token usage if available + span.setAttribute('gen_ai.usage.input_tokens', 150); + 
span.setAttribute('gen_ai.usage.output_tokens', 75); + span.end(); +}); +---- + +TIP: Use environment variables for the endpoint URL and authentication token to keep credentials out of your code. + +=== Use recommended semantic conventions + +The Transcripts view recognizes https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/[OpenTelemetry semantic conventions for GenAI operations^]. Following these conventions ensures your traces display correctly with proper attribution, token usage, and operation identification. + +==== Required attributes for agent operations + +Following the OpenTelemetry semantic conventions, agent spans should include these attributes: + +* Operation identification: +** `gen_ai.operation.name` - Set to `"invoke_agent"` for agent execution spans +** `gen_ai.agent.name` - Human-readable name of your agent (displayed in Transcripts view) +* LLM provider details: +** `gen_ai.provider.name` - LLM provider identifier (e.g., `"openai"`, `"anthropic"`, `"gcp.vertex_ai"`) +** `gen_ai.request.model` - Model name (e.g., `"gpt-4"`, `"claude-sonnet-4"`) +* Token usage (for cost tracking): +** `gen_ai.usage.input_tokens` - Number of input tokens consumed +** `gen_ai.usage.output_tokens` - Number of output tokens generated +* Session correlation: +** `gen_ai.conversation.id` - Identifier linking related agent invocations in the same conversation + +==== Example with semantic conventions + +.Python example with GenAI semantic conventions +[,python] +---- +from opentelemetry import trace + +tracer = trace.get_tracer(__name__) + +# Create an agent invocation span +with tracer.start_as_current_span( + "invoke_agent my-assistant", + kind=trace.SpanKind.INTERNAL +) as span: + # Set required attributes + span.set_attribute("gen_ai.operation.name", "invoke_agent") + span.set_attribute("gen_ai.agent.name", "my-assistant") + span.set_attribute("gen_ai.provider.name", "openai") + span.set_attribute("gen_ai.request.model", "gpt-4") + 
span.set_attribute("gen_ai.conversation.id", "session-abc-123") + + # Your agent logic here + response = process_agent_request(user_input) + + # Set token usage after completion + span.set_attribute("gen_ai.usage.input_tokens", response.usage.input_tokens) + span.set_attribute("gen_ai.usage.output_tokens", response.usage.output_tokens) +---- + +.Node.js example with GenAI semantic conventions +[,javascript] +---- +const { trace, SpanKind } = require('@opentelemetry/api'); + +const tracer = trace.getTracer('my-agent'); + +const span = tracer.startSpan('invoke_agent my-assistant', { + kind: SpanKind.INTERNAL +}); + +// Set required attributes +span.setAttribute('gen_ai.operation.name', 'invoke_agent'); +span.setAttribute('gen_ai.agent.name', 'my-assistant'); +span.setAttribute('gen_ai.provider.name', 'openai'); +span.setAttribute('gen_ai.request.model', 'gpt-4'); +span.setAttribute('gen_ai.conversation.id', 'session-abc-123'); + +// Your agent logic +processAgentRequest(userInput).then(response => { + // Set token usage + span.setAttribute('gen_ai.usage.input_tokens', response.usage.inputTokens); + span.setAttribute('gen_ai.usage.output_tokens', response.usage.outputTokens); + + span.end(); +}); +---- + +=== Validate trace format + +Before deploying to production, verify your traces match the expected format. + +//// +* How to validate trace format against schema +* Common format issues and solutions +* Tools for format validation +//// + +Test your agent locally and inspect the traces it produces. With SDKs configured through environment variables, such as Python's `opentelemetry-instrument`, you can print spans to stdout instead of exporting them (`my_agent.py` is a placeholder for your own entry point): + +[,bash] +---- +# Print spans to stdout instead of exporting them +export OTEL_TRACES_EXPORTER=console +opentelemetry-instrument python my_agent.py +---- + +== Verify trace ingestion + +After deploying your pipeline and configuring your custom agent, verify traces are flowing correctly.
+ +=== Consume traces from the topic + +Check that traces are being published to the `redpanda.otel_traces` topic: + +[,bash] +---- +rpk topic consume redpanda.otel_traces --offset end -n 10 +---- + +You can also view the `redpanda.otel_traces` topic in the *Topics* page of the Redpanda Cloud UI. + +Look for spans with your custom `instrumentationScope.name` to identify traces from your agent. + +=== View traces in Transcripts + +After your custom agent sends traces through the pipeline, they appear in your cluster's *Agentic AI > Transcripts* view alongside traces from Remote MCP servers and declarative agents. + +==== Identify custom agent transcripts + +Custom agent transcripts are identified by the `service.name` resource attribute, which differs from Redpanda's built-in services (`ai-agent` for declarative agents, `mcp-{server-id}` for MCP servers). See xref:ai-agents:observability/concepts.adoc#cross-service-transcripts[Cross-service transcripts] to understand how the `service.name` attribute identifies transcript sources. + +Your custom agent transcripts display with: + +* **Service name** in the service filter dropdown (from your `service.name` resource attribute) +* **Agent name** in span details (from the `gen_ai.agent.name` attribute) +* **Operation names** like `"invoke_agent my-assistant"` indicating agent executions + +For detailed instructions on filtering and navigating transcripts in the UI, see xref:ai-agents:observability/view-transcripts.adoc[View Transcripts]. + +==== Token usage tracking + +If your spans include the recommended token usage attributes (`gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens`), they display in the summary panel's token usage section. This enables cost tracking alongside Remote MCP server and declarative agent transcripts.
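If you consume `redpanda.otel_traces` with your own Kafka client rather than `rpk`, remember that each record value is framed in the Schema Registry wire format: a zero magic byte and a big-endian 32-bit schema ID precede the Protobuf payload (for Protobuf schemas, a message-index list, typically a single zero byte for the schema's first message, follows the schema ID). The sketch below shows how you might split off that framing; the function name is illustrative, not part of any Redpanda API.

```python
import struct


def split_sr_framing(value: bytes) -> tuple[int, bytes]:
    """Split a Schema Registry framed record value into (schema_id, payload).

    Framing: one magic byte (0x00), a 4-byte big-endian schema ID, then the
    serialized payload. For Protobuf subjects the payload itself starts with
    a message-index list before the encoded message.
    """
    if len(value) < 5 or value[0] != 0x00:
        raise ValueError("record is not in Schema Registry wire format")
    (schema_id,) = struct.unpack(">I", value[1:5])
    return schema_id, value[5:]


# Demo with a synthetic record value: schema ID 7, then payload bytes.
schema_id, payload = split_sr_framing(b"\x00\x00\x00\x00\x07" + b"\x00span-bytes")
print(schema_id)  # 7
```

Hand the remaining payload to a Protobuf deserializer generated from the registered `redpanda.otel_traces-value` schema to decode the span itself.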
+ +== Troubleshooting + +//// +* Common issues and solutions +* How to monitor pipeline health +* Log locations and debugging techniques +* Failure modes and diagnostics + +//// + +=== Pipeline not receiving requests + +If your custom agent cannot reach the ingestion endpoint: + +. Verify the endpoint URL includes the correct port and path: + * HTTP: `https://your-endpoint:4318/v1/traces` + * gRPC: `https://your-endpoint:4317` +. Check network connectivity and firewall rules. +. Ensure authentication tokens are valid and properly formatted in the `Authorization: Bearer <token>` header (HTTP) or the `authorization` metadata field (gRPC). +. Verify that the `Content-Type` header matches your data format (`application/x-protobuf` or `application/json`). +. Review pipeline logs for connection errors or authentication failures. + +=== Traces not appearing in topic + +If requests succeed but traces do not appear in `redpanda.otel_traces`: + +. Check the pipeline output configuration for errors. +. Verify that the pipeline's service account has write access to the topic. +. Validate that your trace format matches the OTLP specification. + +== Limitations + +* The `otlp_http` and `otlp_grpc` inputs accept only traces, logs, and metrics, not profiles. +* Only traces are published to the `redpanda.otel_traces` topic. +* Requests that exceed rate limits receive HTTP 429 (HTTP) or a `RESOURCE_EXHAUSTED` status (gRPC).
+ +== Next steps + +* xref:ai-agents:observability/view-transcripts.adoc[] +* xref:ai-agents:agents/monitor-agents.adoc[Observability for declarative agents] +* https://docs.redpanda.com/redpanda-connect/components/inputs/otlp_http/[OTLP HTTP input reference^] - Complete configuration options for the `otlp_http` component +* https://docs.redpanda.com/redpanda-connect/components/inputs/otlp_grpc/[OTLP gRPC input reference^] - Alternative gRPC-based trace ingestion diff --git a/modules/ai-agents/pages/observability/view-transcripts.adoc b/modules/ai-agents/pages/observability/view-transcripts.adoc index f21ee804..851c9f9b 100644 --- a/modules/ai-agents/pages/observability/view-transcripts.adoc +++ b/modules/ai-agents/pages/observability/view-transcripts.adoc @@ -1,12 +1,12 @@ = View Transcripts -:description: Learn how to filter, search, and navigate the Transcripts interface to investigate agent execution traces using multiple detail views and interactive timeline navigation. +:description: Learn how to filter and navigate the Transcripts interface to investigate agent execution traces using multiple detail views and interactive timeline navigation. :page-topic-type: how-to :personas: agent_developer, platform_admin -:learning-objective-1: Filter and search transcripts to find specific execution traces +:learning-objective-1: Filter transcripts to find specific execution traces :learning-objective-2: Navigate between detail views to inspect span information at different levels :learning-objective-3: Use the timeline interactively to navigate to specific time periods -The Transcripts view provides filtering, searching, and navigation capabilities for investigating agent and MCP server execution transcripts. Use these features to efficiently locate specific operations, analyze performance patterns, and debug issues across tool invocations, LLM calls, and agent reasoning steps. 
+The Transcripts view provides filtering and navigation capabilities for investigating agent, MCP server, and AI Gateway execution glossterm:transcript[transcripts]. Use this view to quickly locate specific operations, analyze performance patterns, and debug issues across glossterm:tool[] invocations, LLM calls, and glossterm:agent[] reasoning steps. After reading this page, you will be able to: @@ -14,7 +14,13 @@ After reading this page, you will be able to: * [ ] {learning-objective-2} * [ ] {learning-objective-3} -For basic orientation on agent and MCP server monitoring, see xref:ai-agents:agents/monitor-agents.adoc[] or xref:ai-agents:mcp/remote/monitor-mcp-servers.adoc[]. For conceptual background on what transcripts capture and how spans are organized hierarchically, see xref:ai-agents:observability/concepts.adoc[]. +For basic orientation on monitoring each Redpanda Agentic Data Plane component, see: + +* xref:ai-agents:ai-gateway/observability-metrics.adoc[] +* xref:ai-agents:agents/monitor-agents.adoc[] +* xref:ai-agents:mcp/remote/monitor-mcp-servers.adoc[] + +For conceptual background on what transcripts capture and how glossterm:span[spans] are organized hierarchically, see xref:ai-agents:observability/concepts.adoc[]. == Prerequisites @@ -27,37 +33,17 @@ For basic orientation on agent and MCP server monitoring, see xref:ai-agents:age Use the timeline visualization to quickly identify when errors began or patterns changed, and navigate directly to transcripts from particular timestamps. -When viewing time periods with many transcripts (hundreds or thousands), the timeline displays a subset of the data to maintain performance and usability. The timeline bar indicates the actual time range of currently visible data, which may be narrower than your selected range. +When viewing time periods with many transcripts (hundreds or thousands), the timeline displays a subset of the data to maintain performance and usability. 
The timeline bar indicates the actual time range of currently visible data, which may be narrower than your selected time range. TIP: See xref:ai-agents:agents/monitor-agents.adoc[] and xref:ai-agents:mcp/remote/monitor-mcp-servers.adoc[] to learn basic execution patterns and health indicators to investigate. -=== Search and filter for transcripts - -Use search and filters together to narrow down transcripts and quickly locate specific executions. - -==== Search for specific transcripts +=== Filter transcripts -The search functionality helps you find transcripts by operation names, span types, or identifiers: +Use filters to narrow down transcripts and quickly locate specific executions. When you use any of the filters, the transcript list updates to show only matching results. You can toggle *Full transcript* on to see the complete execution context, in grayed-out text, for the filtered transcripts. -* Search by span names to find specific xref:ai-agents:observability/concepts.adoc#agent-span-types[agent operations] like `invoke_agent`, or xref:ai-agents:mcp/remote/create-tool.adoc[MCP tools] -* Search by xref:ai-agents:observability/concepts.adoc#instrumentation-layers[scope] to filter by layer (for example, `rpcn-mcp` for MCP tool spans) -* Search by trace IDs (`traceId`) when correlating with external systems or troubleshooting specific requests +==== Filter by attribute -==== Filter by service - -Service filtering shows only transcripts from specific agents or MCP servers using the `service.name` resource attribute. See xref:ai-agents:observability/concepts.adoc#cross-service-transcripts[Cross-service transcripts] to understand how transcripts span multiple services.
- -* View executions from a single agent when multiple are running (service name: `ai-agent`) -* Isolate MCP server activity from agent activity (service name: `mcp-{server-id}`) -* Compare behavior across different service instances - -==== Filter by execution status - -Status filtering shows transcripts based on their execution outcome: - -* Show successful executions for health checks -* Show only failed executions for error investigation -* Toggle between success and error views to compare and analyze patterns +// Add details when available ==== Adjust time range @@ -67,15 +53,13 @@ Use the time range selector to focus on specific time periods (from the last fiv * Expand to longer periods for trend analysis over the last day * Narrow to specific time windows when investigating issues that occurred at known times -TIP: Apply broad filters first (time range, service) to reduce the transcript set, then use search to narrow to specific operations. - == Inspect span details -Each row in the transcript table represents a high-level agent or MCP server request flow. Expand each parent span to see the xref:ai-agents:observability/concepts.adoc#agent-transcript-hierarchy[hierarchical structure] of nested operations, including tool calls, LLM interactions, and internal processing steps. Parent-child spans show how operations relate: for example, an agent invocation (parent) triggers LLM calls and tool executions (children). +Each row in the transcript table represents a high-level agent or MCP server request flow. Expand each parent glossterm:span[] to see the xref:ai-agents:observability/concepts.adoc#agent-transcript-hierarchy[hierarchical structure] of nested operations, including tool calls, LLM interactions, and internal processing steps. Parent-child spans show how operations relate: for example, an agent invocation (parent) triggers LLM calls and tool executions (children). 
-When agents invoke remote MCP servers, transcripts fold together across service boundaries to provide a unified view of the complete operation. The trace ID originates at the initial request touchpoint and propagates across all involved services, linking spans from both the agent and MCP server under a single transcript. Use the tree view to follow the trace flow across multiple services and understand the complete request lifecycle. +When agents invoke remote MCP servers, transcripts fold together under a tree structure to provide a unified view of the complete operation across service boundaries. The glossterm:trace ID[] originates at the initial request touchpoint and propagates across all involved services, linking spans from both the agent and MCP server under a single transcript. Use the tree view to follow the trace flow across multiple services and understand the complete request lifecycle. -If you use external agents that directly invoke MCP servers in the Redpanda Agentic Data Plane, you may only see MCP-level parent transcripts, unless you have configured the agents to also emit traces to the Redpanda OTEL ingestion pipeline. +If you use external agents that directly invoke MCP servers in the Redpanda Agentic Data Plane, you may only see MCP-level parent transcripts, unless you have configured the agents to also emit traces to the Redpanda glossterm:OpenTelemetry[OTEL] ingestion pipeline. Selected spans display detailed information at multiple levels, from high-level summaries to complete raw data: @@ -100,7 +84,7 @@ TIP: Expand the summary panel to full view to easily read long conversations. === Detailed attributes view -The attributes view shows structured metadata for each transcript span. Use this view to quickly locate an attribute value such as conversation ID, then paste it into the search box to find all operations from that conversation session. 
See xref:ai-agents:observability/concepts.adoc#key-attributes-by-layer[Transcripts and AI Observability] for details on standard attributes by instrumentation layer. +The attributes view shows structured metadata for each transcript span. Use this view to inspect span attributes and understand the context of each operation. See xref:ai-agents:observability/concepts.adoc#key-attributes-by-layer[Transcripts and AI Observability] for details on standard attributes by instrumentation layer. === Raw data view