@ThomasK33 (Member)
This PR adds an Agent Client Protocol (ACP) bridge via mux acp so editors like Zed can create a Mux workspace and interact with it over stdio.

Implementation

  • ✅ CLI plumbing: mux acp subcommand with flags for server discovery, runtime, project, etc.
  • ✅ ACP stdio server using the ACP TypeScript SDK
  • ✅ Server discovery helper (centralizes logic used by both mux acp and mux api)
  • ✅ Mux oRPC client (HTTP + WebSocket)
  • ✅ Session mapping: ACP sessions ⇄ Mux workspaces
  • ✅ Chat streaming: Mux workspace.onChat → ACP session/update notifications
  • ✅ Prompt handling: session/prompt sends message and awaits completion
  • ✅ Cancel: session/cancel calls workspace.interruptStream
  • ✅ Tests: unit tests for ACP utils (event translation, prompt parsing)
  • ✅ Documentation: docs/acp.mdx with Zed setup example

Testing

  • Run bun test src/cli/acpUtils.test.ts for unit tests
  • Manual test: start mux server, then run mux acp and send ACP messages over stdin
  • Zed integration: configure agent_servers in Zed settings (see docs)

📋 Implementation Plan

Plan: Add mux acp (stdio ACP bridge) subcommand

Goal

Expose a stdio-compliant Agent Client Protocol (ACP) endpoint via mux acp so editors (Zed, etc.) can create a Mux workspace/chat and interact with it.

Non-goals (initial scope)

  • Replacing Mux's existing UI/IPC architecture.
  • Supporting remote (non-localhost) access.
  • Full feature parity with all Mux UI features.

Recommended approach (summary)

Implement mux acp as an ACP Agent (stdio JSON-RPC/NDJSON) that acts as a thin bridge to Mux's existing oRPC HTTP/WebSocket API server.

  • ACP client (Zed/editor) spawns mux acp and speaks ACP over stdin/stdout.
  • mux acp discovers a running Mux API server (desktop already starts one on localhost unless disabled) via ~/.mux/server.lock or MUX_SERVER_URL, and calls:
    • workspace.create / workspace.list
    • workspace.sendMessage
    • workspace.onChat (stream)
    • workspace.interruptStream (cancel)

```mermaid
flowchart TD
  Editor["Editor (ACP client)"] -->|"stdin/stdout (ACP NDJSON)"| Acp["mux acp (ACP Agent bridge)"]
  Acp -->|"HTTP (request/response)"| Orpc["Mux oRPC server (desktop or mux server)"]
  Acp -->|"WebSocket (stream)"| Orpc
  Orpc --> Workspace["WorkspaceService / AgentSession"]
```
  • No fundamental backend changes are required for the core "create workspace + chat + stream responses" loop because the oRPC surface already exists.

Net LoC estimate (product code only): ~700–1000 LoC (mostly the ACP↔Mux translation + streaming/cancel plumbing).


Implementation plan

1) Add mux acp CLI plumbing

  • Add a new subcommand module src/cli/acp.ts following the same commander + lazy-loading pattern used by src/cli/run.ts and src/cli/server.ts.
  • Register the subcommand in src/cli/index.ts (help stub + lazy-load branch).

CLI flags (MVP):

  • --server-url <url> (override discovery)
  • --server-token <token> (override discovery)
  • --project <path> (default project root; used when ACP request omits workingDirectory)
  • --workspace <id|name> (optional: attach instead of creating)
  • --runtime <local|worktree|ssh> (default: local to operate in the editor's folder)
  • --trunk-branch <name> (required if --runtime is worktree or ssh)
  • --log-level <level> (ensure logs go to stderr, never stdout)

Why flags matter: editors usually need a deterministic command line to spawn.
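
A minimal sketch of this flag surface using Node's built-in `parseArgs` (the real subcommand would follow the commander pattern in src/cli/run.ts; flag names mirror the list above, and the `--trunk-branch` validation shown is the rule stated earlier):

```typescript
// Sketch only: parses the MVP flags and enforces the trunk-branch rule.
import { parseArgs } from "node:util";

export interface AcpCliOptions {
  serverUrl?: string;
  serverToken?: string;
  project: string;
  workspace?: string;
  runtime: "local" | "worktree" | "ssh";
  trunkBranch?: string;
  logLevel: string;
}

export function parseAcpArgs(argv: string[]): AcpCliOptions {
  const { values } = parseArgs({
    args: argv,
    options: {
      "server-url": { type: "string" },
      "server-token": { type: "string" },
      project: { type: "string" },
      workspace: { type: "string" },
      runtime: { type: "string", default: "local" },
      "trunk-branch": { type: "string" },
      "log-level": { type: "string", default: "info" },
    },
  });
  const runtime = (values.runtime ?? "local") as AcpCliOptions["runtime"];
  // --trunk-branch is mandatory for runtimes that create git worktrees/branches.
  if ((runtime === "worktree" || runtime === "ssh") && !values["trunk-branch"]) {
    throw new Error(`--trunk-branch is required when --runtime is ${runtime}`);
  }
  return {
    serverUrl: values["server-url"],
    serverToken: values["server-token"],
    project: values.project ?? process.cwd(),
    workspace: values.workspace,
    runtime,
    trunkBranch: values["trunk-branch"],
    logLevel: values["log-level"] ?? "info",
  };
}
```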


2) Implement ACP stdio server using the ACP TypeScript SDK

  • Add the ACP TS SDK dependency.
  • Use the SDK's NDJSON stdio helper (ndJsonStream) + AgentSideConnection.
  • Implement an Agent class (e.g., MuxAcpAgent) with required ACP methods:
    • initialize
    • session/new
    • session/prompt
    • session/cancel
    • authenticate (no-op for MVP unless we choose to expose auth at the ACP layer)

Hard requirement: keep stdout reserved for ACP messages only. Route all logs/diagnostics to stderr.


3) Connect to a running mux instance (server discovery)

Reuse the existing server discovery model already used by mux api:

  • MUX_SERVER_URL env var override
  • else read ~/.mux/server.lock (baseUrl + token)
  • else fail with actionable error ("start mux server or enable API server in desktop settings")

Implementation detail: centralize this in a helper (so both ACP and future subcommands can reuse it).
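
A sketch of that helper, assuming server.lock is a JSON file containing `baseUrl` and `token` (field names are assumptions to be confirmed against the existing mux api implementation):

```typescript
// Sketch: MUX_SERVER_URL override, else lockfile, else actionable error.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

export interface ServerTarget {
  baseUrl: string;
  token?: string;
}

export function discoverServer(
  env: NodeJS.ProcessEnv = process.env,
  lockPath: string = join(homedir(), ".mux", "server.lock"),
): ServerTarget {
  // 1) Explicit override wins.
  if (env.MUX_SERVER_URL) {
    return { baseUrl: env.MUX_SERVER_URL };
  }
  // 2) Fall back to the lockfile written by the desktop app / `mux server`.
  try {
    const lock = JSON.parse(readFileSync(lockPath, "utf8"));
    if (lock.baseUrl) return { baseUrl: lock.baseUrl, token: lock.token };
  } catch {
    // fall through to the actionable error below
  }
  // 3) Actionable failure, as described above.
  throw new Error(
    "No running mux server found: start `mux server`, enable the API server " +
      "in desktop settings, or set MUX_SERVER_URL.",
  );
}
```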


4) Implement a typed Mux oRPC client (HTTP + WebSocket)

  • Create a small client wrapper that can:
    • call request/response endpoints over HTTP (workspace.create, workspace.sendMessage, …)
    • subscribe to streaming endpoints over WebSocket (workspace.onChat, workspace.onMetadata)

MVP streaming: only workspace.onChat is required.


5) Map ACP sessions ⇄ Mux workspaces

Use Mux workspace IDs as ACP sessionId to avoid translation tables.

session/new behavior

  1. Determine project path:
    • Prefer ACP workingDirectory (must be absolute per spec)
    • Else fall back to --project CLI flag / process.cwd()
  2. Decide runtime + workspace name:
    • Default to project-dir local runtime (runtimeConfig: { type: "local" }) so any file edits happen in the editor's working directory.
    • Use a unique workspace name (Mux API calls this branchName) like acp/<short-uuid> so each ACP session gets an isolated chat history without touching git branches in LocalRuntime.
  3. Determine workspace:
    • If --workspace given: attach (validate exists via workspace.getInfo)
    • Else create via workspace.create({ projectPath, branchName, trunkBranch?, runtimeConfig }).
  4. Start a workspace.onChat subscription immediately for that workspace and keep it open.
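
The session/new steps above can be sketched with the oRPC client behind a minimal injected interface (method names follow the endpoints listed earlier; parameter shapes are assumptions):

```typescript
// Sketch: session/new maps directly onto workspace.create; workspace IDs
// double as ACP session IDs so no translation table is needed.
import { randomUUID } from "node:crypto";

export interface WorkspaceClient {
  createWorkspace(args: {
    projectPath: string;
    branchName: string;
    trunkBranch?: string;
    runtimeConfig: { type: "local" | "worktree" | "ssh" };
  }): Promise<{ workspaceId: string }>;
}

// Unique per-session name so each ACP session gets an isolated chat history.
export function makeBranchName(uuid: string = randomUUID()): string {
  return `acp/${uuid.slice(0, 8)}`;
}

export async function newSession(
  client: WorkspaceClient,
  opts: { workingDirectory?: string; projectFlag?: string },
): Promise<{ sessionId: string }> {
  // Prefer the ACP workingDirectory (absolute per spec), else CLI flag / cwd.
  const projectPath = opts.workingDirectory ?? opts.projectFlag ?? process.cwd();
  const { workspaceId } = await client.createWorkspace({
    projectPath,
    branchName: makeBranchName(),
    runtimeConfig: { type: "local" },
  });
  return { sessionId: workspaceId };
}
```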

session/load / listing (optional)

If ACP clients need this, implement ACP "unstable" methods by mapping:

  • unstable_list_sessions → workspace.list
  • session/load → attach to an existing workspace

Put this behind a small compatibility layer so we can adjust to ACP spec churn.


6) Translate Mux chat streaming to ACP session/update

Mux's stream is richer than ACP; for MVP, flatten:

  • When Mux emits assistant deltas/events, forward them as ACP session/update notifications of type agent_message_chunk.
  • Optionally forward tool activity as ACP tool_call_update (nice-to-have).

Key edge case: Mux's stream subscription may replay history and then emit a "caught-up" marker; ignore replayed history until caught up.

Event translation (MVP mapping)

  • Mux StreamStartEvent → ACP agent_message_chunk (empty or "…")
  • Mux StreamDeltaEvent → ACP agent_message_chunk with content: { type: "text", text: delta }
  • Mux StreamEndEvent → signal internal completion so session/prompt can return { stopReason: "end_turn" }
  • Mux tool call start/end events → ACP tool_call_update (optional)
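
The mapping can be sketched as a pure translation step (the Mux event shapes here are illustrative stand-ins for the real StreamStart/Delta/End event types, which lets the replay-skipping rule live in one testable function):

```typescript
// Sketch: translate one Mux chat event into either an ACP update, a
// turn-completion signal, or nothing (replayed history before caught-up).
export type MuxStreamEvent =
  | { type: "stream-start" }
  | { type: "stream-delta"; delta: string }
  | { type: "stream-end" };

export type Translated =
  | { kind: "update"; update: { sessionUpdate: "agent_message_chunk"; content: { type: "text"; text: string } } }
  | { kind: "done" } // StreamEnd: resolve session/prompt with end_turn
  | { kind: "ignore" };

export function translateEvent(ev: MuxStreamEvent, caughtUp: boolean): Translated {
  // History replayed before the caught-up marker must not reach the editor.
  if (!caughtUp) return { kind: "ignore" };
  switch (ev.type) {
    case "stream-start":
      return { kind: "update", update: { sessionUpdate: "agent_message_chunk", content: { type: "text", text: "" } } };
    case "stream-delta":
      return { kind: "update", update: { sessionUpdate: "agent_message_chunk", content: { type: "text", text: ev.delta } } };
    case "stream-end":
      return { kind: "done" };
  }
}
```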

7) Implement session/prompt (send message + await completion)

  • Convert ACP messages[] into a single prompt string (or keep only the last user message for the MVP).
  • Call workspace.sendMessage(workspaceId=sessionId, message=text, options=…).
  • Await stream completion:
    • Listen on the workspace.onChat subscription for the next StreamEndEvent that occurs after sending.
    • On completion, return { stopReason: "end_turn" }.

Concurrency rule (MVP): reject concurrent session/prompt calls for the same sessionId with a JSON-RPC error.
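
A sketch of the prompt flattening and the per-session concurrency rule (the ACP content-block shape here is an assumption; the MVP handles text blocks only, and the real bridge would surface the rejection as a JSON-RPC error):

```typescript
// Sketch: flatten ACP content blocks and serialize prompts per session.
export interface ContentBlock {
  type: string;
  text?: string;
}

export function extractPrompt(blocks: ContentBlock[]): string {
  // MVP: concatenate text blocks; non-text blocks are dropped.
  return blocks
    .filter((b) => b.type === "text" && typeof b.text === "string")
    .map((b) => b.text)
    .join("\n");
}

// One in-flight prompt per session (MVP concurrency rule).
const inFlight = new Set<string>();

export async function withPromptLock<T>(sessionId: string, fn: () => Promise<T>): Promise<T> {
  if (inFlight.has(sessionId)) {
    throw new Error(`session ${sessionId} already has a prompt in flight`);
  }
  inFlight.add(sessionId);
  try {
    return await fn();
  } finally {
    inFlight.delete(sessionId);
  }
}
```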


8) Implement session/cancel

  • When ACP sends session/cancel, call workspace.interruptStream(sessionId) and stop waiting.
  • Ensure session/prompt resolves with { stopReason: "cancelled" }.
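
The completion/cancel race can be sketched as a small helper that resolves session/prompt with the appropriate stopReason (an AbortSignal stands in for the bridge's internal cancel notification):

```typescript
// Sketch: resolve with "end_turn" when the stream ends, or "cancelled"
// if session/cancel aborts first; whichever settles first wins.
export function waitForTurn(
  streamEnd: Promise<void>,
  signal: AbortSignal,
): Promise<{ stopReason: "end_turn" | "cancelled" }> {
  return new Promise((resolve) => {
    signal.addEventListener("abort", () => resolve({ stopReason: "cancelled" }), { once: true });
    streamEnd.then(() => resolve({ stopReason: "end_turn" }));
  });
}
```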

9) Tests & validation

  • Unit tests (pure logic): event translation (Mux stream β†’ ACP updates) + prompt string extraction.
  • Integration test (recommended):
    • Start a mux oRPC server in test mode.
    • Spawn mux acp as a child process.
    • Speak ACP over stdio: initialize → session/new → session/prompt.
    • Assert we get session/update notifications and the final PromptResponse.
  • If tests must avoid real model network calls, add/reuse a deterministic "fake model provider".

10) Documentation + editor setup examples (Zed + others)

Add a user-facing doc page under docs/ (and include it in docs.json nav) covering:

  • What mux acp is: "ACP Agent over stdio that bridges to a running mux instance via the local oRPC server."
  • Prereqs:
    • Mux desktop running (API server starts automatically unless MUX_NO_API_SERVER=1), or mux server running.
    • How to verify connectivity (e.g., mux api workspace list).
  • Zed setup (example):
    • Document the exact Zed configuration required to register an ACP agent command.
    • Include a minimal config snippet that spawns:
      • mux acp (no flags) when the editor supplies ACP workingDirectory.
      • mux acp --project <path> --runtime local as a fallback if Zed doesn't pass workingDirectory.
    • Include troubleshooting for common errors:
      • "No running mux server found" → start desktop or mux server / set MUX_SERVER_URL.
      • "Trunk branch required" → pass --trunk-branch when using --runtime worktree|ssh.
  • Other editors:
    • A generic ACP configuration template: command: mux, args: ["acp", …].
  • Debugging:
    • Emphasize logs must go to stderr; suggest capturing stderr to a file when debugging editor integrations.
    • Suggest MUX_LOG_LEVEL=debug (or --log-level debug) for verbose output.

Note: we should verify Zed's ACP configuration keys/shape against Zed's docs at implementation time so the snippet is copy/paste correct.
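
For illustration, a snippet of the shape we would expect (keys to be verified against Zed's docs as noted; "Mux" is just a display name):

```json
{
  "agent_servers": {
    "Mux": {
      "command": "mux",
      "args": ["acp"]
    }
  }
}
```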


Alternatives

A) Run Mux backend in-process inside mux acp (no HTTP server)

mux acp would instantiate ServiceContainer directly and call WorkspaceService methods.

  • Pros: no dependency on a running mux server; simpler deployment.
  • Cons: not truly "connect to an existing mux instance"; risks concurrent access if desktop/server is already running.

Net LoC estimate (product code only): ~500–900 LoC.

B) Implement ACP transport inside mux's existing server (no CLI bridge)

Add an ACP stdio (or socket) transport directly to the mux server runtime.

  • Pros: one long-lived ACP endpoint; avoids an extra proxy process.
  • Cons: breaks editors' expectation that they spawn the agent command themselves; significantly more invasive.

Net LoC estimate (product code only): ~900–1500 LoC.


Risks / gotchas

1) stdout contamination (protocol breakage)

ACP over stdio is fragile: any stray console.log breaks the editor connection.

  • Mitigation: enforce stderr logging only; consider a guard that throws if anything writes to stdout outside the protocol writer.
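
A sketch of such a guard, shown against an injected stream so the behavior is testable; the real bridge would install it over process.stdout before any other module loads:

```typescript
// Sketch: after guarding, only the returned protocol writer may emit frames;
// any other write (e.g. a stray console.log) throws immediately.
export interface WritableLike {
  write(chunk: string | Uint8Array, ...rest: unknown[]): boolean;
}

export function guardStdout(stream: WritableLike): { write: (chunk: string) => boolean } {
  const rawWrite = stream.write.bind(stream);
  stream.write = () => {
    // A stray write here would corrupt the NDJSON protocol stream.
    throw new Error("stdout is reserved for ACP protocol messages; log to stderr");
  };
  return { write: (chunk: string) => rawWrite(chunk) };
}
```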

2) Server availability / multi-instance ambiguity

The lockfile model assumes one "active" server.

  • Mitigation: require --server-url when ambiguous; detect stale lockfile (PID no longer running).
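
Staleness detection can be sketched with a signal-0 liveness probe (assuming the lockfile records the server's pid; the field name is an assumption):

```typescript
// Sketch: signal 0 performs an existence check without killing anything.
export function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // EPERM means the process exists but belongs to another user.
    return (err as { code?: string }).code === "EPERM";
  }
}

export function lockfileIsStale(lock: { pid?: number }): boolean {
  // Without a pid we cannot tell, so assume the lockfile is current.
  return typeof lock.pid === "number" ? !isPidAlive(lock.pid) : false;
}
```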

3) Streaming correlation

Mux chat events include history replay and rich event types; ACP wants a clean turn-based prompt.

  • Mitigation: ignore pre-caught-up replay; treat StreamEndEvent as the prompt completion signal.

4) ACP permission model mismatch (future)

ACP is designed so the editor mediates FS/terminal operations via request_permission, fs/*, terminal/*.

For an MVP "chat + run mux workspace" bridge, we can accept that Mux will operate directly on disk (the editor will observe file changes normally).

If we want full ACP semantics (editor-driven permission gating / terminal proxy), that likely requires deeper changes:

  • expose a "pause before tool execution" hook in Mux tool execution pipeline
  • allow an external controller (ACP proxy) to approve/deny tool calls

Generated with mux • Model: anthropic:claude-sonnet-4-5 • Thinking: high

Change-Id: I50480960bbcd75e811f496421dd45b87c9d58820
Signed-off-by: Thomas Kosiewski <tk@coder.com>