
Conversation


nexxeln commented Jan 30, 2026

adds withSupermemory wrapper and input/output processors for mastra agents:

  • input processor fetches and injects memories into system prompt before llm calls
  • output processor saves conversations to supermemory after responses
  • supports profile, query, and full memory search modes
  • includes custom prompt templates and requestContext support

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  "user-123",
  { mode: "full", addMemory: "always", threadId: "conv-456" }
))

includes docs as well

this pr also reworks the tools package into shared modules


cloudflare-workers-and-pages bot commented Jan 30, 2026

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

Status                     Name             Latest Commit   Updated (UTC)
✅ Deployment successful!  supermemory-app  9553434         Feb 03 2026, 12:48 AM


nexxeln commented Jan 30, 2026


How to use the Graphite Merge Queue

Add the label Main to this PR to add it to the merge queue.

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.


nexxeln commented Jan 30, 2026

@claude pls review


claude bot commented Jan 30, 2026

Claude finished @nexxeln's task in 1m 59s


Code Review Complete ✅

I've completed a comprehensive review of the Mastra integration PR. Here's my detailed feedback:


Overview

This PR successfully adds Mastra integration with a withSupermemory wrapper and processors, while also refactoring the tools package into well-organized shared modules. The implementation is clean, well-tested, and follows consistent patterns with the existing Vercel AI SDK integration.

Summary:

  • ✅ Clean architecture with input/output processors
  • ✅ Excellent test coverage (unit + integration tests)
  • ✅ Comprehensive documentation
  • ✅ Good refactoring of shared code
  • ⚠️ A few minor issues and suggestions below

Detailed Feedback

1. Core Implementation (Mastra)

Strengths:

  • Clean processor pattern: The SupermemoryInputProcessor and SupermemoryOutputProcessor follow Mastra's native processor interface perfectly
  • Proper separation of concerns: Wrapper handles config enhancement, processors handle the actual work
  • Good type safety: Minimal interface definitions without tight coupling to Mastra internals
  • Caching implemented correctly: Per-turn memory cache avoids redundant API calls

Issues:

Minor: Type casting in processor.ts:137-141

The type assertion here feels fragile:

const queryText = extractQueryText(
  messages as unknown as Array<{
    role: string
    content: string | Array<{ type: string; text?: string }>
  }>,
  this.ctx.mode,
)

The as unknown as double cast suggests a type mismatch. Consider:

  1. Making GenericMessage in shared/memory-client.ts more flexible to handle both Mastra and Vercel message types
  2. Or creating a Mastra-specific message extraction function

Location: packages/tools/src/mastra/processor.ts:137-141
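
For option 2, a narrow extractor could look roughly like the sketch below. This is only an illustration under assumed message shapes; the MastraMessage interface and the profile-mode short-circuit are assumptions, not the package's actual API:

// Hypothetical Mastra-specific extractor; the message shape is assumed
// from the snippet above, not taken from @mastra/core.
interface MastraMessage {
  role: string
  content: string | Array<{ type: string; text?: string }>
}

function extractMastraQueryText(
  messages: MastraMessage[],
  mode: "profile" | "query" | "full",
): string {
  // Assumption: profile mode fetches a static profile, so no query text.
  if (mode === "profile") return ""
  // Walk backwards to the most recent user message.
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i]
    if (msg.role !== "user") continue
    if (typeof msg.content === "string") return msg.content
    return msg.content
      .filter((part) => part.type === "text" && part.text)
      .map((part) => part.text)
      .join(" ")
  }
  return ""
}

With a dedicated extractor, the processor call site passes its messages directly and the as unknown as cast disappears.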


Minor: Error handling in message conversion (processor.ts:282-318)

The convertToConversationMessages method silently filters out messages that don't match expected formats. Consider logging when messages are skipped for debugging:

private convertToConversationMessages(messages: MastraDBMessage[]): ConversationMessage[] {
  const result: ConversationMessage[] = []
  let skippedCount = 0
  
  for (const msg of messages) {
    if (msg.role === "system") {
      continue
    }
    // ... existing logic
    // If nothing was added, increment skippedCount
  }
  
  if (skippedCount > 0) {
    this.ctx.logger.debug(`Skipped ${skippedCount} messages during conversion`)
  }
  
  return result
}

Location: packages/tools/src/mastra/processor.ts:282-318


2. Shared Module Refactoring

Strengths:

  • Excellent organization: Breaking code into shared/ modules (cache.ts, context.ts, logger.ts, memory-client.ts, prompt-builder.ts, types.ts) is a huge improvement
  • DRY principle: Eliminating duplication between Vercel and Mastra integrations
  • Framework-agnostic design: The GenericMessage interface and shared utilities work across different frameworks

Issues:

Minor: GenericMessage type could be more precise

In shared/memory-client.ts:174-177, the GenericMessage interface uses a union type for content:

export interface GenericMessage {
  role: string
  content: string | Array<{ type: string; text?: string }>
}

This is fine, but the extractQueryText function at line 217 has to handle an even more complex case with nested objects:

const objContent = content as unknown as { 
  content?: string
  parts?: Array<{ type: string; text?: string }> 
}

Consider making GenericMessage explicitly support all known message formats or documenting this limitation.

Location: packages/tools/src/shared/memory-client.ts:174-177, 217-228
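
One way to make the formats explicit is a named content union plus a type guard, so call sites narrow without casts. A rough sketch; the exact variants are inferred from the snippets above, not from the actual module:

// Name every known content shape instead of widening via casts.
export type GenericMessageContent =
  | string
  | Array<{ type: string; text?: string }>
  | { content?: string; parts?: Array<{ type: string; text?: string }> }

export interface GenericMessage {
  role: string
  content: GenericMessageContent
}

// Type guard so extractQueryText can handle the nested variant cast-free.
export function hasParts(
  content: GenericMessageContent,
): content is { parts: Array<{ type: string; text?: string }> } {
  if (typeof content === "string" || Array.isArray(content)) return false
  return Array.isArray(content.parts)
}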


Minor: tools-shared.ts rename

The file was renamed from shared.ts to tools-shared.ts, which is good for clarity. However, both tools-shared.ts and the new shared/ directory exist. Consider:

  1. Moving the content from tools-shared.ts into shared/tools.ts for consistency
  2. Or keeping tools-shared.ts for tool-specific utilities but renaming to clarify it's for AI SDK tools specifically

Location: packages/tools/src/tools-shared.ts


3. Tests

Strengths:

  • Comprehensive unit tests: 971 lines of unit tests covering processors, the wrapper, factory functions, error cases, and caching
  • Real integration tests: 642 lines of integration tests that actually call the API
  • Good test structure: Using vitest, proper mocking, clear test descriptions
  • Edge case coverage: Tests handle missing API keys, different message formats, caching behavior

Issues:

None identified - Test coverage is excellent!


4. Documentation

Strengths:

  • Comprehensive MDX docs: 450 lines of well-structured documentation
  • Good examples: Multiple code examples showing different usage patterns
  • Clear explanations: Mode comparison table, sequence diagram, configuration options
  • Updated README: Main README properly documents the new Mastra integration

Issues:

Minor: Missing closing code fence in mastra.mdx

Line 126 in mastra.mdx is missing a closing code fence:

const agent = new Agent(withSupermemory(config, "user-123", { mode: "full" }))

### Mode Comparison  // This should be after closing ```

Location: apps/docs/integrations/mastra.mdx:126
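
The fix is to move the closing fence above the heading:

const agent = new Agent(withSupermemory(config, "user-123", { mode: "full" }))
```

### Mode Comparison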


Suggestion: Add troubleshooting section

Consider adding a troubleshooting section to the docs covering:

  • What to do if memories aren't being injected
  • How to verify the API key is working
  • Common issues with threadId configuration

5. Package Configuration

Strengths:

  • Proper exports: Added "./mastra": "./dist/mastra.js" to package.json exports
  • Correct peer dependencies: @mastra/core is in devDependencies (correct, since it's only needed at build time for types)
  • Updated build config: tsdown.config.ts properly includes the new entry

Issues:

Question: Should @mastra/core be a peer dependency?

Currently @mastra/core is only in devDependencies. This works because the integration re-exports Mastra types rather than requiring users to install Mastra themselves. However, users will still need @mastra/core installed to actually use the Agent class. Consider:

  1. Moving it to peerDependencies to make this explicit
  2. Or documenting clearly in the README that users need to install @mastra/core separately

Location: packages/tools/package.json:24
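
If option 1 wins out, the change could look like the sketch below (the version range is illustrative rather than the actual minimum, and peerDependenciesMeta marks the peer as optional so Vercel-only users aren't forced to install it):

{
  "peerDependencies": {
    "@mastra/core": ">=0.10.0"
  },
  "peerDependenciesMeta": {
    "@mastra/core": {
      "optional": true
    }
  }
}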


6. Minor Code Quality Issues

Suggestion: Extract magic strings in cache.ts

Line 26 in cache.ts builds a cache key with hardcoded separators:

return `${containerTag}:${threadId || ""}:${mode}:${normalizedMessage}`

Consider extracting the separator as a constant for easier modification:

private static readonly CACHE_KEY_SEPARATOR = ':'

static makeTurnKey(...): string {
  const sep = MemoryCache.CACHE_KEY_SEPARATOR
  return `${containerTag}${sep}${threadId || ""}${sep}${mode}${sep}${normalizedMessage}`
}

Location: packages/tools/src/shared/cache.ts:26


Minor: Inconsistent error messages

In processor.ts:190-193, the error logging uses a ternary for the error message:

error: error instanceof Error ? error.message : "Unknown error"

This pattern is used consistently, which is good. However, consider extracting to a helper:

const getErrorMessage = (error: unknown): string => 
  error instanceof Error ? error.message : String(error) || "Unknown error"

Then use: error: getErrorMessage(error)

This would be more DRY and could handle more error types.


7. Frontend Changes

The changes to auth-context.tsx, header.tsx, and onboarding components appear to be unrelated refactoring. These look fine but would typically be in a separate PR. They seem to be quality improvements (better typing, removing unnecessary code), so no issues there.


Security Considerations

  • API key handling: Properly validated with validateApiKey
  • No hardcoded secrets: Uses environment variables with fallbacks
  • Safe error handling: Errors don't leak sensitive information


Performance Considerations

  • Caching implemented: Per-turn memory cache prevents redundant API calls
  • Lazy imports: Uses dynamic imports where appropriate
  • Efficient deduplication: O(n) deduplication with Set-based tracking (see the sketch below)
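
For reference, the Set-based pattern being credited here is the standard single-pass dedup; a minimal sketch, assuming memories carry a string id (the key choice is an assumption about the memory shape):

// O(n) dedup: one pass, a Set tracks ids already emitted.
function dedupeMemories<T extends { id: string }>(memories: T[]): T[] {
  const seen = new Set<string>()
  const result: T[] = []
  for (const memory of memories) {
    if (seen.has(memory.id)) continue // skip already-seen ids
    seen.add(memory.id)
    result.push(memory)
  }
  return result
}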


Breaking Changes

None identified. This is a purely additive feature.


Recommendations

  1. High priority: Fix the missing closing code fence in mastra.mdx:126
  2. Medium priority: Consider adding @mastra/core to peerDependencies in package.json
  3. Low priority: Refine type casting in processor message extraction
  4. Nice to have: Add troubleshooting section to documentation

Final Verdict

Approve with minor suggestions

This is a high-quality PR that adds a valuable integration while improving the overall codebase through refactoring. The test coverage is excellent, documentation is comprehensive, and the architecture follows best practices. The issues identified are minor and don't block merging.

Great work on this PR! The shared module refactoring will make future integrations much easier to implement.



graphite-app bot commented Feb 3, 2026

Merge activity

graphite-app bot force-pushed the 01-30-mastra_integration branch from b823124 to 9553434 on February 3, 2026 00:43
graphite-app bot merged commit 9553434 into main on Feb 3, 2026
5 of 8 checks passed