
Conversation

@taylorwilsdon (Contributor)

Description

Adding the first Model Context Protocol (MCP) server for QuantConnect, enabling AI assistants to interact with the QuantConnect algorithmic trading platform. The server provides tools for backtesting strategies, analyzing trading performance, managing research notebooks, and accessing market data through natural language interfaces. I already have other MCP servers listed here; thanks for taking a look! @tadasant helped me out last time, IIRC.

Server Details

Motivation and Context

QuantConnect users face several pain points that this MCP server addresses:

  • Complex API navigation: QuantConnect's event-driven architecture has a steep learning curve. This server allows natural language interaction with the platform
  • Slow iteration cycles: Running backtests to debug issues takes 20-30+ seconds minimum. The MCP server enables rapid prototyping through conversational development
  • Documentation gaps: Key features lack practical examples. AI assistants can now provide contextual code examples and explanations
  • Performance analysis complexity: Interpreting Sharpe ratios, drawdowns, and factor exposures requires expertise. The server translates these metrics into plain language insights

This bridges the gap between AI capabilities and quantitative finance, enabling both beginners and experts to develop trading strategies more efficiently.

How Has This Been Tested?

Tested with Claude Desktop and Continue.dev across the following scenarios:

  • Strategy Development: Created and backtested momentum, mean reversion, and pairs trading strategies
  • Performance Analysis: Analyzed backtest results including Sharpe ratios, maximum drawdown, and alpha generation
  • Research Integration: Connected Jupyter research notebooks with live trading algorithms
  • Market Data Access: Retrieved and analyzed historical price data, fundamental data, and alternative datasets
  • Error Handling: Validated responses for common issues like insufficient capital, data availability, and API rate limits

Breaking Changes

No breaking changes; this is a new server addition, and existing MCP setups are unaffected. To use it, users will need to:

  1. Configure their QuantConnect API credentials
  2. Add the server to their MCP client configuration (a connection sketch follows below)
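
To make step 2 concrete, here is a minimal sketch of connecting to the server from a Python MCP client over STDIO, passing credentials through the environment. The module name quantconnect_mcp, the environment variable names, and the run_backtest tool are illustrative assumptions; see the server's README for the actual values.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed module, env var, and tool names; substitute the ones documented in the server's README.
server_params = StdioServerParameters(
    command="python",
    args=["-m", "quantconnect_mcp"],
    env={
        "QUANTCONNECT_USER_ID": "123456",       # step 1: API credentials
        "QUANTCONNECT_API_TOKEN": "your-token",
    },
)


async def main() -> None:
    # Step 2: launch the server over STDIO and open an MCP session.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool call: kick off a backtest for an existing project.
            result = await session.call_tool(
                "run_backtest", arguments={"project_id": 123, "name": "smoke-test"}
            )
            print(result.content)


asyncio.run(main())
```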

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Protocol Documentation
  • My changes follow MCP security best practices
  • I have updated the server's README accordingly
  • I have tested this with an LLM client
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have documented all environment variables and configuration options

Additional context

Implementation Details

  • Transport: Supports both STDIO and HTTP+SSE for maximum compatibility (a server-side sketch follows this list)
  • Authentication: Secure credential management for QuantConnect API keys (user ID and token)
  • Rate Limiting: Implements appropriate throttling to respect QuantConnect's API limits
  • Error Messages: Provides clear, actionable error messages for common issues (e.g., insufficient buying power, invalid symbols)
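
As a rough illustration of how these pieces fit together, below is a minimal server-side sketch using the official MCP Python SDK's FastMCP: credentials come from environment variables, a single tool is exposed, and the server runs over STDIO by default. The environment variable names and the read_backtest tool are illustrative assumptions, not necessarily how the actual server is structured.

```python
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("quantconnect")


def _credentials() -> tuple[str, str]:
    """Read QuantConnect API credentials from the environment, failing with a clear message."""
    user_id = os.environ.get("QUANTCONNECT_USER_ID")
    api_token = os.environ.get("QUANTCONNECT_API_TOKEN")
    if not user_id or not api_token:
        raise ValueError("QUANTCONNECT_USER_ID and QUANTCONNECT_API_TOKEN must be set")
    return user_id, api_token


@mcp.tool()
def read_backtest(project_id: int, backtest_id: str) -> str:
    """Return summary statistics for a completed backtest (illustrative stub)."""
    user_id, _api_token = _credentials()
    # A real implementation would call the QuantConnect REST API here,
    # throttling requests to stay within its rate limits.
    return f"Backtest {backtest_id} of project {project_id} requested for user {user_id}"


if __name__ == "__main__":
    # FastMCP defaults to the STDIO transport; mcp.run(transport="sse") selects HTTP+SSE instead.
    mcp.run()
```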

@olaservo added the "waiting for submitter" label (waiting for the submitter to provide more info) on Jul 20, 2025
@olaservo added the "add-community-server" label and removed the "waiting for submitter" label on Jul 22, 2025
@olaservo (Member) commented on Aug 7, 2025

Thanks for your contribution to the servers list. This has been merged in this combined PR: #2475

This is a new process we're trying out, so if you see any issues feel free to re-open the PR and tag me.

@olaservo closed this on Aug 7, 2025
