# Mercury MCP Server
A production-ready Model Context Protocol (MCP) server that integrates Mercury diffusion-LLM capabilities into AI assistants like Claude, Cursor, and other MCP-compatible clients.
## Verified and Tested

This implementation has been tested against the actual Mercury API, and all features are confirmed working:

- ✅ Chat completions with `mercury-coder-small` and `mercury-coder-large`
- ✅ Fill-in-the-Middle (FIM) completions
- ✅ Streaming responses
- ✅ Model listing
- ✅ Proper error handling and retry logic
- ✅ API structure compatibility verified
## Features

- **Full Mercury API Integration**: Chat completions, streaming, and Fill-in-the-Middle (FIM)
- **Diffusion Model Support**: Specialized handling for Mercury's diffusion-based architecture
- **Enterprise Security**: API key management, rate limiting, and input validation
- **Performance Optimized**: Intelligent caching, retry logic, and connection pooling
- **Comprehensive Logging**: Structured logging with Winston
- **Type Safety**: Full TypeScript implementation with Zod validation
## Quick Start

### Prerequisites

- Node.js 18+
- A Mercury API key from Inception Labs
- An MCP-compatible client (Claude Desktop, Cursor, etc.)
### Installation

```bash
# Clone the repository (or download the source)
cd mercury-mcp-server

# Install dependencies
npm install

# Build the server
npm run build

# Set your API key
export MERCURY_API_KEY="your-api-key-here"

# Verify that the server starts
npm start
```
### Build and Run

```bash
# Build the TypeScript code
npm run build

# Run the server
npm start
```
## Configuration

### Environment Variables

Create a `.env` file with the following configuration:

```bash
# Required
MERCURY_API_KEY=sk_your_api_key_here
MERCURY_API_URL=https://api.inceptionlabs.ai/v1

# Optional (with defaults)
LOG_LEVEL=info
CACHE_ENABLED=true
CACHE_TTL=300
DIFFUSION_DEFAULT_STEPS=20
DIFFUSION_STABILITY_THRESHOLD=0.85
```
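As a sketch, the optional variables above can be coerced from `process.env` with their documented defaults applied. The server itself validates configuration with Zod; this plain-TypeScript `loadConfig` helper is hypothetical and only illustrates the defaults:

```typescript
// Hypothetical config loader mirroring the environment variables above.
// The real server uses Zod schemas; this sketch uses plain checks.
interface MercuryConfig {
  apiKey: string;
  apiUrl: string;
  logLevel: string;
  cacheEnabled: boolean;
  cacheTtl: number;                    // seconds
  diffusionDefaultSteps: number;
  diffusionStabilityThreshold: number;
}

function loadConfig(env: Record<string, string | undefined>): MercuryConfig {
  const apiKey = env.MERCURY_API_KEY;
  if (!apiKey) throw new Error("MERCURY_API_KEY is required");
  return {
    apiKey,
    apiUrl: env.MERCURY_API_URL ?? "https://api.inceptionlabs.ai/v1",
    logLevel: env.LOG_LEVEL ?? "info",
    cacheEnabled: (env.CACHE_ENABLED ?? "true") === "true",
    cacheTtl: Number(env.CACHE_TTL ?? "300"),
    diffusionDefaultSteps: Number(env.DIFFUSION_DEFAULT_STEPS ?? "20"),
    diffusionStabilityThreshold: Number(env.DIFFUSION_STABILITY_THRESHOLD ?? "0.85"),
  };
}
```

Missing the required key fails fast, while every optional variable falls back to the default listed above.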
### Claude Desktop Integration

Add the following to your Claude Desktop configuration (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "mercury": {
      "command": "node",
      "args": ["/path/to/mercury-mcp-server/dist/index.js"],
      "env": {
        "MERCURY_API_KEY": "sk_your_api_key_here"
      }
    }
  }
}
```
## Available Tools

### 1. `mercury_chat_completion`

Generate chat completions using Mercury's diffusion-LLM.

Parameters:

- `messages`: Array of conversation messages
- `model`: Model to use (default: `mercury-coder-small`)
- `temperature`: Sampling temperature (0-2)
- `max_tokens`: Maximum tokens to generate
- `diffusion_steps`: Number of diffusion steps
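As a sketch of the arguments a client might pass (the parameter names follow the list above; the `validateChatArgs` helper is illustrative, not part of the server):

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionArgs {
  messages: ChatMessage[];
  model?: string;         // default: mercury-coder-small
  temperature?: number;   // 0-2
  max_tokens?: number;
  diffusion_steps?: number;
}

// Illustrative client-side check mirroring the documented ranges and default.
function validateChatArgs(args: ChatCompletionArgs): ChatCompletionArgs {
  if (args.messages.length === 0) throw new Error("messages must be non-empty");
  const t = args.temperature;
  if (t !== undefined && (t < 0 || t > 2)) throw new Error("temperature must be in [0, 2]");
  return { model: "mercury-coder-small", ...args };
}

const chatArgs = validateChatArgs({
  messages: [{ role: "user", content: "Write a binary search in TypeScript." }],
  temperature: 0.2,
  diffusion_steps: 20,
});
```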
### 2. `mercury_chat_stream`

Stream chat completions for real-time responses.

Parameters: Same as `mercury_chat_completion`.
### 3. `mercury_fim_completion`

Generate code completions using Fill-in-the-Middle.

Parameters:

- `prompt`: Code before the cursor
- `suffix`: Code after the cursor
- `max_middle_tokens`: Maximum tokens for the middle section
- `alternative_completions`: Number of alternatives (1-5)
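Fill-in-the-Middle takes the code on either side of the cursor. A sketch of how an editor client might derive `prompt` and `suffix` from a buffer (the `toFimArgs` helper is hypothetical):

```typescript
interface FimArgs {
  prompt: string;                    // code before the cursor
  suffix: string;                    // code after the cursor
  max_middle_tokens?: number;
  alternative_completions?: number;  // 1-5
}

// Split a source buffer at the cursor offset into the documented
// prompt/suffix pair, enforcing the 1-5 range for alternatives.
function toFimArgs(source: string, cursor: number, alternatives = 1): FimArgs {
  if (alternatives < 1 || alternatives > 5) {
    throw new Error("alternative_completions must be 1-5");
  }
  return {
    prompt: source.slice(0, cursor),
    suffix: source.slice(cursor),
    alternative_completions: alternatives,
  };
}

const src = "function add(a: number, b: number) {\n  \n}\n";
const fim = toFimArgs(src, src.indexOf("\n  ") + 3, 2);
// fim.prompt ends inside the function body; fim.suffix is the closing "}".
```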
### 4. `mercury_list_models`

List all available Mercury models with their capabilities.

Parameters: None
## Development

### Running Tests

```bash
# Run all tests
npm test

# Run unit tests only
npm run test:unit

# Run with coverage
npm run test -- --coverage
```
### Using MCP Inspector

Test your server with the official MCP Inspector:

```bash
npx @modelcontextprotocol/inspector node dist/index.js
```
### Development Mode

```bash
# Run with auto-reload
npm run dev

# Format code
npm run format

# Lint code
npm run lint
```
## Architecture

```
mercury-mcp-server/
├── src/
│   ├── mercury/      # Mercury API client
│   ├── tools/        # MCP tool implementations
│   ├── utils/        # Utilities (logging, cache, validation)
│   ├── config/       # Configuration management
│   ├── server.ts     # MCP server setup
│   └── index.ts      # Entry point
├── tests/            # Test suites
└── docs/             # Additional documentation
```
## Error Handling

The server implements comprehensive error handling for:

- **Validation Errors**: Invalid input parameters
- **API Errors**: Mercury API failures
- **Diffusion Errors**: Model convergence issues
- **Rate Limiting**: Automatic retry with backoff
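The retry-with-backoff behaviour can be sketched as follows. The `retryWithBackoff` helper, the attempt count, and the delay schedule are illustrative; the server's actual schedule may differ:

```typescript
// Illustrative exponential backoff: retry transient failures (e.g. HTTP 429)
// with doubling delays, rethrowing the last error once attempts are exhausted.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        const delay = baseDelayMs * 2 ** i; // 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```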
## Security

- API keys are never logged or exposed
- All inputs are validated with Zod schemas
- Rate limiting prevents abuse
- Supports secure environment variable management
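For example, keeping API keys out of logs (the first point above) can be approximated by masking key-shaped strings before they reach a log sink. The `redactSecrets` helper is hypothetical, assuming keys use the `sk_` prefix shown in the configuration examples:

```typescript
// Illustrative redaction: mask anything that looks like a Mercury API key
// (sk_ prefix) before the message is written to a log.
function redactSecrets(message: string): string {
  return message.replace(/sk_[A-Za-z0-9]+/g, "sk_***");
}

const line = redactSecrets("auth failed for key sk_abc123DEF");
// line === "auth failed for key sk_***"
```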
## Troubleshooting

### Common Issues

1. **"MERCURY_API_KEY is required"**
   - Ensure your `.env` file contains a valid API key
   - Check that the environment variable is loaded
2. **"Failed to connect to Mercury API"**
   - Verify your internet connection
   - Check if the API URL is correct
   - Ensure your API key is valid
3. **"Low confidence score detected"**
   - Increase `diffusion_steps` for better quality
   - Adjust `temperature` for more deterministic outputs
## Documentation for AI Agents

This server includes special documentation to help AI models understand Mercury's capabilities:

- A comprehensive guide for AI models
- Decision trees and cheat sheets
- Enhanced tool descriptions with context about diffusion models

These resources help AI agents make intelligent decisions about when and how to use Mercury's unique features.
## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

MIT License; see the license file for details.
## Acknowledgments

- Inception Labs for the Mercury API
- Anthropic for the Model Context Protocol
- The MCP community for tools and inspiration