Mercury MCP Server

npm version · License: MIT · MCP Compatible · Mercury Powered

A production-ready Model Context Protocol (MCP) server that integrates Mercury diffusion-LLM capabilities into AI assistants like Claude, Cursor, and other MCP-compatible clients.

✅ Verified and Tested

This implementation has been tested against the actual Mercury API and all features are confirmed working:

  • ✅ Chat completions with mercury-coder-small and mercury-coder-large
  • ✅ Fill-in-the-Middle (FIM) completions
  • ✅ Streaming responses
  • ✅ Model listing
  • ✅ Proper error handling and retry logic
  • ✅ API structure compatibility verified

Features

  • 🚀 Full Mercury API Integration: Chat completions, streaming, and Fill-in-the-Middle (FIM)
  • 🔄 Diffusion Model Support: Specialized handling for Mercury's diffusion-based architecture
  • 🛡️ Enterprise Security: API key management, rate limiting, and input validation
  • 📊 Performance Optimized: Intelligent caching, retry logic, and connection pooling
  • 🔍 Comprehensive Logging: Structured logging with Winston
  • ✅ Type Safety: Full TypeScript implementation with Zod validation

Quick Start

Prerequisites

  • Node.js 18+
  • Mercury API key from Inception Labs
  • An MCP-compatible client (Claude Desktop, Cursor, etc.)

Installation

# Clone the repository (or download the source)
cd mercury-mcp-server

# Install dependencies
npm install

# Build the server
npm run build

# Set your API key
export MERCURY_API_KEY="your-api-key-here"

# Test the server is working
npm start

Build and Run

# Build the TypeScript code
npm run build

# Run the server
npm start

Configuration

Environment Variables

Create a .env file with the following configuration:

# Required
MERCURY_API_KEY=sk_your_api_key_here
MERCURY_API_URL=https://api.inceptionlabs.ai/v1

# Optional (with defaults)
LOG_LEVEL=info
CACHE_ENABLED=true
CACHE_TTL=300
DIFFUSION_DEFAULT_STEPS=20
DIFFUSION_STABILITY_THRESHOLD=0.85

Claude Desktop Integration

Add the following to your Claude Desktop configuration (claude_desktop_config.json):

{
  "mcpServers": {
    "mercury": {
      "command": "node",
      "args": ["/path/to/mercury-mcp-server/dist/index.js"],
      "env": {
        "MERCURY_API_KEY": "sk_your_api_key_here"
      }
    }
  }
}
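
Other MCP clients use the same `mcpServers` shape. For Cursor, for example, an equivalent entry can typically go in a `mcp.json` file (check Cursor's current documentation for the exact file location):

```json
{
  "mcpServers": {
    "mercury": {
      "command": "node",
      "args": ["/path/to/mercury-mcp-server/dist/index.js"],
      "env": {
        "MERCURY_API_KEY": "sk_your_api_key_here"
      }
    }
  }
}
```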

Available Tools

1. mercury_chat_completion

Generate chat completions using Mercury's diffusion-LLM.

Parameters:

  • messages: Array of conversation messages
  • model: Model to use (default: mercury-coder-small)
  • temperature: Sampling temperature (0-2)
  • max_tokens: Maximum tokens to generate
  • diffusion_steps: Number of diffusion steps
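
For example, an MCP client might invoke this tool with arguments like the following (all values are illustrative):

```json
{
  "messages": [
    { "role": "system", "content": "You are a concise coding assistant." },
    { "role": "user", "content": "Write a function that reverses a string." }
  ],
  "model": "mercury-coder-small",
  "temperature": 0.7,
  "max_tokens": 256,
  "diffusion_steps": 20
}
```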

2. mercury_chat_stream

Stream chat completions for real-time responses.

Parameters: Same as chat completion

3. mercury_fim_completion

Generate code completions using Fill-in-the-Middle.

Parameters:

  • prompt: Code before the cursor
  • suffix: Code after the cursor
  • max_middle_tokens: Maximum tokens for the middle section
  • alternative_completions: Number of alternatives (1-5)
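
A sample FIM invocation might look like this, with the model asked to fill in the function body between prompt and suffix (values are illustrative):

```json
{
  "prompt": "def add(a, b):\n    ",
  "suffix": "\n\nprint(add(2, 3))",
  "max_middle_tokens": 64,
  "alternative_completions": 1
}
```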

4. mercury_list_models

List all available Mercury models with their capabilities.

Parameters: None

Development

Running Tests

# Run all tests
npm test

# Run unit tests only
npm run test:unit

# Run with coverage
npm run test -- --coverage

Using MCP Inspector

Test your server with the official MCP Inspector:

npx @modelcontextprotocol/inspector node dist/index.js

Development Mode

# Run with auto-reload
npm run dev

# Format code
npm run format

# Lint code
npm run lint

Architecture

mercury-mcp-server/
├── src/
│   ├── mercury/         # Mercury API client
│   ├── tools/           # MCP tool implementations
│   ├── utils/           # Utilities (logging, cache, validation)
│   ├── config/          # Configuration management
│   ├── server.ts        # MCP server setup
│   └── index.ts         # Entry point
├── tests/               # Test suites
└── docs/                # Additional documentation

Error Handling

The server implements comprehensive error handling for:

  • Validation Errors: Invalid input parameters
  • API Errors: Mercury API failures
  • Diffusion Errors: Model convergence issues
  • Rate Limiting: Automatic retry with backoff
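
The retry-with-backoff behavior can be sketched as follows. This is a minimal illustration of exponential backoff, not the server's actual implementation; the function names and options are hypothetical.

```typescript
// Hypothetical retry options; the server's real retry config may differ.
type RetryOptions = { maxAttempts: number; baseDelayMs: number };

// Exponential backoff: attempt 0 waits base, attempt 1 waits 2x base, etc.
function backoffDelay(attempt: number, baseDelayMs: number): number {
  return baseDelayMs * 2 ** attempt;
}

// Retries a failing async operation, waiting longer after each failure.
async function withRetry<T>(
  fn: () => Promise<T>,
  { maxAttempts, baseDelayMs }: RetryOptions
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, backoffDelay(attempt, baseDelayMs)));
      }
    }
  }
  throw lastError;
}
```

A production version would typically add jitter and retry only on retryable errors (such as HTTP 429 rate limits), rather than on every failure.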

Security

  • API keys are never logged or exposed
  • All inputs are validated with Zod schemas
  • Rate limiting prevents abuse
  • Supports secure environment variable management

Troubleshooting

Common Issues

  1. "MERCURY_API_KEY is required"

    • Ensure your .env file contains a valid API key
    • Check that the environment variable is loaded
  2. "Failed to connect to Mercury API"

    • Verify your internet connection
    • Check if the API URL is correct
    • Ensure your API key is valid
  3. "Low confidence score detected"

    • Increase diffusion_steps for better output quality
    • Lower temperature for more deterministic outputs

Documentation for AI Agents

This server includes special documentation to help AI models understand Mercury's capabilities:

  • A comprehensive guide for AI models
  • Decision trees and cheat sheets
  • Enhanced tool descriptions with context about diffusion models

These resources help AI agents make intelligent decisions about when and how to use Mercury's unique features.

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT License - see the LICENSE file for details

Acknowledgments