# OpenRouter MCP Server

A powerful Model Context Protocol (MCP) server providing unified access to 400+ AI models through OpenRouter's API.
## Overview
OpenRouter MCP Server is a Python-based tool that bridges the gap between MCP clients (like Claude Code) and OpenRouter's extensive AI model ecosystem. It provides seamless access to models from OpenAI, Anthropic, Meta, Google, Mistral, and many other providers through a single, unified interface.
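OpenRouter's API is OpenAI-compatible, so every `chat` request the server forwards ultimately becomes a call like the minimal sketch below. This is illustrative only, not the project's actual code; the model and prompt are placeholders:

```python
import os
import requests

# Minimal OpenRouter chat-completions call (OpenAI-compatible schema).
# Assumes OPENROUTER_API_KEY is set in the environment.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "moonshotai/kimi-k2",  # any model an alias resolves to
        "messages": [{"role": "user", "content": "Hello from the MCP server"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```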
## Key Features

- **Multi-Model Access**: Connect to 400+ AI models from 30+ providers
- **Conversation Continuity**: Persistent chat history with UUID-based sessions
- **Smart Model Selection**: Natural language model aliases ("gemini" → "google/gemini-2.5-pro")
- **Multi-Modal Support**: Handle text, files, and images seamlessly
- **Docker Ready**: Containerized deployment with security best practices
- **Performance Optimized**: Intelligent caching and token management
- **Developer Friendly**: Comprehensive logging and debugging tools
## Architecture

```
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    MCP Client    │────▶│  OpenRouter MCP  │────▶│    OpenRouter    │
│  (Claude Code)   │     │      Server      │     │       API        │
└──────────────────┘     └──────────────────┘     └──────────────────┘
                                   │
                                   ▼
                          ┌──────────────────┐
                          │   Conversation   │
                          │     Storage      │
                          └──────────────────┘
```
### Core Components

| Component | Description |
|---|---|
| `server.py` | Main MCP server with JSON-RPC protocol implementation |
| `config.py` | Configuration management and model alias resolution |
| `conversation_manager.py` | Persistent conversation storage with UUID sessions |
| `docker_manager.py` | Docker container lifecycle management |
## Installation

### Prerequisites

- Python 3.12+ (required)
- Docker & Docker Compose (recommended)
- OpenRouter API key (required)

### Method 1: Docker Deployment (Recommended)

1. Clone the repository:

   ```bash
   git clone https://github.com/slyfox1186/claude-code-openrouter.git
   cd claude-code-openrouter
   ```

2. Set up the environment:

   ```bash
   cp .env.example .env
   # Edit .env with your OpenRouter API key
   nano .env
   ```

3. Build and run:

   ```bash
   ./scripts/build.sh
   ./scripts/run.sh
   ```
### Method 2: Direct Python Installation

1. Clone and install dependencies:

   ```bash
   git clone https://github.com/slyfox1186/claude-code-openrouter.git
   cd claude-code-openrouter
   pip install -r requirements.txt
   ```

2. Configure the environment:

   ```bash
   cp .env.example .env
   # Add your OpenRouter API key to .env
   ```

3. Run the server:

   ```bash
   python run_server.py
   ```
## Configuration

### Environment Variables

Create a `.env` file based on `.env.example`:

```bash
# OpenRouter API Configuration
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1

# Default Model Settings
DEFAULT_MODEL=moonshotai/kimi-k2
DEFAULT_TEMPERATURE=0.7
DEFAULT_MAX_TOKENS=4096

# Tool Configuration
ENABLE_WEB_SEARCH=true
MAX_CONTEXT_TOKENS=100000
TOKEN_BUDGET_LIMIT=50000

# Logging Configuration
LOG_LEVEL=INFO
LOG_FILE=openrouter_mcp.log

# Optional: Rate limiting
RATE_LIMIT_REQUESTS_PER_MINUTE=60
RATE_LIMIT_TOKENS_PER_MINUTE=100000

# Docker Compose Bake delegation for better build performance
COMPOSE_BAKE=true
```
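As a rough sketch of how these variables might be read at startup (a hypothetical helper, not the project's actual `config.py`):

```python
import os

from dotenv import load_dotenv  # assumes the python-dotenv package is installed

load_dotenv()  # pull values from .env into the process environment

# Read settings with the same defaults shown above; names match the .env keys.
OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]  # fail fast if missing
OPENROUTER_BASE_URL = os.getenv("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1")
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "moonshotai/kimi-k2")
DEFAULT_TEMPERATURE = float(os.getenv("DEFAULT_TEMPERATURE", "0.7"))
DEFAULT_MAX_TOKENS = int(os.getenv("DEFAULT_MAX_TOKENS", "4096"))
```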
### Getting Your OpenRouter API Key

1. Visit [OpenRouter.ai](https://openrouter.ai)
2. Sign up for an account
3. Navigate to the API Keys section
4. Generate a new API key
5. Add it to your `.env` file as `OPENROUTER_API_KEY`
## Usage

### MCP Tools Available

The server exposes these MCP tools for client interaction:

| Tool | Description |
|---|---|
| `chat` | Main chat interface with model selection and conversation continuation |
| `list_conversations` | View all stored conversation summaries |
| `get_conversation` | Retrieve full conversation history by ID |
| `delete_conversation` | Remove a conversation from storage |
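Clients invoke these tools with standard JSON-RPC 2.0 `tools/call` requests over the server's stdio transport. The sketch below only illustrates the request shape: a real MCP client performs the protocol's `initialize` handshake first (omitted here for brevity), and the `arguments` keys are the ones documented in the sections that follow:

```python
import json
import subprocess

# Launch the server and send one JSON-RPC tools/call request over stdio.
proc = subprocess.Popen(
    ["python", "run_server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "chat", "arguments": {"prompt": "Hello!", "model": "kimi"}},
}
proc.stdin.write(json.dumps(request) + "\n")  # MCP stdio uses newline-delimited JSON
proc.stdin.flush()
print(proc.stdout.readline())  # JSON-RPC response containing the model's reply
```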
### Model Selection

The server supports intelligent model aliases:

```json
{
  "model": "gemini"        // → google/gemini-2.5-pro
  "model": "claude"        // → anthropic/claude-4-sonnet
  "model": "claude opus"   // → anthropic/claude-4-opus
  "model": "kimi"          // → moonshotai/kimi-k2
  "model": "gpt-4"         // → openai/gpt-4
}
```
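Alias resolution can be as simple as a lookup table with pass-through for unknown names. A hypothetical sketch mirroring the mappings above (the real logic lives in `config.py`):

```python
# Hypothetical alias table; entries copied from the mappings above.
MODEL_ALIASES = {
    "gemini": "google/gemini-2.5-pro",
    "claude": "anthropic/claude-4-sonnet",
    "claude opus": "anthropic/claude-4-opus",
    "kimi": "moonshotai/kimi-k2",
    "gpt-4": "openai/gpt-4",
}

def resolve_model(name: str) -> str:
    """Map a natural-language alias to a full OpenRouter model ID.

    Unknown names pass through unchanged so fully qualified IDs still work.
    """
    return MODEL_ALIASES.get(name.strip().lower(), name)

assert resolve_model("Gemini") == "google/gemini-2.5-pro"
```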
### Conversation Continuity

Each chat session returns a `continuation_id` that can be used to maintain context:

```json
{
  "prompt": "Follow up question...",
  "continuation_id": "uuid-from-previous-response"
}
```
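A minimal sketch of the UUID-keyed persistence that `conversation_manager.py` provides (hypothetical and simplified; the actual storage format is not documented here):

```python
import json
import uuid
from pathlib import Path

STORE = Path("conversations")
STORE.mkdir(exist_ok=True)

def save_turn(continuation_id: str | None, role: str, content: str) -> str:
    """Append one message to a conversation, creating it on first use."""
    cid = continuation_id or str(uuid.uuid4())
    path = STORE / f"{cid}.json"
    history = json.loads(path.read_text()) if path.exists() else []
    history.append({"role": role, "content": content})
    path.write_text(json.dumps(history, indent=2))
    return cid  # returned to the client as continuation_id for the next request
```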
### Multi-Modal Input

Support for various input types:

```json
{
  "prompt": "Analyze this code",
  "files": ["/path/to/file.py"],
  "images": ["/path/to/screenshot.png"],
  "model": "gemini"
}
```
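The tool accepts plain file paths; one way a server like this can pass images upstream is to inline them into the OpenAI-compatible message format OpenRouter accepts. A hedged sketch of that conversion (not the server's actual code; assumes PNG input):

```python
import base64
from pathlib import Path

def build_user_message(prompt: str, image_paths: list[str]) -> dict:
    """Combine text and images into one OpenAI-style multi-modal message."""
    parts: list[dict] = [{"type": "text", "text": prompt}]
    for p in image_paths:
        b64 = base64.b64encode(Path(p).read_bytes()).decode()
        parts.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return {"role": "user", "content": parts}
```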
## Docker Management

Use the included Docker manager for easy container operations:

```bash
# Build image
python tools/docker_manager.py build

# Start container
python tools/docker_manager.py start

# View logs
python tools/docker_manager.py logs

# Interactive shell
python tools/docker_manager.py shell

# Stop container
python tools/docker_manager.py stop

# Full restart
python tools/docker_manager.py restart

# Check status
python tools/docker_manager.py status
```
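Each subcommand ultimately shells out to the Docker CLI. A minimal hypothetical sketch of that pattern (not the actual `tools/docker_manager.py`; the container name is assumed from the logging section below):

```python
import subprocess
import sys

CONTAINER = "openrouter"  # assumed name, per "docker logs openrouter" below

# Map subcommands to Docker CLI invocations.
COMMANDS = {
    "start": ["docker", "compose", "up", "-d"],
    "stop": ["docker", "compose", "down"],
    "logs": ["docker", "logs", "-f", CONTAINER],
    "status": ["docker", "ps", "--filter", f"name={CONTAINER}"],
}

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "status"
    subprocess.run(COMMANDS[action], check=True)
```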
## Development

### Project Structure

```
claude-code-openrouter/
├── server.py                # Main MCP server implementation
├── config.py                # Configuration and model management
├── conversation_manager.py  # Conversation persistence
├── docker_manager.py        # Docker operations
├── requirements.txt         # Python dependencies
├── Dockerfile               # Container definition
├── docker-compose.yml       # Service orchestration
├── build.sh                 # Build script
├── run.sh                   # Runtime script
├── .env.example             # Environment template
├── .gitignore               # Git ignore patterns
├── CLAUDE.md                # Claude Code instructions
└── README.md                # This file
```
### Logging and Debugging

- Server logs: `/tmp/openrouter_debug.log`
- Application logs: `openrouter_mcp.log`
- Container logs: `docker logs openrouter`
### Model Capabilities

The server automatically detects model capabilities (one possible detection approach is sketched after this list):

- **Vision Models**: Handle image inputs (Gemini Pro, GPT-4V)
- **Large Context**: Support extended conversations (Kimi K2, Gemini)
- **Function Calling**: Tool use capabilities (Gemini Pro, GPT-4)
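One plausible implementation queries OpenRouter's public model catalog, which describes each model's modality and context window. Treat the field names below as an illustrative sketch rather than a stable contract:

```python
import requests

def fetch_capabilities() -> dict[str, dict]:
    """Index OpenRouter's model catalog by model ID for capability lookups."""
    data = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()
    return {m["id"]: m for m in data["data"]}

models = fetch_capabilities()
gemini = models["google/gemini-2.5-pro"]
print(gemini.get("context_length"))                    # large-context check
print(gemini.get("architecture", {}).get("modality"))  # e.g. "text+image->text"
```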
## Security & Best Practices

### Environment Security

- Never commit `.env` files (protected by `.gitignore`)
- Use `.env.example` for templates
- Run containers as a non-root user
- Use read-only volume mounts

### API Key Management

- Store API keys in `.env` only
- Use environment variables in production
- Rotate keys regularly
- Monitor usage and billing
## Monitoring & Performance

### Token Management

- Automatic conversation truncation to prevent API limits (see the sketch below)
- Token usage tracking and reporting
- Configurable token budgets per request
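A minimal sketch of budgeted truncation, assuming a rough characters-per-token heuristic (the server's actual counting method isn't documented here):

```python
def truncate_history(messages: list[dict], budget_tokens: int = 50_000) -> list[dict]:
    """Drop the oldest messages until the estimated token count fits the budget.

    Uses the crude ~4 characters-per-token heuristic and always keeps the
    most recent message so the current prompt survives truncation.
    """
    def estimate(msgs: list[dict]) -> int:
        return sum(len(m["content"]) for m in msgs) // 4

    kept = list(messages)
    while len(kept) > 1 and estimate(kept) > budget_tokens:
        kept.pop(0)  # discard the oldest turn first
    return kept
```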
### Performance Features

- In-memory conversation caching
- Efficient JSON-RPC protocol
- Streaming response support
- Request rate limiting (a token-bucket sketch follows)
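Rate limiting of the kind configured by `RATE_LIMIT_REQUESTS_PER_MINUTE` is commonly implemented as a token bucket; a hypothetical sketch, not the server's actual implementation:

```python
import time

class TokenBucket:
    """Allow up to `rate_per_minute` requests per minute, refilling continuously."""

    def __init__(self, rate_per_minute: int = 60):
        self.capacity = rate_per_minute
        self.tokens = float(rate_per_minute)
        self.refill_per_sec = rate_per_minute / 60.0
        self.last = time.monotonic()

    def acquire(self) -> bool:
        """Consume one token if available; return False when over the limit."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should wait or reject the request
```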
## Supported Models

### Popular Model Aliases

| Alias | Full Model Name | Provider |
|---|---|---|
| `gemini` | google/gemini-2.5-pro | Google |
| `claude` | anthropic/claude-4-sonnet | Anthropic |
| `claude-opus` | anthropic/claude-4-opus | Anthropic |
| `kimi` | moonshotai/kimi-k2 | Moonshot |
| `gpt-4` | openai/gpt-4 | OpenAI |
| `llama` | meta-llama/llama-3.1-8b-instruct | Meta |
### Model Categories

- **Chat Models**: General conversation and reasoning
- **Vision Models**: Image understanding and analysis
- **Function Models**: Tool use and function calling
- **Long Context**: Extended conversation memory
- **Fast Models**: Quick responses and low latency
## Troubleshooting

### Common Issues

**Server won't start:**

```bash
# Check environment configuration
python -c "from src.config import validate_config; print(validate_config())"

# Verify API key
echo $OPENROUTER_API_KEY
```

**Container issues:**

```bash
# Check container status
python tools/docker_manager.py status

# View detailed logs
python tools/docker_manager.py logs

# Restart everything
python tools/docker_manager.py restart
```

**Model selection problems:**

```bash
# Test model alias resolution
python -c "from src.config import get_model_alias; print(get_model_alias('gemini'))"
```
### Debug Mode

Enable detailed logging:

```bash
export LOG_LEVEL=DEBUG
python run_server.py
```
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Contributing

Contributions are welcome! Please feel free to submit a pull request.
### Development Setup

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## Acknowledgments

- OpenRouter for providing unified AI model access
- Model Context Protocol for the standard
- Anthropic for Claude and MCP development
## Support

- GitHub Issues: Report bugs or request features
- OpenRouter Documentation: [OpenRouter API Docs](https://openrouter.ai/docs)
- MCP Specification: [Model Context Protocol](https://modelcontextprotocol.io)