Claude Code AI Collaboration MCP Server
A powerful Model Context Protocol (MCP) server that enables AI collaboration through multiple providers with advanced strategies and comprehensive tooling.
Features
Multi-Provider AI Integration
- DeepSeek: Primary provider with optimized performance
- OpenAI: GPT models integration
- Anthropic: Claude models support
- O3: Next-generation model support
Advanced Collaboration Strategies
- Parallel: Execute requests across multiple providers simultaneously
- Sequential: Chain provider responses for iterative improvement
- Consensus: Build agreement through multiple provider opinions
- Iterative: Refine responses through multiple rounds
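The four strategies above can be sketched in TypeScript. This is an illustrative sketch, not the server's actual API: the `Provider` type and function names are assumptions made for the example.

```typescript
// Illustrative sketch (not the server's actual API): a provider is modeled
// as a function from prompt to answer, and a strategy combines providers.
type Provider = (prompt: string) => Promise<string>;

// Parallel: query every provider at once and keep all answers.
async function parallel(providers: Provider[], prompt: string): Promise<string[]> {
  return Promise.all(providers.map((p) => p(prompt)));
}

// Sequential: feed each provider the previous provider's output.
async function sequential(providers: Provider[], prompt: string): Promise<string> {
  let current = prompt;
  for (const p of providers) {
    current = await p(current);
  }
  return current;
}

// Consensus: accept an answer only if enough providers agree on it.
async function consensus(
  providers: Provider[],
  prompt: string,
  threshold = 0.7,
): Promise<string | undefined> {
  const answers = await parallel(providers, prompt);
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  for (const [answer, n] of counts) {
    if (n / answers.length >= threshold) return answer;
  }
  return undefined; // no sufficient agreement
}
```

The iterative strategy is `sequential` in a loop: feed the refined answer back until a quality target or round limit is reached.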
Comprehensive MCP Tools
- collaborate: Multi-provider collaboration with strategy selection
- review: Content analysis and quality assessment
- compare: Side-by-side comparison of multiple items
- refine: Iterative content improvement
Enterprise Features
- Caching: Memory and Redis-compatible caching system
- Metrics: OpenTelemetry-compatible performance monitoring
- Search: Full-text search with inverted indexing
- Synthesis: Intelligent response aggregation
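The inverted-indexing idea behind the full-text search feature can be sketched briefly. The class below is illustrative, not the actual `search-service.ts` API: it maps each token to the set of document ids containing it, and answers a query by intersecting those sets.

```typescript
// Minimal inverted-index sketch (illustrative, not the server's search-service API).
class InvertedIndex {
  private index = new Map<string, Set<string>>();

  add(docId: string, text: string): void {
    // Tokenize on non-word characters and record each token -> docId.
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!this.index.has(token)) this.index.set(token, new Set());
      this.index.get(token)!.add(docId);
    }
  }

  // Return ids of documents containing every query token.
  search(query: string): string[] {
    const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
    if (tokens.length === 0) return [];
    let result: Set<string> | undefined;
    for (const token of tokens) {
      const docs = this.index.get(token) ?? new Set<string>();
      result = result
        ? new Set([...result].filter((id) => docs.has(id)))
        : new Set(docs);
    }
    return [...(result ?? [])];
  }
}
```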
Quick Start
New to MCP? The steps below take about 5 minutes.
Prerequisites
- Node.js 18.0.0 or higher
- pnpm 8.0.0 or higher
- TypeScript 5.3.0 or higher
Installation
```bash
# Clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server

# Install dependencies
pnpm install

# Build the project
pnpm run build

# Run tests
pnpm test
```
Configuration
Environment Variables:

```bash
# Required: Set your API keys
export DEEPSEEK_API_KEY="your-deepseek-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Optional: Configure other settings
export MCP_DEFAULT_PROVIDER="deepseek"
export MCP_PROTOCOL="stdio"
```

Configuration Files:

- config/default.yaml: Default configuration
- config/development.yaml: Development settings
- config/production.yaml: Production settings
Running the Server
```bash
# Start with default settings
pnpm start

# Start with a specific protocol
node dist/index.js --protocol stdio

# Start with custom providers
node dist/index.js --providers deepseek,openai --default-provider deepseek

# Enable debug mode
NODE_ENV=development LOG_LEVEL=debug pnpm start
```
Claude Code Integration
Connecting to Claude Code
To use this MCP server with Claude Code, you need to configure Claude Code to recognize and connect to your server.
1. Automated Setup (Recommended)
Use the automated setup script for easy configuration:
```bash
# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server

# Run automated setup with your DeepSeek API key
./scripts/setup-claude-code.sh --api-key "your-deepseek-api-key"

# Or with multiple providers
./scripts/setup-claude-code.sh \
  --api-key "your-deepseek-key" \
  --openai-key "your-openai-key" \
  --anthropic-key "your-anthropic-key"

# Alternative using pnpm
pnpm run setup:claude-code -- --api-key "your-deepseek-key"
```
The setup script will:
- Build the MCP server
- Create the Claude Code configuration file
- Test the server connection
- Provide next steps
1b. Manual Setup
If you prefer manual setup:
```bash
# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server

# Install dependencies and build
pnpm install
pnpm run build

# Set your DeepSeek API key
export DEEPSEEK_API_KEY="your-deepseek-api-key"

# Test the server
pnpm run verify-deepseek
```
2. Configure Claude Code
Create or update the Claude Code configuration file:
Note: There are two server options:

- simple-server.js: Simple implementation with DeepSeek only (recommended for testing)
- index.js: Full implementation with all providers and features
macOS/Linux:
```bash
# Create config directory if it doesn't exist
mkdir -p ~/.config/claude-code

# Create configuration file (simple server - recommended for testing)
cat > ~/.config/claude-code/claude_desktop_config.json << 'EOF'
{
  "mcpServers": {
    "ai-collaboration": {
      "command": "node",
      "args": ["/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/simple-server.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your-deepseek-api-key"
      }
    }
  }
}
EOF

# Or use the full server for all features:
# replace simple-server.js with index.js in the args above.
```
Windows:
```cmd
REM Create config directory
mkdir "%APPDATA%\Claude"

REM Create the configuration file with your preferred text editor:
REM %APPDATA%\Claude\claude_desktop_config.json
```
3. Configuration Options
```json
{
  "mcpServers": {
    "ai-collaboration": {
      "command": "node",
      "args": [
        "/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/index.js",
        "--default-provider", "deepseek",
        "--providers", "deepseek,openai"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-deepseek-api-key",
        "OPENAI_API_KEY": "your-openai-api-key",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key",
        "NODE_ENV": "production",
        "LOG_LEVEL": "info",
        "MCP_DISABLE_CACHING": "false",
        "MCP_DISABLE_METRICS": "false"
      }
    }
  }
}
```
4. Available Tools in Claude Code
After restarting Claude Code, you'll have access to these powerful tools:
- collaborate - Multi-provider AI collaboration
- review - Content analysis and quality assessment
- compare - Side-by-side comparison of multiple items
- refine - Iterative content improvement
5. Usage Examples in Claude Code
```text
# Use DeepSeek for code explanation
Please use the collaborate tool to explain this Python code with DeepSeek

# Review code quality
Use the review tool to analyze the quality of this code

# Compare multiple solutions
Use the compare tool to compare these 3 approaches to solving this problem

# Improve code iteratively
Use the refine tool to make this function more efficient
```
6. Troubleshooting
Check MCP server connectivity:
```bash
# Test if the server starts correctly
DEEPSEEK_API_KEY="your-key" node dist/index.js --help
```
View logs:
```bash
# Check application logs
tail -f logs/application-$(date +%Y-%m-%d).log
```
Verify Claude Code configuration:
- Restart Claude Code completely
- In a new conversation, ask "What tools are available?"
- You should see the four MCP tools listed
- Test with a simple command like "Use collaborate to say hello"
7. Configuration File Locations
- macOS: ~/.config/claude-code/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/claude-code/claude_desktop_config.json
Usage
MCP Tools
Collaborate Tool
Execute multi-provider collaboration with strategy selection:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "collaborate",
    "arguments": {
      "prompt": "Explain quantum computing in simple terms",
      "strategy": "consensus",
      "providers": ["deepseek", "openai"],
      "config": {
        "timeout": 30000,
        "consensus_threshold": 0.7
      }
    }
  }
}
```
Review Tool
Analyze content quality and provide detailed feedback:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "review",
    "arguments": {
      "content": "Your content here...",
      "criteria": ["accuracy", "clarity", "completeness"],
      "review_type": "comprehensive"
    }
  }
}
```
Compare Tool
Compare multiple items with detailed analysis:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "compare",
    "arguments": {
      "items": [
        {"id": "1", "content": "Option A"},
        {"id": "2", "content": "Option B"}
      ],
      "comparison_dimensions": ["quality", "relevance", "innovation"]
    }
  }
}
```
Refine Tool
Iteratively improve content quality:
```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "refine",
    "arguments": {
      "content": "Content to improve...",
      "refinement_goals": {
        "primary_goal": "clarity",
        "target_audience": "general public"
      }
    }
  }
}
```
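The envelopes above all share the same `tools/call` shape, so a client script can generate them from a small helper. The helper below is hypothetical (not part of this server's codebase); only the `tools/call` method and the tool names come from the examples above.

```typescript
// Hypothetical helper for building the JSON-RPC envelopes shown above.
let nextId = 1;

function makeToolCall(name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: nextId++,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const request = makeToolCall("review", {
  content: "Your content here...",
  review_type: "comprehensive",
});

// For a stdio transport, serialize one JSON object per line.
const wire = JSON.stringify(request);
```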
Available Resources
- collaboration_history: Access past collaboration results
- provider_stats: Monitor provider performance metrics
- tool_usage: Track tool utilization statistics
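Resources are read with the standard MCP `resources/read` method. A request might look like the sketch below; the exact URI format for this server's resources is an assumption, so check the server's resource listing (`resources/list`) for the real URIs.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "resources/read",
  "params": {
    "uri": "collaboration_history"
  }
}
```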
Architecture
Core Components
```text
src/
├── core/                     # Core framework components
│   ├── types.ts              # Dependency injection symbols
│   ├── logger.ts             # Structured logging
│   ├── config.ts             # Configuration management
│   ├── container.ts          # DI container setup
│   ├── provider-manager.ts   # AI provider orchestration
│   ├── strategy-manager.ts   # Execution strategy management
│   └── tool-manager.ts       # MCP tool management
├── providers/                # AI provider implementations
│   ├── base-provider.ts      # Common provider functionality
│   ├── deepseek-provider.ts
│   ├── openai-provider.ts
│   ├── anthropic-provider.ts
│   └── o3-provider.ts
├── strategies/               # Collaboration strategies
│   ├── parallel-strategy.ts
│   ├── sequential-strategy.ts
│   ├── consensus-strategy.ts
│   └── iterative-strategy.ts
├── tools/                    # MCP tool implementations
│   ├── collaborate-tool.ts
│   ├── review-tool.ts
│   ├── compare-tool.ts
│   └── refine-tool.ts
├── services/                 # Enterprise services
│   ├── cache-service.ts
│   ├── metrics-service.ts
│   ├── search-service.ts
│   └── synthesis-service.ts
├── server/                   # MCP server implementation
│   └── mcp-server.ts
└── types/                    # Type definitions
    ├── common.ts
    ├── interfaces.ts
    └── index.ts
```
Design Principles
- Dependency Injection: Clean architecture with InversifyJS
- Strategy Pattern: Pluggable collaboration strategies
- Provider Abstraction: Unified interface for different AI services
- Performance: Efficient caching and rate limiting
- Observability: Comprehensive metrics and logging
- Extensibility: Easy to add new providers and strategies
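The provider-abstraction principle can be illustrated in a few lines. The sketch below is not the actual `base-provider.ts` API; the interface and class names are assumptions. The point is that each provider adapts its own raw response shape to one common interface, so strategies never see provider-specific fields.

```typescript
// Illustrative sketch of provider abstraction (names are assumptions,
// not the actual base-provider.ts API).
interface CompletionResult {
  provider: string;
  text: string;
}

interface AIProvider {
  readonly name: string;
  complete(prompt: string): Promise<CompletionResult>;
}

// A fake provider whose "raw API" returns { choices: [{ message: ... }] }.
class OpenAIStyleProvider implements AIProvider {
  readonly name = "openai-style";
  async complete(prompt: string): Promise<CompletionResult> {
    const raw = { choices: [{ message: { content: `echo: ${prompt}` } }] };
    return { provider: this.name, text: raw.choices[0].message.content };
  }
}

// A fake provider whose "raw API" returns { content: [{ text: ... }] }.
class AnthropicStyleProvider implements AIProvider {
  readonly name = "anthropic-style";
  async complete(prompt: string): Promise<CompletionResult> {
    const raw = { content: [{ text: `echo: ${prompt}` }] };
    return { provider: this.name, text: raw.content[0].text };
  }
}
```

Because both classes satisfy `AIProvider`, a strategy can hold an `AIProvider[]` and stay ignorant of which vendor is behind each entry.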
Configuration
Configuration Schema
The server uses YAML configuration files with JSON Schema validation. See config/schema.json for the complete schema.
Key Configuration Sections
- Server: Basic server settings (name, version, protocol)
- Providers: AI provider configurations and credentials
- Strategies: Strategy-specific settings and timeouts
- Cache: Caching behavior (memory, Redis, file)
- Metrics: Performance monitoring settings
- Logging: Log levels and output configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| DEEPSEEK_API_KEY | DeepSeek API key | Required |
| OPENAI_API_KEY | OpenAI API key | Optional |
| ANTHROPIC_API_KEY | Anthropic API key | Optional |
| O3_API_KEY | O3 API key (defaults to OPENAI_API_KEY) | Optional |
| MCP_PROTOCOL | Transport protocol | stdio |
| MCP_DEFAULT_PROVIDER | Default AI provider | deepseek |
| NODE_ENV | Environment mode | production |
| LOG_LEVEL | Logging level | info |
Monitoring & Metrics
Built-in Metrics
- Request Metrics: Response times, success rates, error counts
- Provider Metrics: Individual provider performance
- Tool Metrics: Usage statistics per MCP tool
- Cache Metrics: Hit rates, memory usage
- System Metrics: CPU, memory, and resource utilization
OpenTelemetry Integration
The server supports OpenTelemetry for distributed tracing and metrics collection:
```yaml
metrics:
  enabled: true
  export:
    enabled: true
    format: "opentelemetry"
    endpoint: "http://localhost:4317"
```
Testing
Test Coverage
- Unit Tests: 95+ individual component tests
- Integration Tests: End-to-end MCP protocol testing
- E2E Tests: Complete workflow validation
- API Tests: Direct provider API validation
Running Tests
```bash
# Run all tests
pnpm test

# Run with coverage
pnpm run test:coverage

# Run specific test suites
pnpm run test:unit
pnpm run test:integration
pnpm run test:e2e

# Verify API connectivity
pnpm run verify-deepseek
```
Deployment
Docker
```bash
# Build image
docker build -t claude-code-ai-collab-mcp .

# Run container
docker run -d \
  -e DEEPSEEK_API_KEY=your-key \
  -p 3000:3000 \
  claude-code-ai-collab-mcp
```
Production Considerations
- Load Balancing: Multiple server instances for high availability
- Caching: Redis for distributed caching
- Monitoring: Prometheus/Grafana for metrics visualization
- Security: API key rotation and rate limiting
- Backup: Regular configuration and data backups
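Rate limiting, mentioned above under security, is commonly implemented as a token bucket: tokens refill at a steady rate and bursts are allowed up to the bucket's capacity. The sketch below is illustrative, not this server's actual implementation; the injectable clock exists only to make the refill logic testable.

```typescript
// Token-bucket rate limiter sketch (illustrative, not the server's actual code).
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,          // maximum burst size
    private ratePerSec: number,        // refill rate in tokens per second
    private now: () => number = () => Date.now() / 1000,
  ) {
    this.tokens = capacity;
    this.last = this.now();
  }

  // Returns true if a request may proceed, consuming one token.
  tryAcquire(): boolean {
    const t = this.now();
    this.tokens = Math.min(this.capacity, this.tokens + (t - this.last) * this.ratePerSec);
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```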
Contributing
We welcome contributions! Please see the contributing guidelines.
Development Setup
```bash
# Fork and clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server

# Install dependencies
pnpm install

# Start development
pnpm run dev

# Run tests
pnpm test

# Lint and format
pnpm run lint
pnpm run lint:fix
```
Roadmap
Version 1.1
- GraphQL API support
- WebSocket transport protocol
- Advanced caching strategies
- Custom strategy plugins
Version 1.2
- Multi-tenant support
- Enhanced security features
- Performance optimizations
- Additional AI providers
Version 2.0
- Distributed architecture
- Advanced workflow orchestration
- Machine learning optimization
- Enterprise SSO integration
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Documentation: Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email:
Acknowledgments
- Model Context Protocol for the foundational protocol
- InversifyJS for dependency injection
- TypeScript for type safety
- All AI provider APIs for enabling collaboration
Built with ❤️ by the Claude Code AI Collaboration Team