# MCP Production Server
Production-grade Model Context Protocol (MCP) server implementation for integrating with Claude and other AI models.
## Features
- Production-Ready: Built for scale with proper error handling, timeouts, and resilience patterns
- FastAPI Framework: High-performance async web framework
- Database Integration: PostgreSQL with SQLAlchemy ORM
- Caching Layer: Redis for high-performance caching
- Observability: Prometheus metrics, structured logging, distributed tracing
- Security: Authentication, authorization, rate limiting, input validation
- Resilience: Circuit breakers, retries with exponential backoff
- Testing: Comprehensive unit, integration, and load tests
- Deployment: Docker, Kubernetes, blue-green deployments
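As an illustration of the resilience patterns listed above, a retry decorator with exponential backoff and jitter might look like the following. This is a minimal sketch; `retry_with_backoff` and its parameters are illustrative names, not this server's actual API:

```python
import random
import time
from functools import wraps

def retry_with_backoff(max_attempts=3, base_delay=0.1, max_delay=2.0):
    """Retry a function on failure, doubling the delay between attempts."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # out of attempts: propagate the error
                    # sleep with jitter, then double the delay (capped)
                    time.sleep(delay * (0.5 + random.random() / 2))
                    delay = min(delay * 2, max_delay)
        return wrapper
    return decorator

@retry_with_backoff(max_attempts=3, base_delay=0.01)
def flaky_call(state={"calls": 0}):
    """Simulated transient failure: fails twice, then succeeds."""
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_call())  # succeeds on the third attempt
```

In practice a production implementation would also cap total elapsed time and only retry on errors known to be transient (timeouts, connection resets), not on all exceptions.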
## Quick Start

### Prerequisites
- Python 3.11+
- PostgreSQL 15+
- Redis 7+
- Docker and Docker Compose (for containerized development)
### Local Development

1. Clone the repository:

   ```bash
   git clone https://github.com/afrankenstine/mcp-production-server.git
   cd mcp-production-server
   ```

2. Create a virtual environment:

   ```bash
   python3.11 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Copy environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

5. Start services with Docker Compose:

   ```bash
   docker-compose up -d
   ```

6. Run the server:

   ```bash
   python main.py
   ```

The server will be available at http://localhost:8000.
## Project Structure

```
mcp-production-server/
├── src/
│   ├── api/                 # FastAPI server
│   ├── mcp/                 # MCP server implementation
│   ├── config/              # Configuration management
│   ├── database/            # Database models and connections
│   ├── errors/              # Error handling
│   ├── cache/               # Redis caching
│   ├── security/            # Auth, validation, rate limiting
│   ├── resilience/          # Circuit breakers, retries
│   ├── metrics.py           # Prometheus metrics
│   ├── logging_config.py    # Structured logging
│   └── context.py           # Request context
├── tests/
│   ├── unit/                # Unit tests
│   ├── integration/         # Integration tests
│   └── load/                # Load tests
├── deploy/
│   ├── kubernetes/          # Kubernetes manifests
│   ├── docker/              # Docker configurations
│   └── terraform/           # Infrastructure as code
├── scripts/                 # Operational scripts
├── docs/                    # Documentation
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
└── main.py                  # Entry point
```
## API Endpoints

- `POST /mcp` - MCP message endpoint (JSON-RPC 2.0)
- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
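As a sketch of what a call to the `POST /mcp` endpoint could look like, the snippet below builds a JSON-RPC 2.0 envelope using only the standard library. The method name `tools/list` comes from the MCP specification; treat the exact payload shape as an assumption rather than this server's documented contract:

```python
import json
import urllib.request

def build_mcp_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 message for the POST /mcp endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or {},
    }

def send_mcp_request(payload, url="http://localhost:8000/mcp"):
    """POST the message to the server (requires a running server)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_mcp_request("tools/list")
print(json.dumps(payload))
# send_mcp_request(payload)  # uncomment once the server is running
```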
## Configuration

All configuration is done via environment variables. See `.env.example` for available options.

Key environment variables:

- `DATABASE_URL` - PostgreSQL connection string
- `REDIS_URL` - Redis connection string
- `CLAUDE_API_KEY` - Claude API key
- `LOG_LEVEL` - Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`)
- `ENVIRONMENT` - Environment (`development`, `staging`, `production`)
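A minimal `.env` for local development might look like the following; all values are placeholders, so match the hosts, ports, and credentials to your own Docker Compose services:

```
DATABASE_URL=postgresql://mcp:mcp@localhost:5432/mcp
REDIS_URL=redis://localhost:6379/0
CLAUDE_API_KEY=your-api-key-here
LOG_LEVEL=INFO
ENVIRONMENT=development
```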
## Testing

Run tests:

```bash
# Unit tests
pytest tests/unit/

# Integration tests
pytest tests/integration/

# Load tests
locust -f tests/load/locustfile.py --host=http://localhost:8000
```
## Deployment

### Docker

Build and run (pass your environment configuration to the container):

```bash
docker build -t mcp-server .
docker run -p 8000:8000 --env-file .env mcp-server
```

### Kubernetes

Deploy to Kubernetes:

```bash
kubectl apply -f deploy/kubernetes/mcp-server.yaml
```

### Blue-Green Deployment

Run the blue-green deployment script:

```bash
./scripts/deploy.sh
```
## Monitoring

- Metrics: Prometheus metrics at `/metrics`
- Logs: structured JSON logs to stdout
- Dashboards: Grafana dashboards at http://localhost:3000
## Documentation

Additional documentation lives in the `docs/` directory.
## License

Apache License 2.0 - see the `LICENSE` file for details.
## Contributing
Contributions are welcome! Please read our contributing guidelines first.
## Support
For issues and questions, please open a GitHub issue.