The MCP Crypto Data Server handles cryptocurrency market data and uses the Model Context Protocol (MCP) for integration and deployment; this guide covers running it locally, in Docker, and in production.

MCP Crypto Data Server - Deployment Guide

Local Development

Prerequisites

  • Python 3.11+
  • Git
  • Docker & Docker Compose (optional)

Setup

  1. Clone and setup

    git clone <repository>
    cd mcp-crypto-data-server
    python3.11 -m venv venv
    source venv/bin/activate
    pip install -e ".[dev]"
    
  2. Configure environment

    cp .env.example .env
    # Edit .env with your settings
    
  3. Run server

    python -m uvicorn app.main:app --reload
    
  4. Run tests

    pytest
    pytest --cov=app --cov-report=html
    

Docker Deployment

Using Docker Compose (Recommended for Development)

cd docker
docker-compose up --build

This starts:

  • Redis cache on port 6379
  • FastAPI server on port 8000

Using Docker Directly

# Build image
docker build -f docker/Dockerfile -t mcp-server:latest .

# Run container
docker run -p 8000:8000 \
  -e REDIS_URL=redis://host.docker.internal:6379/0 \
  -e ENABLED_EXCHANGES=binance,kraken,coinbasepro \
  mcp-server:latest

Production Deployment

Environment Variables

See .env.example for all available settings. Key production settings (a sketch of reading them at startup follows this list):

  • APP_ENV=production
  • LOG_LEVEL=INFO
  • REDIS_ENABLED=true
  • REDIS_URL=redis://redis-host:6379/0
  • CMC_API_KEY=your_api_key
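
How the server loads these values internally is not shown here; as a rough sketch, they could be read with plain os.environ. The actual application may use a settings library instead, and the defaults below are assumptions, not documented values:

import os

# Minimal sketch: collect the key production settings from the environment.
# Variable names match this guide; the fallback defaults are illustrative only.
def load_settings() -> dict:
    return {
        "app_env": os.environ.get("APP_ENV", "development"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "redis_enabled": os.environ.get("REDIS_ENABLED", "false").lower() == "true",
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "cmc_api_key": os.environ.get("CMC_API_KEY", ""),
    }

if __name__ == "__main__":
    print(load_settings())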

Kubernetes Deployment

Example deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
      - name: mcp-server
        image: mcp-server:latest
        ports:
        - containerPort: 8000
        env:
        - name: APP_ENV
          value: "production"
        - name: REDIS_URL
          value: "redis://redis-service:6379/0"
        livenessProbe:
          httpGet:
            path: /v1/health
            port: 8000
          initialDelaySeconds: 10
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /v1/health
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 10

Monitoring

Health Check

curl http://localhost:8000/v1/health

Response:

{
  "status": "ok",
  "uptime": 123.45,
  "version": "0.1.0"
}
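
For scripted monitoring, the same endpoint can be polled from Python using only the standard library. The URL and response fields follow the example above; the exit-code handling is illustrative:

import json
import urllib.request

HEALTH_URL = "http://localhost:8000/v1/health"

def check_health(url: str = HEALTH_URL) -> bool:
    """Return True if the server reports status 'ok'."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            payload = json.load(resp)
    except OSError as exc:
        print(f"health check failed: {exc}")
        return False
    print(f"status={payload['status']} uptime={payload['uptime']}s version={payload['version']}")
    return payload["status"] == "ok"

if __name__ == "__main__":
    raise SystemExit(0 if check_health() else 1)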

Logs

View logs:

# Docker Compose
docker-compose logs -f app

# Docker
docker logs -f <container-id>

# Kubernetes
kubectl logs -f deployment/mcp-server

Performance Tuning

Redis Configuration

  • Use Redis cluster for high availability
  • Configure maxmemory policy: allkeys-lru
  • Enable persistence: appendonly yes (both settings are applied in the sketch after this list)
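
Assuming the Redis server permits CONFIG SET (managed services often restrict it, in which case set these in redis.conf or the provider console), the two settings above can be applied with redis-py:

import redis

# Connect with the same REDIS_URL used by the application (host is illustrative).
r = redis.Redis.from_url("redis://redis-host:6379/0")

# Evict least-recently-used keys once maxmemory is reached.
r.config_set("maxmemory-policy", "allkeys-lru")

# Turn on append-only-file persistence.
r.config_set("appendonly", "yes")

print(r.config_get("maxmemory-policy"), r.config_get("appendonly"))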

Rate Limiting

  • Adjust RATE_LIMIT_REQUESTS based on API key limits
  • Monitor rate limit errors in logs
  • Increase INITIAL_BACKOFF if hitting limits frequently (see the retry sketch after this list)
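
The retry pattern behind INITIAL_BACKOFF looks roughly like the sketch below; the function and exception names are illustrative, not the server's internal API:

import random
import time

INITIAL_BACKOFF = 1.0  # seconds; mirrors the INITIAL_BACKOFF setting
MAX_RETRIES = 5

def fetch_with_backoff(fetch, *args, **kwargs):
    """Call fetch(), doubling the delay after each rate-limit error."""
    delay = INITIAL_BACKOFF
    for attempt in range(MAX_RETRIES):
        try:
            return fetch(*args, **kwargs)
        except RuntimeError as exc:  # stand-in for the exchange's rate-limit error
            print(f"rate limited on attempt {attempt + 1}: {exc}")
            time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids synchronized retries
            delay *= 2
    raise RuntimeError("rate limit retries exhausted")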

Caching

  • Increase TTLs for stable data (OHLCV)
  • Decrease TTLs for volatile data (ticker); see the sketch after this list
  • Monitor cache hit rates
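
A sketch of per-data-type TTLs with redis-py; the key layout and TTL values are assumptions for illustration, not the server's actual cache schema:

import json
import redis

r = redis.Redis.from_url("redis://localhost:6379/0")

# Longer TTLs for stable data, shorter for volatile data (values are examples).
CACHE_TTLS = {"ohlcv": 300, "ticker": 10}  # seconds

def cache_set(data_type: str, key: str, payload: dict) -> None:
    r.setex(f"{data_type}:{key}", CACHE_TTLS[data_type], json.dumps(payload))

def cache_get(data_type: str, key: str):
    raw = r.get(f"{data_type}:{key}")
    return json.loads(raw) if raw is not None else None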

Server

Use multiple worker processes with Gunicorn:

gunicorn -w 4 -k uvicorn.workers.UvicornWorker app.main:app

Worker count formula: workers = 2 * cpu_count + 1
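
Since Gunicorn config files are Python, the formula can live in a gunicorn.conf.py so the worker count adapts to the host; treat the values here as a starting point rather than tuned numbers:

# gunicorn.conf.py - start with: gunicorn -c gunicorn.conf.py app.main:app
import multiprocessing

bind = "0.0.0.0:8000"
workers = 2 * multiprocessing.cpu_count() + 1
worker_class = "uvicorn.workers.UvicornWorker"
timeout = 30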

Troubleshooting

Redis Connection Issues

# Check Redis connectivity
redis-cli -h redis-host ping

# Monitor Redis
redis-cli MONITOR
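
The same connectivity check can be run from the application environment with redis-py, using the configured REDIS_URL:

import os
import redis

url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
try:
    redis.Redis.from_url(url, socket_connect_timeout=3).ping()
    print(f"Redis reachable at {url}")
except redis.RedisError as exc:
    print(f"Redis unreachable at {url}: {exc}")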

Rate Limit Errors

  • Check exchange API key limits
  • Verify RATE_LIMIT_REQUESTS configuration
  • Review logs for rate limit patterns

High Memory Usage

  • Check Redis memory: redis-cli INFO memory
  • Reduce cache TTLs
  • Monitor active connections

Slow Responses

  • Check exchange API latency
  • Monitor Redis performance
  • Review application logs for errors

Backup & Recovery

Redis Backup

# Create snapshot
redis-cli BGSAVE

# Copy dump.rdb to backup location
cp /var/lib/redis/dump.rdb /backup/redis-$(date +%Y%m%d).rdb

Application Backup

# Backup configuration
cp .env /backup/.env.$(date +%Y%m%d)

# Backup logs
tar -czf /backup/logs-$(date +%Y%m%d).tar.gz logs/

Scaling

Horizontal Scaling

  • Deploy multiple server instances behind load balancer
  • Use shared Redis for cache
  • Configure sticky sessions if needed

Vertical Scaling

  • Increase server resources (CPU, memory)
  • Optimize database queries
  • Tune connection pools (see the sketch after this list)
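
On the Redis side, 'tune connection pools' usually means capping the pool size per worker; a redis-py sketch with an illustrative limit:

import redis

# One shared pool per worker process; max_connections is an example value.
pool = redis.ConnectionPool.from_url("redis://redis-host:6379/0", max_connections=50)
r = redis.Redis(connection_pool=pool)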

Security

API Security

  • Use HTTPS in production
  • Implement rate limiting per IP (a minimal middleware sketch follows this list)
  • Add authentication if needed
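
A minimal per-IP rate-limiting sketch as FastAPI middleware; it keeps counters in process memory, so multiple workers or instances would need a shared store (e.g. Redis), and the limits are illustrative:

import time
from collections import defaultdict, deque

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
WINDOW_SECONDS = 60
MAX_REQUESTS = 120  # example limit per client IP per window
_hits: dict[str, deque] = defaultdict(deque)

@app.middleware("http")
async def per_ip_rate_limit(request: Request, call_next):
    ip = request.client.host if request.client else "unknown"
    now = time.monotonic()
    window = _hits[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return JSONResponse({"detail": "rate limit exceeded"}, status_code=429)
    window.append(now)
    return await call_next(request)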

Secrets Management

  • Never commit .env files
  • Use environment variables
  • Rotate API keys regularly

Network Security

  • Use VPC/private networks
  • Restrict Redis access
  • Enable firewall rules

CI/CD Integration

GitHub Actions workflow included (.github/workflows/ci.yml):

  • Runs linting (ruff)
  • Runs tests (pytest)
  • Builds Docker image
  • Reports coverage

Trigger deployment on successful CI:

- name: Deploy to Production
  if: github.ref == 'refs/heads/main' && success()
  run: |
    # Deploy commands here