

MCP Server

Model Context Protocol (MCP) server for AI model management and inference.

Features

  • FastAPI backend
  • Support for multiple AI models (GPT-4, DeepSeek)
  • Docker containerization
  • Environment configuration
  • LoRA fine-tuning capabilities
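
The feature list mentions LoRA fine-tuning. The README doesn't show how it is wired into the server, but the core idea can be sketched with NumPy as a toy illustration (not the server's actual implementation): freeze the pretrained weight `W` and train only a low-rank update `B @ A`.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                # model dim and adapter rank (r << d)

W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-init

def lora_forward(x, alpha=16):
    # Base layer output plus the scaled low-rank update; during
    # fine-tuning only A and B receive gradients, W stays frozen.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapter is a no-op before training:
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (2·d·r values) are stored per adapter, which is why LoRA checkpoints stay small compared to full fine-tunes.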

Setup

Local Development

  1. Clone the repository
  2. Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```

  3. Install dependencies:

```bash
pip install -r requirements.txt
```

  4. Create a .env file from .env.example and configure API keys
  5. Run the server:

```bash
uvicorn app.main:app --reload
```
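
The .env step above can be handled by libraries such as python-dotenv, but the parsing itself is simple. A minimal stdlib sketch (`MCP_EXAMPLE_KEY` is an illustrative name, not one of the project's actual variables):

```python
import os
import tempfile
from pathlib import Path

def load_env(path):
    """Minimal .env parser: KEY=VALUE lines; blanks and '#' comments skipped."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault so real environment variables always win over the file
        os.environ.setdefault(key.strip(), value.strip())

# Demo against a throwaway file:
with tempfile.TemporaryDirectory() as tmp:
    env_file = Path(tmp) / ".env"
    env_file.write_text("# comment\nMCP_EXAMPLE_KEY=sk-test\n")
    load_env(env_file)
```

After loading, the keys are available through `os.environ` like any other environment variable.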

Docker Deployment

  1. Build and run:

```bash
docker-compose up --build
```

  2. Access the API at http://localhost:8000
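
The repository's compose file is not shown here; a minimal sketch of what such a file might look like (the service name and port mapping are assumptions based on this README, not the actual file):

```yaml
# Hypothetical docker-compose.yml sketch; the repository's real file may differ.
services:
  mcp-server:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
```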

Cloud Deployment

AWS ECS

  1. Configure the AWS CLI
  2. Build and push the Docker image to ECR
  3. Create an ECS task definition
  4. Deploy the service
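
Most of the detail in these steps lives in the task definition. A minimal Fargate-style sketch is below; the image URI, CPU/memory sizing, and port are placeholders, and a real Fargate task also needs an `executionRoleArn`, which is omitted here:

```json
{
  "family": "mcp-server",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "mcp-server",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/mcp-server:latest",
      "portMappings": [{"containerPort": 8000}],
      "essential": true
    }
  ]
}
```
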

Google Cloud Run

  1. Install the gcloud CLI
  2. Build and push the container:

```bash
gcloud builds submit --tag gcr.io/PROJECT-ID/mcp-server
```

  3. Deploy:

```bash
gcloud run deploy --image gcr.io/PROJECT-ID/mcp-server
```

API Documentation

Once the server is running, access the interactive Swagger UI at http://localhost:8000/docs

Environment Variables

See .env.example for configuration options
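
The example file is not reproduced here. Given the GPT-4 and DeepSeek support listed above, it plausibly contains API keys along these lines, but the key names below are guesses; check .env.example itself for the real schema:

```
# Hypothetical .env.example sketch; key names are illustrative only
OPENAI_API_KEY=your-openai-key
DEEPSEEK_API_KEY=your-deepseek-key
```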

License

MIT