FastMCP - Model Context Protocol Server

FastMCP is a Model Context Protocol (MCP) server that provides LLM services through the MCP standard. It acts as a bridge between MCP clients and your local LLM service, enabling seamless integration with MCP-compatible applications.

Features

  • 🚀 MCP Protocol Compliance: Full implementation of Model Context Protocol
  • 🔧 Tools: Chat completion, model listing, health checks
  • 📝 Prompts: Pre-built prompts for common tasks (assistant, code review, summarization)
  • 📊 Resources: Server configuration and LLM service status
  • 🔄 Streaming Support: Both streaming and non-streaming responses
  • 🔒 Configurable: Environment-based configuration
  • 🛡️ Robust: Built-in error handling and health monitoring
  • 🔌 Integration Ready: Works with any OpenAI-compatible LLM service

Getting Started

Prerequisites

  • Python 3.9+
  • pip
  • Local LLM service running on port 5001 (OpenAI-compatible API); a quick connectivity check is sketched below
  • MCP client (e.g., Claude Desktop, MCP Inspector)
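
Before wiring up the MCP server, it can help to confirm that the local LLM backend actually responds. A minimal connectivity check (a sketch; it assumes the service exposes the standard OpenAI-compatible /v1/models endpoint on port 5001):

# check_llm_service.py - quick sanity check for the local LLM backend (illustrative).
import json
import urllib.request

URL = "http://localhost:5001/v1/models"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)
        print("LLM service is up; models:", [m["id"] for m in data.get("data", [])])
except Exception as exc:
    print("LLM service is not reachable:", exc)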

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/fastmcp.git
    cd fastmcp
    
  2. Create a virtual environment and activate it:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Create a .env file (copy from .env.mcp) and configure it (a sketch of how these values might be loaded follows these steps):

    # Server Settings
    MCP_SERVER_NAME=fastmcp-llm-router
    MCP_SERVER_VERSION=0.1.0
    
    # LLM Service Configuration
    LOCAL_LLM_SERVICE_URL=http://localhost:5001
    
    # Optional: API Key for LLM service
    # LLM_SERVICE_API_KEY=your_api_key_here
    
    # Timeouts (in seconds)
    LLM_REQUEST_TIMEOUT=60
    HEALTH_CHECK_TIMEOUT=10
    
    # Logging
    LOG_LEVEL=INFO
    
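
The server presumably reads these settings at startup. A minimal loading sketch, assuming python-dotenv; the actual configuration code in this repository may differ:

# Config loading sketch (illustrative); assumes: pip install python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # read key=value pairs from .env into the process environment

SERVER_NAME = os.getenv("MCP_SERVER_NAME", "fastmcp-llm-router")
SERVER_VERSION = os.getenv("MCP_SERVER_VERSION", "0.1.0")
LLM_URL = os.getenv("LOCAL_LLM_SERVICE_URL", "http://localhost:5001")
LLM_API_KEY = os.getenv("LLM_SERVICE_API_KEY")  # optional
REQUEST_TIMEOUT = float(os.getenv("LLM_REQUEST_TIMEOUT", "60"))
HEALTH_TIMEOUT = float(os.getenv("HEALTH_CHECK_TIMEOUT", "10"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")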

Running the MCP Server

Option 1: Using the CLI script

    python run_server.py

Option 2: Direct execution

    python mcp_server.py

Option 3: With custom configuration

    python run_server.py --llm-url http://localhost:5001 --log-level DEBUG

The MCP server communicates over stdio, so MCP clients connect to it by launching it as a subprocess.
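
For orientation, this is roughly what a tool registration inside mcp_server.py could look like. It is only a sketch: it assumes the server is built on the MCP Python SDK's FastMCP class and uses httpx for HTTP calls, neither of which is confirmed by this README.

# Illustrative sketch; the real mcp_server.py may be structured differently.
# Assumes: pip install mcp httpx
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(os.getenv("MCP_SERVER_NAME", "fastmcp-llm-router"))

@mcp.tool()
async def health_check() -> str:
    """Report whether the local LLM service answers on its models endpoint."""
    base_url = os.getenv("LOCAL_LLM_SERVICE_URL", "http://localhost:5001")
    timeout = float(os.getenv("HEALTH_CHECK_TIMEOUT", "10"))
    async with httpx.AsyncClient(timeout=timeout) as client:
        resp = await client.get(f"{base_url}/v1/models")
        return "healthy" if resp.status_code == 200 else f"unhealthy ({resp.status_code})"

if __name__ == "__main__":
    mcp.run()  # serves the MCP protocol over stdio by default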

MCP Client Integration

Claude Desktop Integration

Add the server to your Claude Desktop configuration file (claude_desktop_config.json):

{
  "mcpServers": {
    "fastmcp-llm-router": {
      "command": "python",
      "args": ["/path/to/fastmcp/mcp_server.py"],
      "env": {
        "LOCAL_LLM_SERVICE_URL": "http://localhost:5001"
      }
    }
  }
}

MCP Inspector

Test your server with MCP Inspector:

npx @modelcontextprotocol/inspector python mcp_server.py

Available Tools

1. Chat Completion

Send messages to your LLM service:

{
  "name": "chat_completion",
  "arguments": {
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "model": "default",
    "temperature": 0.7
  }
}

2. List Models

Get available models from your LLM service:

{
  "name": "list_models",
  "arguments": {}
}

3. Health Check

Check if your LLM service is running:

{
  "name": "health_check",
  "arguments": {}
}
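
To exercise these tools outside Claude Desktop or the Inspector, an MCP client can spawn the server over stdio and call them by name. A sketch assuming the official MCP Python SDK (pip install mcp):

# Illustrative client script; tool names match the JSON examples above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # health_check and list_models take no arguments.
            print(await session.call_tool("health_check", arguments={}))
            print(await session.call_tool("list_models", arguments={}))

            # chat_completion mirrors the JSON example above.
            result = await session.call_tool("chat_completion", arguments={
                "messages": [
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": "Hello!"},
                ],
                "model": "default",
                "temperature": 0.7,
            })
            print(result)

asyncio.run(main())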

Available Prompts

  • chat_assistant: General AI assistant prompt
  • code_review: Code review and analysis
  • summarize: Text summarization

Available Resources

  • config://server: Server configuration
  • status://llm-service: LLM service status
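
Prompts and resources can be fetched through the same client session shown in the tools section above; a brief sketch, again assuming the MCP Python SDK:

# Illustrative; call with an initialized ClientSession from the client sketch above.
async def show_prompts_and_resources(session):
    config = await session.read_resource("config://server")
    status = await session.read_resource("status://llm-service")
    print(config, status)

    # Prompt argument names are assumptions; inspect the server's prompt list for the real ones.
    review = await session.get_prompt("code_review", arguments={"code": "print('hello')"})
    print(review)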

Project Structure

fastmcp/
├── app/
│   ├── api/
│   │   └── v1/
│   │       └── api.py          # API routes
│   ├── core/
│   │   └── config.py          # Application configuration
│   ├── models/                # Database models
│   ├── services/              # Business logic
│   └── utils/                 # Utility functions
├── tests/                     # Test files
├── .env.example               # Example environment variables
├── requirements.txt           # Project dependencies
└── README.md                  # This file

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License; see the LICENSE file for details.