Model Context Protocol (MCP) Server

A modular Model Context Protocol server for AI services with multiple transport options and dynamic service selection. Built with SOLID principles for maintainability and extensibility.

Features

  • Multiple AI Services: Support for Claude, OpenAI, and mock services
  • Dynamic Service Selection: Choose AI service on a per-request basis
  • Multiple Transports:
    • stdio: For command-line usage and scripting
    • TCP: For network-based applications
    • WebSocket: For web browsers and real-time applications
  • JSON-RPC 2.0: Compliant interface for predictable interactions
  • Modular Architecture: Easy to extend with new services and transports
  • Environment Configuration: Simple setup via .env file
  • Streaming Support: Real-time response streaming for supported transports

Repository Structure

basic-mcp-server/
├── .env                      # Environment configuration
├── .gitignore                # Git ignore rules
├── README.md                 # Project documentation
├── examples/                 # Example clients
│   ├── example_client.py     # Command-line client example
│   └── websocket_client.html # Browser WebSocket client
├── mcp_server.py             # Main entry point
└── mcp_server/               # Core package
    ├── config/               # Configuration management
    │   ├── settings.py       # Environment and settings handling
    │   └── __init__.py
    ├── core/                 # Core server logic
    │   ├── server.py         # Main server implementation
    │   └── __init__.py
    ├── handlers/             # Method handlers
    │   ├── base_handlers.py  # Standard MCP handlers
    │   ├── system_handlers.py # System info handlers
    │   └── __init__.py
    ├── models/               # Data models
    │   ├── json_rpc.py       # JSON-RPC data structures
    │   └── __init__.py
    ├── services/             # AI service implementations
    │   ├── claude_service.py # Anthropic Claude API
    │   ├── openai_service.py # OpenAI API
    │   └── __init__.py       # Service registry
    ├── transports/           # Communication protocols
    │   ├── base.py           # Transport interfaces
    │   ├── websocket.py      # WebSocket implementation
    │   └── __init__.py
    └── __init__.py

Installation

  1. Clone the repository:

    git clone https://github.com/shaswata56/basic-mcp-server.git
    cd basic-mcp-server
    
  2. Create a virtual environment and install dependencies:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -e .
    
  3. Configure your environment by editing the .env file with your API keys and settings.
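
As a starting point, a minimal .env might look like the sketch below. All values are placeholders; the full set of variables is documented under Configuration.

```ini
# .env – example values only; replace with your own keys
AI_SERVICE_TYPE=claude
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-key-here
MCP_TRANSPORT_TYPE=stdio
MCP_SERVER_NAME=ai-mcp-server
```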

Configuration

Environment Variables

The server can be configured using environment variables in the .env file:

| Variable | Description | Default |
|----------|-------------|---------|
| AI_SERVICE_TYPE | Default AI service to use ("claude", "openai", "mock") | "claude" |
| SECRETS_FILE | Path to JSON file with API secrets | None |
| ANTHROPIC_API_KEY | Your Anthropic API key | None |
| OPENAI_API_KEY | Your OpenAI API key | None |
| MCP_SERVER_NAME | Name of the server | "ai-mcp-server" |
| MCP_SERVER_VERSION | Server version | "1.0.0" |
| MCP_TRANSPORT_TYPE | Transport type ("stdio", "tcp", or "websocket") | "stdio" |
| MCP_TCP_HOST | TCP/WebSocket host address | "127.0.0.1" |
| MCP_TCP_PORT | TCP server port | 9000 |
| MCP_WS_PORT | WebSocket server port | 8765 |
| MCP_WS_PATH | WebSocket server path | "/" |
| MCP_WS_ORIGINS | Comma-separated list of allowed origins | None (all allowed) |
| CLAUDE_DEFAULT_MODEL | Default Claude model | "claude-3-opus-20240229" |
| CLAUDE_DEFAULT_MAX_TOKENS | Default max tokens for Claude | 4096 |
| CLAUDE_DEFAULT_TEMPERATURE | Default temperature for Claude | 0.7 |
| OPENAI_DEFAULT_MODEL | Default OpenAI model | "gpt-4o" |
| OPENAI_DEFAULT_MAX_TOKENS | Default max tokens for OpenAI | 1024 |
| OPENAI_DEFAULT_TEMPERATURE | Default temperature for OpenAI | 0.7 |
| EMBEDDINGS_3_LARGE_API_URL | Azure endpoint for text-embedding-3-large | None |
| EMBEDDINGS_3_LARGE_API_KEY | API key for text-embedding-3-large | None |
| EMBEDDINGS_3_SMALL_API_URL | Azure endpoint for text-embedding-3-small | None |
| EMBEDDINGS_3_SMALL_API_KEY | API key for text-embedding-3-small | None |
| AZURE_OPENAI_EMBEDDING_DEPLOYMENT | Azure deployment name for embeddings | <model name> |
| QDRANT_URL | URL of the Qdrant server (use :memory: for in-memory) | None |
| QDRANT_API_KEY | API key for Qdrant Cloud | None |

When embedding API credentials are not provided, the server will generate deterministic mock embeddings so that testing can proceed without external services.
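
One common way to produce such deterministic vectors is to seed a PRNG from a hash of the input text. The sketch below illustrates the idea only; the repository's actual implementation (and the dimension it uses) may differ.

```python
import hashlib
import math
import random


def mock_embedding(text: str, dim: int = 1536) -> list[float]:
    # Seed a PRNG from a hash of the input so the same text always
    # maps to the same vector, run after run.
    seed = int.from_bytes(hashlib.sha256(text.encode("utf-8")).digest()[:8], "big")
    rng = random.Random(seed)
    vec = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    # Normalize to unit length so cosine similarity behaves sensibly.
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]
```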

For production deployments, configure QDRANT_URL to point to a dedicated Qdrant server. Using a remote server provides persistent storage and improved vector search performance compared to the default in-memory mode.
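
For example, a local Qdrant instance can be started with Docker (official qdrant/qdrant image, default HTTP port 6333) and referenced from .env:

```bash
# Run Qdrant locally with persistent storage
docker run -d -p 6333:6333 -v qdrant_storage:/qdrant/storage qdrant/qdrant

# Then set in .env:
# QDRANT_URL=http://localhost:6333
```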

The optional SECRETS_FILE variable allows you to store API keys in a JSON file instead of environment variables. Values defined in the secrets file are used when corresponding environment variables are not set. If a secret value is an array, the server will rotate through the values each time the key is requested, enabling simple key rotation strategies.
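
A hypothetical secrets file illustrating both forms (a single value, and an array that the server rotates through; key names mirror the environment variables above):

```json
{
  "ANTHROPIC_API_KEY": "sk-ant-placeholder",
  "OPENAI_API_KEY": ["sk-key-1", "sk-key-2"]
}
```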

Usage

Running the Server

Standard stdio Mode

python mcp_server.py

TCP Server Mode

python mcp_server.py --tcp --host 127.0.0.1 --port 9000

WebSocket Server Mode

python mcp_server.py --websocket --host 127.0.0.1 --port 8765 --ws-path /

Command Line Options

usage: mcp_server.py [-h] [--tcp | --websocket] [--host HOST] [--port PORT]
                     [--ws-path WS_PATH] [--service-type {claude,openai,mock}]
                     [--claude-api-key CLAUDE_API_KEY]
                     [--openai-api-key OPENAI_API_KEY]
                     [--qdrant-url QDRANT_URL]
                     [--qdrant-api-key QDRANT_API_KEY] [--mock]
                     [--log-level {DEBUG,INFO,WARNING,ERROR}]
                     [--env-file ENV_FILE]

AI MCP Server with JSON-RPC

options:
  -h, --help            show this help message and exit
  --log-level {DEBUG,INFO,WARNING,ERROR}
                        Logging level
  --env-file ENV_FILE   Path to .env file (default: .env in project root)

Transport Options:
  --tcp                 Run as TCP server
  --websocket           Run as WebSocket server
  --host HOST           Host to bind server
  --port PORT           Port for server
  --ws-path WS_PATH     URL path for WebSocket server (default: /)

AI Service Options:
  --service-type {claude,openai,mock}
                        AI service to use
  --claude-api-key CLAUDE_API_KEY
                        Anthropic API key
  --openai-api-key OPENAI_API_KEY
                        OpenAI API key
  --qdrant-url QDRANT_URL
                        Qdrant server URL
  --qdrant-api-key QDRANT_API_KEY
                        Qdrant API key
  --mock                Use mock AI service (for testing)

Client Examples

Command Line Client

The examples/example_client.py script provides a simple way to interact with the server:

# Initialize connection
python examples/example_client.py initialize

# List available tools
python examples/example_client.py list-tools

# Echo text
python examples/example_client.py echo "Hello, world!"

# Calculate expression
python examples/example_client.py calculate "2 + 3 * 4"

# Ask AI with dynamic service selection
python examples/example_client.py ask "What is the capital of France?" --service claude

# System information
python examples/example_client.py system-info

WebSocket Browser Client

For WebSocket transport, open examples/websocket_client.html in a browser:

  1. Enter the WebSocket URL (e.g., ws://localhost:8765/)
  2. Click "Connect"
  3. Use the interface to send requests to the server
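
The WebSocket transport can also be scripted. The sketch below assumes the server is running in WebSocket mode on port 8765 and uses the third-party websockets package; it sends a unified ai/message request to the mock service (see the JSON-RPC Interface section for the request format):

```python
# pip install websockets
import asyncio
import json

import websockets


async def main() -> None:
    async with websockets.connect("ws://localhost:8765/") as ws:
        request = {
            "jsonrpc": "2.0",
            "method": "tools/call",
            "params": {
                "name": "ai/message",
                "arguments": {"prompt": "Hello!", "service_name": "mock"},
            },
            "id": 1,
        }
        await ws.send(json.dumps(request))
        print(json.loads(await ws.recv()))


asyncio.run(main())
```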

JSON-RPC Interface

Unified AI Message Request

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "ai/message",
    "arguments": {
      "prompt": "What is the capital of France?",
      "service_name": "claude"  // Optional: "claude", "openai", "mock" or omit for default
    }
  },
  "id": 1
}

Service-Specific Requests (for backward compatibility)

Claude:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "claude/message",
    "arguments": {
      "prompt": "What is the capital of France?"
    }
  },
  "id": 2
}

OpenAI:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "openai/message",
    "arguments": {
      "prompt": "What is the capital of France?"
    }
  },
  "id": 3
}

Available Methods

| Method | Description |
|--------|-------------|
| initialize | Initialize the server connection |
| tools/list | List available tools |
| tools/call | Call a tool with arguments |
| resources/list | List available resources |
| resources/read | Read a resource |
| system/info | Get system information |
| system/health | Check system health |
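
For example, listing the available tools requires only the method name:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 4
}
```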

Extending the Server

Adding a New Method Handler

  1. Create a new handler class implementing the HandlerInterface in the handlers directory
  2. Register it in the AIMCPServerApp.initialize() method
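
A rough sketch of what such a handler might look like. The name EchoUpperHandler and the method echo/upper are hypothetical, and the real HandlerInterface may use a different signature; check handlers/base_handlers.py before copying this.

```python
# Hypothetical handler – verify the actual HandlerInterface
# signature in mcp_server/handlers/base_handlers.py.
from mcp_server.handlers import HandlerInterface  # assumed import path


class EchoUpperHandler(HandlerInterface):
    """Handles a made-up 'echo/upper' method that upper-cases text."""

    method = "echo/upper"

    def handle(self, params: dict) -> dict:
        return {"text": str(params.get("text", "")).upper()}
```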

Adding a New AI Service

  1. Create a new service class implementing the AIServiceInterface in the services directory
  2. Add it to the service registry in create_ai_services_from_config()
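
Sketched under the same caveat (the method name below is an assumption; consult AIServiceInterface in the services directory for the real contract):

```python
# Hypothetical service, handy for testing the registry wiring.
from mcp_server.services import AIServiceInterface  # assumed import path


class ReverseService(AIServiceInterface):
    """A toy 'AI' service that replies with the reversed prompt."""

    name = "reverse"

    async def send_message(self, prompt: str, **kwargs) -> str:
        return prompt[::-1]
```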

Adding a New Transport

  1. Create a new transport class extending the Transport class in the transports directory
  2. Update the main function to use your new transport
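
As an illustration only (transports/base.py defines the actual Transport API, and the method names here are assumptions), a file-based transport might read one JSON-RPC message per line:

```python
# Hypothetical transport – the run() signature is an assumption.
from mcp_server.transports import Transport  # assumed import path


class FileTransport(Transport):
    """Reads newline-delimited JSON-RPC requests from one file and
    writes responses to another. Illustrative only."""

    def __init__(self, in_path: str, out_path: str) -> None:
        self.in_path = in_path
        self.out_path = out_path

    async def run(self, handle_message) -> None:
        with open(self.in_path) as fin, open(self.out_path, "w") as fout:
            for line in fin:
                fout.write(await handle_message(line) + "\n")
```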

WebSocket and Load Balancers

When using a TLS/SSL-terminating load balancer (like AWS ELB) in front of this server:

  • Clients connect to the load balancer using secure WebSockets (wss://)
  • The load balancer handles TLS/SSL termination
  • The load balancer forwards traffic to the MCP server using regular WebSockets (ws://)
  • No need to implement WSS in the application itself
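
For example, a minimal nginx block that terminates TLS and proxies WebSocket traffic to the server (hostnames, ports, and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;

    ssl_certificate     /etc/ssl/certs/mcp.example.com.pem;
    ssl_certificate_key /etc/ssl/private/mcp.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8765;
        # Required headers for the HTTP -> WebSocket upgrade.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```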

Performance Considerations

Processing very large repositories, especially C# projects, can generate many database operations. Consider batching inserts or using alternative storage strategies if you encounter performance issues with MongoDB.

License

Acknowledgements