# Model Context Protocol (MCP) Server
A modular Model Context Protocol server for AI services with multiple transport options and dynamic service selection. Built with SOLID principles for maintainability and extensibility.
## Features
- Multiple AI Services: Support for Claude, OpenAI, and mock services
- Dynamic Service Selection: Choose AI service on a per-request basis
- Multiple Transports:
  - stdio: For command-line usage and scripting
  - TCP: For network-based applications
  - WebSocket: For web browsers and real-time applications
- JSON-RPC 2.0: Compliant interface for predictable interactions
- Modular Architecture: Easy to extend with new services and transports
- Environment Configuration: Simple setup via `.env` file
- Streaming Support: Real-time response streaming for supported transports
## Repository Structure

```
basic-mcp-server/
├── .env                       # Environment configuration
├── .gitignore                 # Git ignore rules
├── README.md                  # Project documentation
├── examples/                  # Example clients
│   ├── example_client.py      # Command-line client example
│   └── websocket_client.html  # Browser WebSocket client
├── mcp_server.py              # Main entry point
└── mcp_server/                # Core package
    ├── config/                # Configuration management
    │   ├── settings.py        # Environment and settings handling
    │   └── __init__.py
    ├── core/                  # Core server logic
    │   ├── server.py          # Main server implementation
    │   └── __init__.py
    ├── handlers/              # Method handlers
    │   ├── base_handlers.py   # Standard MCP handlers
    │   ├── system_handlers.py # System info handlers
    │   └── __init__.py
    ├── models/                # Data models
    │   ├── json_rpc.py        # JSON-RPC data structures
    │   └── __init__.py
    ├── services/              # AI service implementations
    │   ├── claude_service.py  # Anthropic Claude API
    │   ├── openai_service.py  # OpenAI API
    │   └── __init__.py        # Service registry
    ├── transports/            # Communication protocols
    │   ├── base.py            # Transport interfaces
    │   ├── websocket.py       # WebSocket implementation
    │   └── __init__.py
    └── __init__.py
```
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/shaswata56/basic-mcp-server.git
   cd basic-mcp-server
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -e .
   ```

3. Configure your environment by editing the `.env` file with your API keys and settings.
## Configuration

### Environment Variables

The server can be configured using environment variables in the `.env` file:

| Variable | Description | Default |
|---|---|---|
| AI_SERVICE_TYPE | Default AI service to use ("claude", "openai", "mock") | "claude" |
| SECRETS_FILE | Path to JSON file with API secrets | None |
| ANTHROPIC_API_KEY | Your Anthropic API key | None |
| OPENAI_API_KEY | Your OpenAI API key | None |
| MCP_SERVER_NAME | Name of the server | "ai-mcp-server" |
| MCP_SERVER_VERSION | Server version | "1.0.0" |
| MCP_TRANSPORT_TYPE | Transport type ("stdio", "tcp", or "websocket") | "stdio" |
| MCP_TCP_HOST | TCP/WebSocket host address | "127.0.0.1" |
| MCP_TCP_PORT | TCP server port | 9000 |
| MCP_WS_PORT | WebSocket server port | 8765 |
| MCP_WS_PATH | WebSocket server path | "/" |
| MCP_WS_ORIGINS | Comma-separated list of allowed origins | None (all allowed) |
| CLAUDE_DEFAULT_MODEL | Default Claude model | "claude-3-opus-20240229" |
| CLAUDE_DEFAULT_MAX_TOKENS | Default max tokens for Claude | 4096 |
| CLAUDE_DEFAULT_TEMPERATURE | Default temperature for Claude | 0.7 |
| OPENAI_DEFAULT_MODEL | Default OpenAI model | "gpt-4o" |
| OPENAI_DEFAULT_MAX_TOKENS | Default max tokens for OpenAI | 1024 |
| OPENAI_DEFAULT_TEMPERATURE | Default temperature for OpenAI | 0.7 |
| EMBEDDINGS_3_LARGE_API_URL | Azure endpoint for text-embedding-3-large | None |
| EMBEDDINGS_3_LARGE_API_KEY | API key for text-embedding-3-large | None |
| EMBEDDINGS_3_SMALL_API_URL | Azure endpoint for text-embedding-3-small | None |
| EMBEDDINGS_3_SMALL_API_KEY | API key for text-embedding-3-small | None |
| AZURE_OPENAI_EMBEDDING_DEPLOYMENT | Azure deployment name for embeddings | `<model name>` |
| QDRANT_URL | URL of the Qdrant server (use `:memory:` for in-memory) | None |
| QDRANT_API_KEY | API key for Qdrant Cloud | None |
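A minimal `.env` using a few of the variables above might look like this (the values are placeholders, not real keys):

```
# Which AI service to use by default
AI_SERVICE_TYPE=claude
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Serve over WebSocket instead of stdio
MCP_TRANSPORT_TYPE=websocket
MCP_TCP_HOST=127.0.0.1
MCP_WS_PORT=8765
```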
When embedding API credentials are not provided, the server will generate deterministic mock embeddings so that testing can proceed without external services.
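The exact scheme is internal to the server, but a deterministic mock embedding can be produced by seeding a PRNG with a hash of the input text. A minimal sketch, for illustration only and not the server's actual implementation:

```python
import hashlib
import random

def mock_embedding(text: str, dim: int = 1536) -> list[float]:
    """Deterministic stand-in for a real embedding: the same text
    always maps to the same vector, so tests are reproducible."""
    seed = int.from_bytes(hashlib.sha256(text.encode("utf-8")).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]
```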
For production deployments, configure `QDRANT_URL` to point to a dedicated Qdrant server. Using a remote server provides persistent storage and improved vector search performance compared to the default in-memory mode.
The optional `SECRETS_FILE` variable lets you store API keys in a JSON file instead of environment variables. Values defined in the secrets file are used when the corresponding environment variables are not set. If a secret value is an array, the server rotates through the values each time the key is requested, enabling simple key rotation strategies.
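For example, a secrets file enabling rotation for one key might look like the following. The key names here are assumed to mirror the corresponding environment variables; check `mcp_server/config/settings.py` for the exact names the server expects:

```json
{
  "ANTHROPIC_API_KEY": "sk-ant-key-1",
  "OPENAI_API_KEY": ["sk-key-a", "sk-key-b", "sk-key-c"]
}
```

With this file, each request for `OPENAI_API_KEY` would return the next value in the array.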
## Usage

### Running the Server

#### Standard stdio Mode

```bash
python mcp_server.py
```

#### TCP Server Mode

```bash
python mcp_server.py --tcp --host 127.0.0.1 --port 9000
```

#### WebSocket Server Mode

```bash
python mcp_server.py --websocket --host 127.0.0.1 --port 8765 --ws-path /
```
### Command Line Options

```text
usage: mcp_server.py [-h] [--tcp | --websocket] [--host HOST] [--port PORT]
                     [--ws-path WS_PATH] [--service-type {claude,openai,mock}]
                     [--claude-api-key CLAUDE_API_KEY]
                     [--openai-api-key OPENAI_API_KEY]
                     [--qdrant-url QDRANT_URL]
                     [--qdrant-api-key QDRANT_API_KEY] [--mock]
                     [--log-level {DEBUG,INFO,WARNING,ERROR}]
                     [--env-file ENV_FILE]

AI MCP Server with JSON-RPC

options:
  -h, --help            show this help message and exit
  --log-level {DEBUG,INFO,WARNING,ERROR}
                        Logging level
  --env-file ENV_FILE   Path to .env file (default: .env in project root)

Transport Options:
  --tcp                 Run as TCP server
  --websocket           Run as WebSocket server
  --host HOST           Host to bind server
  --port PORT           Port for server
  --ws-path WS_PATH     URL path for WebSocket server (default: /)

AI Service Options:
  --service-type {claude,openai,mock}
                        AI service to use
  --claude-api-key CLAUDE_API_KEY
                        Anthropic API key
  --openai-api-key OPENAI_API_KEY
                        OpenAI API key
  --qdrant-url QDRANT_URL
                        Qdrant server URL
  --qdrant-api-key QDRANT_API_KEY
                        Qdrant API key
  --mock                Use mock AI service (for testing)
```
### Client Examples

#### Command Line Client

The `examples/example_client.py` script provides a simple way to interact with the server:
```bash
# Initialize connection
python examples/example_client.py initialize

# List available tools
python examples/example_client.py list-tools

# Echo text
python examples/example_client.py echo "Hello, world!"

# Calculate expression
python examples/example_client.py calculate "2 + 3 * 4"

# Ask AI with dynamic service selection
python examples/example_client.py ask "What is the capital of France?" --service claude

# System information
python examples/example_client.py system-info
```
#### WebSocket Browser Client

For WebSocket transport, open `examples/websocket_client.html` in a browser:

1. Enter the WebSocket URL (e.g., `ws://localhost:8765/`)
2. Click "Connect"
3. Use the interface to send requests to the server
## JSON-RPC Interface

### Unified AI Message Request

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "ai/message",
    "arguments": {
      "prompt": "What is the capital of France?",
      "service_name": "claude"
    }
  },
  "id": 1
}
```

`service_name` is optional: pass "claude", "openai", or "mock", or omit it to use the default service.
### Service-Specific Requests (for backward compatibility)

Claude:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "claude/message",
    "arguments": {
      "prompt": "What is the capital of France?"
    }
  },
  "id": 2
}
```

OpenAI:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "openai/message",
    "arguments": {
      "prompt": "What is the capital of France?"
    }
  },
  "id": 3
}
```
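Any JSON-RPC client can drive the server. Here is a minimal sketch using the `websockets` library against the default WebSocket settings above; this client is not part of the repository, and the one-message-per-request framing is an assumption:

```python
import asyncio
import json

import websockets  # pip install websockets

async def main() -> None:
    async with websockets.connect("ws://localhost:8765/") as ws:
        request = {
            "jsonrpc": "2.0",
            "method": "tools/call",
            "params": {
                "name": "ai/message",
                "arguments": {"prompt": "What is the capital of France?"},
            },
            "id": 1,
        }
        await ws.send(json.dumps(request))      # one JSON-RPC request per message
        response = json.loads(await ws.recv())  # matching response, same "id"
        print(response)

asyncio.run(main())
```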
### Available Methods

| Method | Description |
|---|---|
| initialize | Initialize the server connection |
| tools/list | List available tools |
| tools/call | Call a tool with arguments |
| resources/list | List available resources |
| resources/read | Read a resource |
| system/info | Get system information |
| system/health | Check system health |
## Extending the Server

### Adding a New Method Handler

1. Create a new handler class implementing the `HandlerInterface` in the `handlers` directory
2. Register it in the `AIMCPServerApp.initialize()` method
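The real signatures of `HandlerInterface` live in the `handlers` package; the sketch below is a hypothetical handler whose method names are assumptions, so mirror `base_handlers.py` when writing a real one:

```python
# handlers/ping_handlers.py (hypothetical; method names are assumed)
from typing import Any

class PingHandler:
    """Replies to a "ping" tool call. In the real project this would
    implement HandlerInterface; copy the shape of base_handlers.py."""

    method = "ping"

    async def handle(self, params: dict[str, Any]) -> dict[str, Any]:
        return {"pong": True, "echo": params.get("message", "")}
```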
### Adding a New AI Service

1. Create a new service class implementing the `AIServiceInterface` in the `services` directory
2. Add it to the service registry in `create_ai_services_from_config()`
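A toy mock-style service makes the shape concrete. The `send_message` name and signature below are assumptions; copy whatever `claude_service.py` or `openai_service.py` actually implement:

```python
# services/reverse_service.py (hypothetical example)
class ReverseService:
    """Toy service that "answers" by reversing the prompt. In the real
    project this would implement AIServiceInterface."""

    name = "reverse"

    async def send_message(self, prompt: str, **options) -> str:
        return prompt[::-1]
```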
### Adding a New Transport

1. Create a new transport class extending the `Transport` class in the `transports` directory
2. Update the main function to use your new transport
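As a shape-only sketch, a new transport subclasses `Transport` from `transports/base.py`; the import path and method names below are assumptions, so check `websocket.py` for the methods the base class actually requires:

```python
# transports/udp.py (hypothetical; real override points are in base.py)
from mcp_server.transports.base import Transport  # assumed import path

class UdpTransport(Transport):
    """Sketch of a new transport: accept raw JSON-RPC strings, pass
    them to the server's message handler, and send the reply back."""

    async def start(self) -> None:
        ...  # bind a socket, then feed received payloads to the server

    async def stop(self) -> None:
        ...  # close the socket and release resources
```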
## WebSocket and Load Balancers

When using a TLS/SSL-terminating load balancer (like AWS ELB) in front of this server:

- Clients connect to the load balancer using secure WebSockets (`wss://`)
- The load balancer handles TLS/SSL termination
- The load balancer forwards traffic to the MCP server using regular WebSockets (`ws://`)
- There is no need to implement WSS in the application itself
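The same pattern works with any reverse proxy that terminates TLS and upgrades WebSocket connections. For illustration, an nginx server block along these lines would do it; the hostname and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;

    ssl_certificate     /etc/nginx/certs/mcp.example.com.pem;
    ssl_certificate_key /etc/nginx/certs/mcp.example.com.key;

    location / {
        # Forward wss:// traffic from clients to the plain ws:// server
        proxy_pass http://127.0.0.1:8765;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```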
## Performance Considerations

Processing very large repositories, especially C# projects, can generate many database operations. Consider batching inserts or using alternative storage strategies if you encounter performance issues with MongoDB.
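Batching is straightforward with most drivers. For example, with `pymongo`, documents can be accumulated and written with `insert_many` in fixed-size chunks; the connection URL, collection, and document shape below are placeholders:

```python
from pymongo import MongoClient

BATCH_SIZE = 1000

def insert_batched(docs, collection) -> None:
    """Write documents in chunks instead of one insert per document,
    cutting round-trips to the database dramatically."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) >= BATCH_SIZE:
            collection.insert_many(batch, ordered=False)
            batch.clear()
    if batch:  # flush the remainder
        collection.insert_many(batch, ordered=False)

client = MongoClient("mongodb://localhost:27017")  # placeholder URL
insert_batched(({"n": i} for i in range(10_000)), client.db.items)
```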