🚀 Unified MCP System
A production-ready, comprehensive Model Context Protocol (MCP) system that combines:
- MCP Agent Server: Advanced MCP server with enterprise security and observability
- LC MCP App: OpenAI-compatible intermediary for seamless LangChain integration
🏗️ Architecture Overview
unified-mcp-system/
├── mcp_agent/              # MCP Server Components
│   ├── server.py           # Core MCP server implementation
│   ├── database/           # SQLModel database operations
│   ├── security/           # Auth, rate limiting, sandboxing
│   └── observability/      # Logging, metrics, monitoring
├── lc_mcp_app/             # OpenAI-Compatible Intermediary
│   ├── server.py           # FastAPI OpenAI-compatible API
│   ├── clients/            # MCP client with connection pooling
│   ├── middleware/         # Auth, rate limiting, metrics
│   └── tools/              # Dynamic tool registry
├── tests/                  # Comprehensive test suite
├── docker-compose.yml      # Multi-service deployment
└── .github/workflows/      # CI/CD pipelines
✨ Key Features
🔒 Enterprise Security
- Bearer token authentication (see the sketch below)
- Rate limiting with Redis backend
- Input validation and sanitization
- Secure sandboxed execution environment
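A minimal sketch of the bearer-token check, using FastAPI's security helpers (the endpoint and token source here are illustrative; the real middleware lives in mcp_agent/security/):

import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
# Illustrative token source; in practice derived from AUTH_SECRET_KEY (see Configuration)
EXPECTED_TOKEN = os.environ.get("AUTH_SECRET_KEY", "change-me")

def require_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> None:
    # Reject anything that is not "Authorization: Bearer <expected>"
    if creds.credentials != EXPECTED_TOKEN:
        raise HTTPException(status_code=401, detail="Invalid bearer token")

@app.get("/protected", dependencies=[Depends(require_token)])
def protected() -> dict:
    return {"ok": True}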
📊 Production Observability
- Structured logging with correlation IDs (see the sketch below)
- Prometheus metrics and health checks
- Request tracing and performance monitoring
- Comprehensive error handling
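As an illustration of the correlation-ID pattern, a hedged sketch using only the standard library (the repository's observability layer may use structlog or similar instead):

import logging
import uuid
from contextvars import ContextVar

correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Stamp every record with the current request's ID
        record.correlation_id = correlation_id.get()
        return True

logging.basicConfig(format="%(asctime)s %(correlation_id)s %(levelname)s %(message)s")
log = logging.getLogger("mcp")
log.addFilter(CorrelationFilter())

correlation_id.set(str(uuid.uuid4()))  # typically set per request in middleware
log.warning("tool call started")       # every line now carries the request's ID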
⚡ High Performance
- Async architecture throughout
- Connection pooling for MCP clients
- Streaming API support
- Back-pressure handling (see the sketch below)
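A tiny sketch of the back-pressure idea: cap in-flight MCP calls with a semaphore so bursts queue rather than overwhelm the server (the actual pooling in lc_mcp_app/clients/ is more elaborate):

import asyncio

MAX_IN_FLIGHT = 10                        # illustrative cap
_slots = asyncio.Semaphore(MAX_IN_FLIGHT)

async def call_with_backpressure(call_tool, name: str, arguments: dict):
    # Callers beyond the cap wait here instead of stacking requests on the server
    async with _slots:
        return await call_tool(name, arguments)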
🔌 OpenAI Compatibility
- Full OpenAI Chat Completions API
- Streaming and non-streaming responses
- Function calling support
- Model selection and routing
🛠️ Available MCP Tools
The MCP server provides the following built-in tools:
| Tool | Description | Parameters |
| --- | --- | --- |
| health_check | Check server health status | None (returns server status, database health, and environment info) |
| read_file | Read contents of allowed files | path (string) - file path to read (restricted to allowlisted files) |
MCP Protocol Support
- Protocol Version: 2024-11-05
- JSON-RPC Endpoint: POST /jsonrpc or POST /mcp
- Supported Methods:
  - initialize - Initialize MCP session
  - tools/list - List available tools
  - tools/call - Execute a tool
  - resources/list - List available resources
  - prompts/list - List available prompts
🚀 Quick Start
Prerequisites
- Python 3.11+
- Docker (optional)
- Redis (for rate limiting)
Installation
# Clone the repository
git clone https://github.com/grapheneaffiliate/unified-mcp-system-v3.git
cd unified-mcp-system-v3
# Install dependencies
pip install -e ".[dev]"
# Set up environment
cp .env.example .env
# Edit .env with your configuration
Running the System
Option 1: Docker Compose (Recommended)
# Start all services
docker-compose up --build
# MCP Server: http://localhost:8000
# LC MCP App: http://localhost:8001
# Prometheus: http://localhost:9090
# Redis: localhost:6379
Option 2: Manual Setup
# Terminal 1: Start MCP Server
make mcp-server
# Terminal 2: Start LC MCP App
make lc-app
# Terminal 3: Start Redis (if not using Docker)
redis-server
🛠️ Development
Development Commands
# Install development dependencies
make install-dev
# Run tests
make test
# Run linting
make lint
# Run type checking
make type-check
# Format code
make format
# Run all checks
make check-all
Testing
# Run all tests
pytest
# Run with coverage
pytest --cov=lc_mcp_app --cov=mcp_agent
# Run specific test categories
pytest -m unit # Unit tests only
pytest -m integration # Integration tests only
pytest -m "not slow" # Skip slow tests
📡 API Usage
MCP Server (Port 8000)
Available MCP Tools
The MCP server provides the following built-in tools:
- health_check: Check server health status
  - No parameters required
  - Returns server status, database health, and environment info
- read_file: Read contents of allowed files
  - Parameters: path (string) - File path to read
  - Security: Restricted to allowlisted files (README.md, pyproject.toml, .gitignore)
MCP Protocol Endpoints
- JSON-RPC Endpoint: POST /jsonrpc or POST /mcp
- Protocol Version: 2024-11-05
- Supported Methods:
  - initialize: Initialize MCP session
  - tools/list: List available tools
  - tools/call: Execute a tool
  - resources/list: List available resources (empty for now)
  - prompts/list: List available prompts (empty for now)
Direct MCP Protocol Communication
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# stdio_client speaks MCP to a server it launches as a subprocess over
# stdin/stdout; it does not connect to a URL. The HTTP JSON-RPC endpoints
# are exercised with curl in the next section.
server_params = StdioServerParameters(
    command="python",
    args=["-m", "mcp_agent.server"],
    env={**os.environ, "MCP_SERVER_URL": "http://localhost:8000"},
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # The MCP handshake must complete before any other request
            await session.initialize()

            # List available tools
            result = await session.list_tools()
            for tool in result.tools:
                print(f"Tool: {tool.name} - {tool.description}")

            # Call the health_check tool
            health = await session.call_tool("health_check", {})
            print(f"Health status: {health}")

            # Call the read_file tool
            content = await session.call_tool("read_file", {"path": "README.md"})
            print(f"File content: {content}")

asyncio.run(main())
Using HTTP Client Directly
# Initialize session
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "initialize",
"params": {},
"id": 1
}'
# List available tools
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/list",
"params": {},
"id": 2
}'
# Call a tool
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "health_check",
"arguments": {}
},
"id": 3
}'
LC MCP App (Port 8001)
OpenAI-compatible API:
import openai
# Configure client
client = openai.OpenAI(
    base_url="http://localhost:8001/v1",
    api_key="your-api-key",
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "List files in current directory"}
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "execute_command",
                "description": "Execute shell command",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "command": {"type": "string"}
                    },
                },
            },
        }
    ],
)
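When the model elects to call the declared function, the reply carries structured tool calls rather than text. A short sketch of inspecting them (this is the standard OpenAI SDK response shape; how the intermediary routes the call to an MCP tool is its own concern):

import json

message = response.choices[0].message

if message.tool_calls:
    # The model asked to invoke a function; arguments arrive as a JSON string
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        print(f"Requested {call.function.name} with {args}")
else:
    print(message.content)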
Streaming Support
# Streaming chat completion
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
🔧 Configuration
Environment Variables
# MCP Server Configuration
MCP_SERVER_HOST=0.0.0.0
MCP_SERVER_PORT=8000
MCP_LOG_LEVEL=INFO
MCP_DATABASE_URL=sqlite:///./mcp_agent.db
# LC MCP App Configuration
LC_APP_HOST=0.0.0.0
LC_APP_PORT=8001
LC_APP_LOG_LEVEL=INFO
OPENAI_API_KEY=your-openai-api-key
# Security
AUTH_SECRET_KEY=your-secret-key
RATE_LIMIT_ENABLED=true
REDIS_URL=redis://localhost:6379
# Observability
PROMETHEUS_ENABLED=true
PROMETHEUS_PORT=9090
STRUCTURED_LOGGING=true
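How these variables get consumed is up to the application code. As a hedged illustration (not necessarily how this repository wires it), pydantic-settings can map the MCP_-prefixed variables onto a typed settings object:

from pydantic_settings import BaseSettings, SettingsConfigDict

class MCPServerSettings(BaseSettings):
    # Field names map onto the MCP_* variables above via the env prefix,
    # e.g. server_port <- MCP_SERVER_PORT. Class and field names here are
    # illustrative, not the repository's actual settings model.
    model_config = SettingsConfigDict(env_prefix="MCP_", env_file=".env")

    server_host: str = "0.0.0.0"
    server_port: int = 8000
    log_level: str = "INFO"
    database_url: str = "sqlite:///./mcp_agent.db"

settings = MCPServerSettings()
print(settings.server_port)  # 8000 unless MCP_SERVER_PORT overrides it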
🐳 Docker Deployment
Production Deployment
# docker-compose.prod.yml
version: '3.8'

services:
  mcp-server:
    build: .
    command: ["python", "-m", "mcp_agent.server"]
    environment:
      - MCP_SERVER_PORT=8000
      - MCP_LOG_LEVEL=INFO
    ports:
      - "8000:8000"

  lc-mcp-app:
    build: .
    command: ["python", "-m", "lc_mcp_app.server"]
    environment:
      - LC_APP_PORT=8001
      - LC_APP_LOG_LEVEL=INFO
    ports:
      - "8001:8001"
    depends_on:
      - mcp-server
      - redis

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
📊 Monitoring
Health Checks
- MCP Server: GET http://localhost:8000/health (see the probe sketch below)
- LC MCP App: GET http://localhost:8001/health
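For example, a quick scripted probe of both endpoints (the requests dependency is an assumption; any HTTP client works):

import requests

# A healthy instance should answer 200 with a JSON status body
for name, url in [
    ("MCP Server", "http://localhost:8000/health"),
    ("LC MCP App", "http://localhost:8001/health"),
]:
    resp = requests.get(url, timeout=5)
    print(f"{name}: {resp.status_code} {resp.json()}")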
Metrics
- Prometheus: http://localhost:9090
- Custom metrics: request latency, error rates, tool usage
Logging
- Structured JSON logging
- Correlation IDs for request tracing
- Configurable log levels
- Integration with external log aggregators
🛡️ Security Features
Authentication
- Bearer token authentication
- JWT token support
- API key validation
- Role-based access control
Rate Limiting
- Per-user rate limiting
- Global rate limiting
- Redis-backed counters (see the sketch below)
- Configurable limits
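As an illustration of the Redis-backed counters, a minimal fixed-window limiter sketch (key names and limits are hypothetical; the repository's middleware may use a different scheme):

import redis

r = redis.Redis.from_url("redis://localhost:6379")

def allow_request(user_id: str, limit: int = 100, window_s: int = 60) -> bool:
    """Fixed-window rate limit: at most `limit` requests per `window_s` seconds."""
    key = f"ratelimit:{user_id}"
    count = r.incr(key)           # atomic increment, creates the key at 1
    if count == 1:
        r.expire(key, window_s)   # start the window on the first hit
    return count <= limit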
Sandboxing
- Secure command execution
- File system access controls (see the allowlist sketch below)
- Network restrictions
- Resource limits
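The read_file tool's allowlist (README.md, pyproject.toml, .gitignore) is one concrete example of these controls. A hedged sketch of that kind of check (the actual implementation in mcp_agent/security/ may differ):

from pathlib import Path

ALLOWED_FILES = {"README.md", "pyproject.toml", ".gitignore"}
BASE_DIR = Path(__file__).resolve().parent

def safe_read(path: str) -> str:
    resolved = (BASE_DIR / path).resolve()
    # Reject traversal outside the project root and non-allowlisted names
    if resolved.parent != BASE_DIR or resolved.name not in ALLOWED_FILES:
        raise PermissionError(f"Access to {path!r} is not allowed")
    return resolved.read_text()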
🧪 Testing
Test Structure
tests/
├── test_health.py          # Health check tests
├── test_openai_api.py      # OpenAI API compatibility tests
└── mcp_server/
    └── test_server.py      # MCP server tests
Running Tests
# All tests
make test
# Specific test files
pytest tests/test_health.py
pytest tests/mcp_server/test_server.py
# With coverage
make test-coverage
📚 Documentation
API Documentation
- OpenAI API: available at http://localhost:8001/docs
- MCP Server: available at http://localhost:8000/docs
Examples
See the examples/ directory for:
- Basic usage examples
- Integration patterns
- Custom tool development
- Deployment configurations
🤝 Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes
- Run tests: make test
- Run linting: make lint
- Commit changes: git commit -m 'Add amazing feature'
- Push to branch: git push origin feature/amazing-feature
- Open a Pull Request
Development Setup
# Install development dependencies
make install-dev
# Set up pre-commit hooks
pre-commit install
# Run all checks before committing
make check-all
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Model Context Protocol by Anthropic
- LangChain for LLM integration
- FastAPI for the web framework
- OpenAI for API compatibility standards
📞 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Project Wiki
Built with ❤️ for the MCP community