Simulation MCP Server
A stateless Model Context Protocol (MCP) server for running scientific simulations. Perfect for integrating simulation capabilities into chat applications, interactive dashboards, and AI-powered tools.
🎯 Overview
This MCP server provides a stateless compute service that runs simulations and returns results. It's designed to be integrated into chat applications where:
- ✅ Your chat app handles authentication and user data
- ✅ The MCP server performs computations
- ✅ Results are returned as JSON for immediate use or storage
Key Features:
- 🚀 Stateless Operation - No authentication, no user data storage
- ⚡ Fast Preview Mode - Optimized for real-time interactive UIs (~50-100ms)
- 📊 JSON Data Return - Get simulation data directly, no file downloads needed
- 🔧 Multiple Models - Support for epidemiology, physics, finance, and custom models
- 📈 Comprehensive Logging - Full request/response logging for debugging
- 🌐 HTTP Transport - Standard MCP protocol over HTTP
🚀 Quick Start
Local Development
# Clone the repository
git clone <repository-url>
cd simulation-mcp-server
# Install dependencies
pip install -r requirements.txt
# Run in stdio mode (for local development)
python server.py
Docker
# Build and run with Docker Compose
docker compose up --build
# Health check
curl http://localhost:8000/health
Example Request
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "method": "tools/call",
    "params": {
      "name": "simulate_model",
      "arguments": {
        "domain": "epidemiology",
        "model_type": "SIR",
        "parameters": {"beta": 0.3, "gamma": 0.1},
        "initial_conditions": {"S": 0.99, "I": 0.01, "R": 0},
        "time_span": {
          "start": 0,
          "end": 160,
          "steps": 400,
          "preview_mode": true
        },
        "return_data": true,
        "save_artifacts": false
      }
    }
  }'
✨ Features
Core Capabilities
- Multiple Simulation Models: Currently supports SIR (epidemiology), with support for Lotka-Volterra, Logistic, Projectile, and Monte Carlo models planned
- Flexible Time Grids: Configurable time spans with preview mode for fast rendering
- Multiple Solvers: RK45, RK23, and DOP853 integration methods
- Sensitivity Analysis: Optional one-way sensitivity analysis support
- Metrics Calculation: Automatic calculation of key metrics (peak values, timing, etc.)
Performance Optimizations
- Preview Mode: Reduces data points to 100 for ~50-100ms response times
- Optional Artifacts: Skip CSV/PNG generation for faster responses
- JSON Data Return: Get simulation data directly without file I/O overhead
Production Ready
- Health Check Endpoint: /health for deployment monitoring
- Comprehensive Logging: Full request/response logging with configurable levels
- Error Handling: Detailed error messages with stack traces
- Docker Support: Ready-to-use Dockerfile and docker-compose.yml
- Render Deployment: Pre-configured render.yaml for easy deployment
📦 Installation
Requirements
- Python 3.12+
- pip or uv
Dependencies
pip install -r requirements.txt
Key dependencies:
- mcp[cli]>=1.2.0 - Model Context Protocol SDK
- pydantic>=2.7 - Data validation
- numpy>=1.26 - Numerical computations
- scipy>=1.11 - Scientific computing and ODE solvers
- matplotlib>=3.8 - Plot generation (optional, for artifacts)
Environment Variables
| Variable | Default | Description |
|---|---|---|
| MCP_TRANSPORT | stdio | Transport mode (streamable-http for production) |
| STORAGE_PATH | ./storage | Directory for saving artifacts |
| PORT | 8000 | Server port |
| FASTMCP_HOST | 0.0.0.0 | Host to bind to (use 0.0.0.0 for external access) |
| LOG_LEVEL | INFO | Logging verbosity (DEBUG, INFO, WARNING, ERROR) |
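At startup, these settings map onto ordinary environment lookups with the documented defaults. A sketch of how they could be read (the variable names and defaults come from the table above; the exact startup code is an assumption):

```python
import os

# Read configuration from the environment, falling back to the documented
# defaults. Sketch only; the server's actual startup code may differ.
MCP_TRANSPORT = os.environ.get("MCP_TRANSPORT", "stdio")
STORAGE_PATH = os.environ.get("STORAGE_PATH", "./storage")
PORT = int(os.environ.get("PORT", "8000"))
FASTMCP_HOST = os.environ.get("FASTMCP_HOST", "0.0.0.0")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

print(f"transport={MCP_TRANSPORT} host={FASTMCP_HOST}:{PORT} log={LOG_LEVEL}")
```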
💻 Usage
Basic Simulation
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Connect to the server over stdio
    async with stdio_client(StdioServerParameters(
        command="python",
        args=["server.py"],
    )) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Run the simulation
            result = await session.call_tool(
                "simulate_model",
                arguments={
                    "domain": "epidemiology",
                    "model_type": "SIR",
                    "parameters": {"beta": 0.3, "gamma": 0.1},
                    "initial_conditions": {"S": 0.99, "I": 0.01, "R": 0},
                    "time_span": {"start": 0, "end": 160, "steps": 400},
                    "return_data": True,
                },
            )
            print(result.content[0].text)

asyncio.run(main())
Interactive UI Mode (Fast Preview)
# Optimized for real-time slider interactions
result = await session.call_tool(
    "simulate_model",
    arguments={
        # ... parameters ...
        "time_span": {
            "start": 0,
            "end": 160,
            "steps": 600,
            "preview_mode": True,  # Limits to 100 points
        },
        "return_data": True,
        "save_artifacts": False,  # Skip file generation
    },
)
High-Quality Export Mode
# Full resolution with artifacts
result = await session.call_tool(
    "simulate_model",
    arguments={
        # ... parameters ...
        "time_span": {
            "start": 0,
            "end": 160,
            "steps": 1000,
            "preview_mode": False,
        },
        "return_data": True,
        "save_artifacts": True,  # Generate CSV and PNG files
    },
)
📚 API Reference
Tool: simulate_model
Run a simulation with specified parameters and return results.
Input Parameters
{
  domain: "epidemiology" | "physics" | "finance" | "custom",
  model_type: "SIR" | "LotkaVolterra" | "Logistic" | "Projectile" | "MonteCarlo",
  parameters: { [key: string]: number },          // Model-specific parameters
  initial_conditions: { [key: string]: number },  // Initial state values
  time_span: {
    start: number,          // Start time
    end: number,            // End time
    steps: number,          // Number of time points (>= 2)
    preview_mode?: boolean  // Limit to 100 points for speed
  },
  method?: "RK45" | "RK23" | "DOP853",      // ODE solver method (default: "RK45")
  sensitivity?: { [key: string]: number },  // Optional sensitivity analysis
  tags?: string[],                          // Optional tags for organization
  return_data?: boolean,   // Return data points in response (default: true)
  save_artifacts?: boolean // Save CSV/PNG files (default: false)
}
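Since pydantic is already a project dependency, the same contract can be mirrored client-side for validation before sending a request. A sketch, with illustrative class names (this is not the server's own validation code, which lives in schemas/simulate_model_input.json):

```python
from typing import Dict, List, Literal, Optional
from pydantic import BaseModel, ConfigDict, Field

class TimeSpan(BaseModel):
    start: float
    end: float
    steps: int = Field(ge=2)      # "Number of time points (>= 2)"
    preview_mode: bool = False

class SimulateModelInput(BaseModel):
    # Allow the field name "model_type" despite pydantic's "model_" namespace
    model_config = ConfigDict(protected_namespaces=())

    domain: Literal["epidemiology", "physics", "finance", "custom"]
    model_type: Literal["SIR", "LotkaVolterra", "Logistic", "Projectile", "MonteCarlo"]
    parameters: Dict[str, float]
    initial_conditions: Dict[str, float]
    time_span: TimeSpan
    method: Literal["RK45", "RK23", "DOP853"] = "RK45"
    sensitivity: Optional[Dict[str, float]] = None
    tags: Optional[List[str]] = None
    return_data: bool = True
    save_artifacts: bool = False

# Validate a request matching the Quick Start example
req = SimulateModelInput(
    domain="epidemiology",
    model_type="SIR",
    parameters={"beta": 0.3, "gamma": 0.1},
    initial_conditions={"S": 0.99, "I": 0.01, "R": 0.0},
    time_span={"start": 0, "end": 160, "steps": 400},
)
print(req.method, req.time_span.steps)
```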
Output Format
{
  status: "success" | "error",
  message: string,
  summary?: string,        // Human-readable summary
  metrics: {
    [key: string]: number  // Calculated metrics (e.g., I_peak, t_peak)
  },
  columns: string[],       // Column names (e.g., ["t", "S", "I", "R"])
  data?: Array<{           // Time series data points
    t: number,
    [state_var: string]: number
  }>,
  artifacts: Array<{
    kind: "csv" | "plot" | "json",
    path: string
  }>
}
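A typical consumer checks status, then uses columns to order the data points. A small parsing sketch, with a sample payload abbreviated from the SIR example in this document:

```python
import json

# Abbreviated simulate_model response (values from the SIR example below)
response_text = """
{
  "status": "success",
  "message": "Simulation completed",
  "metrics": {"I_peak": 0.3743, "t_peak": 33.89},
  "columns": ["t", "S", "I", "R"],
  "data": [
    {"t": 0.0, "S": 0.99, "I": 0.01, "R": 0.0},
    {"t": 0.4, "S": 0.9896, "I": 0.0102, "R": 0.0002}
  ],
  "artifacts": []
}
"""
result = json.loads(response_text)

if result["status"] == "success":
    # "columns" gives a stable ordering for tabular display or CSV export
    header = result["columns"]
    rows = [[point[c] for c in header] for point in result.get("data", [])]
    peak = result["metrics"].get("I_peak")
    print(header, len(rows), peak)
```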
Example: SIR Model
Input:
{
  "domain": "epidemiology",
  "model_type": "SIR",
  "parameters": {
    "beta": 0.3,   // Transmission rate
    "gamma": 0.1   // Recovery rate
  },
  "initial_conditions": {
    "S": 0.99,   // Susceptible
    "I": 0.01,   // Infected
    "R": 0.0     // Recovered
  },
  "time_span": {
    "start": 0,
    "end": 160,
    "steps": 400,
    "preview_mode": false
  },
  "return_data": true,
  "save_artifacts": false
}
Output:
{
  "status": "success",
  "message": "Simulation completed",
  "summary": "Peak infection 0.3743 at t ≈ 33.89",
  "metrics": {
    "I_peak": 0.3743,
    "t_peak": 33.89
  },
  "columns": ["t", "S", "I", "R"],
  "data": [
    {"t": 0.0, "S": 0.99, "I": 0.01, "R": 0.0},
    {"t": 0.4, "S": 0.9896, "I": 0.0102, "R": 0.0002}
    // ... more data points
  ],
  "artifacts": []
}
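For reference, the dynamics behind this example are the classical SIR equations dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI. A self-contained scipy sketch of the textbook formulation (independent of this server's implementation, so exact peak values may differ from the output above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Textbook normalized SIR model with the example's parameters.
# This is the standard formulation; the server's internal model may differ.
beta, gamma = 0.3, 0.1

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t_eval = np.linspace(0, 160, 400)
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], method="RK45", t_eval=t_eval)
S, I, R = sol.y

# Infection peaks exactly when S falls to gamma/beta
i_peak = I.max()
t_peak = sol.t[I.argmax()]
print(f"I_peak={i_peak:.4f} at t={t_peak:.2f}")
```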
Health Check Endpoint
GET /health
Returns server status and configuration:
{
  "status": "healthy",
  "service": "simulation-mcp",
  "version": "0.1.0",
  "storage": "/tmp/storage",
  "solvers": 1
}
🚀 Deployment
Docker Deployment
# Build image
docker build -t simulation-mcp-server .
# Run container
docker run -p 8000:8000 \
  -e MCP_TRANSPORT=streamable-http \
  -e PORT=8000 \
  -e FASTMCP_HOST=0.0.0.0 \
  simulation-mcp-server
Render Deployment
- Connect your GitHub repository to Render
- Render auto-detects the render.yaml configuration
- Set environment variables (if needed):
  MCP_TRANSPORT=streamable-http
  PORT=8000
  FASTMCP_HOST=0.0.0.0
- Deploy - Render handles the rest!
The render.yaml file is pre-configured with:
- Docker-based deployment
- Health check endpoint
- Stateless operation (no persistent disk needed)
- Environment variables
Other Platforms
The server can be deployed to any platform that supports:
- Python 3.12+
- HTTP/HTTPS endpoints
- Environment variable configuration
Examples: Heroku, Railway, Fly.io, AWS ECS, Google Cloud Run, etc.
🏗️ Architecture
Design Philosophy
This MCP server is stateless - it performs computations and returns results. It does NOT:
- ❌ Store user data
- ❌ Handle authentication
- ❌ Manage user sessions
- ❌ Persist simulation results
Your application should handle:
- ✅ User authentication (e.g., via Supabase)
- ✅ Chat history and context
- ✅ Storing simulation results (if needed)
- ✅ User-specific data management
Request Flow
┌─────────────────────────────────┐
│ Your Chat App (Supabase) │
│ - User authenticated │
│ - Chat context stored │
│ - User wants simulation │
└──────────────┬──────────────────┘
               │ HTTP POST
               │ (No auth needed)
               ▼
┌─────────────────────────────────┐
│ MCP Server (Stateless) │
│ - Receives parameters │
│ - Runs simulation │
│ - Returns JSON data │
│ - No storage, no auth │
└──────────────┬──────────────────┘
               │ JSON Response
               │ (data + metrics)
               ▼
┌─────────────────────────────────┐
│ Your Chat App │
│ - Receives results │
│ - Stores in Supabase (optional)│
│ - Displays to user │
│ - Manages user context │
└─────────────────────────────────┘
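From the chat app's side, the whole flow amounts to one HTTP POST. A minimal client-side sketch (the payload shape follows the Quick Start curl example; the helper name and the use of urllib are illustrative):

```python
import json

def build_simulate_request(parameters, initial_conditions, steps=400, end=160.0):
    """Build a tools/call payload for simulate_model.

    Payload shape mirrors the Quick Start curl example; the helper itself
    is a hypothetical convenience, not part of the server's API.
    """
    return {
        "method": "tools/call",
        "params": {
            "name": "simulate_model",
            "arguments": {
                "domain": "epidemiology",
                "model_type": "SIR",
                "parameters": parameters,
                "initial_conditions": initial_conditions,
                "time_span": {"start": 0, "end": end, "steps": steps,
                              "preview_mode": True},
                "return_data": True,
                "save_artifacts": False,
            },
        },
    }

payload = build_simulate_request({"beta": 0.3, "gamma": 0.1},
                                 {"S": 0.99, "I": 0.01, "R": 0.0})
payload_text = json.dumps(payload)

# To actually send it (assumes the server from this README is running locally):
# from urllib.request import Request, urlopen
# req = Request("http://localhost:8000/mcp", data=payload_text.encode(),
#               headers={"Content-Type": "application/json"})
# print(urlopen(req).read().decode())
print(len(payload_text) > 0)
```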
Storage Strategy
Recommended Approach:
- Call the MCP server with return_data: true and save_artifacts: false
- Receive JSON data directly
- Store it in your own database (e.g., Supabase) if needed
- Generate files client-side if exports are needed
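Generating files client-side can be as simple as serializing the returned rows with the standard csv module. A sketch using the columns/data shapes from the API reference above:

```python
import csv
import io

# Fragment of a simulate_model response (shape from the API reference)
columns = ["t", "S", "I", "R"]
data = [
    {"t": 0.0, "S": 0.99, "I": 0.01, "R": 0.0},
    {"t": 0.4, "S": 0.9896, "I": 0.0102, "R": 0.0002},
]

# Write CSV in memory instead of asking the server for artifacts
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns)
writer.writeheader()
writer.writerows(data)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])
```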
See ARCHITECTURE.md for detailed integration patterns and Supabase schema examples.
📖 Documentation
Comprehensive documentation is available in the docs/ directory:
- Detailed architecture and integration patterns (ARCHITECTURE.md)
- Complete feature list
- Deployment guides
- Frontend integration examples
- Troubleshooting with logs
- HTTP transport configuration
- Connecting AI applications
🔧 Development
Project Structure
simulation-mcp-server/
├── server.py              # Main MCP server implementation
├── models/                # Simulation model implementations
│   ├── __init__.py
│   └── sir.py             # SIR epidemiology model
├── schemas/               # JSON schemas for validation
│   └── simulate_model_input.json
├── docs/                  # Documentation
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose setup
├── render.yaml            # Render deployment config
├── requirements.txt       # Python dependencies
└── README.md              # This file
Adding New Models
- Create a new model file in models/ (e.g., models/lotka_volterra.py)
- Implement the solver function:

def simulate_lotka_volterra(
    parameters: Dict[str, float],
    y0: Dict[str, float],
    tspan: Tuple[float, float],
    steps: int,
    method: str = "RK45",
) -> Tuple[np.ndarray, np.ndarray, Dict[str, float]]:
    # Implementation
    return t, Y, metrics

- Register it in server.py:

SOLVERS = {
    ("epidemiology", "SIR"): simulate_sir,
    ("epidemiology", "LotkaVolterra"): simulate_lotka_volterra,
    # ...
}

- Update the JSON schema in schemas/simulate_model_input.json
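As a fuller sketch of the solver-function step, here is what a complete Lotka-Volterra implementation matching that signature could look like. The parameter names (alpha, beta, delta, gamma), state names (prey, predator), and the chosen metrics are illustrative assumptions, not the server's actual contract:

```python
from typing import Dict, Tuple

import numpy as np
from scipy.integrate import solve_ivp

def simulate_lotka_volterra(
    parameters: Dict[str, float],
    y0: Dict[str, float],
    tspan: Tuple[float, float],
    steps: int,
    method: str = "RK45",
) -> Tuple[np.ndarray, np.ndarray, Dict[str, float]]:
    """Classic predator-prey equations; names here are illustrative."""
    a = parameters["alpha"]   # prey growth rate
    b = parameters["beta"]    # predation rate
    d = parameters["delta"]   # predator reproduction rate
    g = parameters["gamma"]   # predator death rate

    def rhs(t, y):
        prey, pred = y
        return [a * prey - b * prey * pred, d * prey * pred - g * pred]

    t = np.linspace(tspan[0], tspan[1], steps)
    sol = solve_ivp(rhs, tspan, [y0["prey"], y0["predator"]],
                    method=method, t_eval=t)
    Y = sol.y.T  # shape (steps, 2), matching the (t, Y, metrics) convention
    metrics = {
        "prey_peak": float(Y[:, 0].max()),
        "predator_peak": float(Y[:, 1].max()),
    }
    return t, Y, metrics

t, Y, metrics = simulate_lotka_volterra(
    {"alpha": 1.1, "beta": 0.4, "delta": 0.1, "gamma": 0.4},
    {"prey": 10.0, "predator": 10.0},
    (0.0, 50.0),
    500,
)
print(Y.shape, metrics["prey_peak"] > 0)
```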
Running Tests
# Run server in stdio mode for testing
python server.py
# Test with curl (HTTP mode)
curl http://localhost:8000/health
Logging
Logging is comprehensive and configurable:
# Set log level
export LOG_LEVEL=DEBUG # DEBUG, INFO, WARNING, ERROR
# Run server
python server.py
Logs include:
- Request parameters
- Execution times
- Data shapes and metrics
- Error stack traces
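This kind of configurable logging follows the standard Python logging pattern driven by LOG_LEVEL. A sketch of the setup (not necessarily the server's exact code):

```python
import logging
import os

# Map the LOG_LEVEL environment variable to the logging module's levels,
# falling back to INFO for unknown values. Sketch of the pattern only.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("simulation-mcp")
logger.info("server starting")
logger.debug("request parameters would be logged at DEBUG")
```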
📊 Performance Benchmarks
| Mode | Steps | Preview | Artifacts | Data Return | Response Time |
|---|---|---|---|---|---|
| Interactive | 600 | ✅ Yes | ❌ No | ✅ Yes | ~50-100ms |
| Standard | 600 | ❌ No | ❌ No | ✅ Yes | ~150-200ms |
| Full Quality | 1000 | ❌ No | ✅ Yes | ✅ Yes | ~300-500ms |
| File Only | 600 | ❌ No | ✅ Yes | ❌ No | ~250-400ms |
Measured on Render Starter tier (0.5 CPU, 512 MB RAM)
🤝 Contributing
Contributions are welcome! Areas for improvement:
- Additional simulation models (Lotka-Volterra, Logistic, Projectile, Monte Carlo)
- Sensitivity analysis enhancements
- Performance optimizations
- Documentation improvements
📝 License
[Add your license here]
🆘 Support
For issues, questions, or contributions:
- Check the docs/ directory
- Review logs with LOG_LEVEL=DEBUG
- See ARCHITECTURE.md for integration patterns
- See the troubleshooting guide in docs/
🎉 Status
Production Ready! ✅
- HTTP transport configured
- Health check endpoint
- Data returned in JSON format
- Preview mode for performance
- Comprehensive logging
- Docker containerization
- Render deployment config
- Documentation complete
- Error handling improved
Happy Simulating! 🚀