# MCP Server Project
A secure Model Context Protocol (MCP) server providing HTTP endpoints for AI agent tool execution. Built with Python 3.12+, Starlette, and FastMCP.
## Quick Start

```bash
# Clone and set up
git clone https://github.com/sdirishguy/mcp_server_project.git
cd mcp_server_project
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Run the server
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

# Test
curl http://localhost:8000/health
```
## Docker

```bash
docker-compose up -d
curl http://localhost:8000/health
```
## Configuration

Required environment variables:

```bash
JWT_SECRET="your-secret-key-32-chars-minimum"  # Required for production
ADMIN_USERNAME="admin"                         # Default admin user
ADMIN_PASSWORD="secure-password"               # Change from the default
```

Optional configuration:

```bash
SERVER_PORT=8000
MCP_BASE_WORKING_DIR="./shared_host_folder"
ENVIRONMENT="development"                # development|staging|production
ALLOW_ARBITRARY_SHELL_COMMANDS="false"   # Security: disabled by default
CORS_ORIGINS="http://localhost:3000,https://yourdomain.com"

# API keys for LLM tools
OPENAI_API_KEY="sk-..."
GEMINI_API_KEY="..."
```
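A startup check along these lines can catch weak configuration before the app boots. This is a hypothetical sketch of the rules described above (the function name and exact messages are our own, not the server's actual validation code):

```python
import os

def validate_config(env: dict[str, str]) -> list[str]:
    """Return a list of configuration problems; an empty list means OK.

    Mirrors the rules above: JWT_SECRET must be at least 32 characters
    in production, and the default admin password must be changed.
    """
    problems = []
    environment = env.get("ENVIRONMENT", "development")
    if environment == "production" and len(env.get("JWT_SECRET", "")) < 32:
        problems.append("JWT_SECRET must be at least 32 characters in production")
    if env.get("ADMIN_PASSWORD", "") in ("", "admin123"):
        problems.append("ADMIN_PASSWORD is unset or still the default")
    return problems

# Example: check the real environment at startup
# problems = validate_config(dict(os.environ))
```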
## Authentication

Get a token:

```bash
curl -X POST http://localhost:8000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin123"}'
```

Use the token:

```bash
curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:8000/api/protected
```
## Available Tools

| Tool | Description |
|---|---|
| file_system_create_directory | Create directories (sandboxed) |
| file_system_write_file | Write text files |
| file_system_read_file | Read text files |
| file_system_list_directory | List directory contents |
| execute_shell_command | Execute shell commands (filtered) |
| llm_generate_code_openai | Generate code via the OpenAI API |
| llm_generate_code_gemini | Generate code via the Gemini API |
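Tools are invoked through the authenticated MCP JSON-RPC endpoint. The `tools/call` envelope below follows the MCP specification's JSON-RPC 2.0 shape, but the argument names for `file_system_write_file` are assumptions — verify them against the server's own schema via a `tools/list` request:

```python
import json

def tools_call_payload(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical argument names -- check the tool's schema first.
payload = tools_call_payload(
    "file_system_write_file",
    {"path": "notes/hello.txt", "content": "hello from MCP"},
)
```

The resulting string would be POSTed to the MCP endpoint with the bearer token, e.g. `curl -H "Authorization: Bearer YOUR_TOKEN" -H "Content-Type: application/json" -d "$payload" ...`.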
## API Endpoints

- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
- `POST /api/auth/login` - Authentication
- `POST /mcp/mcp.json/` - MCP JSON-RPC (requires auth)
- `POST /api/adapters/{type}` - Create data adapters
- `GET /docs` - Interactive API documentation
## Security Features
- JWT-based authentication with configurable providers
- Path traversal prevention for file operations
- Shell command filtering and sandboxing
- Rate limiting on authentication endpoints
- Security headers (HSTS, CSP, etc.)
- CORS configuration
- Audit logging for all operations
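Path traversal prevention typically works by resolving every requested path and checking that it stays inside the sandbox root. A minimal sketch of that idea (our own illustration, not the server's actual implementation):

```python
from pathlib import Path

def resolve_sandboxed(base_dir: str, user_path: str) -> Path:
    """Resolve user_path inside base_dir, rejecting escapes like '../'.

    Raises ValueError if the resolved path would leave the sandbox.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

Resolving before checking is the important part: it normalizes `..` segments and symlinks, so a simple string prefix test cannot be fooled.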
## Development

Run tests:

```bash
pytest -q   # 53 passing, 21 skipped (FastMCP lifespan integration)
```

The skipped tests require proper ASGI lifespan management, which TestClient doesn't provide by default; the production server works correctly.

Linting:

```bash
pre-commit install
pre-commit run --all-files
```
## Production Deployment

- Set a strong `JWT_SECRET` (32+ characters)
- Change the default `ADMIN_PASSWORD`
- Set `ENVIRONMENT=production`
- Configure appropriate `CORS_ORIGINS`
- Use HTTPS termination at the load balancer
- Monitor the `/health` and `/metrics` endpoints

See PRODUCTION_READINESS_REPORT.md for a detailed checklist.
## Architecture
- FastMCP: Tool execution via Model Context Protocol
- Starlette: Async web framework with middleware
- Pydantic: Configuration management and validation
- Prometheus: Metrics collection
- JWT: Stateless authentication
- Audit Logging: Structured event logging
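The middleware layer can be pictured as a plain ASGI wrapper around the app. The sketch below is our own illustration (independent of the server's real code) of how the security headers mentioned above could be attached to every response:

```python
SECURITY_HEADERS = [
    (b"strict-transport-security", b"max-age=63072000; includeSubDomains"),
    (b"x-content-type-options", b"nosniff"),
]

class SecurityHeadersMiddleware:
    """Pure-ASGI middleware: appends security headers at response start."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        async def send_wrapper(message):
            if message["type"] == "http.response.start":
                message = dict(message)  # copy before mutating
                message["headers"] = list(message.get("headers", [])) + SECURITY_HEADERS
            await send(message)
        await self.app(scope, receive, send_wrapper)
```

Because it speaks raw ASGI, a class like this can be registered on a Starlette app with `app.add_middleware(SecurityHeadersMiddleware)`.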
## License
MIT