# MCP Database Advisor

A comprehensive MCP (Model Context Protocol) server for database management and analysis, supporting 11+ database types with AI-powered health monitoring, query optimization, and production-ready features.
## Features
- Multi-Database Support: PostgreSQL, MySQL, MariaDB, MongoDB, Redis, MSSQL, Oracle, SQLite, Cassandra, Neo4j, Elasticsearch
- AI-Powered Analysis: Query optimization suggestions and database health diagnostics using GitHub Models
- Production Ready: Authentication, structured logging, retry mechanisms, and timeouts
- Comprehensive MCP Tools: Schema inspection, query execution, index analysis, slow query detection
- FastAPI Backend: High-performance async JSON-RPC API with proper error handling (a sample request follows this list)
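For a concrete picture of the JSON-RPC API, here is a minimal Python sketch that posts a `list_schemas` request to the `/mcp` endpoint. The path, method name, and `X-API-Key` header mirror the health-check example at the end of this README; the base URL and key are placeholders for your deployment:

```python
import httpx

# Placeholder base URL; match your deployment (e.g. http://localhost:8085 under Docker).
BASE_URL = "http://localhost:8085"

request = {
    "jsonrpc": "2.0",
    "method": "list_schemas",
    "params": {"dbName": "postgres"},
    "id": "1",
}

# The X-API-Key header is only required when MCP_API_KEY authentication is enabled.
response = httpx.post(
    f"{BASE_URL}/mcp",
    json=request,
    headers={"X-API-Key": "your-api-key"},
)
response.raise_for_status()
print(response.json())
```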
## Getting started (using pip)

Create a virtualenv and install dependencies:

```powershell
python -m venv .mcpvenv; .\.mcpvenv\Scripts\Activate.ps1; pip install -r requirements.txt
```

Run the server (development):

```bash
uvicorn mcp_server.main:app --reload
```

Run unit tests:

```bash
pytest -q
```
## Configuration

### Environment Variables

**Authentication:**
- `MCP_API_KEY`: API key for endpoint authentication (optional)

**Logging:**
- `LOG_LEVEL`: Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`); defaults to `INFO`

**Database Connections:**
- `POSTGRES_URL`: PostgreSQL connection string
- `MYSQL_URL`: MySQL connection string
- `MARIADB_URL`: MariaDB connection string (or use `MARIADB_*` variables)
- `MONGO_URL`: MongoDB connection string
- `REDIS_URL`: Redis connection string
- `MSSQL_*`: MSSQL connection variables
- `ORACLE_*`: Oracle connection variables
- `CASSANDRA_*`: Cassandra connection variables
- `NEO4J_*`: Neo4j connection variables
- `ELASTICSEARCH_*`: Elasticsearch connection variables

**Timeouts:**
- `POSTGRES_CONNECTION_TIMEOUT`: Connection timeout in seconds (default: 30)
- `POSTGRES_QUERY_TIMEOUT`: Query timeout in seconds (default: 30)

**AI Configuration:**
- `AI_PROVIDER`: Default AI provider (`github`, `ollama`, `openai`, `anthropic`, `google`, `azure`, `custom`); default: `ollama`
- `AI_MODEL`: Default model name (provider-specific); default varies by provider
- `GITHUB_TOKEN`: Required for the GitHub Models provider
- `OPENAI_API_KEY`: Required for the OpenAI provider
- `ANTHROPIC_API_KEY`: Required for the Anthropic Claude provider
- `GOOGLE_API_KEY`: Required for the Google Gemini provider
- `AZURE_OPENAI_API_KEY`: Required for the Azure OpenAI provider
- `OLLAMA_URL`: Ollama server URL (default: `http://localhost:11434/v1`)
- `AI_BASE_URL`: Custom OpenAI-compatible API base URL
- `AI_API_KEY`: Custom OpenAI-compatible API key
- `MCP_ENCRYPTION_KEY`: Encryption key for secure API key storage (optional)
**Secure API Key Management:** The server supports secure local storage of API keys for premium AI providers; keys are encrypted at rest (see the key management endpoints below). For production deployments, consider integrating with a proper key vault service.
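The exact format expected for `MCP_ENCRYPTION_KEY` isn't documented here; as a generic starting point, a random URL-safe value can be generated with Python's standard library (check the server's key-handling code for the precise format it expects):

```python
import secrets

# Generates a random URL-safe string suitable as a generic secret value.
# Assumption: verify against the server's key-handling code before use.
print(secrets.token_urlsafe(32))
```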
## Integration Tests

Start dependencies:

```bash
docker-compose up -d
```

Set environment variables (PowerShell):

```powershell
$env:POSTGRES_URL = 'postgresql://test:test@localhost:5433/testdb'
$env:MONGO_URL = 'mongodb://localhost:27017'
$env:MONGO_DB = 'test'
$env:MCP_API_KEY = 'your-api-key'  # optional
```

Run integration tests:

```bash
pytest tests/test_integration_*.py -q
```
## AI Provider Setup
The MCP server supports multiple AI providers for intelligent database analysis:
### GitHub Models

```bash
export AI_PROVIDER=github
export GITHUB_TOKEN=your_github_token
```

### Local Ollama (Default)

```bash
# Install Ollama: https://ollama.ai/
ollama pull llama2  # or mistral, codellama, etc.
ollama serve

# Configure the MCP server
export AI_PROVIDER=ollama
export AI_MODEL=llama2  # or your preferred model
```

### OpenAI API

```bash
export AI_PROVIDER=openai
export OPENAI_API_KEY=your_openai_key
export AI_MODEL=gpt-4  # or gpt-3.5-turbo
```

### Custom OpenAI-Compatible API

```bash
export AI_PROVIDER=custom
export AI_BASE_URL=https://your-api-endpoint.com/v1
export AI_API_KEY=your_api_key
export AI_MODEL=your-model-name
```
## Dynamic AI Provider Selection

The server supports dynamic AI provider and model selection per request, so there is no need to set environment variables for every use case.
**NLP endpoint with dynamic AI selection:**

```json
{
  "message": "How is my database performing?",
  "ai_provider": "openai",
  "ai_model": "gpt-4",
  "stream": true
}
```
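As a sketch, a payload like the one above can be posted with `httpx`. The `/nlp` path is an assumption (check your deployment's routes); port 8090 matches the key-management examples below, and the `X-API-Key` header follows the health-check example later in this README:

```python
import httpx

# Assumed NLP route; adjust the path and port for your deployment.
NLP_URL = "http://localhost:8090/nlp"

payload = {
    "message": "How is my database performing?",
    "ai_provider": "openai",
    "ai_model": "gpt-4",
    "stream": False,  # set True to stream the response instead
}

# X-API-Key is only needed when MCP_API_KEY authentication is enabled.
response = httpx.post(NLP_URL, json=payload, headers={"X-API-Key": "your-api-key"})
response.raise_for_status()
print(response.json())
```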
**Supported Providers:**
- `ollama`: Local Ollama models (default)
- `openai`: OpenAI GPT models
- `anthropic`: Anthropic Claude models
- `google`: Google Gemini models
- `azure`: Azure OpenAI models
- `github`: GitHub Models
- `custom`: Custom OpenAI-compatible APIs
**Secure API Key Management:** Store API keys securely using the key management endpoints:

```bash
# Store an API key
curl -X POST http://localhost:8090/keys/store \
  -H "Content-Type: application/json" \
  -d '{"provider": "openai", "api_key": "sk-your-key-here"}'

# List stored providers
curl http://localhost:8090/keys/providers

# Delete a stored key
curl -X DELETE http://localhost:8090/keys/openai
```
**Note:** AI features are optional. If no AI provider is configured, the server still works, just without AI-powered analysis. Dynamic selection allows using different providers for different requests without restarting the server.
## Testing AI Configuration

Use the provided test script to verify your AI setup:

```bash
python scripts/test_ai_config.py
```
This will check your configuration and perform a test analysis to ensure everything is working.
## Testing Dynamic AI Selection

Test dynamic AI provider/model selection with different providers:

```bash
python scripts/test_dynamic_ai.py
```
This script demonstrates:
- Using different AI providers per request
- Secure API key storage and retrieval
- Streaming responses with custom AI configurations (a minimal streaming sketch follows this list)
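For reference, a streaming request can be made with `httpx.stream`. This sketch reuses the assumed `/nlp` route from above and simply prints raw chunks as they arrive, since the exact wire format of the stream isn't documented here:

```python
import httpx

payload = {
    "message": "How is my database performing?",
    "ai_provider": "ollama",
    "stream": True,
}

# Assumed route and port; adjust for your deployment.
with httpx.stream(
    "POST",
    "http://localhost:8090/nlp",
    json=payload,
    headers={"X-API-Key": "your-api-key"},
    timeout=60.0,
) as response:
    response.raise_for_status()
    # Print chunks as they arrive; parse them per the server's actual stream format.
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
```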
## Testing
- Unit Tests: 12 test cases covering all adapters and core functionality
- Integration Tests: Tests against real database instances (Postgres, MongoDB)
- CI/CD: GitHub Actions workflow with automated testing
Run all tests:

```bash
pytest tests/ -v
```
## Docker / local container run

You can run the app itself with docker-compose (an override file adds an app service that builds the image):

```bash
docker-compose up -d --build
# Wait a few seconds for Postgres/Mongo to initialize; the app will then be reachable at http://localhost:8085
```
If you prefer the app to connect to services on the host (e.g., Postgres on localhost:5433), set the environment variables before starting the app service, or run the container manually with the appropriate values.
## 🚀 Render Deployment
Deploy the MCP server on Render for cloud hosting:
### 1. Prepare for Deployment

The project includes a `render.yaml` configuration file for easy deployment.
### 2. Deploy on Render

- Connect your GitHub repository to Render
- Create a new Web Service from your repository
- Configure the service:
  - Runtime: Docker
  - Dockerfile Path: `./Dockerfile`
  - Port: `8085`
### 3. Environment Variables

Set these in the Render dashboard:

```bash
# Required
PORT=8085

# AI Configuration (choose one provider)
AI_PROVIDER=ollama  # or github, openai, anthropic, azure
AI_MODEL=qwen2.5:7b-instruct

# Authentication (recommended)
MCP_API_KEY=your-secure-api-key

# Optional: database connections
POSTGRES_URL=your_postgres_connection_string
MONGO_URL=your_mongo_connection_string
MYSQL_URL=your_mysql_connection_string

# Provider-specific keys
GITHUB_TOKEN=your_github_token
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
```
### 4. Deploy

Render will automatically build and deploy your service. The service will be available at `https://your-app-name.onrender.com`.
## 📡 Calling from Another PC (Python Client)
Use the provided remote client to call your deployed MCP server:
### Installation

```bash
pip install httpx pydantic
# or
pip install -r requirements-client.txt
```
### Usage Examples

#### Basic NLP Query

```python
from mcp_remote_client import RemoteMCPClient

# Initialize the client
client = RemoteMCPClient(
    base_url="https://your-render-app.onrender.com",
    api_key="your-api-key",  # if authentication is enabled
)

# Natural language query
result = client.nlp_query("How is my database performing?")
print(result)
```
#### MCP Method Calls

```python
# Analyze a query
result = client.analyze_query("postgres", "SELECT * FROM users WHERE id = 1")
print(result)

# List schemas
result = client.list_schemas("postgres")
print(result)

# AI health analysis
health_data = {
    "connections": 25,
    "slow_queries": 8,
    "cache_hit_ratio": 0.65,
}
result = client.ai_analyze_health("postgres", health_data)
print(result)
```
#### Command Line Usage

```bash
# NLP query
python mcp_remote_client.py --url https://your-app.onrender.com --api-key your-key --method nlp_query --message "How is my database?"

# List schemas
python mcp_remote_client.py --url https://your-app.onrender.com --api-key your-key --method list_schemas --db postgres

# AI health analysis
python mcp_remote_client.py --url https://your-app.onrender.com --api-key your-key --method ai_analyze_health --db postgres
```
#### Advanced Python Usage

```python
import asyncio

from mcp_remote_client import RemoteMCPClient

async def main():
    client = RemoteMCPClient("https://your-app.onrender.com", "your-api-key")

    # Issue multiple requests concurrently
    tasks = [
        client.nlp_query("Check database health"),
        client.list_schemas("postgres"),
        client.analyze_query("postgres", "SELECT 1"),
    ]
    results = await asyncio.gather(*tasks)

    for result in results:
        print(result)

asyncio.run(main())
```
## 🔧 Troubleshooting

### Common Issues

- Connection Timeout: Render's free tier may put the service to sleep after inactivity, so the first request can be slow or time out while the instance wakes (see the retry sketch below)
- API Key Required: Ensure `MCP_API_KEY` is set if authentication is enabled
- Database Connections: External database URLs must be accessible from Render
- Port Configuration: Ensure `PORT=8085` is set in the environment variables
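Because a sleeping free-tier instance can take a while to wake, a simple client-side retry loop helps. This is a generic sketch (not part of the shipped client) that pings the FastAPI `/docs` page used in the health check below:

```python
import time

import httpx

def wait_for_service(base_url: str, attempts: int = 5, delay: float = 10.0) -> bool:
    """Ping the service until it wakes up or the attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            response = httpx.get(f"{base_url}/docs", timeout=30.0)
            if response.status_code == 200:
                return True
        except httpx.HTTPError:
            pass  # connection refused or timed out while the instance is waking
        print(f"Attempt {attempt}/{attempts} failed; retrying in {delay:.0f}s")
        time.sleep(delay)
    return False

if wait_for_service("https://your-app.onrender.com"):
    print("Service is awake")
```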
### Health Check

Test your deployment:

```bash
curl https://your-app.onrender.com/docs

curl -X POST https://your-app.onrender.com/mcp \
  -H "X-API-Key: your-key" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"list_schemas","params":{"dbName":"postgres"},"id":"test"}'
```