Prometheus MCP Server
A Model Context Protocol server that enables AI agents to interact directly with Prometheus metrics data through natural language queries.
Features
This MCP server provides direct integration with any Prometheus instance, allowing AI assistants to:
- Query Metrics: Execute PromQL queries for instant and range data
- Explore Metadata: Discover available metrics, labels, and their values
- Analyze Time Series: Get detailed information about specific metric series
- Natural Language Interface: Ask questions about your metrics in plain English
Available Tools
prometheus_query
Execute instant PromQL queries at a specific point in time.
Example questions:
- "What's the current CPU usage?"
- "Show me the memory consumption right now"
- "How many requests per second is my service handling?"
prometheus_query_range
Execute range queries to get time series data over a period.
Example questions:
- "Show me CPU usage over the last hour"
- "What was the request rate trend for the past 24 hours?"
- "Graph the error rate for my API over the last week"
prometheus_labels
Get the list of available labels for metrics.
Example questions:
- "What labels are available for the http_requests_total metric?"
- "Show me all labels for metrics matching 'cpu'"
prometheus_label_values
Get all possible values for a specific label.
Example questions:
- "What are all the possible values for the 'environment' label?"
- "List all services that have metrics"
- "What instances are reporting metrics?"
prometheus_series
Get metadata about time series matching specific selectors.
Example questions:
- "Show me all time series for the user service"
- "What metrics exist for my database?"
prometheus_metadata
Get help text and type information about metrics.
Example questions:
- "What does the node_cpu_seconds_total metric measure?"
- "Show me documentation for all available metrics"
Installation
Using NPM (Recommended)
# Install globally
npm install -g @prometheus-mcp/server
# Or run directly with npx
npx @prometheus-mcp/server
Building from Source
# Clone the repository
git clone https://github.com/ramkrsna/prometheus-mcp-server.git
cd prometheus-mcp-server
# Build the binary
go build -o prometheus-mcp-server
# Or install directly
go install
Testing with Mock Data (Podman)
# Install Podman (if not already installed)
brew install podman podman-compose
# Start mock environment
./start-mock-env.sh
# Configure and run MCP server
export PROMETHEUS_URL=http://localhost:9090
./prometheus-mcp-server
Configuration
The server is configured through environment variables; only `PROMETHEUS_URL` is required:
- `PROMETHEUS_URL` (required): URL of your Prometheus instance (e.g., `http://localhost:9090`)
- `PROMETHEUS_USERNAME` (optional): Username for basic authentication
- `PROMETHEUS_PASSWORD` (optional): Password for basic authentication
- `HTTP_MODE` (optional): Set to `true` to run in HTTP/WebSocket mode instead of STDIO
- `HOST` (optional): HTTP server host (default: `0.0.0.0`)
- `PORT` (optional): HTTP server port (default: `8080`)
Usage with AI Assistants
Claude Desktop
- Open Claude Desktop settings
- Go to Developer → Edit Config
- Add the following to your `claude_desktop_config.json`:
{
"mcpServers": {
"prometheus": {
"command": "prometheus-mcp",
"env": {
"PROMETHEUS_URL": "http://localhost:9090",
"PROMETHEUS_USERNAME": "admin",
"PROMETHEUS_PASSWORD": "password"
}
}
}
}
Cursor
- Open Cursor settings
- Go to Features → MCP
- Add the server configuration:
{
"mcpServers": {
"prometheus": {
"command": "prometheus-mcp",
"env": {
"PROMETHEUS_URL": "http://localhost:9090"
}
}
}
}
VS Code
- Open VS Code settings
- Search for "MCP"
- Add to your settings.json:
{
"mcp.servers": {
"prometheus": {
"type": "stdio",
"command": "prometheus-mcp",
"env": {
"PROMETHEUS_URL": "http://localhost:9090"
}
}
}
}
Example Conversations
Basic Metrics Query
You: "What's the current memory usage across all my nodes?"
AI: I'll query the memory usage for all nodes.
Uses prometheus_query with: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100
Here's the current memory usage:
- node1: 45.2%
- node2: 62.8%
- node3: 38.5%
Troubleshooting High Load
You: "My service seems slow, can you help me investigate?"
AI: I'll help you investigate the performance issue. Let me check a few things:
- Uses prometheus_query_range for response time
- Uses prometheus_query for the current request rate
- Uses prometheus_query for the error rate
I found that:
- Your response time increased from 200ms to 850ms in the last hour
- Request rate is 3x higher than usual (15k req/s vs normal 5k req/s)
- Error rate is still low at 0.1%
This appears to be a load-related issue rather than errors.
Exploring Metrics
You: "What metrics do I have available for my PostgreSQL database?"
AI: Let me explore the PostgreSQL metrics in your Prometheus instance.
Uses prometheus_series with match: {job="postgresql"}
You have the following PostgreSQL metrics available:
- `pg_stat_database_tup_inserted` - Number of rows inserted
- `pg_stat_database_tup_updated` - Number of rows updated
- `pg_stat_database_conflicts` - Number of conflicts
- `pg_stat_activity_count` - Number of active connections
- `pg_database_size_bytes` - Database size in bytes
- [... and 23 more metrics]
Development
Project Structure
prometheus-mcp-server/
├── main.go # Entry point
├── tools.go # MCP tool definitions
├── http_server.go # HTTP/WebSocket transport
├── internal/
│ ├── models/ # Configuration models
│ ├── prometheus/ # Prometheus client and handlers
│ └── utils/ # Utilities
├── bin/ # NPM distribution scripts
└── package.json # NPM package metadata
Adding New Tools
- Add the tool definition in `tools.go`
- Create the handler in `internal/prometheus/handlers.go`
- Add any new Prometheus API calls in `internal/prometheus/client.go`
Testing
# Run tests
go test ./...
# Test with a local Prometheus
PROMETHEUS_URL=http://localhost:9090 go run .
License
MIT
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
For issues and feature requests, please use the GitHub issue tracker.