Prometheus MCP Server

License: MIT · Java · npm · GitHub release · Docker

A Model Context Protocol (MCP) server for Prometheus integration.

Built with Quarkus MCP Server.

Transport Modes

Mode  | Description                  | Use Case
stdio | Standard input/output        | Default for Claude Code, Claude Desktop, Cursor, VS Code
SSE   | Server-Sent Events over HTTP | Standalone server, multiple clients

Requirements

  • Java 21+
  • Prometheus - Running and accessible

Installation

Quick Install (Claude Code CLI)

claude mcp add prometheus -e PROMETHEUS_URL="http://localhost:9090" -- npx -y mcp-prometheus@latest

Claude Code

Add to ~/.claude/settings.json:

{
  "mcpServers": {
    "prometheus": {
      "command": "npx",
      "args": ["-y", "mcp-prometheus@latest"],
      "env": {
        "PROMETHEUS_URL": "http://localhost:9090"
      }
    }
  }
}

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "prometheus": {
      "command": "npx",
      "args": ["-y", "mcp-prometheus@latest"],
      "env": {
        "PROMETHEUS_URL": "http://localhost:9090"
      }
    }
  }
}

VS Code

code --add-mcp '{"name":"prometheus","command":"npx","args":["-y","mcp-prometheus@latest"],"env":{"PROMETHEUS_URL":"http://localhost:9090"}}'

SSE Mode

PROMETHEUS_URL="http://localhost:9090" npx mcp-prometheus --port 9081

Endpoint: http://localhost:9081/mcp/sse
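
To confirm the server is up, you can probe the SSE endpoint with curl (a quick sanity check only; the actual MCP handshake is handled by the client):

curl -N http://localhost:9081/mcp/sse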


Configuration

Environment Variables

Variable       | Description        | Default
PROMETHEUS_URL | Prometheus API URL | http://localhost:9090

Command Line Options

Option        | Description
--port <PORT> | Start in SSE mode on specified port
--help        | Show help message
--version     | Show version
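
For example, the environment variable and flags combine as follows (stdio mode is the default when --port is omitted):

# stdio mode against a local Prometheus
PROMETHEUS_URL="http://localhost:9090" npx -y mcp-prometheus@latest

# print version and usage
npx -y mcp-prometheus@latest --version
npx -y mcp-prometheus@latest --help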

Tools

This server provides 5 tools:

query

Execute a PromQL query. Returns current metric values.

Parameter | Type   | Required | Description
promql    | string | Yes      | PromQL query to execute

Examples:

  • Check targets: query promql='up'
  • CPU usage: query promql='rate(node_cpu_seconds_total{mode="idle"}[5m])'
  • Memory: query promql='node_memory_MemAvailable_bytes'

queryRange

Execute a PromQL range query. Returns metric values over time.

Parameter | Type   | Required | Description
promql    | string | Yes      | PromQL query
duration  | string | Yes      | Time duration: 1h, 30m, 24h, 7d
step      | string | No       | Step interval: 1m, 5m (default: 1m)

Example:

  • CPU over 1 hour: queryRange promql='rate(node_cpu_seconds_total[5m])' duration='1h' step='5m'

getTargets

Get the status of Prometheus scrape targets.

Parameter | Type   | Required | Description
state     | string | No       | Filter: active, dropped, any (default: any)
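
Example:

  • Active targets only: getTargets state='active'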

getRules

Get Prometheus alerting and recording rules.

Parameter | Type   | Required | Description
type      | string | No       | Type: alerting, recording, all (default: all)
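
Example:

  • Alerting rules only: getRules type='alerting'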

getPrometheusStatus

Get Prometheus server status: version, build info, and runtime.

Parameters: None
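
Example:

  • Server info: getPrometheusStatus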


Example Prompts

Use natural language to query Prometheus. Here are prompts organized by use case:

Quick Health Checks

"Are all services up?"
"Show me which targets are healthy"
"What services are down?"
"Check if the API is responding"
"Is Prometheus scraping all targets successfully?"

CPU Monitoring

"What's the current CPU usage?"
"Show CPU usage across all nodes"
"Which pods are using the most CPU?"
"Show me CPU usage for the last hour"
"Is any container hitting CPU limits?"
"What's the CPU usage trend over the past 6 hours?"

Memory Analysis

"How much memory is available on each node?"
"Show memory usage across the cluster"
"Which pods are using the most memory?"
"Are any containers close to their memory limits?"
"Show me memory usage trends for the database"
"What's the memory consumption in the production namespace?"

Kubernetes Monitoring

"How many pods are running?"
"Show pods by namespace"
"Are any pods in CrashLoopBackOff?"
"What's the replica count for the web deployment?"
"Show me pending pods"
"How many nodes are in the cluster?"
"What's the pod distribution across nodes?"

Request & Latency Metrics

"What's the request rate for the API?"
"Show HTTP error rates"
"What's the 99th percentile latency?"
"Show me request duration over the last hour"
"Are there any 5xx errors?"
"What's the traffic pattern for the last 24 hours?"

Disk & Storage

"How much disk space is available?"
"Show disk usage across all nodes"
"Which persistent volumes are running low?"
"What's the disk I/O rate?"
"Show me storage trends for the database volume"

Alerting Rules

"What alerting rules are defined?"
"Which alerts are currently firing in Prometheus?"
"Show me pending alerts"
"What are the thresholds for memory alerts?"
"List all recording rules"

Historical Analysis

"Show me CPU usage for the past week"
"What was the memory consumption yesterday?"
"Graph request latency over the last 24 hours"
"When did the error rate spike?"
"Compare today's traffic to yesterday"

Troubleshooting

"Why might the API be slow? Show me relevant metrics"
"Investigate high memory usage on node-1"
"Show me all metrics for the payment service"
"What changed in the last hour? Something broke"
"Help me understand the current resource usage"

Custom PromQL Queries

"Run this query: rate(http_requests_total[5m])"
"Execute: sum by (namespace) (kube_pod_info)"
"Query the total number of requests in the last hour"
"Calculate the error percentage for the API"
"Show me the top 5 pods by memory usage"

Development

Run in dev mode

export PROMETHEUS_URL="http://localhost:9090"
./mvnw quarkus:dev

Build

./mvnw package -DskipTests -Dquarkus.package.jar.type=uber-jar
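
The build produces an uber-jar under target/ that can be run directly (the exact file name depends on the project version; Quarkus uber-jars typically end in -runner.jar):

export PROMETHEUS_URL="http://localhost:9090"
java -jar target/*-runner.jar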

License

MIT License. Free to use, modify, and distribute.