MCP Tempo Server

An MCP (Model Context Protocol) server that allows Claude to interact with Grafana Tempo for distributed tracing analysis.

Features

  • Search Traces - Find traces by service, operation, or time range
  • Get Trace Details - Retrieve specific traces by ID
  • Analyze Traces - Identify bottlenecks and performance issues
  • Service Statistics - Get performance metrics for services
  • Test Connection - Verify Tempo connectivity
  • 17+ Tools - Complete access to all Tempo v2.8 API features

Quick Start

1. Installation

# Clone the repository
git clone https://github.com/InfiniteInsight/mcp-tempo-server.git
cd mcp-tempo-server

# Install dependencies
npm install

2. Configuration

For Claude Code (CLI)

The easiest way to configure for Claude Code:

# Add MCP server to Claude configuration
claude mcp add tempo node $(pwd)/index.js

# Set your Tempo URL (replace with your server)
export TEMPO_URL="http://your-tempo-server:3200"
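
If your Claude Code CLI supports passing environment variables at registration time (the -e flag below is assumed from current Claude Code documentation; verify with claude mcp add --help), the two steps can be combined, and claude mcp list confirms the server was added:

claude mcp add tempo -e TEMPO_URL="http://your-tempo-server:3200" -- node $(pwd)/index.js
claude mcp list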

Manual configuration: Add to your Claude configuration file:

{
  "mcpServers": {
    "tempo": {
      "command": "node",
      "args": ["/path/to/mcp-tempo-server/index.js"],
      "env": {
        "TEMPO_URL": "http://your-tempo-server:3200"
      }
    }
  }
}

For Claude Desktop

Edit your Claude Desktop configuration file:

Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json

Add the MCP server:

{
  "mcpServers": {
    "tempo": {
      "command": "node",
      "args": ["/path/to/mcp-tempo-server/index.js"],
      "env": {
        "TEMPO_URL": "http://your-tempo-server:3200"
      }
    }
  }
}

3. Start Using

Restart Claude and you'll have access to all Tempo tools! Try:

Test the Tempo connection
Search for traces from my-service in the last hour
Get service statistics for api-gateway

Deployment Options

Automated Deployment

Use the included deployment scripts:

Linux/Mac/WSL
# Deploy to another machine
./deploy.sh user@remote-machine /home/user/mcp-tempo-server http://your-tempo:3200

# Examples:
./deploy.sh user@192.168.1.100 ~/mcp-tempo-server http://192.168.1.220:3200
./deploy.sh user@server /opt/mcp-tempo http://tempo.local:3200

Windows (PowerShell)
.\deploy-windows.ps1 -Destination "C:\mcp-tempo" -TempoUrl "http://tempo:3200"

# For WSL on Windows:
.\deploy-windows.ps1 -Destination "/home/user/mcp-tempo" -WSL

Manual Deployment

# Copy files
scp -r mcp-tempo-server user@remote-machine:~/

# Install on remote
ssh user@remote-machine
cd ~/mcp-tempo-server
npm install
claude mcp add tempo node $(pwd)/index.js

Environment Variables

  • TEMPO_URL - Tempo server URL (default: http://192.168.1.220:3200)
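
A minimal sketch of how the server can be expected to resolve this setting (assumed behavior, not verified against index.js):

// Assumption: fall back to the documented default when TEMPO_URL is unset
const TEMPO_URL = process.env.TEMPO_URL || "http://192.168.1.220:3200";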

Usage in Claude

Once configured, you can give Claude requests like the following:

Test Connection

Use the test_tempo_connection tool to verify Tempo is accessible

Search for Traces

Search for traces from the "api-gateway" service in the last hour
Search for slow traces (>500ms) from the last 30 minutes
Find traces with errors in the "payment-service"

Analyze Performance

Analyze trace ID abc123 for bottlenecks
Get performance stats for the "user-service" from the last 2 hours
Show me the slowest operations in trace xyz789

Debug Issues

Find all failed requests in the last hour
Show traces where the database query took >100ms
Identify which service is causing latency issues

Complete Tool Reference

Query & Search Tools

search_traces

Search for traces with advanced filters:

  • service - Service name filter
  • operation - Operation name filter
  • tags - Key-value tag filters
  • start - Start time (relative: "1h", "30m" or ISO format)
  • end - End time
  • limit - Max results (default: 20)
  • min_duration - Minimum duration (e.g., "100ms")
  • max_duration - Maximum duration (e.g., "5s")
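
As a sketch, a request like "slow traces from api-gateway in the last hour" might translate into tool arguments shaped like this (field names from the list above; values illustrative):

{
  "service": "api-gateway",
  "start": "1h",
  "min_duration": "500ms",
  "limit": 10
}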

get_trace

Retrieve full trace details:

  • traceId - Trace ID to fetch
  • api_version - Use v1 or v2 API (default: v2)
  • start - Optional start time for v2 API
  • end - Optional end time for v2 API
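
For reference, v1 and v2 presumably correspond to Tempo's two trace-by-ID HTTP endpoints (paths per the Tempo API documentation; substitute a real trace ID):

curl "http://your-tempo-server:3200/api/traces/<traceID>"
curl "http://your-tempo-server:3200/api/v2/traces/<traceID>"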

search_traceql

Search using TraceQL query language:

  • query - TraceQL expression (e.g., {.service.name="api" && duration > 100ms})
  • start - Start time
  • end - End time
  • limit - Max results
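
A few illustrative queries (TraceQL syntax per Grafana Tempo's documentation; service and attribute names are examples):

{ resource.service.name = "checkout-service" && duration > 500ms }
{ span.http.status_code >= 500 }
{ status = error && resource.service.name = "payment-service" }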

Analysis Tools

analyze_trace

Deep trace analysis with:

  • Critical path identification
  • Slowest operations ranking
  • Error detection and details
  • Service time breakdown
  • Performance bottleneck detection

compare_traces

Compare two traces to find:

  • Service differences
  • Performance variations
  • Operation changes
  • Span count differences

get_service_stats

Comprehensive service statistics:

  • Latency percentiles (P50/P90/P95/P99)
  • Error rates
  • Per-operation breakdown
  • Request counts

get_service_dependencies

Map service dependencies:

  • Service-to-service calls
  • Call frequencies
  • Average durations
  • Dependency graph

Operational Monitoring

get_tempo_status

Server health and status:

  • Ready state
  • Build information
  • Version details

get_tempo_metrics

Prometheus metrics:

  • Distributor stats
  • Ingester metrics
  • Compactor performance
  • Raw or parsed format
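
The same data is available straight from Tempo's Prometheus endpoint if you prefer to scrape it yourself:

curl http://your-tempo-server:3200/metrics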

get_ingestion_stats

Ingestion monitoring:

  • Per-receiver statistics (OTLP, Jaeger, Zipkin)
  • Accepted/refused spans
  • Throughput metrics

get_storage_info

Storage backend status:

  • Block statistics
  • WAL metrics
  • Compaction status

Debug & Profiling

get_debug_profile

Performance profiling:

  • profile_type - heap, goroutine, profile, block, mutex, trace
  • seconds - Duration for CPU/trace profiles
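
These profile types line up with Go's standard pprof profiles, so if your Tempo deployment exposes the usual pprof handlers on its HTTP port (an assumption; check your setup), the same data can be pulled directly:

curl "http://your-tempo-server:3200/debug/pprof/heap" -o heap.pprof
curl "http://your-tempo-server:3200/debug/pprof/profile?seconds=30" -o cpu.pprof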

Metrics Generator

get_service_graph

Service dependency visualization data

get_span_metrics

RED metrics (Rate, Errors, Duration) per service/operation

Utilities

test_tempo_connection

Verify Tempo connectivity

flush_traces

Force flush in-memory traces to storage

Development

Run Locally

node index.js
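
When running by hand, point the server at your Tempo instance using the same environment variable described above:

TEMPO_URL="http://your-tempo-server:3200" node index.js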

Test with MCP Inspector

npx @modelcontextprotocol/inspector node index.js

Troubleshooting

Connection Issues

  1. Verify Tempo is running: curl http://192.168.1.220:3200/ready
  2. Check network connectivity
  3. Ensure correct TEMPO_URL in config
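
Beyond the readiness check in step 1, a build-info request is a quick way to confirm you are really talking to Tempo (endpoint per Tempo's status API):

curl http://192.168.1.220:3200/status/buildinfo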

No Traces Found

  1. Verify applications are sending traces to Tempo
  2. Check time range parameters
  3. Confirm service names are correct

Permission Denied

chmod +x index.js

Example Trace Analysis Flow

  1. Identify Issue

    "Users report slow checkout process"
    
  2. Search for Traces

    Search for traces from "checkout-service" in last hour
    
  3. Analyze Specific Trace

    Analyze trace abc123def456 for bottlenecks
    
  4. Get Service Stats

    Get stats for "payment-gateway" service
    
  5. Identify Root Cause

    Claude identifies database query taking 800ms in payment service
    

Integration with Your Apps

Send traces to Tempo from your applications:

Python

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(endpoint="192.168.1.220:4317", insecure=True)

Node.js

const { OTLPTraceExporter } = require('@opentelemetry/exporter-otlp-grpc');

const exporter = new OTLPTraceExporter({
  url: 'http://192.168.1.220:4317',
});
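
The snippet above only constructs the exporter; it still has to be registered with the OpenTelemetry SDK. A minimal sketch using @opentelemetry/sdk-node (package and option names from the OpenTelemetry JS docs; adjust to the SDK version you have installed):

const { NodeSDK } = require('@opentelemetry/sdk-node');

// Reuses `exporter` from the snippet above. The service name Tempo indexes
// is usually supplied via the standard OTEL_SERVICE_NAME environment variable.
const sdk = new NodeSDK({ traceExporter: exporter });
sdk.start();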

License

MIT