
datadog-mcp


Introduction

An MCP (Model Context Protocol) server for Datadog, exposing Datadog's Logs API as tools that AI assistants such as Claude can call.

Requirements

  • PHP 8.1 or higher

Installation

composer require zero-to-prod/datadog-mcp

Quick Start

1. Get Your Datadog API Keys

Get your API keys from: https://app.datadoghq.com/organization-settings/api-keys

You'll need:

  • DD_API_KEY - Your Datadog API key
  • DD_APPLICATION_KEY - Your Datadog application key

2. Run the Docker Image

docker run -d -p 8091:80 \
  -e DD_API_KEY=your_api_key_here \
  -e DD_APPLICATION_KEY=your_app_key_here \
  davidsmith3/datadog-mcp:latest

3. Add the Server to Claude

claude mcp add --transport http datadog-mcp http://localhost:8091/mcp

Alternatively, add the server configuration directly:

{
    "mcpServers": {
        "datadog-mcp": {
            "type": "streamable-http",
            "url": "http://localhost:8091/mcp"
        }
    }
}

Usage

Available Tools

logs - Search Datadog Logs

Search and retrieve logs from your Datadog account using the Logs API v2.

✨ NEW: Simplified API with Auto-Normalization

  • ✅ Smart time parameter accepts multiple formats (no more timestamp calculations!)
  • ✅ @ prefixes added automatically to custom attributes
  • ✅ Boolean operators (and/or/not) uppercased automatically
  • ✅ Write queries naturally without worrying about syntax

Parameters:

  • query (string, required) - Natural Datadog search query (@ and uppercase handled automatically)
  • time (string, optional) - Smart time parameter accepting multiple formats (default: "1h")
    • Relative: "1h", "24h", "7d", "15m"
    • ISO datetime: "2024-01-15T10:00:00Z" or "2024-01-15T10:00:00Z/2024-01-16T10:00:00Z"
    • Natural language: "yesterday", "today", "last hour"
    • Milliseconds: "1765461420000" or "1765461420000/1765547820000"
  • limit (int, optional) - Max logs per request, 1-1000 (default: 10)
  • includeTags (bool, optional) - Include tags array (default: false)
  • cursor (string, optional) - Pagination cursor
  • sort (string, optional) - Sort order: "timestamp" or "-timestamp"
  • format (string, optional) - Output format: "full", "count", or "summary" (default: "full")
  • json_path (string, optional) - Simplified JSON path for field extraction
  • jq_filter (string, optional) - jq expression to transform response data
  • jq_raw_output (bool, optional) - Output raw text instead of JSON (default: false)
  • jq_streaming (bool, optional) - Collect multiple jq outputs into array (default: false)
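The relative formats accepted by the time parameter ("15m", "1h", "24h", "7d") could be normalized to a millisecond time window along these lines. This is a hypothetical Python sketch for illustration; the server's actual parsing logic may differ and also handles ISO datetimes and natural language:

```python
import re
import time

# Milliseconds per supported unit: minutes, hours, days.
UNIT_MS = {"m": 60_000, "h": 3_600_000, "d": 86_400_000}

def normalize_relative_time(value, now_ms=None):
    """Convert a relative time string like "1h" into a (from_ms, to_ms) window
    ending now (or at now_ms, when supplied for reproducibility)."""
    match = re.fullmatch(r"(\d+)([mhd])", value)
    if match is None:
        raise ValueError(f"not a relative time string: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    to_ms = int(time.time() * 1000) if now_ms is None else now_ms
    return to_ms - amount * UNIT_MS[unit], to_ms
```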

JSON Path Examples:

The json_path parameter provides a simplified way to extract fields without jq syntax. Use dot notation for nested fields and numbers for array indices.

  1. Get first log entry:
{
  "time": "1h",
  "query": "status:error",
  "json_path": "data.0"
}
  2. Get service name from first log:
{
  "time": "1h",
  "query": "status:error",
  "json_path": "data.0.attributes.service"
}
  3. Get message from first log:
{
  "time": "1h",
  "query": "status:error",
  "json_path": "data.0.attributes.message"
}
  4. Get pagination cursor:
{
  "time": "1h",
  "query": "status:info",
  "limit": 100,
  "json_path": "meta.page.after"
}
  5. Extract plain text message (with raw output):
{
  "time": "1h",
  "query": "status:error",
  "json_path": "data.0.attributes.message",
  "jq_raw_output": true
}

Path Conversion:

  • data.0 → .data[0]
  • data.0.attributes.service → .data[0].attributes.service
  • meta.page.after → .meta.page.after

Note: Cannot use both json_path and jq_filter together. Use json_path for simple field extraction, or jq_filter for complex transformations.
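One way to picture the conversion table above (a hypothetical Python sketch of the dot-notation-to-jq mapping, not the server's actual implementation):

```python
def json_path_to_jq(path):
    """Convert a dot-notation json_path like "data.0.attributes.service"
    into an equivalent jq expression like ".data[0].attributes.service".
    Purely numeric segments become array indices; everything else is a key."""
    parts = []
    for segment in path.split("."):
        if segment.isdigit():
            parts.append(f"[{segment}]")  # array index
        else:
            parts.append(f".{segment}")   # object key
    return "".join(parts)
```

For example, `json_path_to_jq("meta.page.after")` yields `.meta.page.after`, matching the conversion table.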

jq Filter Examples:

The jq_filter parameter allows you to transform the response data using jq syntax. The filter is applied AFTER format processing.

  1. Get only the first log entry:
{
  "time": "1h",
  "query": "status:error",
  "jq_filter": ".data[0]"
}
  2. Filter logs by service:
{
  "time": "24h",
  "query": "env:production",
  "jq_filter": ".data[] | select(.attributes.service == \"api\")"
}
  3. Extract only message fields:
{
  "time": "1h",
  "query": "status:error",
  "jq_filter": "[.data[].attributes.message]"
}
  4. Custom aggregation:
{
  "time": "24h",
  "query": "status:error",
  "jq_filter": "{total: .data | length, services: [.data[].attributes.service] | unique}"
}

For full jq syntax documentation, see: https://jqlang.github.io/jq/manual/

Raw Output Examples:

Extract plain text message (without JSON quotes):

{
  "time": "1h",
  "query": "status:error",
  "limit": 1,
  "jq_filter": ".data[0].attributes.message",
  "jq_raw_output": true
}

Get service names as plain text lines:

{
  "time": "1h",
  "query": "status:info",
  "limit": 10,
  "jq_filter": ".data[].attributes.service",
  "jq_raw_output": true,
  "jq_streaming": true
}

Streaming Examples:

Get all logs as array (natural .data[] syntax):

{
  "time": "1h",
  "query": "status:info",
  "limit": 10,
  "jq_filter": ".data[]",
  "jq_streaming": true
}

Extract all service names:

{
  "time": "1h",
  "query": "status:info",
  "limit": 10,
  "jq_filter": ".data[].attributes.service",
  "jq_streaming": true
}

Filter logs by service (streaming):

{
  "time": "24h",
  "query": "env:production",
  "limit": 50,
  "jq_filter": ".data[] | select(.attributes.service == \"api\")",
  "jq_streaming": true
}

Query Syntax Examples:

Simple queries (natural syntax):

status:error
service:api-gateway env:production status:error
http.status_code:500 and env:production

Note: @ prefixes and uppercase Boolean operators are added automatically!
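The auto-normalization described above might look roughly like this sketch. It is hypothetical Python, not the server's code, and the set of reserved facets is an assumption for illustration:

```python
# Assumed set of standard Datadog log facets that never take an "@" prefix;
# the server's actual reserved list may differ.
RESERVED = {"status", "service", "env", "host", "source", "message"}
BOOLEANS = {"and", "or", "not"}

def normalize_query(query):
    """Uppercase boolean operators and prefix custom attributes with "@"."""
    out = []
    for token in query.split():
        if token.lower() in BOOLEANS:
            out.append(token.upper())
        elif ":" in token and not token.startswith(("@", "-")):
            key, value = token.split(":", 1)
            if key not in RESERVED:
                token = f"@{key}:{value}"  # custom attribute: add "@"
            out.append(token)
        else:
            out.append(token)
    return " ".join(out)
```

Under these assumptions, `http.status_code:500 and env:production` becomes `@http.status_code:500 AND env:production`.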

Usage Examples:

  1. Get recent error logs (using natural time format):
{
  "time": "24h",
  "query": "status:error env:production"
}
  2. Search with custom attributes (@ added automatically):
{
  "time": "1h",
  "query": "service:api http.status_code:>=500",
  "limit": 100
}
  3. Using ISO datetime for specific time window:
{
  "time": "2024-01-15T10:00:00Z/2024-01-15T12:00:00Z",
  "query": "service:checkout and duration:>5000"
}
  4. Natural language time:
{
  "time": "yesterday",
  "query": "status:error user.id:12345"
}
  5. Paginate through results:
// First request
{
  "time": "24h",
  "query": "service:web-*",
  "limit": 1000
}
// Use cursor from response.meta.page.after for next page
{
  "time": "24h",
  "query": "service:web-*",
  "limit": 1000,
  "cursor": "eyJhZnRlciI6IkFRQUFBWE1rLWc4d..."
}

CLI Commands

vendor/bin/datadog-mcp list

Docker

Run using the Docker image:

docker run -d -p 8091:80 \
  -e DD_API_KEY=your_api_key_here \
  -e DD_APPLICATION_KEY=your_app_key_here \
  davidsmith3/datadog-mcp:latest

Environment Variables

Required:

  • DD_API_KEY - Your Datadog API key
  • DD_APPLICATION_KEY - Your Datadog application key

Optional:

  • APP_DEBUG=false - Enable debug logging (default: false)

Full Example with All Options

docker run -d -p 8091:80 \
  -e DD_API_KEY=your_api_key_here \
  -e DD_APPLICATION_KEY=your_app_key_here \
  -e APP_DEBUG=false \
  -v mcp-sessions:/app/storage/mcp-sessions \
  --name datadog-mcp \
  davidsmith3/datadog-mcp:latest

Using Docker Compose

Create a docker-compose.yml:

services:
  datadog-mcp:
    image: davidsmith3/datadog-mcp:latest
    ports:
      - "8091:80"
    environment:
      - DD_API_KEY=${DD_API_KEY}
      - DD_APPLICATION_KEY=${DD_APPLICATION_KEY}
      - APP_DEBUG=false
    volumes:
      - mcp-sessions:/app/storage/mcp-sessions
    restart: unless-stopped

volumes:
  mcp-sessions:

Create a .env file:

DD_API_KEY=your_api_key_here
DD_APPLICATION_KEY=your_app_key_here

Run:

docker compose up -d

Contributing

See the repository's contributing guidelines.