LUMINO MCP Server

An open source MCP (Model Context Protocol) server providing AI-powered tools for Kubernetes, OpenShift, and Tekton monitoring, analysis, and troubleshooting.

Overview

LUMINO MCP Server transforms how Site Reliability Engineers (SREs) and DevOps teams interact with Kubernetes clusters. By exposing 37 specialized tools through the Model Context Protocol, it enables AI assistants to:

  • Monitor cluster health, resources, and pipeline status in real-time
  • Analyze logs, events, and anomalies using statistical and ML techniques
  • Troubleshoot failed pipelines with automated root cause analysis
  • Predict resource bottlenecks and potential issues before they occur
  • Simulate configuration changes to assess impact before deployment
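
Once the server is registered with a client, any of these tools can be invoked over MCP. As a minimal sketch (not a required setup), the snippet below uses the mcp Python SDK's stdio client to call the list_namespaces tool; the directory path is a placeholder, and in practice your AI assistant or MCP client handles this plumbing for you.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the LUMINO server locally over stdio (path is illustrative).
    server = StdioServerParameters(
        command="uv",
        args=["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call one of the Kubernetes tools exposed by the server.
            result = await session.call_tool("list_namespaces", arguments={})
            print(result.content)


asyncio.run(main())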

Features

Kubernetes & OpenShift Operations

  • Namespace and pod management
  • Resource querying with flexible output formats
  • Label-based resource search across clusters
  • OpenShift operator and MachineConfigPool status
  • etcd log analysis

Tekton Pipeline Intelligence

  • Pipeline and task run monitoring across namespaces
  • Detailed log retrieval with optional cleaning
  • Failed pipeline root cause analysis
  • Cross-cluster pipeline tracing
  • CI/CD performance baselining

Advanced Log Analysis

  • Smart log summarization with configurable detail levels
  • Streaming analysis for large log volumes
  • Hybrid analysis combining multiple strategies
  • Semantic search using NLP techniques (see the sketch after this list)
  • Anomaly detection with severity classification
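
To make the semantic search idea concrete, the sketch below ranks log lines against a plain-English query using TF-IDF and cosine similarity from scikit-learn (already a dependency of this project). The actual logic in src/helpers/semantic_search.py may work differently; treat this as an illustration of the technique, not the server's implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def search_logs(log_lines: list[str], query: str, top_k: int = 5) -> list[str]:
    # Vectorize the log lines together with the query so they share a vocabulary.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(log_lines + [query])
    # Compare the query vector (last row) against every log line and rank by similarity.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [log_lines[i] for i in best]


logs = [
    "TLS handshake error from 10.0.0.1",
    "Readiness probe failed for pod web-7d9f",
    "OOMKilled: container exceeded memory limit",
]
print(search_logs(logs, "pod ran out of memory", top_k=1))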

Predictive & Proactive Monitoring

  • Statistical anomaly detection using z-score analysis (sketched after this list)
  • Predictive log analysis for early warning
  • Resource bottleneck forecasting
  • Certificate health monitoring with expiry alerts
  • TLS certificate issue investigation
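
The z-score detection above flags samples that sit far from the mean of a metric series. A minimal sketch using the standard-library statistics module (the detect_anomalies tool may layer more logic on top of this):

from statistics import mean, stdev


def zscore_anomalies(samples: list[float], threshold: float = 3.0) -> list[tuple[int, float]]:
    # Return (index, z-score) pairs for samples whose deviation exceeds the threshold.
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, round((x - mu) / sigma, 2)) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]


# A sudden CPU spike stands out against an otherwise steady baseline.
cpu_usage = [0.31, 0.29, 0.33, 0.30, 0.32, 0.95, 0.31]
print(zscore_anomalies(cpu_usage, threshold=2.0))   # flags index 5 (the 0.95 spike)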

Event Intelligence

  • Smart event retrieval with multiple strategies
  • Progressive event analysis (overview to deep-dive)
  • Advanced analytics with ML pattern detection
  • Log-event correlation

Simulation & What-If Analysis

  • Monte Carlo simulation for configuration changes (see the sketch after this list)
  • Impact analysis before deployment
  • Risk assessment with configurable tolerance
  • Affected component identification
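
As a purely illustrative take on the Monte Carlo approach (not the what_if_scenario_simulator implementation), the sketch below samples hypothetical CPU demand for a workload and estimates how often a proposed CPU limit would throttle it:

import random


def estimate_throttle_risk(mean_cpu: float, stddev: float,
                           proposed_limit: float, trials: int = 10_000) -> float:
    # Fraction of sampled demand values that would exceed the proposed limit.
    throttled = sum(
        1 for _ in range(trials)
        if random.gauss(mean_cpu, stddev) > proposed_limit
    )
    return throttled / trials


# Hypothetical numbers: demand averages 0.6 cores, and the change lowers the limit to 0.8 cores.
risk = estimate_throttle_risk(mean_cpu=0.6, stddev=0.15, proposed_limit=0.8)
print(f"Estimated throttling probability: {risk:.1%}")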

Requirements

  • Python 3.10+
  • Access to a Kubernetes/OpenShift cluster (for Kubernetes tools)
  • uv for dependency management (recommended)

Installation

Using uv (recommended)

# Clone the repository
git clone https://github.com/spre-sre/lumino-mcp-server.git
cd lumino-mcp-server

# Install dependencies
uv sync

# Run the server
uv run python main.py

Using pip

# Clone the repository
git clone https://github.com/spre-sre/lumino-mcp-server.git
cd lumino-mcp-server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e .

# Run the server
python main.py

Usage

Local Mode (stdio transport)

By default, the server runs in local mode using stdio transport, suitable for direct integration with MCP clients:

python main.py

Kubernetes Mode (HTTP streaming transport)

When running inside Kubernetes, set the namespace environment variable to enable HTTP streaming:

export KUBERNETES_NAMESPACE=my-namespace
python main.py

The server automatically detects the environment and switches transport modes.
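
Conceptually, the detection is just an environment lookup before the server starts; main.py's actual code may differ, but the idea looks like this:

import os


def select_transport() -> str:
    # Inside a cluster the namespace variables are set, so serve over HTTP streaming;
    # otherwise fall back to stdio for local desktop clients.
    if os.environ.get("KUBERNETES_NAMESPACE") or os.environ.get("K8S_NAMESPACE"):
        return "streamable-http"
    return "stdio"


print(select_transport())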

Configuration

Kubernetes Authentication

The server automatically detects Kubernetes configuration:

  1. In-cluster config - When running inside a Kubernetes pod
  2. Local kubeconfig - When running locally (uses ~/.kube/config)
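
With the official kubernetes Python client this fallback takes only a few lines; the server's internals may differ, but the pattern looks like this:

from kubernetes import client, config


def load_k8s_client() -> client.CoreV1Api:
    # Prefer the in-cluster service account, fall back to the local kubeconfig.
    try:
        config.load_incluster_config()
    except config.ConfigException:
        config.load_kube_config()   # reads ~/.kube/config by default
    return client.CoreV1Api()


api = load_k8s_client()
print([ns.metadata.name for ns in api.list_namespace().items])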

Environment Variables

Variable               Description                         Default
KUBERNETES_NAMESPACE   Namespace for K8s mode              -
K8S_NAMESPACE          Alternative namespace variable      -
PROMETHEUS_URL         Prometheus server URL for metrics   Auto-detected

Available Tools

Kubernetes Core (4 tools)

Tool                         Description
list_namespaces              List all namespaces in the cluster
list_pods_in_namespace       List pods with status and placement info
get_kubernetes_resource      Get any Kubernetes resource with flexible output
search_resources_by_labels   Search resources across namespaces by labels

Tekton Pipelines (6 tools)

Tool                              Description
list_pipelineruns                 List PipelineRuns with status and timing
list_taskruns                     List TaskRuns, optionally filtered by pipeline
get_pipelinerun_logs              Retrieve pipeline logs with optional cleaning
list_recent_pipeline_runs         Recent pipelines across all namespaces
find_pipeline                     Find pipelines by pattern matching
get_tekton_pipeline_runs_status   Cluster-wide pipeline status summary

Log Analysis (6 tools)

Tool                       Description
analyze_logs               Extract error patterns from log text
smart_summarize_pod_logs   Intelligent log summarization
stream_analyze_pod_logs    Streaming analysis for large logs
analyze_pod_logs_hybrid    Combined analysis strategies
detect_log_anomalies       Anomaly detection with severity levels
semantic_log_search        NLP-based semantic log search

Event Analysis (3 tools)

Tool                         Description
smart_get_namespace_events   Smart event retrieval with strategies
progressive_event_analysis   Multi-level event analysis
advanced_event_analytics     ML-powered event pattern detection

Failure Analysis & RCA (2 tools)

Tool                                    Description
analyze_failed_pipeline                 Root cause analysis for failed pipelines
automated_triage_rca_report_generator   Automated incident reports

Resource Monitoring (4 tools)

Tool                             Description
check_resource_constraints       Detect resource issues in namespace
detect_anomalies                 Statistical anomaly detection
prometheus_query                 Execute PromQL queries
resource_bottleneck_forecaster   Predict resource exhaustion

Namespace Investigation (2 tools)

Tool                               Description
conservative_namespace_overview    Focused namespace health check
adaptive_namespace_investigation   Dynamic investigation based on query

Certificate & Security (2 tools)

Tool                                 Description
investigate_tls_certificate_issues   Find TLS-related problems
check_cluster_certificate_health     Certificate expiry monitoring

OpenShift Specific (3 tools)

Tool                                    Description
get_machine_config_pool_status          MachineConfigPool status and updates
get_openshift_cluster_operator_status   Cluster operator health
get_etcd_logs                           etcd log retrieval and analysis

CI/CD Performance (2 tools)

Tool                                Description
ci_cd_performance_baselining_tool   Pipeline performance baselines
cross_cluster_pipeline_tracer       Trace pipelines across clusters

Topology & Prediction (2 tools)

Tool                          Description
live_system_topology_mapper   Real-time system topology mapping
predictive_log_analyzer       Predict issues from log patterns

Simulation (1 tool)

Tool                         Description
what_if_scenario_simulator   Simulate configuration changes

Architecture

lumino-mcp-server/
├── main.py                 # Entry point with transport detection
├── src/
│   ├── server-mcp.py       # MCP server with all 37 tools
│   └── helpers/
│       ├── constants.py    # Shared constants
│       ├── event_analysis.py    # Event processing logic
│       ├── failure_analysis.py  # RCA algorithms
│       ├── log_analysis.py      # Log processing
│       ├── resource_topology.py # Topology mapping
│       ├── semantic_search.py   # NLP search
│       └── utils.py             # Utility functions
└── pyproject.toml          # Project configuration

MCP Client Integration

Method 1: Using MCPM (Recommended for Claude Code CLI / Gemini CLI)

The easiest way to install LUMINO MCP Server for Claude Code CLI or Gemini CLI is using MCPM - an MCP server package manager.

Install MCPM
# Clone and build MCPM
git clone https://github.com/spre-sre/mcpm.git
cd mcpm
go build -o mcpm .

# Optional: Add to PATH
sudo mv mcpm /usr/local/bin/

Requirements: Go 1.23+, Git, Python 3.10+, uv (or pip)

Install LUMINO MCP Server
# Install from GitHub repository (short syntax)
mcpm install @spre-sre/lumino-mcp-server

# Or use full GitHub URL
mcpm install https://github.com/spre-sre/lumino-mcp-server.git

# For GitLab repositories (if hosted on GitLab)
mcpm install gl:@spre-sre/lumino-mcp-server

# Install for specific client
mcpm install @spre-sre/lumino-mcp-server --claude  # For Claude Code CLI
mcpm install @spre-sre/lumino-mcp-server --gemini  # For Gemini CLI

# Install globally (works with both Claude and Gemini)
mcpm install @spre-sre/lumino-mcp-server --global

Short syntax explained:

  • @owner/repo - Installs from GitHub (default: https://github.com/owner/repo.git)
  • gl:@owner/repo - Installs from GitLab (https://gitlab.com/owner/repo.git)
  • Full URL - Works with any Git repository

This will:

  • Clone the repository to ~/.mcp/servers/lumino-mcp-server/
  • Auto-detect Python project and install dependencies using uv (or pip)
  • Register with Claude Code CLI or Gemini CLI configuration automatically

Manage LUMINO
# List installed servers
mcpm list

# Update LUMINO
mcpm update lumino-mcp-server

# Remove LUMINO
mcpm remove lumino-mcp-server

Method 2: Manual Configuration

If you prefer manual setup or need to configure Claude Desktop / Cursor, follow these client-specific guides:

Claude Desktop
  1. Find your config file location:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
    • Linux: ~/.config/Claude/claude_desktop_config.json
  2. Add LUMINO configuration:

{
  "mcpServers": {
    "lumino": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/lumino-mcp-server",
        "python",
        "main.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}
  3. Restart Claude Desktop

  4. Verify: Look for the hammer icon (🔨) in Claude Desktop to see available tools


Claude Code CLI

Option A: Using MCPM (Recommended - see Method 1 above)

Option B: Manual Configuration

  1. Find your config file location:

    • macOS/Linux: ~/.config/claude/mcp_servers.json
    • Windows: %APPDATA%\claude\mcp_servers.json
  2. Add LUMINO configuration:

{
  "mcpServers": {
    "lumino": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/lumino-mcp-server",
        "python",
        "main.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}
  3. Verify installation:
# Check MCP servers
claude mcp list

# Test with a query
claude "List all namespaces in my cluster"

Gemini CLI

Option A: Using MCPM (Recommended - see Method 1 above)

Option B: Manual Configuration

  1. Find your config file location:

    • macOS/Linux: ~/.config/gemini/mcp_servers.json
    • Windows: %APPDATA%\gemini\mcp_servers.json
  2. Add LUMINO configuration:

{
  "mcpServers": {
    "lumino": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/lumino-mcp-server",
        "python",
        "main.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}
  3. Verify installation:
# Check MCP servers
gemini mcp list

# Test with a query
gemini "Show me failed pipeline runs"

Cursor IDE
  1. Open Cursor Settings:

    • Press Cmd+, (macOS) or Ctrl+, (Windows/Linux)
    • Search for "MCP" or "Model Context Protocol"
  2. Add MCP Server Configuration:

In Cursor's MCP settings, add:

{
  "mcpServers": {
    "lumino": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/lumino-mcp-server",
        "python",
        "main.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}

Alternative - Using Cursor's settings.json:

  1. Open Command Palette (Cmd+Shift+P or Ctrl+Shift+P)
  2. Type "Preferences: Open User Settings (JSON)"
  3. Add the MCP configuration:
{
  "mcp.servers": {
    "lumino": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/lumino-mcp-server",
        "python",
        "main.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}
  4. Restart Cursor IDE

  5. Verify: Open Cursor's AI chat and check if LUMINO tools are available


Configuration Notes

Replace /path/to/lumino-mcp-server with the actual path where you cloned the repository:

# Example paths:
# macOS/Linux: /Users/username/projects/lumino-mcp-server
# Windows: C:\Users\username\projects\lumino-mcp-server

# If installed via MCPM:
# ~/.mcp/servers/lumino-mcp-server/

Environment Variables (optional):

Add these to the env section if needed:

{
  "env": {
    "PYTHONUNBUFFERED": "1",
    "KUBERNETES_NAMESPACE": "default",
    "PROMETHEUS_URL": "http://prometheus:9090",
    "LOG_LEVEL": "INFO"
  }
}

Using Alternative Python Package Managers

With pip instead of uv
{
  "command": "python",
  "args": [
    "/path/to/lumino-mcp-server/main.py"
  ]
}

Note: Ensure you've activated the virtual environment first:

cd /path/to/lumino-mcp-server
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e .

With poetry
{
  "command": "poetry",
  "args": [
    "run",
    "python",
    "main.py"
  ],
  "cwd": "/path/to/lumino-mcp-server"
}

Testing Your Configuration

After configuring any client, test the connection:

  1. Check if tools are loaded:

    • Claude Desktop: Look for 🔨 hammer icon
    • Claude Code CLI: claude mcp list
    • Gemini CLI: gemini mcp list
    • Cursor: Check AI chat for available tools
  2. Test a simple query:

"List all namespaces in my Kubernetes cluster"
  3. Check server logs (if issues):
# Run server manually to see errors
cd /path/to/lumino-mcp-server
uv run python main.py

Expected output:

MCP Server running in stdio mode
Available tools: 38
Waiting for requests...

Advanced Configuration

Multiple Clusters

Configure multiple LUMINO instances for different clusters:

{
  "mcpServers": {
    "lumino-prod": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
      "env": {
        "KUBECONFIG": "/path/to/prod-kubeconfig.yaml"
      }
    },
    "lumino-dev": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
      "env": {
        "KUBECONFIG": "/path/to/dev-kubeconfig.yaml"
      }
    }
  }
}

Custom Log Level
{
  "env": {
    "LOG_LEVEL": "DEBUG",
    "MCP_SERVER_LOG_LEVEL": "DEBUG"
  }
}

Supported Transports

The server automatically detects the appropriate transport:

  • stdio - For local desktop integrations (Claude Desktop, Claude Code CLI, Gemini CLI, Cursor)
  • streamable-http - For Kubernetes deployments (when KUBERNETES_NAMESPACE is set)

Troubleshooting

Common Issues

No Kubernetes cluster found

Error: Unable to load kubeconfig

Ensure you have a valid kubeconfig at ~/.kube/config or are running inside a cluster.

Permission denied for resources

Error: Forbidden - User cannot list resource

Check your RBAC permissions. The server needs read access to the resources you want to query.
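
To verify permissions programmatically, you can ask the cluster directly with a SelfSubjectAccessReview via the kubernetes Python client; this check is a troubleshooting aid, not part of the server itself:

from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod

# Ask the API server whether the current credentials may list pods in "default".
review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            verb="list", resource="pods", namespace="default"
        )
    )
)
response = client.AuthorizationV1Api().create_self_subject_access_review(review)
print("Can list pods:", response.status.allowed)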

Tool timeout

For large clusters, some tools may time out. Use filtering options (namespace, labels) to reduce scope.

Dependencies

  • mcp[cli]>=1.10.1 - Model Context Protocol SDK
  • kubernetes>=32.0.1 - Kubernetes Python client
  • pandas>=2.0.0 - Data analysis
  • scikit-learn>=1.6.1 - ML algorithms
  • prometheus-client>=0.22.0 - Prometheus integration
  • aiohttp>=3.12.2 - Async HTTP client

Contributing

Contributions are welcome! Please read our contributing guidelines before submitting pull requests.

Security

For security vulnerabilities, please see our security policy.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Acknowledgments

  • Built with FastMCP framework
  • Inspired by the needs of SRE teams managing complex Kubernetes environments