LUMINO MCP Server
An open source MCP (Model Context Protocol) server providing AI-powered tools for Kubernetes, OpenShift, and Tekton monitoring, analysis, and troubleshooting.
Overview
LUMINO MCP Server transforms how Site Reliability Engineers (SREs) and DevOps teams interact with Kubernetes clusters. By exposing 37 specialized tools through the Model Context Protocol, it enables AI assistants to:
- Monitor cluster health, resources, and pipeline status in real-time
- Analyze logs, events, and anomalies using statistical and ML techniques
- Troubleshoot failed pipelines with automated root cause analysis
- Predict resource bottlenecks and potential issues before they occur
- Simulate configuration changes to assess impact before deployment
Features
Kubernetes & OpenShift Operations
- Namespace and pod management
- Resource querying with flexible output formats
- Label-based resource search across clusters
- OpenShift operator and MachineConfigPool status
- etcd log analysis
Tekton Pipeline Intelligence
- Pipeline and task run monitoring across namespaces
- Detailed log retrieval with optional cleaning
- Failed pipeline root cause analysis
- Cross-cluster pipeline tracing
- CI/CD performance baselining
Advanced Log Analysis
- Smart log summarization with configurable detail levels
- Streaming analysis for large log volumes
- Hybrid analysis combining multiple strategies
- Semantic search using NLP techniques
- Anomaly detection with severity classification
Predictive & Proactive Monitoring
- Statistical anomaly detection using z-score analysis
- Predictive log analysis for early warning
- Resource bottleneck forecasting
- Certificate health monitoring with expiry alerts
- TLS certificate issue investigation
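The statistical anomaly detection above is based on z-scores: a metric sample is flagged when it deviates from the mean by more than a chosen number of standard deviations. The snippet below is a minimal illustration of that general technique, not the server's actual detect_anomalies implementation; the threshold values are assumptions.

```python
from statistics import mean, stdev

def zscore_anomalies(samples: list[float], threshold: float = 3.0) -> list[tuple[int, float]]:
    """Return (index, z-score) pairs for samples far from the mean.

    Illustrative only; the real tool's contract and defaults may differ.
    """
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [
        (i, (x - mu) / sigma)
        for i, x in enumerate(samples)
        if abs(x - mu) / sigma > threshold
    ]

# Example: a CPU-usage series with one obvious spike
print(zscore_anomalies([0.2, 0.25, 0.22, 0.21, 0.24, 3.5], threshold=2.0))
```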
Event Intelligence
- Smart event retrieval with multiple strategies
- Progressive event analysis (overview to deep-dive)
- Advanced analytics with ML pattern detection
- Log-event correlation
Simulation & What-If Analysis
- Monte Carlo simulation for configuration changes
- Impact analysis before deployment
- Risk assessment with configurable tolerance
- Affected component identification
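To make the what-if idea concrete, a Monte Carlo impact estimate samples many plausible outcomes of a proposed change and reports how often a limit would be violated. The sketch below shows the general shape of that approach; the distribution parameters and the 512 MiB limit are illustrative assumptions, not the what_if_scenario_simulator tool's actual model.

```python
import random

def probability_of_oom(mem_limit_mib: float, mean_usage_mib: float,
                       stddev_mib: float, trials: int = 10_000) -> float:
    """Estimate how often a workload would exceed a proposed memory limit."""
    random.seed(42)  # reproducible runs for the example
    exceeded = sum(
        1 for _ in range(trials)
        if random.gauss(mean_usage_mib, stddev_mib) > mem_limit_mib
    )
    return exceeded / trials

# Example: proposed 512 MiB limit against a workload averaging 400 MiB +/- 80 MiB
print(f"estimated OOM risk: {probability_of_oom(512, 400, 80):.1%}")
```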
Requirements
- Python 3.10+
- Access to a Kubernetes/OpenShift cluster (for Kubernetes tools)
- uv for dependency management (recommended)
Installation
Using uv (recommended)
# Clone the repository
git clone https://github.com/spre-sre/lumino-mcp-server.git
cd lumino-mcp-server
# Install dependencies
uv sync
# Run the server
uv run python main.py
Using pip
# Clone the repository
git clone https://github.com/spre-sre/lumino-mcp-server.git
cd lumino-mcp-server
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -e .
# Run the server
python main.py
Usage
Local Mode (stdio transport)
By default, the server runs in local mode using stdio transport, suitable for direct integration with MCP clients:
python main.py
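For a quick check outside a desktop client, you can connect to the server over stdio with the MCP Python SDK (the mcp package is already a dependency). This is a minimal sketch, assuming the repository was cloned to /path/to/lumino-mcp-server and uv is on your PATH:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server the same way an MCP client would (path is an example).
    params = StdioServerParameters(
        command="uv",
        args=["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools exposed:", [tool.name for tool in tools.tools])
            # Call one of the Kubernetes tools listed under "Available Tools".
            result = await session.call_tool("list_namespaces", arguments={})
            print(result.content)

asyncio.run(main())
```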
Kubernetes Mode (HTTP streaming transport)
When running inside Kubernetes, set the namespace environment variable to enable HTTP streaming:
export KUBERNETES_NAMESPACE=my-namespace
python main.py
The server automatically detects the environment and switches transport modes.
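A minimal sketch of how that detection can work, assuming the server is built on FastMCP (as noted in the acknowledgments); the exact logic in main.py may differ:

```python
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lumino")  # tool registrations omitted

if __name__ == "__main__":
    # If a namespace variable is set we are presumably inside a cluster, so
    # serve streamable HTTP; otherwise fall back to stdio for desktop clients.
    in_kubernetes = bool(os.getenv("KUBERNETES_NAMESPACE") or os.getenv("K8S_NAMESPACE"))
    mcp.run(transport="streamable-http" if in_kubernetes else "stdio")
```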
Configuration
Kubernetes Authentication
The server automatically detects Kubernetes configuration:
- In-cluster config - When running inside a Kubernetes pod
- Local kubeconfig - When running locally (uses ~/.kube/config)
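The usual pattern for this kind of auto-detection with the official Kubernetes Python client is to try the in-cluster config first and fall back to the local kubeconfig; a sketch of that pattern (not necessarily the server's exact code):

```python
from kubernetes import client, config

def load_k8s_config() -> client.CoreV1Api:
    """Prefer in-cluster credentials, fall back to ~/.kube/config."""
    try:
        config.load_incluster_config()   # works only inside a pod
    except config.ConfigException:
        config.load_kube_config()        # uses KUBECONFIG or ~/.kube/config
    return client.CoreV1Api()

api = load_k8s_config()
print("namespaces visible:", len(api.list_namespace().items))
```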
Environment Variables
| Variable | Description | Default |
|---|---|---|
KUBERNETES_NAMESPACE | Namespace for K8s mode | - |
K8S_NAMESPACE | Alternative namespace variable | - |
PROMETHEUS_URL | Prometheus server URL for metrics | Auto-detected |
Available Tools
Kubernetes Core (4 tools)
| Tool | Description |
|---|---|
list_namespaces | List all namespaces in the cluster |
list_pods_in_namespace | List pods with status and placement info |
get_kubernetes_resource | Get any Kubernetes resource with flexible output |
search_resources_by_labels | Search resources across namespaces by labels |
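For context on what these tools return, the underlying Kubernetes client calls look roughly like the sketch below: a label-based pod search across namespaces, similar in spirit to search_resources_by_labels. The label selector is an example and the tools' actual output formats may differ.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Label-based search across all namespaces (illustrative selector).
pods = v1.list_pod_for_all_namespaces(
    label_selector="app.kubernetes.io/name=tekton-pipelines"
)
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
          f"{pod.status.phase} on {pod.spec.node_name}")
```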
Tekton Pipelines (6 tools)
| Tool | Description |
|---|---|
list_pipelineruns | List PipelineRuns with status and timing |
list_taskruns | List TaskRuns, optionally filtered by pipeline |
get_pipelinerun_logs | Retrieve pipeline logs with optional cleaning |
list_recent_pipeline_runs | Recent pipelines across all namespaces |
find_pipeline | Find pipelines by pattern matching |
get_tekton_pipeline_runs_status | Cluster-wide pipeline status summary |
Log Analysis (6 tools)
| Tool | Description |
|---|---|
analyze_logs | Extract error patterns from log text |
smart_summarize_pod_logs | Intelligent log summarization |
stream_analyze_pod_logs | Streaming analysis for large logs |
analyze_pod_logs_hybrid | Combined analysis strategies |
detect_log_anomalies | Anomaly detection with severity levels |
semantic_log_search | NLP-based semantic log search |
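As a rough illustration of how semantic_log_search might rank log lines against a natural-language query (the server's actual NLP pipeline is not documented here), TF-IDF plus cosine similarity with scikit-learn, already a dependency, is the classic baseline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

log_lines = [
    "Readiness probe failed: connection refused",
    "Successfully pulled image registry.example.com/app:1.2.3",
    "OOMKilled: container exceeded memory limit",
    "TLS handshake error: certificate has expired",
]
query = "why is the pod running out of memory"

# Vectorize logs and query together, then rank lines by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(log_lines + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, line in sorted(zip(scores, log_lines), reverse=True):
    print(f"{score:.2f}  {line}")
```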
Event Analysis (3 tools)
| Tool | Description |
|---|---|
smart_get_namespace_events | Smart event retrieval with strategies |
progressive_event_analysis | Multi-level event analysis |
advanced_event_analytics | ML-powered event pattern detection |
Failure Analysis & RCA (2 tools)
| Tool | Description |
|---|---|
analyze_failed_pipeline | Root cause analysis for failed pipelines |
automated_triage_rca_report_generator | Automated incident reports |
Resource Monitoring (4 tools)
| Tool | Description |
|---|---|
check_resource_constraints | Detect resource issues in namespace |
detect_anomalies | Statistical anomaly detection |
prometheus_query | Execute PromQL queries |
resource_bottleneck_forecaster | Predict resource exhaustion |
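The prometheus_query tool executes PromQL against the configured PROMETHEUS_URL. A minimal sketch of such a query using aiohttp (already a dependency) and Prometheus' standard /api/v1/query endpoint; the URL and the metric expression are placeholders:

```python
import asyncio
import aiohttp

async def prometheus_query(base_url: str, promql: str) -> list[dict]:
    """Run an instant PromQL query via Prometheus' HTTP API."""
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base_url}/api/v1/query", params={"query": promql}) as resp:
            resp.raise_for_status()
            payload = await resp.json()
    return payload["data"]["result"]

# Example: per-namespace CPU usage (URL and query are placeholders)
result = asyncio.run(prometheus_query(
    "http://prometheus:9090",
    'sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)',
))
for series in result:
    print(series["metric"].get("namespace"), series["value"][1])
```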
Namespace Investigation (2 tools)
| Tool | Description |
|---|---|
conservative_namespace_overview | Focused namespace health check |
adaptive_namespace_investigation | Dynamic investigation based on query |
Certificate & Security (3 tools)
| Tool | Description |
|---|---|
investigate_tls_certificate_issues | Find TLS-related problems |
check_cluster_certificate_health | Certificate expiry monitoring |
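Outside the cluster, the same kind of expiry check can be reproduced with the Python standard library; a sketch that inspects a single endpoint's certificate (the host and the 30-day warning window are examples, not the tools' behavior):

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return days remaining on the certificate served at host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires_at - time.time()) / 86400

remaining = days_until_expiry("example.com")  # replace with your endpoint
print(f"certificate expires in {remaining:.0f} days"
      + (" - renew soon!" if remaining < 30 else ""))
```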
OpenShift Specific (3 tools)
| Tool | Description |
|---|---|
get_machine_config_pool_status | MachineConfigPool status and updates |
get_openshift_cluster_operator_status | Cluster operator health |
get_etcd_logs | etcd log retrieval and analysis |
CI/CD Performance (2 tools)
| Tool | Description |
|---|---|
ci_cd_performance_baselining_tool | Pipeline performance baselines |
cross_cluster_pipeline_tracer | Trace pipelines across clusters |
Topology & Prediction (2 tools)
| Tool | Description |
|---|---|
live_system_topology_mapper | Real-time system topology mapping |
predictive_log_analyzer | Predict issues from log patterns |
Simulation (1 tool)
| Tool | Description |
|---|---|
what_if_scenario_simulator | Simulate configuration changes |
Architecture
lumino-mcp-server/
├── main.py # Entry point with transport detection
├── src/
│ ├── server-mcp.py # MCP server with all 37 tools
│ └── helpers/
│ ├── constants.py # Shared constants
│ ├── event_analysis.py # Event processing logic
│ ├── failure_analysis.py # RCA algorithms
│ ├── log_analysis.py # Log processing
│ ├── resource_topology.py # Topology mapping
│ ├── semantic_search.py # NLP search
│ └── utils.py # Utility functions
└── pyproject.toml # Project configuration
MCP Client Integration
Method 1: Using MCPM (Recommended for Claude Code CLI / Gemini CLI)
The easiest way to install LUMINO MCP Server for Claude Code CLI or Gemini CLI is using MCPM - an MCP server package manager.
Install MCPM
# Clone and build MCPM
git clone https://github.com/spre-sre/mcpm.git
cd mcpm
go build -o mcpm .
# Optional: Add to PATH
sudo mv mcpm /usr/local/bin/
Requirements: Go 1.23+, Git, Python 3.10+, uv (or pip)
Install LUMINO MCP Server
# Install from GitHub repository (short syntax)
mcpm install @spre-sre/lumino-mcp-server
# Or use full GitHub URL
mcpm install https://github.com/spre-sre/lumino-mcp-server.git
# For GitLab repositories (if hosted on GitLab)
mcpm install gl:@spre-sre/lumino-mcp-server
# Install for specific client
mcpm install @spre-sre/lumino-mcp-server --claude # For Claude Code CLI
mcpm install @spre-sre/lumino-mcp-server --gemini # For Gemini CLI
# Install globally (works with both Claude and Gemini)
mcpm install @spre-sre/lumino-mcp-server --global
Short syntax explained:
- @owner/repo - installs from GitHub (default: https://github.com/owner/repo.git)
- gl:@owner/repo - installs from GitLab (https://gitlab.com/owner/repo.git)
- Full URL - works with any Git repository
This will:
- Clone the repository to ~/.mcp/servers/lumino-mcp-server/
- Auto-detect the Python project and install dependencies using uv (or pip)
- Register with the Claude Code CLI or Gemini CLI configuration automatically
Manage LUMINO
# List installed servers
mcpm list
# Update LUMINO
mcpm update lumino-mcp-server
# Remove LUMINO
mcpm remove lumino-mcp-server
Method 2: Manual Configuration
If you prefer manual setup or need to configure Claude Desktop / Cursor, follow these client-specific guides:
Claude Desktop
- Find your config file location:
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
  - Linux: ~/.config/Claude/claude_desktop_config.json
- Add LUMINO configuration:
{
"mcpServers": {
"lumino": {
"command": "uv",
"args": [
"run",
"--directory",
"/path/to/lumino-mcp-server",
"python",
"main.py"
],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}
- Restart Claude Desktop
- Verify: Look for the hammer icon (🔨) in Claude Desktop to see available tools
Claude Code CLI
Option A: Using MCPM (Recommended - see Method 1 above)
Option B: Manual Configuration
- Find your config file location:
  - macOS/Linux: ~/.config/claude/mcp_servers.json
  - Windows: %APPDATA%\claude\mcp_servers.json
- Add LUMINO configuration:
{
"mcpServers": {
"lumino": {
"command": "uv",
"args": [
"run",
"--directory",
"/path/to/lumino-mcp-server",
"python",
"main.py"
],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}
- Verify installation:
# Check MCP servers
claude mcp list
# Test with a query
claude "List all namespaces in my cluster"
Gemini CLI
Option A: Using MCPM (Recommended - see Method 1 above)
Option B: Manual Configuration
- Find your config file location:
  - macOS/Linux: ~/.config/gemini/mcp_servers.json
  - Windows: %APPDATA%\gemini\mcp_servers.json
- Add LUMINO configuration:
{
"mcpServers": {
"lumino": {
"command": "uv",
"args": [
"run",
"--directory",
"/path/to/lumino-mcp-server",
"python",
"main.py"
],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}
- Verify installation:
# Check MCP servers
gemini mcp list
# Test with a query
gemini "Show me failed pipeline runs"
Cursor IDE
- Open Cursor Settings:
  - Press Cmd+, (macOS) or Ctrl+, (Windows/Linux)
  - Search for "MCP" or "Model Context Protocol"
- Add MCP Server Configuration:
  In Cursor's MCP settings, add:
{
"mcpServers": {
"lumino": {
"command": "uv",
"args": [
"run",
"--directory",
"/path/to/lumino-mcp-server",
"python",
"main.py"
],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}
Alternative - Using Cursor's settings.json:
- Open the Command Palette (Cmd+Shift+P or Ctrl+Shift+P)
- Type "Preferences: Open User Settings (JSON)"
- Add the MCP configuration:
{
"mcp.servers": {
"lumino": {
"command": "uv",
"args": [
"run",
"--directory",
"/path/to/lumino-mcp-server",
"python",
"main.py"
],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}
- Restart Cursor IDE
- Verify: Open Cursor's AI chat and check if LUMINO tools are available
Configuration Notes
Replace /path/to/lumino-mcp-server with the actual path where you cloned the repository:
# Example paths:
# macOS/Linux: /Users/username/projects/lumino-mcp-server
# Windows: C:\Users\username\projects\lumino-mcp-server
# If installed via MCPM:
# ~/.mcp/servers/lumino-mcp-server/
Environment Variables (optional):
Add these to the env section if needed:
{
"env": {
"PYTHONUNBUFFERED": "1",
"KUBERNETES_NAMESPACE": "default",
"PROMETHEUS_URL": "http://prometheus:9090",
"LOG_LEVEL": "INFO"
}
}
Using Alternative Python Package Managers
With pip instead of uv
{
"command": "python",
"args": [
"/path/to/lumino-mcp-server/main.py"
]
}
Note: Ensure you've activated the virtual environment first:
cd /path/to/lumino-mcp-server
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e .
With poetry
{
"command": "poetry",
"args": [
"run",
"python",
"main.py"
],
"cwd": "/path/to/lumino-mcp-server"
}
Testing Your Configuration
After configuring any client, test the connection:
- Check if tools are loaded:
  - Claude Desktop: Look for the 🔨 hammer icon
  - Claude Code CLI: claude mcp list
  - Gemini CLI: gemini mcp list
  - Cursor: Check AI chat for available tools
- Test a simple query:
  "List all namespaces in my Kubernetes cluster"
- Check server logs (if issues):
# Run server manually to see errors
cd /path/to/lumino-mcp-server
uv run python main.py
Expected output:
MCP Server running in stdio mode
Available tools: 38
Waiting for requests...
Advanced Configuration
Multiple Clusters
Configure multiple LUMINO instances for different clusters:
{
"mcpServers": {
"lumino-prod": {
"command": "uv",
"args": ["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
"env": {
"KUBECONFIG": "/path/to/prod-kubeconfig.yaml"
}
},
"lumino-dev": {
"command": "uv",
"args": ["run", "--directory", "/path/to/lumino-mcp-server", "python", "main.py"],
"env": {
"KUBECONFIG": "/path/to/dev-kubeconfig.yaml"
}
}
}
}
Custom Log Level
{
"env": {
"LOG_LEVEL": "DEBUG",
"MCP_SERVER_LOG_LEVEL": "DEBUG"
}
}
Supported Transports
The server automatically detects the appropriate transport:
- stdio - For local desktop integrations (Claude Desktop, Claude Code CLI, Gemini CLI, Cursor)
- streamable-http - For Kubernetes deployments (when KUBERNETES_NAMESPACE is set)
Troubleshooting
Common Issues
No Kubernetes cluster found
Error: Unable to load kubeconfig
Ensure you have a valid kubeconfig at ~/.kube/config or are running inside a cluster.
Permission denied for resources
Error: Forbidden - User cannot list resource
Check your RBAC permissions. The server needs read access to the resources you want to query.
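One way to confirm what the server's service account or kubeconfig user can actually do is a SelfSubjectAccessReview. A sketch using the Kubernetes Python client; the verb, resource, and namespace in the example checks are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

def can_i(verb: str, resource: str, namespace: str | None = None) -> bool:
    """Ask the API server whether the current identity may perform an action."""
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                verb=verb, resource=resource, namespace=namespace
            )
        )
    )
    response = client.AuthorizationV1Api().create_self_subject_access_review(review)
    return bool(response.status.allowed)

# Example checks for permissions the server typically needs
print("list pods:", can_i("list", "pods", "default"))
print("list events:", can_i("list", "events", "default"))
```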
Tool timeout
For large clusters, some tools may time out. Use filtering options (namespace, labels) to reduce scope.
Dependencies
- mcp[cli]>=1.10.1 - Model Context Protocol SDK
- kubernetes>=32.0.1 - Kubernetes Python client
- pandas>=2.0.0 - Data analysis
- scikit-learn>=1.6.1 - ML algorithms
- prometheus-client>=0.22.0 - Prometheus integration
- aiohttp>=3.12.2 - Async HTTP client
Contributing
Contributions are welcome! Please read our contributing guidelines before submitting pull requests.
Security
For security vulnerabilities, please see our security policy.
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Acknowledgments
- Built with FastMCP framework
- Inspired by the needs of SRE teams managing complex Kubernetes environments