OpenShift MCP Server v1.5
Production-Ready AI-OpenShift Integration with Secure TLS Communications
📖 What is OpenShift MCP Server?
The OpenShift MCP Server v1.5 is a production-ready Model Context Protocol (MCP) server that enables AI assistants to securely interact with OpenShift clusters through natural language. With 67 specialized tools across 6 categories and built-in TLS encryption, it provides comprehensive cluster management capabilities while protecting your OpenShift credentials and communications.
🔒 Security-First Design
- 🛡️ TLS/HTTPS Encryption: All communications between LLMs and your OpenShift cluster are encrypted using OpenShift's native Route TLS termination
- 🔐 Credential Protection: OpenShift cluster credentials are securely stored as Kubernetes Secrets and never exposed in logs or API responses
- 🎯 RBAC Integration: Comprehensive Role-Based Access Control with minimal required permissions
- 🚫 Zero Credential Exposure: LLMs never see your cluster credentials - all authentication is handled server-side
🎯 Key Benefits
- 🤖 AI-Native Operations: Securely manage OpenShift clusters through natural language with any MCP-compatible AI assistant
- 🛠️ Complete Toolset: 67 tools covering all aspects of cluster management - from basic operations to advanced debugging
- ⚡ Multi-Cluster Support: Connect to multiple OpenShift clusters with dynamic authentication
- 📊 Real-time Capabilities: Live monitoring, log streaming, resource watching, and event tracking
- 🔧 Production-Ready: Enhanced error handling, comprehensive logging, and organized testing framework
🏗️ Architecture Overview
graph TB
subgraph "AI Assistant Environment"
A1[Cursor AI]
A2[Claude Desktop]
A3[Custom LLM App]
A4[OpenAI GPT]
end
subgraph "Network Layer"
TLS[TLS/HTTPS Encryption]
end
subgraph "OpenShift Cluster"
subgraph "MCP Server Deployment"
MCP[OpenShift MCP Server v1.5]
SA[ServiceAccount]
SEC[OpenShift Secrets]
end
subgraph "Kubernetes APIs (Core Resources)"
K8API[Kubernetes API Server]
POD[Pods]
SVC[Services]
DEP[Deployments]
NOD[Nodes]
end
subgraph "OpenShift APIs (Extensions)"
OCAPI[OpenShift API Server]
RT[Routes]
PJ[Projects]
METR[Metrics API]
end
end
A1 -->|MCP Protocol over HTTPS| TLS
A2 -->|MCP Protocol over HTTPS| TLS
A3 -->|MCP Protocol over HTTPS| TLS
A4 -->|MCP Protocol over HTTPS| TLS
TLS --> MCP
%% Kubernetes API connections (Core Resources)
MCP -->|"Core Resources<br/>(Pods, Services, Deployments, Nodes)"| K8API
K8API --> POD
K8API --> SVC
K8API --> DEP
K8API --> NOD
%% OpenShift API connections (Extensions)
MCP -->|"OpenShift Extensions<br/>(Routes, Projects, Metrics)"| OCAPI
OCAPI --> RT
OCAPI --> PJ
OCAPI --> METR
SA -->|RBAC Permissions| MCP
SEC -->|Cluster Credentials| MCP
style TLS fill:#ff6b6b,stroke:#333,stroke-width:3px,color:#fff
style MCP fill:#4ecdc4,stroke:#333,stroke-width:2px,color:#fff
style K8API fill:#45b7d1,stroke:#333,stroke-width:2px,color:#fff
style OCAPI fill:#96ceb4,stroke:#333,stroke-width:2px,color:#fff
style SA fill:#ffd93d,stroke:#333,stroke-width:2px,color:#000
style SEC fill:#6bcf7f,stroke:#333,stroke-width:2px,color:#fff
How it Works:
- 🔐 Secure Communication: AI assistants connect to the MCP server via HTTPS using OpenShift Route TLS termination
- 🎯 Credential Isolation: OpenShift cluster credentials are stored as Kubernetes Secrets, never exposed to LLMs
- 🛡️ Authentication: MCP server uses ServiceAccount tokens or provided credentials to authenticate with target clusters
- 🔧 Hybrid API Approach:
- Core Resources (Pods, Services, Deployments, Nodes): Uses Kubernetes APIs for optimal performance and type safety
- OpenShift Extensions (Routes, Projects, Metrics): Uses OpenShift APIs for enhanced features and metadata
- ⚡ Tool Execution: LLMs specify cluster parameters in tool calls; MCP server handles authentication and API communication
- 📊 Response Security: All responses are sanitized to remove sensitive information before returning to LLMs
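The response-sanitization step can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the server's actual filter: the key list and the redaction placeholder are assumptions.

```python
# Minimal sketch of response sanitization (assumed behavior; the real
# server's filter list and redaction format may differ).
SENSITIVE_KEYS = {"token", "password", "cluster_token", "authorization"}

def sanitize(obj):
    """Recursively redact sensitive fields before a response reaches an LLM."""
    if isinstance(obj, dict):
        return {
            key: "***REDACTED***" if key.lower() in SENSITIVE_KEYS else sanitize(value)
            for key, value in obj.items()
        }
    if isinstance(obj, list):
        return [sanitize(item) for item in obj]
    return obj

response = {"pod": "web-1", "cluster_token": "sha256~abc", "env": [{"password": "x"}]}
print(sanitize(response))
```

The recursion matters: credentials can appear at any depth of an API response (container env vars, annotations), so a top-level filter alone is not enough.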
🚀 Deployment Guide
Prerequisites
- OpenShift cluster with cluster-admin access
- oc CLI installed and authenticated
- Internet access for pulling container images
Step 1: Deploy MCP Server to OpenShift
Method A: Quick Deployment (Recommended)
# 1. Clone the repository
git clone https://github.com/manu-joy/OpenShift-MCP-py.git
cd OpenShift-MCP-py
# 2. Create MCP project
oc new-project mcp-server --description="OpenShift MCP Server v1.5"
# 3. Deploy with pre-built image
oc new-app quay.io/massivedynamics/openshift-mcp:openshift-mcp-py-v1.5 --name=openshift-mcp-server
# 4. Apply RBAC permissions
oc apply -f deploy/v1.4/rbac-permissions.yaml
# 5. Create secure HTTPS route
oc create route edge openshift-mcp-server --service=openshift-mcp-server --port=8080
# 6. Verify deployment
oc get pods -l deployment=openshift-mcp-server
oc get route openshift-mcp-server
Method B: Build from Source
# 1. Create build configuration
oc new-build --binary --docker-image=registry.access.redhat.com/ubi9/ubi:latest --name=openshift-mcp-build
# 2. Start build from source
oc start-build openshift-mcp-build --from-dir=. --follow
# 3. Deploy the built image
oc new-app openshift-mcp-build --name=openshift-mcp-server
# 4. Apply RBAC and route (same as Method A steps 4-6)
Step 2: Configure External Cluster Access (Optional)
For multi-cluster operations, configure credentials for target clusters:
# Create secret for target cluster credentials
oc create secret generic target-cluster-creds \
--from-literal=api-url=https://api.your-cluster.com:6443 \
--from-literal=token=your-cluster-token
# Mount secret in deployment
oc set env deployment/openshift-mcp-server \
--from=secret/target-cluster-creds \
--prefix=TARGET_
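Inside the server, the injected credentials can then be read from the environment. A sketch under stated assumptions: the variable names TARGET_API_URL and TARGET_TOKEN depend on how the secret keys (api-url, token) are normalized into environment variable names, so verify them with oc set env --list before relying on them.

```python
import os

def load_target_cluster(env=None):
    """Read target-cluster credentials injected via `oc set env --prefix=TARGET_`.

    TARGET_API_URL / TARGET_TOKEN are assumed names; check the actual
    variables on your deployment with `oc set env deployment/... --list`.
    """
    env = os.environ if env is None else env
    api_url = env.get("TARGET_API_URL")
    token = env.get("TARGET_TOKEN")
    if not api_url or not token:
        raise RuntimeError("Target cluster credentials are not configured")
    # Returned in the same parameter shape the MCP tools accept.
    return {"cluster_api_url": api_url, "cluster_token": token}
```

Keeping the credentials server-side like this is what preserves the zero-credential-exposure property: tool calls reference the cluster, never the token itself.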
Step 3: Verify Installation
# Get your secure MCP server URL
export MCP_URL=https://$(oc get route openshift-mcp-server -o jsonpath='{.spec.host}')
echo "Your MCP Server URL: $MCP_URL"
# Test connectivity
curl -k $MCP_URL/
# List available tools
curl -k -X POST $MCP_URL/tools/list | jq '.tools | length'
🤖 Connecting AI Assistants
Option 1: Cursor AI (Recommended)
Cursor provides the most seamless integration experience with the OpenShift MCP Server.
Configuration Steps:
1. Get Your MCP Server URL:
oc get route openshift-mcp-server -o jsonpath='{.spec.host}'
# Example: openshift-mcp-server-mcp-server.apps.cluster.example.com
2. Configure Cursor MCP Settings:
- Open Cursor → Settings → Features → MCP
- Click "Add MCP Server"
- Configure as follows:
{
  "name": "OpenShift MCP Server v1.5",
  "type": "sse",
  "url": "https://openshift-mcp-server-mcp-server.apps.cluster.example.com"
}
3. Verify Connection:
- Save and refresh the configuration
- Verify that all 67 tools load successfully
- Confirm the connection status shows "Connected"
Example Cursor Conversations:
🎯 "Show me the current status of all deployments in the production namespace"
📊 "Monitor resource usage for pods in the monitoring namespace and identify any memory issues"
🛠️ "Create a new project called 'web-services' and deploy an nginx pod with 2 replicas"
🔍 "Debug network connectivity issues between services in the microservices namespace"
⚡ "Scale the user-service deployment to 5 replicas and monitor the rollout status"
Option 2: Claude Desktop
Configuration (claude_desktop_config.json):
{
"mcpServers": {
"openshift-mcp-v15": {
"command": "npx",
"args": [
"@modelcontextprotocol/server-fetch",
"https://your-openshift-mcp-route/"
]
}
}
}
Option 3: OpenAI GPT with Custom Tools
Python Integration Example:
import requests
import json
class OpenShiftMCPClient:
def __init__(self, mcp_url):
self.mcp_url = mcp_url.rstrip('/')
def list_tools(self):
"""Get all available MCP tools"""
response = requests.post(f"{self.mcp_url}/tools/list")
return response.json()
def execute_tool(self, tool_name, arguments, cluster_params=None):
"""Execute an OpenShift operation"""
payload = {
"name": tool_name,
"arguments": {
**arguments,
**(cluster_params or {})
}
}
response = requests.post(f"{self.mcp_url}/tools/execute", json=payload)
return response.json()
# Usage example
client = OpenShiftMCPClient("https://your-mcp-server")
# List all pods in default namespace
pods = client.execute_tool("list_pods", {"namespace": "default"})
# Scale deployment with cluster targeting
result = client.execute_tool("scale_deployment", {
"name": "my-app",
"namespace": "production",
"replicas": 3,
# Multi-cluster parameters
"cluster_api_url": "https://api.target-cluster.com:6443",
"cluster_token": "your-cluster-token"
})
Option 4: Custom LLM Integration
Direct API Calls:
# Set your MCP server URL
MCP_URL="https://your-openshift-mcp-server"
# Get cluster information
curl -k -X POST $MCP_URL/tools/execute \
-H "Content-Type: application/json" \
-d '{
"name": "get_cluster_info",
"arguments": {}
}'
# List pods with metrics
curl -k -X POST $MCP_URL/tools/execute \
-H "Content-Type: application/json" \
-d '{
"name": "list_pods",
"arguments": {
"namespace": "default",
"show_metrics": true
}
}'
# Multi-cluster deployment scaling
curl -k -X POST $MCP_URL/tools/execute \
-H "Content-Type: application/json" \
-d '{
"name": "scale_deployment",
"arguments": {
"name": "web-service",
"namespace": "production",
"replicas": 5,
"cluster_api_url": "https://api.remote-cluster.com:6443",
"cluster_token": "sha256~..."
}
}'
🛠️ Complete Tool Reference
🔧 Core Tools (18 tools)
Essential OpenShift operations with multi-cluster support
| Tool | Description | Key Parameters | Example Usage |
|---|---|---|---|
get_cluster_info | Get cluster version and status | cluster_api_url, cluster_token | Basic cluster connectivity check |
list_projects | List all projects/namespaces | cluster_params | View available namespaces |
create_project | Create new project | name, display_name, description | Set up new application namespace |
list_pods | List pods with optional metrics | namespace, show_metrics, label_selector | Monitor running workloads |
get_pod | Get detailed pod information | name, namespace, include_events | Troubleshoot specific pods |
create_pod | Create pod from specification | pod_spec, namespace | Deploy single container workloads |
delete_pod | Delete pod with grace period | name, namespace, grace_period | Remove problematic pods |
list_deployments | List deployments in namespace | namespace, label_selector | View application deployments |
get_deployment | Get deployment details | name, namespace | Inspect deployment configuration |
scale_deployment | Scale deployment replicas | name, namespace, replicas | Adjust application capacity |
list_services | List services | namespace, service_type | View network services |
get_service | Get service details | name, namespace | Inspect service configuration |
create_service | Create service | service_spec, namespace | Expose applications |
delete_service | Delete service | name, namespace | Remove network services |
list_routes | List OpenShift routes | namespace | View external access points |
get_route | Get route details | name, namespace | Inspect route configuration |
apply_yaml | Apply YAML configuration | yaml_content, namespace | Deploy from manifests |
get_resource_yaml | Get resource as YAML | api_version, kind, name, namespace | Export configurations |
Example Usage:
# Create a new application namespace
create_project(
name="my-web-app",
display_name="My Web Application",
description="Production web application services"
)
# Scale deployment based on load
scale_deployment(
name="user-service",
namespace="my-web-app",
replicas=5
)
# Multi-cluster pod listing
list_pods(
namespace="production",
cluster_api_url="https://api.prod-cluster.com:6443",
cluster_token="sha256~prod-token"
)
📊 Monitoring Tools (7 tools)
Comprehensive monitoring and performance tracking
| Tool | Description | Key Parameters | Example Usage |
|---|---|---|---|
pods_top | Pod resource usage (CPU/Memory) | namespace, sort_by, top_n | Identify resource-heavy pods |
nodes_top | Node resource usage | sort_by, include_system_pods | Monitor cluster capacity |
events_list | Cluster events and warnings | namespace, event_types, since | Troubleshoot recent issues |
cluster_resource_usage | Overall cluster metrics | include_storage | Assess cluster health |
logs_get | Enhanced log retrieval | pod_name, namespace, lines, since | Debug application issues |
metrics_get | Custom Prometheus metrics | metric_name, namespace, time_range | Custom monitoring queries |
resource_usage | Resource usage statistics | resource_type, namespace | Capacity planning |
Example Usage:
# Monitor high CPU pods
pods_top(
namespace="production",
sort_by="cpu",
top_n=10
)
# Check recent warning events
events_list(
event_types=["Warning"],
since="2h",
namespace="web-services"
)
# Get application logs with context
logs_get(
pod_name="user-service-abc123",
namespace="production",
lines=100,
since="1h"
)
🔍 Debug Tools (11 tools)
Advanced debugging and troubleshooting
| Tool | Description | Key Parameters | Example Usage |
|---|---|---|---|
pods_exec | Execute commands in pods | name, namespace, command, container | Run diagnostic commands |
pods_logs | Enhanced log retrieval | name, namespace, follow, tail | Stream application logs |
pods_debug_network | Network connectivity tests | name, namespace, target, port | Diagnose network issues |
pods_debug_dns | DNS resolution testing | name, namespace, hostname | Troubleshoot DNS problems |
pods_get_processes | List running processes | name, namespace, container | Inspect container processes |
pods_get_env | Get environment variables | name, namespace, filter | Check configuration |
port_forward | Port forwarding setup | name, namespace, local_port, remote_port | Access services locally |
debug_pod | Create debug containers | target_pod, debug_image | Advanced troubleshooting |
troubleshoot | Automated troubleshooting | resource_type, name, namespace | Get troubleshooting steps |
system_status | System health diagnostics | include_nodes, include_storage | Overall system health |
pods_run_debug | Run debug pods | image, name, namespace | Create debugging environments |
Example Usage:
# Execute diagnostic command in pod
pods_exec(
name="web-server-pod",
namespace="production",
command=["curl", "-v", "http://api-service:8080/health"],
container="web-server"
)
# Test network connectivity
pods_debug_network(
name="client-pod",
namespace="production",
target="database-service",
port=5432
)
# Create debug environment
pods_run_debug(
image="nicolaka/netshoot:latest",
name="network-debug",
namespace="production"
)
⚙️ Config Tools (7 tools)
Configuration and context management
| Tool | Description | Key Parameters | Example Usage |
|---|---|---|---|
config_view | View kubeconfig | minify, include_credentials | Check cluster configuration |
config_current_context | Current context info | None | Verify active cluster |
config_list_contexts | List all contexts | None | See available clusters |
secrets_list | List secrets in namespace | namespace, secret_type | Manage sensitive data |
configmaps_list | List configmaps | namespace | View configuration data |
config_validate | Validate configurations | config_type | Ensure valid setup |
namespaces_list_all | Cross-context namespaces | include_system | Multi-cluster namespace view |
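Example Usage (illustrative calls in the same style as the other tool categories; the parameter values here are assumptions, not required defaults):
# Verify the active cluster context
config_current_context()
# List TLS secrets in a namespace
secrets_list(
namespace="production",
secret_type="kubernetes.io/tls"
)
# View namespaces across all configured contexts, excluding system ones
namespaces_list_all(include_system=False)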
🛠️ Resource Tools (12 tools)
Generic resource operations with full CRUD support
| Tool | Description | Key Parameters | Example Usage |
|---|---|---|---|
resources_list | List any resource type | api_version, kind, namespace | Query any Kubernetes resource |
resources_get | Get specific resource | api_version, kind, name, namespace | Retrieve resource details |
resources_delete | Delete resources | api_version, kind, name, namespace | Remove resources safely |
resources_patch | Patch resources | api_version, kind, name, patch_data | Update resource fields |
resources_create_or_update | Create/update from YAML | resource_data, namespace | Deploy or update resources |
resource_create | Create from definition | resource_definition | Create new resources |
resource_update | Update existing resources | resource_definition | Modify existing resources |
resource_patch | Strategic/merge patches | api_version, kind, name, patch_data | Targeted updates |
resource_watch | Watch for real-time changes | api_version, kind, namespace | Monitor resource changes |
resources_scale | Scale resources | api_version, kind, name, replicas | Adjust resource capacity |
resources_wait | Wait for conditions | api_version, kind, name, condition | Wait for resource states |
resource_get | Enhanced resource retrieval | api_version, kind, name, namespace | Detailed resource inspection |
Example Usage:
# List custom resources
resources_list(
api_version="route.openshift.io/v1",
kind="Route",
namespace="web-services"
)
# Update resource with patch
resources_patch(
api_version="apps/v1",
kind="Deployment",
name="my-app",
namespace="production",
patch_data={"spec": {"replicas": 3}}
)
# Watch for deployment changes
resource_watch(
api_version="apps/v1",
kind="Deployment",
namespace="production"
)
🔒 Security & Best Practices
TLS/HTTPS Security
- 🔐 Route TLS Termination: All external communications are encrypted using OpenShift Route TLS termination
- 🛡️ Certificate Management: OpenShift automatically manages TLS certificates for routes
- 🚫 No Plain HTTP: Production deployments should only accept HTTPS connections
Credential Protection
- 🗝️ Kubernetes Secrets: Cluster credentials stored securely as Kubernetes Secrets
- 🎯 ServiceAccount Authentication: Prefer ServiceAccount tokens over user credentials
- 📝 Audit Logging: All tool executions are logged for security auditing
RBAC Configuration
The MCP server operates with minimal required permissions:
# Core permissions for cluster operations
- apiGroups: [""]
resources: ["namespaces", "pods", "services", "nodes", "configmaps", "secrets", "events"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
# Application management
- apiGroups: ["apps"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "create", "update", "patch", "delete", "scale"]
# OpenShift-specific resources
- apiGroups: ["route.openshift.io"]
resources: ["routes"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
Multi-Cluster Security
When connecting to external clusters:
- 🔑 Token-Based Authentication: Use service account tokens or OAuth tokens
- ⏰ Token Rotation: Regularly rotate cluster access tokens
- 🎯 Principle of Least Privilege: Grant only necessary permissions
- 📊 Audit Trail: Monitor and log all cross-cluster operations
🏗️ API Architecture & Design Decisions
🔧 Hybrid API Approach
The OpenShift MCP Server uses a strategic hybrid API approach that leverages both Kubernetes and OpenShift APIs based on the specific resource type and functionality needed:
✅ Kubernetes APIs for Core Resources
We use the standard kubernetes Python client for these resources:
- Pods (kubernetes.client.CoreV1Api)
- Services (kubernetes.client.CoreV1Api)
- Deployments (kubernetes.client.AppsV1Api)
- Nodes (kubernetes.client.CoreV1Api)
Why This Design?
- ✅ Performance: Direct API access with full type safety
- ✅ Reliability: Mature, stable client libraries
- ✅ Correctness: These resources are identical in Kubernetes and OpenShift
- ✅ Maintainability: Well-documented, standard Kubernetes patterns
🚀 OpenShift APIs for Extensions
We use OpenShift-specific APIs for enhanced resources:
- Projects (project.openshift.io/v1) - Enhanced namespaces with display names, descriptions, and RBAC
- Routes (route.openshift.io/v1) - OpenShift-specific ingress with TLS termination
- Cluster Info (config.openshift.io/v1) - OpenShift cluster version and configuration
Why This Design?
- ✅ Enhanced Features: Access to OpenShift-specific metadata and capabilities
- ✅ Native Integration: Proper OpenShift project lifecycle and RBAC
- ✅ User Experience: Display names, descriptions, and rich project metadata
🔄 Fallback Strategy
For maximum compatibility, the MCP server implements intelligent fallbacks:
# Example: Projects API with Kubernetes namespace fallback
from kubernetes import client
from kubernetes.client.rest import ApiException

def list_projects(custom_objects_api: client.CustomObjectsApi,
                  core_v1_api: client.CoreV1Api):
    try:
        # Try the OpenShift Projects API first
        projects = custom_objects_api.list_cluster_custom_object(
            group="project.openshift.io", version="v1", plural="projects"
        )
        return format_openshift_projects(projects)
    except ApiException as e:
        if e.status in (404, 403):  # API not available or insufficient permissions
            # Fall back to Kubernetes namespaces
            namespaces = core_v1_api.list_namespace()
            return format_kubernetes_namespaces(namespaces)
        raise  # propagate unrelated API errors instead of swallowing them
Benefits:
- ✅ OpenShift-First: Prioritizes OpenShift features when available
- ✅ Kubernetes Compatibility: Works on plain Kubernetes clusters
- ✅ Graceful Degradation: Maintains functionality even with limited permissions
- ✅ Clear Feedback: API responses indicate which API was used
📊 API Usage Summary
| Resource Type | Primary API | Fallback API | Reason |
|---|---|---|---|
| Pods | Kubernetes CoreV1 | None | Identical in both platforms |
| Services | Kubernetes CoreV1 | None | Identical in both platforms |
| Deployments | Kubernetes AppsV1 | None | Identical in both platforms |
| Nodes | Kubernetes CoreV1 | None | Identical in both platforms |
| Projects | OpenShift Projects API | Kubernetes Namespaces | Enhanced metadata in OpenShift |
| Routes | OpenShift Routes API | None | OpenShift-only feature |
🎯 Why Not "Pure OpenShift" APIs?
Common Misconception: "Since this is an OpenShift MCP server, shouldn't we use only OpenShift APIs?"
Reality:
- ❌ OpenShift doesn't have separate APIs for Pods, Services, Deployments
- ✅ OpenShift IS Kubernetes with extensions - it doesn't replace core APIs
- ✅ Using Kubernetes APIs for core resources is the recommended approach
- ✅ This hybrid approach is what Red Hat recommends for OpenShift applications
🔧 Implementation Best Practices
- API Selection Logic: Use OpenShift APIs only when they provide additional value
- Error Handling: Implement proper fallbacks for environment compatibility
- Response Format: Clearly indicate which API was used in responses
- Performance: Leverage typed clients for core resources, dynamic clients for custom resources
- Maintainability: Follow established patterns from the Kubernetes and OpenShift ecosystems
🚀 Advanced Usage Scenarios
Multi-Cluster Management
# Deploy to multiple clusters simultaneously
clusters = [
{
"name": "production",
"api_url": "https://api.prod.example.com:6443",
"token": "sha256~prod-token"
},
{
"name": "staging",
"api_url": "https://api.staging.example.com:6443",
"token": "sha256~staging-token"
}
]
# Deploy application to all clusters
for cluster in clusters:
create_pod(
pod_spec=my_app_spec,
namespace="web-services",
cluster_api_url=cluster["api_url"],
cluster_token=cluster["token"]
)
Automated Troubleshooting Workflow
# Comprehensive troubleshooting sequence
def troubleshoot_application(app_name, namespace):
    # 1. Check deployment status
    deployment = get_deployment(name=app_name, namespace=namespace)
    # 2. Monitor resource usage
    resource_usage = pods_top(namespace=namespace, label_selector=f"app={app_name}")
    # 3. Check recent events
    events = events_list(namespace=namespace, since="1h", event_types=["Warning"])
    # 4. Test network connectivity, collecting the result for each pod
    pods = list_pods(namespace=namespace, label_selector=f"app={app_name}")
    connectivity_results = []
    for pod in pods["items"]:
        connectivity_results.append(pods_debug_network(
            name=pod["metadata"]["name"],
            namespace=namespace,
            target="external-api.com",
            port=443
        ))
    # 5. Generate troubleshooting report
    return {
        "deployment_status": deployment,
        "resource_usage": resource_usage,
        "recent_events": events,
        "connectivity_tests": connectivity_results
    }
GitOps Integration
# Apply GitOps manifests
def deploy_from_git(repo_url, branch, namespace):
# Fetch manifests (implemented by your GitOps tool)
manifests = fetch_manifests(repo_url, branch)
# Apply each manifest
for manifest in manifests:
apply_yaml(
yaml_content=manifest,
namespace=namespace
)
# Monitor deployment progress
return monitor_deployment_progress(namespace)
🔧 Troubleshooting Guide
Connection Issues
Problem: AI assistant cannot connect to MCP server
Solutions:
# Check route accessibility
oc get route openshift-mcp-server
curl -k https://$(oc get route openshift-mcp-server -o jsonpath='{.spec.host}')
# Check pod status
oc get pods -l deployment=openshift-mcp-server
oc logs deployment/openshift-mcp-server --tail=50
# Verify TLS certificate
openssl s_client -connect $(oc get route openshift-mcp-server -o jsonpath='{.spec.host}'):443
Problem: Tools return authentication errors
Solutions:
# Check ServiceAccount permissions
oc describe sa default
oc describe clusterrolebinding openshift-mcp-server-v14
# Verify secret mounting
oc describe deployment openshift-mcp-server
Performance Issues
Problem: Slow tool execution
Solutions:
# Check pod resources
oc describe pod -l deployment=openshift-mcp-server
# Monitor API server response times
oc get --raw /metrics | grep apiserver_request_duration
# Scale MCP server if needed
oc scale deployment openshift-mcp-server --replicas=2
Tool-Specific Issues
Problem: nodes_top returns "Metrics server not available"
Solution: Install metrics server in your cluster or contact your cluster administrator.
Problem: helm_* tools fail
Solution: Helm tools require Helm to be installed in the MCP server container. These tools are excluded from core functionality in v1.5.
📊 Monitoring & Observability
Built-in Health Checks
The MCP server provides several health endpoints:
# Basic health check
curl -k https://your-mcp-server/
# Detailed status
curl -k https://your-mcp-server/status
# Tool availability
curl -k -X POST https://your-mcp-server/tools/list
Logging
Monitor MCP server logs:
# Real-time logs
oc logs deployment/openshift-mcp-server -f
# Recent errors
oc logs deployment/openshift-mcp-server --tail=100 | grep ERROR
# Tool execution logs
oc logs deployment/openshift-mcp-server --tail=200 | grep "Executing tool"
🧪 Testing Framework
The OpenShift MCP Server v1.5 includes a comprehensive testing framework located in the testing/ directory:
Test Categories
- Integration Tests: End-to-end functionality testing
- Comprehensive Tests: All 67 tools validation
- Multi-cluster Tests: Cross-cluster operations
- Performance Tests: Load and stress testing
Running Tests
# Run comprehensive test suite
python testing/comprehensive_aro_test.py https://your-mcp-server
# Run multi-cluster tests
python testing/test_v13_multi_cluster.py https://your-mcp-server
# Run specific integration tests
python -m pytest testing/test_integration.py -v
See testing/README.md for complete testing documentation.
📈 What's New in v1.5
🎯 Production Readiness
- ✅ Organized Testing Framework: Comprehensive test suite in a dedicated testing/ directory
- ✅ Enhanced Security: TLS/HTTPS encryption for all communications
- ✅ Multi-Cluster Support: Dynamic authentication for multiple OpenShift clusters
- ✅ API-First Design: Consistent error handling and response formatting
- ✅ Container Registry: Production images available on Quay.io
🛠️ Tool Improvements
- ✅ 67 Production-Ready Tools: 98.5% functionality (excluding Helm dependencies)
- ✅ Enhanced Parameter Validation: Comprehensive input validation and sanitization
- ✅ Improved Error Handling: Detailed error messages with troubleshooting guidance
- ✅ Resource Lifecycle Management: Proper cleanup and hierarchical resource management
📚 Documentation & Usability
- ✅ Comprehensive Tool Reference: Detailed parameter documentation and examples
- ✅ Architecture Diagrams: Visual representation of security and data flow
- ✅ Multi-LLM Support: Instructions for Cursor, Claude, OpenAI GPT, and custom integrations
- ✅ Production Deployment Guide: Step-by-step OpenShift deployment instructions
🤝 Contributing
We welcome contributions to the OpenShift MCP Server project!
Development Setup
# Clone the repository
git clone https://github.com/manu-joy/OpenShift-MCP-py.git
cd OpenShift-MCP-py
# Create development environment
python3.12 -m venv venv-mcp
source venv-mcp/bin/activate
# Install dependencies
pip install -r requirements.txt
# Run tests
python -m pytest testing/ -v
Contribution Process
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Add tests for new functionality
- Ensure all tests pass: python -m pytest testing/ -v
- Commit changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Submit a Pull Request
📄 License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
🙏 Acknowledgments
- Red Hat OpenShift Team for the robust enterprise platform
- Anthropic for the Model Context Protocol specification
- Cursor Team for the excellent AI-powered development environment
- OpenShift Community for continuous innovation and support
- Contributors who helped make this project production-ready
🚀 Ready to transform your OpenShift management with AI? Deploy the MCP server and start managing your clusters through natural language today!
Made with ❤️ for the OpenShift and AI community