MCP Server for Apache Spark History Server
Connect AI agents to Apache Spark History Server for intelligent job analysis and performance monitoring.

Transform your Spark infrastructure monitoring with AI! This Model Context Protocol (MCP) server enables AI agents to analyze job performance, identify bottlenecks, and provide intelligent insights from your Spark History Server data.
What is This?
Spark History Server MCP bridges AI agents with your existing Apache Spark infrastructure, enabling you to:
- Query job details through natural language
- Analyze performance metrics across applications
- Compare multiple jobs to identify regressions
- Investigate failures with detailed error analysis
- Generate insights from historical execution data
Architecture

```mermaid
graph TB
    A[AI Agent/LLM] --> F[MCP Client]
    B[LlamaIndex Agent] --> F
    C[LangGraph] --> F
    D[Claude Desktop] --> F
    E[Amazon Q CLI] --> F
    F --> G[Spark History MCP Server]
    G --> H[Prod Spark History Server]
    G --> I[Staging Spark History Server]
    G --> J[Dev Spark History Server]
    H --> K[Prod Event Logs]
    I --> L[Staging Event Logs]
    J --> M[Dev Event Logs]
```
Components:
- Spark History Server: your existing infrastructure serving Spark event data
- MCP Server: this project; provides MCP tools for querying Spark data
- AI Agents: LangChain, custom agents, or any MCP-compatible client
Quick Start
Prerequisites
- Existing Spark History Server (running and accessible)
- Python 3.12+
- uv package manager
Setup & Testing

```bash
git clone https://github.com/DeepDiagnostix-AI/mcp-apache-spark-history-server.git
cd mcp-apache-spark-history-server

# Install Task (if not already installed)
brew install go-task  # macOS; see https://taskfile.dev/installation/ for others

# Set up and start testing
task start-spark-bg   # Start Spark History Server with sample data (default Spark 3.5.5)
# Or specify a different Spark version:
# task start-spark-bg spark_version=3.5.2
task start-mcp-bg     # Start MCP server

# Optional: opens MCP Inspector on http://localhost:6274 for interactive testing.
# Requires Node.js 22.7.5+ (check https://github.com/modelcontextprotocol/inspector for latest requirements)
task start-inspector-bg  # Start MCP Inspector

# When done, run `task stop-all`
```
If you just want to run the MCP server without cloning the repository:

```bash
# Run with uv without installing the module
uvx --from mcp-apache-spark-history-server spark-mcp

# OR install with pip and run with python. Use of a venv is highly encouraged.
python3 -m venv spark-mcp && source spark-mcp/bin/activate
pip install mcp-apache-spark-history-server
python3 -m spark_history_mcp.core.main

# Deactivate the venv when finished
deactivate
```
Sample Data
The repository includes real Spark event logs for testing:
- spark-bcec39f6201b42b9925124595baad260: successful ETL job
- spark-110be3a8424d4a2789cb88134418217b: data processing job
- spark-cc4d115f011443d787f03a71a476a745: multi-stage analytics job
See the repository documentation for how to use them.
Server Configuration
Edit config.yaml for your Spark History Server:

```yaml
servers:
  local:
    default: true
    url: "http://your-spark-history-server:18080"
    auth:  # optional
      username: "user"
      password: "pass"

mcp:
  transports:
    - streamable-http  # streamable-http or stdio
  port: "18888"
  debug: true
```
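As a sketch of how the `default: true` flag might be resolved (assumed behavior for illustration; the server's actual selection logic may differ), a query that names no server would fall back to the entry flagged as default:

```python
def default_server(servers: dict) -> str:
    """Return the name of the server marked default: true.

    Assumed behavior: when a query does not name a server, the entry
    flagged `default` is used; otherwise fall back to the first
    configured server.
    """
    for name, cfg in servers.items():
        if cfg.get("default"):
            return name
    return next(iter(servers))

# Mirrors the `servers:` section of the config above
servers = {
    "local": {"default": True, "url": "http://your-spark-history-server:18080"},
}
print(default_server(servers))  # local
```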
Screenshots
- Get Spark Application
- Job Performance Comparison
Available Tools
Note: These tools are subject to change as we scale and improve the performance of the MCP server.
The MCP server provides 17 specialized tools organized by analysis patterns. LLMs can intelligently select and combine these tools based on user queries.
Application Information
Basic application metadata and overview

| Tool | Description |
|---|---|
| get_application | Get detailed information about a specific Spark application including status, resource usage, duration, and attempt details |

Job Analysis
Job-level performance analysis and identification

| Tool | Description |
|---|---|
| list_jobs | Get a list of all jobs for a Spark application with optional status filtering |
| list_slowest_jobs | Get the N slowest jobs for a Spark application (excludes running jobs by default) |

Stage Analysis
Stage-level performance deep dive and task metrics

| Tool | Description |
|---|---|
| list_stages | Get a list of all stages for a Spark application with optional status filtering and summaries |
| list_slowest_stages | Get the N slowest stages for a Spark application (excludes running stages by default) |
| get_stage | Get information about a specific stage with optional attempt ID and summary metrics |
| get_stage_task_summary | Get statistical distributions of task metrics for a specific stage (execution times, memory usage, I/O metrics) |

Executor & Resource Analysis
Resource utilization, executor performance, and allocation tracking

| Tool | Description |
|---|---|
| list_executors | Get executor information with optional inactive executor inclusion |
| get_executor | Get information about a specific executor including resource allocation, task statistics, and performance metrics |
| get_executor_summary | Aggregates metrics across all executors (memory usage, disk usage, task counts, performance metrics) |
| get_resource_usage_timeline | Get a chronological view of resource allocation and usage patterns including executor additions/removals |

Configuration & Environment
Spark configuration, environment variables, and runtime settings

| Tool | Description |
|---|---|
| get_environment | Get comprehensive Spark runtime configuration including JVM info, Spark properties, system properties, and classpath |

SQL & Query Analysis
SQL performance analysis and execution plan comparison

| Tool | Description |
|---|---|
| list_slowest_sql_queries | Get the top N slowest SQL queries for an application with detailed execution metrics |
| compare_sql_execution_plans | Compare SQL execution plans between two Spark jobs, analyzing logical/physical plans and execution metrics |

Performance & Bottleneck Analysis
Intelligent bottleneck identification and performance recommendations

| Tool | Description |
|---|---|
| get_job_bottlenecks | Identify performance bottlenecks by analyzing stages, tasks, and executors, with actionable recommendations |

Comparative Analysis
Cross-application comparison for regression detection and optimization

| Tool | Description |
|---|---|
| compare_job_environments | Compare Spark environment configurations between two jobs to identify differences in properties and settings |
| compare_job_performance | Compare performance metrics between two Spark jobs including execution times, resource usage, and task distribution |
How LLMs Use These Tools
Query pattern examples:
- "Why is my job slow?" → get_job_bottlenecks + list_slowest_stages + get_executor_summary
- "Compare today vs yesterday" → compare_job_performance + compare_job_environments
- "What's wrong with stage 5?" → get_stage + get_stage_task_summary
- "Show me resource usage over time" → get_resource_usage_timeline + get_executor_summary
- "Find my slowest SQL queries" → list_slowest_sql_queries + compare_sql_execution_plans
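As a rough illustration of the combinations above, here is a toy keyword router. This is purely illustrative: a real MCP-connected LLM selects tools itself from their descriptions, not from a hard-coded mapping.

```python
# Toy keyword-to-tool mapping mirroring the query patterns above.
# Checked in order; "sql" comes first so SQL queries are not caught by "slow".
ROUTES = {
    "sql": ["list_slowest_sql_queries", "compare_sql_execution_plans"],
    "slow": ["get_job_bottlenecks", "list_slowest_stages", "get_executor_summary"],
    "compare": ["compare_job_performance", "compare_job_environments"],
    "stage": ["get_stage", "get_stage_task_summary"],
    "resource": ["get_resource_usage_timeline", "get_executor_summary"],
}

def pick_tools(query: str) -> list[str]:
    q = query.lower()
    for keyword, tools in ROUTES.items():
        if keyword in q:
            return tools
    return ["get_application"]  # default: basic application overview

print(pick_tools("Why is my job slow?"))
```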
AWS Integration Guides
If you are an existing AWS user looking to analyze your Spark applications, we provide detailed setup guides for:
- AWS Glue: connect to the Glue Spark History Server
- Amazon EMR: use the EMR Persistent UI for Spark analysis
These guides provide step-by-step instructions for setting up the Spark History Server MCP with your AWS services.
Kubernetes Deployment
Deploy using Kubernetes with Helm:
Work in progress: we are still testing and will soon publish the container image and Helm registry to GitHub for easy deployment.

```bash
# Deploy with Helm
helm install spark-history-mcp ./deploy/kubernetes/helm/spark-history-mcp/

# Production configuration
helm install spark-history-mcp ./deploy/kubernetes/helm/spark-history-mcp/ \
  --set replicaCount=3 \
  --set autoscaling.enabled=true \
  --set monitoring.enabled=true
```

See deploy/kubernetes/ for complete deployment manifests and configuration options.
Multi-Spark History Server Setup
Set up multiple Spark History Servers in config.yaml and choose which server the LLM should interact with for each query.

```yaml
servers:
  production:
    default: true
    url: "http://prod-spark-history:18080"
    auth:
      username: "user"
      password: "pass"
  staging:
    url: "http://staging-spark-history:18080"
```

User query: "Can you get application <app_id> using production server?"

AI tool request:

```json
{
  "app_id": "<app_id>",
  "server": "production"
}
```

AI tool response:

```json
{
  "id": "<app_id>",
  "name": "app_name",
  "coresGranted": null,
  "maxCores": null,
  "coresPerExecutor": null,
  "memoryPerExecutorMB": null,
  "attempts": [
    {
      "attemptId": null,
      "startTime": "2023-09-06T04:44:37.006000Z",
      "endTime": "2023-09-06T04:45:40.431000Z",
      "lastUpdated": "2023-09-06T04:45:42Z",
      "duration": 63425,
      "sparkUser": "spark",
      "appSparkVersion": "3.3.0",
      "completed": true
    }
  ]
}
```
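The `duration` field in the response is in milliseconds and agrees with the attempt's start and end timestamps, which is easy to verify (field names and values taken from the sample response above):

```python
from datetime import datetime

# Attempt data copied from the sample response above; duration is in milliseconds.
attempt = {
    "startTime": "2023-09-06T04:44:37.006000Z",
    "endTime": "2023-09-06T04:45:40.431000Z",
    "duration": 63425,
}

def attempt_duration_ms(attempt: dict) -> int:
    """Compute elapsed milliseconds between an attempt's start and end times."""
    start = datetime.fromisoformat(attempt["startTime"].replace("Z", "+00:00"))
    end = datetime.fromisoformat(attempt["endTime"].replace("Z", "+00:00"))
    return round((end - start).total_seconds() * 1000)

print(attempt_duration_ms(attempt))  # 63425, matching the reported duration
```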
Environment Variables
- `SHS_MCP_PORT`: port for the MCP server (default: 18888)
- `SHS_MCP_DEBUG`: enable debug mode (default: false)
- `SHS_MCP_ADDRESS`: address for the MCP server (default: localhost)
- `SHS_MCP_TRANSPORT`: MCP transport mode (default: streamable-http)
- `SHS_SERVERS_*_URL`: URL for a specific server
- `SHS_SERVERS_*_AUTH_USERNAME`: username for a specific server
- `SHS_SERVERS_*_AUTH_PASSWORD`: password for a specific server
- `SHS_SERVERS_*_AUTH_TOKEN`: token for a specific server
- `SHS_SERVERS_*_VERIFY_SSL`: whether to verify SSL for a specific server (true/false)
- `SHS_SERVERS_*_TIMEOUT`: HTTP request timeout in seconds for a specific server (default: 30)
- `SHS_SERVERS_*_EMR_CLUSTER_ARN`: EMR cluster ARN for a specific server
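A sketch of how the `SHS_SERVERS_*` variables could map onto the `servers:` section of config.yaml (an assumed mapping for illustration; the actual parsing logic lives in the server):

```python
def server_urls_from_env(environ: dict) -> dict:
    """Collect SHS_SERVERS_<NAME>_URL variables into a servers mapping.

    Assumed mapping for illustration: SHS_SERVERS_PRODUCTION_URL becomes
    servers["production"]["url"]; the real implementation may differ.
    """
    servers: dict = {}
    prefix, suffix = "SHS_SERVERS_", "_URL"
    for key, value in environ.items():
        if key.startswith(prefix) and key.endswith(suffix):
            name = key[len(prefix):-len(suffix)].lower()
            servers.setdefault(name, {})["url"] = value
    return servers

print(server_urls_from_env({"SHS_SERVERS_PRODUCTION_URL": "http://prod-spark-history:18080"}))
```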
AI Agent Integration
Quick start options:

| Integration | Transport | Best For |
|---|---|---|
|  | HTTP | Development, testing tools |
|  | STDIO | Interactive analysis |
|  | STDIO | Command-line automation |
|  | HTTP | IDE integration, code-centric analysis |
|  | HTTP | Multi-agent workflows |
|  | HTTP | Multi-agent workflows |
Example Use Cases

Performance Investigation
AI query: "Why is my ETL job running slower than usual?"
MCP actions:
- Analyze application metrics
- Compare with historical performance
- Identify bottleneck stages
- Generate optimization recommendations

Failure Analysis
AI query: "What caused job 42 to fail?"
MCP actions:
- Examine failed tasks and error messages
- Review executor logs and resource usage
- Identify root cause and suggest fixes

Comparative Analysis
AI query: "Compare today's batch job with yesterday's run"
MCP actions:
- Compare execution times and resource usage
- Identify performance deltas
- Highlight configuration differences
Contributing
See the contributing guide for full guidelines on contributions.

License
Apache License 2.0; see the license file for details.

Trademark Notice
This project is built for use with Apache Spark™ History Server. It is not affiliated with or endorsed by the Apache Software Foundation.

Connect your Spark infrastructure to AI agents.
Get Started | View Tools | Contribute
Built by the community, for the community.