# Databricks Jobs MCP Server

A Model Context Protocol (MCP) server for interacting with the Databricks Jobs API. This server provides tools to manage and monitor Databricks jobs through MCP.
## Features
- List jobs with pagination and filtering
- Get detailed job information
- Run jobs with custom parameters
- List job runs with various filters
- Get run details and output
- Cancel active runs
- Delete jobs
## Available Transports

This server supports two transport methods:

1. **Stdio Transport** (`index.ts`): a standard MCP server using stdio transport, suitable for Claude Desktop integration.
2. **Streamable HTTP Transport** (`index-http.ts`): an HTTP-based server with streaming capabilities, suitable for web applications and HTTP clients.
## Setup

### Prerequisites

- Node.js 18+
- A Databricks workspace and personal access token

### Environment Variables

```bash
DATABRICKS_HOST=https://your-workspace.azuredatabricks.net
DATABRICKS_TOKEN=your-personal-access-token
PORT=3000  # Only for HTTP transport
```
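The server presumably reads these variables at startup. A minimal sketch of loading and validating them, assuming the variable names above; `loadConfig` and the config shape are hypothetical helpers, not part of this project's API:

```typescript
// Sketch: load and validate Databricks settings from the environment.
// Variable names follow this README; loadConfig is a hypothetical helper.
interface DatabricksConfig {
  host: string;   // e.g. https://your-workspace.azuredatabricks.net
  token: string;  // personal access token
  port: number;   // only used by the HTTP transport
}

function loadConfig(env: Record<string, string | undefined>): DatabricksConfig {
  const host = env.DATABRICKS_HOST;
  const token = env.DATABRICKS_TOKEN;
  if (!host || !token) {
    throw new Error("DATABRICKS_HOST and DATABRICKS_TOKEN must be set");
  }
  return { host, token, port: Number(env.PORT ?? 3000) };
}

// In the server you would call loadConfig(process.env) once at startup.
```

Failing fast on missing credentials at startup gives a clearer error than a 401 from the Databricks API later.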
### Installation

```bash
npm install
```

### Build

```bash
# Build both versions
npm run build

# Or build the HTTP version only
npm run build:http
```
## Usage

### Stdio Transport (for Claude Desktop)

1. Build and run:

   ```bash
   npm run build
   npm start
   ```

2. Claude Desktop configuration: add to `~/.config/claude/claude_desktop_config.json`:

   ```json
   {
     "mcpServers": {
       "databricks-jobs": {
         "command": "node",
         "args": ["/path/to/your/dist/index.js"],
         "env": {
           "DATABRICKS_HOST": "https://your-workspace.azuredatabricks.net",
           "DATABRICKS_TOKEN": "your-token-here"
         }
       }
     }
   }
   ```
### HTTP Transport (for web apps)

1. Run the HTTP server:

   ```bash
   npm run dev:http   # Development
   # or
   npm run build && npm run start:http   # Production
   ```

2. Endpoints:

   - `POST http://localhost:3000/mcp` - Main MCP endpoint
   - `GET http://localhost:3000/sse` - Server-Sent Events endpoint
   - `GET http://localhost:3000/health` - Health check
   - `DELETE http://localhost:3000/mcp/:sessionId` - Close session

3. Features:

   - Session management with UUIDs
   - CORS enabled
   - Streaming support via SSE
   - Health monitoring
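Requests to the `/mcp` endpoint are JSON-RPC 2.0 messages, which MCP uses as its wire format. A small sketch of the envelope an HTTP client would POST; `mcpRequest` is a hypothetical helper, not part of this server:

```typescript
// Sketch: the JSON-RPC 2.0 envelope an HTTP client POSTs to /mcp.
// The endpoint path is from this README; the envelope shape is standard
// JSON-RPC, which MCP uses as its wire format.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function mcpRequest(
  id: number,
  method: string,
  params: Record<string, unknown> = {},
): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method, params };
}

// e.g. list the server's tools, sent via
// fetch("http://localhost:3000/mcp", { method: "POST", body: JSON.stringify(listTools), ... })
const listTools = mcpRequest(1, "tools/list");
```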
## Available Tools

### list_jobs

List all jobs in the workspace with optional filtering and pagination.

Parameters:

- `limit` (number): Maximum jobs to return (default: 25, max: 25)
- `offset` (number): Pagination offset (default: 0)
- `expand_tasks` (boolean): Include task details (default: false)
- `name` (string): Filter by job name
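Since `limit` is capped at 25, a client pages through larger workspaces by advancing `offset`. A sketch of the `tools/call` arguments for each page; the parameter names come from this README, and `listJobsPage` is a hypothetical client-side helper:

```typescript
// Sketch: arguments for paging through list_jobs 25 jobs at a time.
// Parameter names and the 25-job cap come from this README; the
// tools/call envelope is standard MCP JSON-RPC.
function listJobsPage(page: number, nameFilter?: string) {
  return {
    jsonrpc: "2.0",
    id: page + 1,
    method: "tools/call",
    params: {
      name: "list_jobs",
      arguments: {
        limit: 25,            // capped at 25 per the parameter list above
        offset: page * 25,    // skip the pages already fetched
        expand_tasks: false,
        ...(nameFilter !== undefined ? { name: nameFilter } : {}),
      },
    },
  };
}
```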
### get_job

Get detailed information about a specific job.

Parameters:

- `job_id` (number, required): Job identifier
### run_job_now

Trigger a new run of an existing job with optional parameter overrides.

Parameters:

- `job_id` (number, required): Job identifier
- `jar_params` (array): JAR task parameters
- `notebook_params` (object): Notebook task parameters
- `python_params` (array): Python task parameters
- `spark_submit_params` (array): Spark submit parameters
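For example, a `run_job_now` call that overrides notebook parameters might look like the following. The `job_id` and widget names here are hypothetical; only the parameter names come from this README:

```typescript
// Sketch: a run_job_now invocation overriding notebook widget values.
// job_id and the widget names are hypothetical; parameter names follow
// this README, and the envelope is a standard MCP tools/call.
const runNow = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "run_job_now",
    arguments: {
      job_id: 123,                                  // hypothetical job id
      notebook_params: { run_date: "2024-01-01" },  // keyed by notebook widget name
    },
  },
};
```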
### list_runs

List job runs with filtering and pagination options.

Parameters:

- `job_id` (number): Filter by job ID
- `active_only` (boolean): Show only active runs
- `completed_only` (boolean): Show only completed runs
- `limit` (number): Maximum runs to return
- `offset` (number): Pagination offset
- `start_time_from` (number): Filter by start time (Unix timestamp)
- `start_time_to` (number): Filter by end time (Unix timestamp)
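A sketch of deriving the time-window values from calendar dates. This assumes the timestamps are Unix epoch milliseconds, which is what the Databricks Jobs API generally uses for run start times; verify against your API version before relying on it:

```typescript
// Sketch: computing start_time_from / start_time_to values for list_runs.
// Assumes Unix epoch *milliseconds* (the usual Databricks Jobs API convention
// for run start times); verify against your workspace's API version.
function toEpochMillis(iso: string): number {
  const ms = Date.parse(iso);
  if (Number.isNaN(ms)) throw new Error(`invalid date: ${iso}`);
  return ms;
}

// Runs that started during 2024-01-01 (UTC):
const runWindow = {
  start_time_from: toEpochMillis("2024-01-01T00:00:00Z"),
  start_time_to: toEpochMillis("2024-01-02T00:00:00Z"),
};
```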
### get_run

Get detailed information about a specific job run.

Parameters:

- `run_id` (number, required): Run identifier
- `include_history` (boolean): Include repair history
### get_run_output

Get the output of a completed job run.

Parameters:

- `run_id` (number, required): Run identifier
### cancel_run

Cancel an active job run.

Parameters:

- `run_id` (number, required): Run identifier
### delete_job

Delete a job (this cannot be undone).

Parameters:

- `job_id` (number, required): Job identifier
## Docker Deployment

### Quick Start with Docker

1. Set up the environment:

   ```bash
   cp .env.example .env
   # Edit .env with your Databricks credentials
   ```

2. Build and run:

   ```bash
   ./docker-manage.sh build
   ./docker-manage.sh up
   ```

3. Access the server:

   - Main server: http://localhost:3000
   - Health check: http://localhost:3000/health
### Docker Management Script

The `docker-manage.sh` script simplifies container management:

```bash
# Build the Docker image
./docker-manage.sh build

# Start services (development mode)
./docker-manage.sh up

# Start with nginx proxy (production mode)
./docker-manage.sh up-prod

# Stop services
./docker-manage.sh down

# View logs
./docker-manage.sh logs -f

# Check health
./docker-manage.sh health

# Open a shell in the container
./docker-manage.sh shell

# Clean up everything
./docker-manage.sh clean
```
### Production Deployment

For production deployment behind an nginx reverse proxy:

```bash
# Start with the production profile
./docker-manage.sh up-prod
```

This includes:

- Nginx reverse proxy with SSL support
- Rate limiting
- Security headers
- Proper SSE handling
- Health checks
## Development

Run in development mode:

```bash
# Stdio version
npm run dev

# HTTP version
npm run dev:http

# Or with Docker
./docker-manage.sh up
```

Test the HTTP server:

```bash
# Health check
curl http://localhost:3000/health

# Example MCP request
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {}
  }'
```
## Architecture
- Transport Layer: Supports both stdio and HTTP transports
- Session Management: HTTP version includes session tracking
- Error Handling: Comprehensive error handling with Databricks API error details
- Type Safety: Full TypeScript implementation with strict typing
## Dependencies

- `@modelcontextprotocol/sdk`: MCP SDK for the server implementation
- `axios`: HTTP client for Databricks API calls
- `express`: Web framework (HTTP version only)
## License

MIT