
MCP Server - Hello World

A simple, production-ready template for building Model Context Protocol (MCP) servers using FastMCP and FastAPI. This project demonstrates how to create custom tools that AI assistants can discover and invoke.

Key Concepts

  • Tools: Callable functions that AI assistants can invoke (e.g., search databases, process data, call APIs)
  • Server: Exposes tools via the MCP protocol over HTTP
  • Client: Applications (like Claude, AI assistants) that discover and call tools

Features

  • ✅ FastMCP-based server with HTTP streaming support
  • ✅ FastAPI integration for additional REST endpoints
  • ✅ Example tools: health check, user information, arithmetic operations, and Databricks job triggering
  • ✅ Production-ready project structure
  • ✅ Ready for Databricks Apps deployment
  • ✅ Comprehensive testing with integration tests

Project Structure

mcp-server-hello-world/
├── server/
│   ├── app.py                    # FastAPI application and MCP server setup
│   ├── main.py                   # Entry point for running the server
│   ├── tools.py                  # MCP tool definitions
│   └── utils.py                  # Databricks authentication helpers
├── scripts/
│   └── dev/
│       ├── start_server.sh           # Start the MCP server locally
│       ├── query_remote.sh           # Interactive script for testing deployed app with OAuth
│       ├── query_remote.py           # Query MCP client (deployed app) with health and user auth
│       └── generate_oauth_token.py   # Generate OAuth tokens for Databricks
├── tests/
│   └── test_integration_server.py   # Integration tests for MCP server
├── pyproject.toml                # Project metadata and dependencies
├── requirements.txt              # Python dependencies (for pip)
├── app.yaml                      # Databricks Apps configuration
├── Claude.md                     # AI assistant context and documentation
└── README.md                     # Project documentation

Prerequisites

  • Python 3.11 or higher
  • uv (recommended) or pip

Installation

Option 1: Using uv (Recommended)

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

Option 2: Using pip

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Running the Server

Development Mode

# Quick start with script (syncs dependencies and starts server)
./scripts/dev/start_server.sh

# Or manually using uv (default port 8000)
uv run custom-mcp-server

# Or specify a custom port
uv run custom-mcp-server --port 8080

# Or using the installed command (after pip install -e .)
custom-mcp-server --port 3000

The server will start on http://localhost:8000 by default (or your specified port).

Accessing the Server

  • MCP Endpoints: http://localhost:8000/mcp
  • Available Tools:
    • health: Check server status
    • get_current_user: Get authenticated user information
    • add_numbers: Add two numbers together (example of parameterized tool)
    • trigger_job_run: Trigger a Databricks job run by job ID

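For reference, MCP clients invoke these tools by posting JSON-RPC 2.0 messages to the /mcp endpoint. Below is a minimal sketch of how a tools/call request for add_numbers might be built; the field names follow the MCP specification, while the transport framing and headers are normally handled for you by client libraries such as DatabricksMCPClient:

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request body as used by the MCP protocol."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# A call to the add_numbers tool with two arguments:
msg = build_tool_call(1, "add_numbers", {"a": 2.5, "b": 4.0})
print(msg)
```

You normally never construct these messages by hand, but seeing the wire format helps when debugging with curl or server logs.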
Testing the MCP Server

This project includes test scripts to verify your MCP server is working correctly in both local and deployed environments.

Integration Tests

The project includes automated integration tests that validate the MCP server functionality:

# Run integration tests
uv run pytest tests/ -v -s

What the tests do:

  • Automatically start the MCP server
  • Test that list_tools() works correctly
  • Test parameter-free tools (health, get_current_user)
  • Test parameterized tools (add_numbers with various inputs)
  • Test error handling (trigger_job_run with invalid job ID)
  • Automatically clean up the server after tests complete
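For illustration, the parameterized-tool check boils down to something like the following, with add_numbers reimplemented inline so the snippet is self-contained; the actual tests in tests/test_integration_server.py exercise the running server over MCP rather than calling the function directly:

```python
def add_numbers(a: float, b: float) -> dict:
    # Inline mirror of the server-side tool, for illustration only.
    return {"result": a + b, "a": a, "b": b}

def test_add_numbers_various_inputs():
    """Check the tool's structured return for several input pairs."""
    for a, b in [(1, 2), (-3.5, 3.5), (0, 0)]:
        out = add_numbers(a, b)
        assert out["result"] == a + b
        assert out["a"] == a and out["b"] == b

test_add_numbers_various_inputs()
print("ok")
```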

Manual Testing

End-to-end test your locally-running MCP server

First, start the server:

./scripts/dev/start_server.sh

Then, from a separate Python session, connect to the local MCP server (no authentication required) and list the available tools:

from databricks_mcp import DatabricksMCPClient

mcp_client = DatabricksMCPClient(
    server_url="http://localhost:8000"
)

# List available MCP tools
print(mcp_client.list_tools())

End-to-end test your deployed MCP server

After deploying to Databricks Apps, use the interactive shell script to test with user-level OAuth authentication:

chmod +x scripts/dev/query_remote.sh
./scripts/dev/query_remote.sh

The script will guide you through:

  1. Profile selection: Choose your Databricks CLI profile
  2. App name: Enter your deployed app name
  3. Automatic configuration: Extracts app scopes and URLs automatically
  4. OAuth flow: Generates user OAuth token via browser
  5. End-to-end test: Calls list_tools() and invokes each tool it returns

What it does:

  • Retrieves app configuration using databricks apps get
  • Extracts user authorization scopes from effective_user_api_scopes
  • Gets workspace host from your Databricks profile
  • Optionally accepts a job ID to test the trigger_job_run tool
  • Generates OAuth token with the correct scopes
  • Tests MCP client with user-level authentication
  • Verifies all tools work correctly: health, get_current_user, add_numbers, and optionally trigger_job_run

This test simulates the real end-user experience when they authorize your app and use it with their credentials.

Alternatively, test manually with command-line arguments:

python scripts/dev/query_remote.py \
    --host "https://your-workspace.cloud.databricks.com" \
    --token "eyJr...Dkag" \
    --app-url "https://your-workspace.cloud.databricks.com/serving-endpoints/your-app"

The scripts/dev/query_remote.py script connects to your deployed MCP server with OAuth authentication and tests both the health check and user authorization functionality.

Adding New Tools

To add a new tool to your MCP server:

  1. Open server/tools.py
  2. Add a new function inside load_tools() with the @mcp_server.tool decorator:
@mcp_server.tool
def add_numbers(a: float, b: float) -> dict:
    """
    Add two numbers together.
    
    This tool performs basic arithmetic addition of two numeric values.
    
    Args:
        a (float): The first number to add
        b (float): The second number to add
    
    Returns:
        dict: A dictionary containing the result and input values
    """
    result = a + b
    return {
        "result": result,
        "a": a,
        "b": b,
    }
  3. Restart the server - the new tool will automatically be available to clients

Tool Best Practices

  • Clear naming: Use descriptive, action-oriented names
  • Comprehensive docstrings: AI uses these to understand when to call your tool
  • Type hints: Help with validation and documentation
  • Structured returns: Return dicts or Pydantic models for consistent data
  • Error handling: Use try-except blocks and return error information
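A hypothetical divide_numbers tool applying these practices might look like the following; the @mcp_server.tool decorator is omitted so the snippet stands alone, and the name and return shape are illustrative rather than part of this project:

```python
def divide_numbers(a: float, b: float) -> dict:
    """Divide a by b.

    Performs floating-point division and returns a structured result.
    On failure it returns error information instead of raising, so a
    calling AI assistant can recover gracefully.

    Args:
        a (float): The dividend
        b (float): The divisor

    Returns:
        dict: {"success": True, "result": ...} or {"success": False, "error": ...}
    """
    try:
        return {"success": True, "result": a / b}
    except ZeroDivisionError as e:
        return {"success": False, "error": str(e)}
```

Note how the docstring spells out when and how to call the tool, and how both branches return the same dict shape.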

Connecting to Databricks

The utils.py module provides two helper methods for interacting with Databricks resources via the Databricks SDK Workspace Client:

When deployed as a Databricks App:

  • get_workspace_client() - Returns a client authenticated as the service principal associated with the app. See App Authorization for more details.
  • get_user_authenticated_workspace_client() - Returns a client authenticated as the end user with scopes specified by the app creator. See User Authorization for more details.

When running locally:

  • Both methods return a client authenticated as the current developer, since no service principal identity exists in the local environment.

Example usage in tools:

from server import utils

# Example 1: Get current user information (user-authenticated)
@mcp_server.tool
def get_current_user() -> dict:
    """Get current user information."""
    try:
        w = utils.get_user_authenticated_workspace_client()
        user = w.current_user.me()
        return {
            "display_name": user.display_name,
            "user_name": user.user_name,
            "active": user.active,
        }
    except Exception as e:
        return {"error": str(e)}

# Example 2: Trigger a Databricks job (app-authenticated)
@mcp_server.tool
def trigger_job_run(job_id: int) -> dict:
    """Trigger a Databricks job run."""
    try:
        w = utils.get_workspace_client()
        run = w.jobs.run_now(job_id=job_id)
        
        # Construct the run page URL
        workspace_host = w.config.host.rstrip("/")
        run_page_url = f"{workspace_host}/jobs/{job_id}/runs/{run.run_id}"
        
        return {
            "success": True,
            "run_id": run.run_id,
            "job_id": job_id,
            "run_page_url": run_page_url,
        }
    except Exception as e:
        return {"success": False, "job_id": job_id, "error": str(e)}

See the get_current_user and trigger_job_run tools in server/tools.py for complete implementations.

Generating OAuth Tokens

For advanced use cases, you can manually generate OAuth tokens for Databricks workspace access using the provided script. This implements the OAuth U2M (User-to-Machine) flow.

Generate Workspace-Level OAuth Token

python scripts/dev/generate_oauth_token.py \
    --host https://your-workspace.cloud.databricks.com \
    --scopes "all-apis offline_access"

Parameters:

  • --host: Databricks workspace URL (required)
  • --scopes: Space-separated OAuth scopes (default: all-apis offline_access)
  • --redirect-uri: Callback URI (default: http://localhost:8020)

Note: The script uses the databricks-cli OAuth client ID by default.

The script will:

  1. Generate a PKCE code verifier and challenge
  2. Open your browser for authorization
  3. Capture the authorization code via local HTTP server
  4. Exchange the code for an access token
  5. Display the token response as JSON (token is valid for 1 hour)
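Steps 1 and 4 rely on PKCE (RFC 7636). A self-contained sketch of how a code verifier and its S256 challenge can be generated, assuming the script uses the standard construction:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code verifier and its S256 challenge (RFC 7636).

    The verifier is random; the challenge is the base64url-encoded SHA-256
    of the verifier, with padding stripped as the spec requires.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # both 43 characters for 32 random bytes
```

The challenge is sent in the authorization request; the verifier is revealed only in the token exchange, which is what lets a public client prove it initiated the flow.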

Example with custom scopes:

python scripts/dev/generate_oauth_token.py \
    --host https://your-workspace.cloud.databricks.com \
    --scopes "clusters:read jobs:write sql:read"

Configuration

Server Settings

The server can be configured using command-line arguments:

# Change port
uv run custom-mcp-server --port 8080

# Get help
uv run custom-mcp-server --help

The default configuration:

  • Host: 0.0.0.0 (listens on all network interfaces)
  • Port: 8000 (configurable via --port argument)
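One plausible shape of the argument handling behind these defaults (illustrative; the actual implementation lives in server/main.py and may differ):

```python
import argparse

def parse_args(argv=None) -> argparse.Namespace:
    """Parse CLI options for the server; defaults match the documented config."""
    parser = argparse.ArgumentParser(prog="custom-mcp-server")
    parser.add_argument("--port", type=int, default=8000,
                        help="Port to listen on (default: 8000)")
    parser.add_argument("--host", default="0.0.0.0",
                        help="Interface to bind (default: all interfaces)")
    return parser.parse_args(argv)

args = parse_args(["--port", "8080"])
print(args.host, args.port)  # prints: 0.0.0.0 8080
```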

Deployment

Databricks Apps

This project is configured for Databricks Apps deployment:

  1. Deploy using Databricks CLI or UI
  2. The server will be accessible at your Databricks app URL

For more information, refer to the Databricks Apps documentation.

Try Your MCP Server in AI Playground

After deploying your MCP server to Databricks Apps, you can test it interactively in the Databricks AI Playground:

  1. Navigate to the AI Playground in your Databricks workspace
  2. Select a model with the Tools enabled label
  3. Click Tools > + Add tool and select your deployed MCP server
  4. Start chatting with the AI agent - it will automatically call your MCP server's tools as needed

The AI Playground provides a visual interface to prototype and test your MCP server with different models and configurations before integrating it into production applications.

For more information, see Prototype tool-calling agents in AI Playground.

Development

Code Formatting

# Format code with ruff
uv run ruff format .

# Check for lint errors
uv run ruff check .

Customization

Rename the Project

  1. Update name in pyproject.toml
  2. Update name parameter in server/app.py: FastMCP(name="your-name")
  3. Update the command script in pyproject.toml under [project.scripts]

Add Custom API Endpoints

Add routes to the app FastAPI instance in server/app.py:

@app.get("/custom-endpoint")
def custom_endpoint():
    return {"message": "Hello from custom endpoint"}

Troubleshooting

Port Already in Use

Start the server on a different port with the --port argument (e.g. custom-mcp-server --port 8080), or set the PORT environment variable.

Import Errors

Ensure all dependencies are installed:

uv sync  # or pip install -r requirements.txt

Resources

AI Assistant Context

See Claude.md for detailed project context specifically designed for AI assistants working with this codebase.