MPO MCP Server
A comprehensive Model Context Protocol (MCP) server built with FastMCP that provides powerful integrations with GitHub repositories, Confluence documentation, and Databricks Unity Catalog.
Built with FastMCP! This server leverages FastMCP, a modern, decorator-based framework for building MCP servers with minimal boilerplate.
Table of Contents
- Overview
- Features
- Installation
- Configuration
- Usage
- Available Tools
- Documentation
- Development
- Troubleshooting
Overview
MPO MCP Server enables AI assistants and LLMs to interact seamlessly with your development and data ecosystem. It exposes a comprehensive set of tools through the Model Context Protocol, allowing intelligent agents to:
- GitHub: Browse repositories, search code, read files, manage branches and pull requests
- Confluence: Search and retrieve documentation, list spaces and pages
- Databricks: Query Unity Catalog metadata, execute SQL queries, explore data schemas
The server is built with a modular architecture, allowing you to configure only the services you need.
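The decorator-based registration style FastMCP provides can be illustrated with a plain-Python stand-in. This is a hypothetical sketch of the pattern only, not FastMCP's actual implementation; the class and tool names here are illustrative:

```python
from typing import Callable


class ToolRegistry:
    """Hypothetical stand-in for a FastMCP-style decorator registry."""

    def __init__(self) -> None:
        self.tools: dict[str, Callable] = {}

    def tool(self) -> Callable:
        def register(fn: Callable) -> Callable:
            self.tools[fn.__name__] = fn  # expose under the function's name
            return fn
        return register


mcp = ToolRegistry()


@mcp.tool()
def github_list_repositories(org: str, limit: int = 30) -> str:
    """Placeholder body; the real tool would call the GitHub API."""
    return f"would list {limit} repos for {org}"
```

Decorating a function registers it by name, which is why adding a tool to this server is mostly a matter of writing one function (see Adding New Tools below).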
Features
Flexible Configuration
- Modular Design: Enable only the services you need (GitHub, Confluence, Databricks, or any combination)
- Environment-based: Simple .env file configuration with validation
- Secure: API tokens and credentials managed through environment variables
Multiple Usage Modes
- Interactive LLM Assistant: Natural language interface with autonomous tool selection
- MCP Server: Direct integration with Claude Desktop and other MCP clients
- Command-Line Interface: Direct tool invocation via CLI
Comprehensive Tool Set
- 6 GitHub Tools: Complete repository management and code exploration
- 5 Confluence Tools: Full documentation search and retrieval
- 10 Databricks Tools: Complete Unity Catalog metadata and SQL execution
Installation
Prerequisites
- Python 3.10 or higher
- pip or uv for package management
- API credentials for the services you want to use
Quick Setup
- Clone the repository and change into the project directory:
cd /Users/bsang2/Desktop/mcp_demo/mpo-mcp
- Install dependencies:
pip install -r requirements.txt
Or using uv (faster):
uv pip install -r requirements.txt
- Create configuration file:
cp .env.example .env # If example exists
# Or create .env manually
- Add your credentials to .env (see Configuration)
Package Installation
You can also install as a package:
pip install -e .
This enables the command-line tools:
- mpo-mcp-server: Run the MCP server
- mpo: Command-line interface
Configuration
Environment Variables
Create a .env file in the project root with your credentials:
# ============================================
# Anthropic Configuration (for LLM Assistant)
# ============================================
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# ============================================
# GitHub Configuration
# ============================================
GITHUB_TOKEN=your_github_token_here
GITHUB_ORG=your_default_org_or_username
# ============================================
# Confluence Configuration
# ============================================
CONFLUENCE_URL=https://your-domain.atlassian.net
CONFLUENCE_USERNAME=your_email@example.com
CONFLUENCE_API_TOKEN=your_confluence_api_token
CONFLUENCE_SPACE_KEY=your_default_space_key
# ============================================
# Databricks Configuration
# ============================================
DATABRICKS_HOST=https://your-workspace.databricks.com
DATABRICKS_TOKEN=your_databricks_token
DATABRICKS_CATALOG=your_default_catalog
DATABRICKS_WAREHOUSE_ID=your_sql_warehouse_id
Getting API Credentials
Anthropic API Key (for Interactive LLM Assistant)
- Visit console.anthropic.com
- Sign up or log in
- Navigate to API Keys
- Create a new API key
- Copy the key to your .env file
GitHub Personal Access Token
- Go to GitHub Settings → Developer settings → Personal access tokens → Tokens (classic)
- Generate new token with scopes:
  - repo (for private repositories)
  - read:org (for organization data)
  - user (for user data)
- Copy the token to your .env file
Confluence API Token
- Visit id.atlassian.com/manage-profile/security/api-tokens
- Create API token
- Use your Atlassian account email as username
- Copy the token to your .env file
Databricks Access Token
- Go to your Databricks workspace
- Click User Settings → Developer
- Manage Access tokens → Generate new token
- Set expiration and comment
- Copy the token to your .env file
Service Validation
The server automatically validates configurations at startup:
- Tools are only exposed for properly configured services
- Partial configuration is supported (e.g., GitHub only)
- Clear error messages for missing credentials
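The partial-configuration behavior described above can be sketched with the standard library. The helper names below are illustrative, not the project's actual config API; the idea is simply that a service is enabled only when all of its environment variables are present:

```python
import os


def validate_service(required_vars: list[str]) -> bool:
    """A service is considered configured only when every variable is set."""
    return all(os.environ.get(v) for v in required_vars)


def enabled_services() -> dict[str, bool]:
    """Map each integration to whether its credentials are present."""
    return {
        "github": validate_service(["GITHUB_TOKEN"]),
        "confluence": validate_service(
            ["CONFLUENCE_URL", "CONFLUENCE_USERNAME", "CONFLUENCE_API_TOKEN"]
        ),
        "databricks": validate_service(["DATABRICKS_HOST", "DATABRICKS_TOKEN"]),
    }
```

With this approach, setting only GITHUB_TOKEN yields a server that exposes GitHub tools and silently skips the rest.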
Usage
Method 1: Interactive LLM Assistant (Recommended)
The easiest way to use the server: a conversational interface that autonomously selects and uses tools:
python llm_assistant.py
Features:
- Natural language queries
- Autonomous tool selection
- Context-aware responses
- Conversation history
- Follow-up questions
Example Session:
You: What are the most popular repositories from facebook?
Assistant: [Analyzes and calls github_list_repositories]
Here are Facebook's top repositories:
1. React - 210K stars...
You: Show me the README from the React repository
Assistant: [Calls github_get_file_contents]
Here's the React README...
You: Search for "useState" in that repo
Assistant: [Calls github_search_code]
Found 147 results for "useState"...
Requirements: Set ANTHROPIC_API_KEY in .env.
See the docs/ directory for detailed documentation.
Method 2: MCP Server (For Claude Desktop & Other Clients)
Run the server to expose tools via the Model Context Protocol:
python -m mpo_mcp.server
Or if installed as package:
mpo-mcp-server
Integration with Claude Desktop
Add to your Claude Desktop configuration:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Option 1: Using .env file (Recommended)
{
"mcpServers": {
"mpo-mcp": {
"command": "python",
"args": ["-m", "mpo_mcp.server"],
"cwd": "/Users/bsang2/Desktop/mcp_demo/mpo-mcp"
}
}
}
Option 2: Explicit environment variables
{
"mcpServers": {
"mpo-mcp": {
"command": "python",
"args": ["-m", "mpo_mcp.server"],
"cwd": "/Users/bsang2/Desktop/mcp_demo/mpo-mcp",
"env": {
"GITHUB_TOKEN": "your_token",
"GITHUB_ORG": "your_org",
"CONFLUENCE_URL": "https://your-domain.atlassian.net",
"CONFLUENCE_USERNAME": "your_email@example.com",
"CONFLUENCE_API_TOKEN": "your_token",
"CONFLUENCE_SPACE_KEY": "your_space",
"DATABRICKS_HOST": "https://your-workspace.databricks.com",
"DATABRICKS_TOKEN": "your_token",
"DATABRICKS_CATALOG": "your_catalog",
"DATABRICKS_WAREHOUSE_ID": "your_warehouse_id"
}
}
}
}
See the docs/ directory for Cursor AI integration.
Method 3: Command-Line Interface
Direct tool invocation via CLI:
# GitHub commands
mpo github repos --org nike-goal-analytics-mpo --limit 5
mpo github repo --name nike-goal-analytics-mpo/msc-dft-monorepo
mpo github search --query "useState" --repo nike-goal-analytics-mpo/msc-dft-monorepo
mpo github file --repo nike-goal-analytics-mpo/msc-dft-monorepo --path README.md
mpo github branches --repo nike-goal-analytics-mpo/msc-dft-monorepo
mpo github prs --repo nike-goal-analytics-mpo/msc-dft-monorepo --state open
# Confluence commands
mpo confluence spaces --limit 10
mpo confluence pages --space DOCS --limit 20
mpo confluence page --id 123456789
mpo confluence search --query "architecture" --space TECH
mpo confluence page-by-title --title "Getting Started"
# Databricks commands
mpo databricks catalogs
mpo databricks schemas --catalog main
mpo databricks tables --catalog main --schema default
mpo databricks schema --catalog main --schema default --table users
mpo databricks search --query customer --catalog main
mpo databricks catalog --name main
mpo databricks query --sql "SELECT * FROM main.default.users LIMIT 10"
mpo databricks warehouses
# Help
mpo --help
mpo github --help
mpo confluence --help
mpo databricks --help
See the docs/ directory for comprehensive CLI documentation.
Available Tools
GitHub Tools (6 tools)
1. github_list_repositories
List repositories for a user or organization.
Parameters:
- org (optional): Organization or username (defaults to GITHUB_ORG)
- limit (default: 30): Maximum number of repositories
Returns: List of repositories with name, description, stars, forks, language, etc.
Example:
{
"org": "nike-goal-analytics-mpo",
"limit": 10
}
2. github_get_repository_info
Get detailed information about a specific repository.
Parameters:
- repo_name (required): Full repository name (e.g., "nike-goal-analytics-mpo/msc-dft-monorepo")
Returns: Detailed repository metadata including stars, forks, topics, license, etc.
Example:
{
"repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo"
}
3. github_search_code
Search for code across GitHub repositories.
Parameters:
- query (required): Search query
- repo (optional): Limit search to a specific repository
- limit (default: 10): Maximum results
Returns: List of code matches with file paths and URLs
Example:
{
"query": "useState",
"repo": "nike-goal-analytics-mpo/msc-dft-monorepo",
"limit": 5
}
4. github_get_file_contents
Read file contents from a repository.
Parameters:
- repo_name (required): Full repository name
- file_path (required): Path to file
- ref (optional): Branch, tag, or commit SHA
Returns: File contents and metadata
Example:
{
"repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo",
"file_path": "README.md"
}
5. github_list_branches
List branches in a repository.
Parameters:
- repo_name (required): Full repository name
- limit (default: 20): Maximum branches
Returns: List of branches with protection status and commit SHA
Example:
{
"repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo",
"limit": 10
}
6. github_get_pull_requests
Retrieve pull requests for a repository.
Parameters:
- repo_name (required): Full repository name
- state (default: "open"): PR state ("open", "closed", or "all")
- limit (default: 20): Maximum PRs
Returns: List of pull requests with status, author, dates, etc.
Example:
{
"repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo",
"state": "open",
"limit": 10
}
Confluence Tools (5 tools)
1. confluence_list_pages
List pages in a Confluence space.
Parameters:
- space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)
- limit (default: 25): Maximum pages
Returns: List of pages with titles, IDs, and URLs
Example:
{
"space_key": "DOCS",
"limit": 20
}
2. confluence_get_page_content
Get full content of a Confluence page.
Parameters:
- page_id (required): Page ID
Returns: Page content with metadata, version info, and HTML/storage content
Example:
{
"page_id": "123456789"
}
3. confluence_search_pages
Search for pages across Confluence.
Parameters:
- query (required): Search query
- space_key (optional): Limit to a specific space
- limit (default: 20): Maximum results
Returns: Search results with excerpts and relevance
Example:
{
"query": "API documentation",
"space_key": "TECH",
"limit": 10
}
4. confluence_get_page_by_title
Find a page by its exact title.
Parameters:
- title (required): Page title
- space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)
Returns: Page content and metadata
Example:
{
"title": "Getting Started Guide",
"space_key": "DOCS"
}
5. confluence_list_spaces
List available Confluence spaces.
Parameters:
- limit (default: 25): Maximum spaces
Returns: List of spaces with keys, names, and URLs
Example:
{
"limit": 10
}
Databricks Tools (10 tools)
1. databricks_list_catalogs
List all Unity Catalog catalogs.
Parameters: None
Returns: List of catalogs with names, owners, storage roots
Example:
{}
2. databricks_list_schemas
List schemas in a catalog.
Parameters:
- catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: List of schemas with full names and metadata
Example:
{
"catalog_name": "main"
}
3. databricks_list_tables
List tables in a schema.
Parameters:
- schema_name (required): Schema name
- catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: List of tables with names, types, formats, and locations
Example:
{
"catalog_name": "main",
"schema_name": "default"
}
4. databricks_get_table_schema
Get detailed schema for a table.
Parameters:
- table_name (required): Table name
- schema_name (required): Schema name
- catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: Complete table schema with columns, types, and properties
Example:
{
"table_name": "users",
"catalog_name": "main",
"schema_name": "default"
}
5. databricks_search_tables
Search for tables by name pattern.
Parameters:
- query (required): Search query (table name pattern)
- catalog_name (optional): Limit to a specific catalog
- max_results (default: 50): Maximum results
Returns: List of matching tables
Example:
{
"query": "customer",
"catalog_name": "main",
"max_results": 20
}
6. databricks_get_catalog_info
Get detailed catalog information.
Parameters:
- catalog_name (required): Catalog name
Returns: Catalog metadata including properties and configuration
Example:
{
"catalog_name": "main"
}
7. databricks_get_schema_info
Get detailed schema information.
Parameters:
- catalog_name (required): Catalog name
- schema_name (required): Schema name
Returns: Schema metadata and properties
Example:
{
"catalog_name": "main",
"schema_name": "default"
}
8. databricks_execute_query
Execute a SQL query on Databricks.
Parameters:
- query (required): SQL query to execute
- catalog_name (optional): Catalog context (defaults to DATABRICKS_CATALOG)
- warehouse_id (optional): SQL warehouse ID (defaults to DATABRICKS_WAREHOUSE_ID)
Returns: Query results with columns and data rows
Example:
{
"query": "SELECT * FROM main.default.users LIMIT 10",
"catalog_name": "main",
"warehouse_id": "abc123def456"
}
9. databricks_list_warehouses
List available SQL warehouses.
Parameters: None
Returns: List of SQL warehouses with IDs, names, states, and configurations
Example:
{}
10. databricks_list_sql_warehouses
Alias for databricks_list_warehouses.
Documentation
Comprehensive documentation is available in the docs/ directory:
Getting Started
- Get started in 5 minutes
- Detailed setup guide with credential instructions
- Interactive assistant guide
Tools & CLI
- Complete tool reference with examples
- Command-line interface guide
- CLI usage examples
FastMCP
- FastMCP conversion summary
- Quick reference for FastMCP patterns
- Side-by-side comparison with traditional MCP
- Detailed migration guide
Architecture & Concepts
- Deep dive into how MCP works
- Visual diagrams of the complete flow
- Implementation details
- Cursor AI integration guide
Development
Project Structure
mpo-mcp/
├── mpo_mcp/                 # Main package
│   ├── __init__.py          # Package initialization
│   ├── server.py            # FastMCP server implementation
│   ├── config.py            # Configuration management
│   ├── github_tools.py      # GitHub integration (6 tools)
│   ├── confluence_tools.py  # Confluence integration (5 tools)
│   ├── databricks_tools.py  # Databricks integration (10 tools)
│   └── cli.py               # Command-line interface
├── docs/                    # Comprehensive documentation
├── llm_assistant.py         # Interactive LLM assistant
├── example_usage.py         # Usage examples
├── quick_query.py           # Quick query utility
├── requirements.txt         # Python dependencies
├── pyproject.toml           # Package configuration
├── .env                     # Environment variables (not in git)
├── .gitignore               # Git ignore rules
└── README.md                # This file
Adding New Tools
- Implement the tool in the appropriate tools file:
# In mpo_mcp/github_tools.py (method on the GitHubTools class)
async def new_github_feature(self, param: str) -> Dict[str, Any]:
"""
Description of the new feature.
Args:
param: Parameter description
Returns:
Result description
"""
# Implementation
pass
- Register the tool in server.py:
@mcp.tool()
async def github_new_feature(param: str) -> dict:
"""Tool description for MCP clients.
Args:
param: Parameter description
"""
return await github_tools.new_github_feature(param=param)
- Add CLI command in cli.py (optional):
@github_group.command()
@click.option('--param', required=True, help='Parameter description')
def new_feature(param: str):
"""Command description."""
result = asyncio.run(github_tools.new_github_feature(param=param))
click.echo(json.dumps(result, indent=2))
Testing Tools
You can test individual tools programmatically:
import asyncio
from mpo_mcp.github_tools import GitHubTools
async def test():
tools = GitHubTools()
repos = await tools.list_repositories(org="nike-goal-analytics-mpo", limit=5)
print(repos)
asyncio.run(test())
Code Quality
- Type hints: All functions use type hints
- Docstrings: Comprehensive docstrings for all public methods
- Error handling: Graceful error handling with informative messages
- Logging: Structured logging throughout
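The error-handling and logging conventions above can be combined in one small wrapper. This is a sketch of the pattern, not the project's actual code; safe_call is a hypothetical helper name:

```python
import logging

logger = logging.getLogger("mpo_mcp")


async def safe_call(coro):
    """Await a tool coroutine; turn failures into informative error dicts."""
    try:
        return await coro
    except Exception as exc:
        # Log the full traceback for operators, return a readable payload
        # to the MCP client instead of letting the exception propagate.
        logger.exception("tool call failed")
        return {"error": type(exc).__name__, "message": str(exc)}
```

A wrapper like this keeps every tool's failure mode uniform: clients always receive a dict, whether the call succeeded or not.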
Dependencies
Core dependencies:
- fastmcp>=0.1.0 - MCP server framework
- PyGithub>=2.1.1 - GitHub API client
- atlassian-python-api>=3.41.0 - Confluence API client
- databricks-sdk>=0.18.0 - Databricks API client
- python-dotenv>=1.0.0 - Environment variable management
- anthropic>=0.39.0 - Anthropic API for LLM assistant
See requirements.txt for the complete list.
Troubleshooting
Server Not Starting
Issue: Server fails to start or shows import errors
Solutions:
- Verify Python version: python --version (must be 3.10+)
- Reinstall dependencies: pip install -r requirements.txt --force-reinstall
- Check for conflicting packages: pip list | grep mcp
- Verify virtual environment: which python
Tools Not Appearing
Issue: Expected tools don't show up in MCP client
Solutions:
- Check configuration validation in server logs
- Verify credentials in .env file
- Ensure .env is in the correct location (project root)
- Check environment variables are loaded:
python -c "from mpo_mcp.config import Config; print(Config.validate_github())"
- Restart the MCP client after configuration changes
API Authentication Errors
GitHub:
- Verify token has correct scopes (repo, read:org)
- Check token hasn't expired
- Test token:
curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/user
Confluence:
- Verify URL format (must include https://)
- Check API token is valid (not password)
- Ensure username is email address
- Test:
curl -u email@example.com:API_TOKEN https://your-domain.atlassian.net/wiki/rest/api/space
Databricks:
- Verify workspace URL is correct
- Check token hasn't expired
- Ensure token has appropriate permissions
- Test:
curl -H "Authorization: Bearer YOUR_TOKEN" https://your-workspace.databricks.com/api/2.0/unity-catalog/catalogs
Rate Limiting
GitHub:
- Authenticated requests: 5,000 requests/hour
- Search API: 30 requests/minute
- Use limit parameters to reduce API calls
Confluence:
- Cloud: Rate limits vary by plan
- Implement exponential backoff for production use
Databricks:
- Check workspace quotas
- Use connection pooling for multiple queries
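The exponential backoff recommended above can be sketched with the standard library. This is a minimal illustration, not production code; tune the retry count and delays for your workload:

```python
import time


def with_backoff(fn, retries: int = 4, base_delay: float = 0.5):
    """Call fn(), doubling the sleep after each failure before retrying."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

For example, with_backoff(lambda: client.search("architecture")) would retry a rate-limited Confluence search at roughly 0.5s, 1s, then 2s intervals before giving up.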
Claude Desktop Integration Issues
Issue: Tools not appearing in Claude Desktop
Solutions:
- Verify config file location:
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
- Check JSON syntax is valid
- Verify cwd path is absolute and correct
- Restart Claude Desktop after config changes
- Check Claude Desktop logs for errors
LLM Assistant Issues
Issue: Assistant not responding or showing errors
Solutions:
- Verify ANTHROPIC_API_KEY is set correctly
- Check API key has sufficient credits
- Ensure FastMCP server can start independently
- Review error messages in console output
Connection Issues
Issue: Tools timing out or failing to connect
Solutions:
- Check network connectivity
- Verify firewall rules allow outbound HTTPS
- Test API endpoints directly with curl
- Check proxy settings if behind corporate firewall
- Increase timeout values if on slow connection
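The timeout advice above can be applied at the call site with asyncio. This is a sketch; call_with_timeout is a hypothetical helper, and 30 seconds is an arbitrary default:

```python
import asyncio


async def call_with_timeout(coro, seconds: float = 30.0):
    """Fail fast with asyncio.TimeoutError instead of hanging on a dead connection."""
    return await asyncio.wait_for(coro, timeout=seconds)
```

Wrapping slow integrations this way turns a silent hang into an explicit, loggable error that the server can report back to the MCP client.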
Debugging Tips
- Enable verbose logging:
import logging
logging.basicConfig(level=logging.DEBUG)
- Test configuration:
python -c "from mpo_mcp.config import Config; print(f'GitHub: {Config.validate_github()}, Confluence: {Config.validate_confluence()}, Databricks: {Config.validate_databricks()}')"
- Run server with logging:
python -m mpo_mcp.server 2>&1 | tee server.log
- Test individual tools:
mpo github repos --org nike-goal-analytics-mpo --limit 1
mpo confluence spaces --limit 1
mpo databricks catalogs
Getting Help
If you encounter issues not covered here:
- Check the relevant documentation in docs/
- Review server logs for detailed error messages
- Verify all credentials are correctly configured
- Test API endpoints independently
- Check you have appropriate permissions for each service
License
This project is provided as-is for demonstration and integration purposes.
Contributing
Contributions are welcome! Please ensure:
- Code follows existing style and conventions
- All functions have type hints and docstrings
- New tools are properly registered
- Documentation is updated accordingly
Acknowledgments
Built with:
- FastMCP - Modern MCP framework
- PyGithub - GitHub API wrapper
- atlassian-python-api - Confluence API wrapper
- databricks-sdk - Databricks SDK
- Anthropic API - Claude AI integration
Version: 0.1.0
Python: 3.10+
License: MIT
Status: Production Ready