pvoo/bigquery-mcp
The BigQuery MCP Server is designed to efficiently navigate BigQuery datasets and tables using LLMs, optimized for large projects with many datasets and tables.
BigQuery MCP Server
A practical MCP server for navigating BigQuery datasets and tables with LLMs. Designed for larger projects with many datasets and tables, and optimized to keep LLM context small while staying fast and safe.
- Minimal by default: list dataset and table names; fetch details only when asked
- Navigate larger projects: filter by name, request detailed metadata/schemas on demand
- Quick table insight: optional schema, column descriptions and fill-rate to help an agent decide relevance fast
- Safe to run: read-only query execution with guardrails (SELECT/WITH only, comment stripping)
Quick Start
Prerequisites: Python 3.10+ and uv package manager
Quick Setup
Option 1: Direct from PyPI (Recommended)
# 1. Authenticate
gcloud auth application-default login
# 2. Run server
uvx bigquery-mcp --project YOUR_PROJECT --location US
Option 2: Clone locally (development setup)
# 1. Clone and setup
git clone https://github.com/pvoo/bigquery-mcp.git
cd bigquery-mcp
# 2. Configure environment
cp .env.example .env
# Edit .env with your project and location
# 3. Run or inspect
make run # Start server
make inspect # Open MCP inspector
MCP Client Configuration
Option 1: PyPI package (Recommended)
Simplest setup using the published PyPI package:
{
"mcpServers": {
"bigquery": {
"command": "uvx",
"args": [
"bigquery-mcp",
"--project", "your-project-id",
"--location", "US"
]
}
}
}
Option 2: Local clone (for development)
# Clone first
git clone https://github.com/pvoo/bigquery-mcp.git
{
"mcpServers": {
"bigquery": {
"command": "uv",
"args": ["--directory", "/absolute/path/to/bigquery-mcp", "run", "bigquery-mcp"],
"env": {
"GCP_PROJECT_ID": "your-project-id",
"BIGQUERY_LOCATION": "US"
}
}
}
}
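If you use Claude Code, either option can also be registered from the terminal. This is a sketch that assumes the claude mcp add subcommand available in recent Claude Code releases; the project ID is a placeholder:
# Register the PyPI package as an MCP server named "bigquery"
claude mcp add bigquery -- uvx bigquery-mcp --project your-project-id --location US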
Test Your Setup
# Test with MCP inspector
npx @modelcontextprotocol/inspector uvx bigquery-mcp --project YOUR_PROJECT --location US
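If you are running from a local clone instead, the same inspector check can point at the uv entry point (the directory path is a placeholder):
# Inspect the locally cloned server
npx @modelcontextprotocol/inspector uv --directory /absolute/path/to/bigquery-mcp run bigquery-mcp --project YOUR_PROJECT --location US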
Configuration Options
All configuration can be set via CLI arguments or environment variables. CLI arguments take precedence.
Required Parameters
--project YOUR_PROJECT # Google Cloud project ID
--location US # BigQuery location (US, EU, etc.)
Optional Parameters
# Dataset Access Control
--datasets dataset1 dataset2 # Restrict to specific datasets (default: all datasets)
# Query & Result Limits
--list-max-results 500 # Max results for basic list operations (default: 500)
--detailed-list-max 25 # Max results for detailed list operations (default: 25)
# Table Analysis
--sample-rows 3 # Sample data rows returned in get_table (default: 3)
--stats-sample-size 500 # Rows sampled for column fill rate calculations (default: 500)
# Authentication
--key-file /path/to/key.json # Service account key file (default: ADC)
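For example, a restricted, lower-volume setup might combine several of these flags (the project ID and dataset names are placeholders):
# Limit the server to two datasets and trim list/sample sizes
uvx bigquery-mcp --project your-project-id --location EU \
  --datasets analytics marketing \
  --list-max-results 200 \
  --sample-rows 5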
Environment Variables
All CLI options have corresponding environment variables:
export GCP_PROJECT_ID=your-project
export BIGQUERY_LOCATION=US
export BIGQUERY_ALLOWED_DATASETS=dataset1,dataset2
export BIGQUERY_LIST_MAX_RESULTS=500
export BIGQUERY_LIST_MAX_RESULTS_DETAILED=25
export BIGQUERY_SAMPLE_ROWS=3
export BIGQUERY_SAMPLE_ROWS_FOR_STATS=500
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
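If you keep these settings in the repository's .env file, one convenient way to load them before starting the server is:
# Export every variable defined in .env, then launch without CLI flags
set -a; source .env; set +a
uvx bigquery-mcp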
Tools Overview
This MCP server provides 4 core BigQuery tools optimized for LLM efficiency:
Smart Dataset & Table Discovery
- list_datasets - Dual mode: basic (names only) vs detailed (full metadata)
- list_tables - Context-aware table browsing with optional schema details
- get_table - Complete table analysis with schema and sample data
Safe Query Execution
- run_query - Execute SELECT/WITH queries only, with cost tracking and safety validation. Use a LIMIT clause in queries to control result size.
Key Features:
- Minimal by default - 70% fewer tokens in basic mode
- Safe queries only - Blocks all write operations
- LLM-optimized - Returns structured data perfect for AI analysis
- Cost transparent - Shows bytes processed for each query
Development Setup
Local Development
# Clone and setup
git clone https://github.com/pvoo/bigquery-mcp.git
cd bigquery-mcp
make install # Setup environment + pre-commit hooks
# Development workflow
make run # Start server
make test # Run test suite
make check # Lint + format + typecheck
make inspect # Launch MCP inspector
Testing & Quality
make test # Full test suite
pytest tests/test_safety.py # SQL safety validation tests
pytest tests/test_server.py # Core server functionality tests
make check # Run all quality checks
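Individual suites accept pytest's usual flags, for example verbose output:
pytest tests/test_safety.py -v # verbose run of the SQL safety tests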
Authentication & Permissions
Authentication Methods:
- Application Default Credentials (recommended): gcloud auth application-default login
- Service account key: use --key-file or set GOOGLE_APPLICATION_CREDENTIALS
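For example, running against a service account rather than your user credentials (the key path and project ID are placeholders) looks like this:
# Via environment variable
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
uvx bigquery-mcp --project your-project-id --location US
# Or pass the key explicitly
uvx bigquery-mcp --project your-project-id --location US --key-file /path/to/key.json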
Required BigQuery Permissions:
- bigquery.datasets.get, bigquery.datasets.list
- bigquery.tables.list, bigquery.tables.get
- bigquery.jobs.create, bigquery.tables.getData
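One way to grant these is through the predefined roles roles/bigquery.dataViewer and roles/bigquery.jobUser, which together should cover the permissions above (the service-account email and project ID are placeholders):
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:bigquery-mcp@your-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:bigquery-mcp@your-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"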
Troubleshooting
Authentication Issues:
# Check current auth
gcloud auth application-default print-access-token
# Re-authenticate
gcloud auth application-default login
# Enable BigQuery API
gcloud services enable bigquery.googleapis.com
MCP Connection Issues:
- Ensure absolute paths in MCP config
- Test server manually: make run
- Check that project and location environment variables or args are set correctly
Performance Issues:
- Use {"detailed": false} for faster responses
- Add search filters: {"search": "pattern"}
- Reduce max_results for large datasets
Usage Examples
SQL Query Example
-- Query public datasets
SELECT
EXTRACT(YEAR FROM pickup_datetime) as year,
COUNT(*) as trips,
ROUND(AVG(fare_amount), 2) as avg_fare
FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2020`
WHERE pickup_datetime BETWEEN '2020-01-01' AND '2020-12-31'
GROUP BY year
LIMIT 20
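If you want to sanity-check how much data a query like this will scan before an agent runs it, the Cloud SDK's bq tool can dry-run it (assumes bq is installed and authenticated):
# A dry run reports bytes processed without executing the query
bq query --use_legacy_sql=false --dry_run \
  'SELECT COUNT(*) FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2020`'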
Example: Usage with Claude Code subagent
Scenario: Use the specialized BigQuery Table Analyst agent in Claude Code to automatically explore your data warehouse, analyze table relationships, and provide structured insights. Running this work in a subagent keeps the context consumed by table exploration out of the main thread and returns only actionable insights to the main agent for writing SQL or further analysis.
Setup:
# 1. Clone and configure
git clone https://github.com/pvoo/bigquery-mcp.git
cd bigquery-mcp
# 2. Setup environment
export GCP_PROJECT_ID="your-project-id"
export BIGQUERY_LOCATION="US"
gcloud auth application-default login
# 3. Launch Claude Code
claude-code
Example Usage:
You: "I need to understand our sales data structure and find tables related to customer orders"
Claude: I'll use the BigQuery Table Analyst agent to explore your sales datasets and identify relevant tables with their relationships.
[Agent automatically:]
- Lists all datasets to identify sales-related ones
- Explores table schemas with detailed metadata
- Shows actual sample data from key tables
- Discovers join relationships between tables
- Provides ready-to-use SQL queries
What the Agent Returns:
- Table schemas with column descriptions and types
- Sample data showing actual values (not placeholders)
- Join relationships with working SQL examples
- Data quality insights (null rates, freshness, etc.)
- Actionable SQL queries you can immediately execute
Contributing
Contributions are welcome, and feedback on possible improvements is appreciated.
Quick Start:
# Fork on GitHub, then:
git clone https://github.com/yourusername/bigquery-mcp.git
cd bigquery-mcp
make install # Setup dev environment
make check # Verify everything works
# Make changes, then:
make test # Run tests
make check # Quality checks
# Submit PR!
Development Guidelines:
- Add tests for new features
- Update documentation
- Follow existing code style (enforced by pre-commit hooks)
- Ensure all quality checks pass
Found an issue or have a feature request?
- Bug reports: open an issue
- Code improvements: submit a pull request
- Documentation: see
Star this repo if it helps you!