Govdata MCP Server
Model Context Protocol (MCP) server for govdata adapter. Provides semantic access to US Census data, SEC filings, economic indicators, and geographic data via MCP tools.
Note: This server requires the Apache Calcite fork with govdata adapter from github.com/kenstott/calcite.
Architecture
┌──────────────────────────────┐
│ Python MCP Server │ ← This repo
│ - FastAPI + SSE transport │
│ - 9 MCP tools │
│ - API Key + JWT/OIDC auth │
└──────────┬───────────────────┘
│ JPype1 (JVM bridge)
▼
┌──────────────────────────────┐
│ Calcite Fat JAR │ ← Built from github.com/kenstott/calcite
│ - JDBC driver │
│ - Govdata adapter │
│ - DuckDB sub-schema │
└──────────────────────────────┘
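The Python layer reaches the JAR by starting an in-process JVM with JPype and opening a plain JDBC connection to Calcite. The real logic lives in src/govdata_mcp/jdbc.py; the following is only a minimal sketch of that flow, with placeholder paths:
# Illustrative JPype/JDBC bridge; see src/govdata_mcp/jdbc.py for the actual implementation
import jpype
import jpype.imports

# Start the JVM with the fat JAR (plus the SLF4J and DuckDB JARs) on the classpath
jpype.startJVM(
    "-Xmx8g",
    classpath=["/path/to/calcite-govdata-1.41.0-SNAPSHOT-all.jar"],
)

from java.sql import DriverManager  # usable once the JVM is running

jpype.JClass("org.apache.calcite.jdbc.Driver")  # make sure the Calcite driver is registered
conn = DriverManager.getConnection("jdbc:calcite:model=/path/to/govdata-model.json")
stmt = conn.createStatement()
rs = stmt.executeQuery("SELECT 1 AS one")
while rs.next():
    print(rs.getInt("one"))
conn.close()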
Prerequisites
- Python 3.9+
- Java 17+ (required by Calcite JAR)
- MinIO or AWS S3 (required for data storage)
- The server stores Parquet files and cached data in S3-compatible storage
- For local development, use MinIO (lightweight S3-compatible server)
- For production, can use AWS S3, MinIO, or other S3-compatible services
- Calcite Fat JAR - Build from the kenstott/calcite fork:
git clone https://github.com/kenstott/calcite.git
cd calcite
./gradlew :govdata:shadowJar
# JAR will be at: govdata/build/libs/calcite-govdata-1.41.0-SNAPSHOT-all.jar
Quick Start
0. Set Up S3 Storage (MinIO for Local Development)
The server requires S3-compatible storage for data. For local development, use MinIO:
Using Docker (Recommended):
# Start MinIO with Docker
docker run -d \
-p 9000:9000 \
-p 9001:9001 \
--name minio \
-v ~/minio/data:/data \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=minioadmin \
quay.io/minio/minio server /data --console-address ":9001"
# Create required buckets
docker exec minio mc alias set local http://localhost:9000 minioadmin minioadmin
docker exec minio mc mb local/govdata-parquet
docker exec minio mc mb local/govdata-production-cache
Or using Homebrew (macOS):
brew install minio/stable/minio
minio server ~/minio/data --console-address ":9001"
# In another terminal, create buckets:
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/govdata-parquet
mc mb local/govdata-production-cache
MinIO Console: Access at http://localhost:9001 (user: minioadmin, pass: minioadmin)
For AWS S3: Update .env with your AWS credentials and remove AWS_ENDPOINT_OVERRIDE.
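If you want to sanity-check the storage setup from Python before starting the server, a short boto3 script can confirm the endpoint and buckets are reachable (boto3 is not required by the server itself; install it separately for this optional check):
# Optional preflight: verify the MinIO/S3 endpoint and buckets (pip install boto3)
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # AWS_ENDPOINT_OVERRIDE; omit for real AWS S3
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

for bucket in ("govdata-parquet", "govdata-production-cache"):
    s3.head_bucket(Bucket=bucket)            # raises if the bucket is missing or unreachable
    print(f"ok: s3://{bucket}")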
1. Install Dependencies
cd govdata-mcp-server
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install -e . # Install the package in editable mode
2. Required JARs (Logging & DuckDB)
Download the required JARs (SLF4J binding and DuckDB JDBC driver):
./download-jars.sh
This script will download:
- slf4j-reload4j-2.0.13.jar (~11KB) - SLF4J 2.x binding for Calcite logging
- duckdb-jdbc-1.1.3.jar (~70MB) - DuckDB JDBC driver for query execution
These JARs will be automatically added to the classpath before the Calcite JAR when the server starts.
Note: If the JARs are already present, the script will skip downloading them.
3. Configure Environment
Copy .env.example to .env and update paths:
cp .env.example .env
Edit .env and configure the following:
Required - Calcite Configuration:
CALCITE_JAR_PATH=/path/to/calcite/govdata/build/libs/calcite-govdata-1.41.0-SNAPSHOT-all.jar
# For quick testing (downloads in ~5-10 minutes):
CALCITE_MODEL_PATH=/path/to/govdata-mcp-server/govdata-model-sample.json
# For full data (downloads in 1-2 days):
# CALCITE_MODEL_PATH=/path/to/govdata-mcp-server/govdata-model.json
Model File Comparison:
| Model | Data Sources | Download Time | Use Case |
|---|---|---|---|
| govdata-model-sample.json | 1 company (Apple), 2023-2024, basic FRED series | ~5-10 minutes | Testing, getting started |
| govdata-model.json | 30 DJIA companies, 2010-2025, full data sources | 1-2 days | Production, full analysis |
Recommendation: Start with govdata-model-sample.json to verify everything works, then switch to the full model if needed.
Required - MCP Server Authentication:
API_KEYS=your-api-key-here
Required - AWS/S3 Configuration (for MinIO or AWS S3):
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_ENDPOINT_OVERRIDE=http://0.0.0.0:9000
GOVDATA_PARQUET_DIR=s3://govdata-parquet
GOVDATA_CACHE_DIR=s3://govdata-production-cache
Required - Government Data API Keys:
The Calcite govdata adapter requires API keys for various government data sources. Register for free at:
- FRED API (https://fred.stlouisfed.org/docs/api/api_key.html)
- BLS API (https://www.bls.gov/developers/api_signature_v2.html)
- BEA API (https://apps.bea.gov/API/signup/)
- Census API (https://api.census.gov/data/key_signup.html)
Add these to .env:
FRED_API_KEY=your-fred-api-key
BLS_API_KEY=your-bls-api-key
BEA_API_KEY=your-bea-api-key
CENSUS_API_KEY=your-census-api-key
See .env.example for additional optional API keys (FBI, NHTSA, FEMA, HUD, etc.).
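A quick preflight check can confirm the required keys are actually set before you start the server. A minimal sketch, assuming python-dotenv is available (plain os.environ checks work just as well if it is not):
# Check that the required government data API keys are present in .env
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the current directory

required = ["FRED_API_KEY", "BLS_API_KEY", "BEA_API_KEY", "CENSUS_API_KEY"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing API keys in .env: {', '.join(missing)}")
print("All required API keys are set.")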
Optional - Execution Engine
You can use DuckDB as the execution engine for query processing. Configure it in your .env:
CALCITE_EXECUTION_ENGINE=DUCKDB
If using DuckDB, ensure the DuckDB JDBC JAR is present (see Required JARs). You may also control long-running downloads:
GOVDATA_DOWNLOAD_TIMEOUT_MINUTES=2147483647
4. Run the Server
Recommended - Using startup script (with prerequisite checks):
# Development mode (with auto-reload)
./start-server.sh
# Production mode
./start-server.sh prod
# With debug logging
LOG_LEVEL=DEBUG ./start-server.sh
Alternative - Direct commands:
# Using Python module
python -m govdata_mcp.server
# Using installed command
govdata-mcp
# Using uvicorn directly (production)
uvicorn govdata_mcp.server:app --host 0.0.0.0 --port 8080
The server will start on http://0.0.0.0:8080 (configurable via SERVER_HOST and SERVER_PORT in .env).
5. Test with Health Check
curl http://0.0.0.0:8080/health
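The same check from Python, for example in a deployment or CI script (requests is already in requirements.txt); the exact response body depends on the server version, so only the status code is asserted here:
# Health probe sketch
import requests

resp = requests.get("http://127.0.0.1:8080/health", timeout=10)
resp.raise_for_status()            # any non-2xx status raises
print(resp.status_code, resp.text)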
Available MCP Tools
The server exposes 9 MCP tools:
Discovery Tools
- list_schemas - List all database schemas
- list_tables - List tables in a schema
- describe_table - Get column details for a table
Query Tools
- query_data - Execute SQL queries
- sample_table - Sample rows from a table
Analysis Tools
- profile_table - Statistical profiling (row count, distinct counts, min/max, nulls)
- search_metadata - Semantic search across all metadata
Vector Search Tools
- semantic_search - Vector similarity search on embedded data
- list_vector_sources - List source tables for multi-source vectors
Authentication
The server supports two authentication methods:
API Key (Simple)
Add header to requests:
curl -H "X-API-Key: dev-key-12345" http://0.0.0.0:8080/messages
Configure in .env:
API_KEYS=key1,key2,key3
JWT/OAuth2 (Advanced)
Add Bearer token to requests:
curl -H "Authorization: Bearer <your-jwt-token>" http://0.0.0.0:8080/messages
You have two options:
- Locally-signed JWT (simple, not provider-backed)
# .env
JWT_SECRET_KEY=your-secret-key
JWT_ALGORITHM=HS256
- OIDC Provider Tokens (Azure AD, Google, etc.)
Enable OIDC validation to accept tokens issued by an external identity provider. Configure in .env:
# Enable OIDC/OAuth2 token validation
OIDC_ENABLED=true
# Issuer URL:
# - Azure AD: https://login.microsoftonline.com/<tenant-id>/v2.0
# - Google: https://accounts.google.com
OIDC_ISSUER_URL=https://login.microsoftonline.com/<tenant-id>/v2.0
# Audience expected in tokens:
# - Azure AD: your Application (client) ID or api://<app-id>
# - Google: your OAuth client ID
OIDC_AUDIENCE=<your-client-or-audience>
# Optional overrides
# OIDC_JWKS_URL= # normally discovered automatically from the issuer
# OIDC_CACHE_TTL_SECONDS=3600
# Security: when OIDC is enabled, local HS256 JWT fallback is DISABLED by default
# Set AUTH_ALLOW_LOCAL_JWT_FALLBACK=true only if you intentionally need to accept
# both provider-issued tokens and locally-signed JWTs.
# AUTH_ALLOW_LOCAL_JWT_FALLBACK=false
Notes:
- Ensure you use OIDC_ISSUER_URL (not OIDC_ISSUER) and set OIDC_ENABLED=true.
- With OIDC enabled, locally-signed JWTs are rejected by default; you can enable fallback via AUTH_ALLOW_LOCAL_JWT_FALLBACK=true.
- You do not have to delete JWT_* variables; they are ignored unless local fallback is enabled. For stricter security, you can remove them.
Examples:
- Azure AD (single-tenant):
  - OIDC_ISSUER_URL=https://login.microsoftonline.com/<tenant-id>/v2.0
  - OIDC_AUDIENCE=<Application (client) ID>
- Google:
  - OIDC_ISSUER_URL=https://accounts.google.com
  - OIDC_AUDIENCE=<OAuth client ID>
Notes:
- Only validation is performed (signature, expiry, issuer, audience). This server does not host a login UI; obtain tokens from your provider (e.g., OAuth Authorization Code flow in your client) and present them in the Authorization header.
- API keys remain supported and can co-exist with OIDC.
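For reference, the checks listed above (signature, expiry, issuer, audience) follow the standard OIDC bearer-token validation pattern: fetch the issuer's JWKS and verify the token against it. The server's actual implementation lives in src/govdata_mcp/auth.py; the sketch below only illustrates the pattern with PyJWT (pip install pyjwt[crypto]), using placeholder issuer/audience values:
# Illustrative OIDC token validation (what OIDC_ENABLED=true implies conceptually)
import requests
import jwt
from jwt import PyJWKClient

ISSUER = "https://accounts.google.com"    # OIDC_ISSUER_URL
AUDIENCE = "<your-client-or-audience>"    # OIDC_AUDIENCE

# Discover the JWKS endpoint from the issuer (OIDC_JWKS_URL overrides this step)
discovery = requests.get(ISSUER.rstrip("/") + "/.well-known/openid-configuration", timeout=10).json()
jwks_client = PyJWKClient(discovery["jwks_uri"])

def validate(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Verifies signature, expiry, issuer, and audience in one call
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )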
FAQ: What is the “client id” (audience) when using a private JWT/OIDC server?
- The server validates the aud claim in the presented token against OIDC_AUDIENCE. In many providers this value is referred to as the Client ID or API Identifier of the resource you are protecting (this MCP server).
- In practice, set OIDC_AUDIENCE to the identifier you configured for this API in your identity provider. Examples:
- Keycloak (OIDC):
  - OIDC_ISSUER_URL=https://<keycloak-host>/realms/<realm>
  - OIDC_AUDIENCE=<client ID of this API>
  - Notes: Tokens may include multiple audiences. Ensure the client issuing the token includes this API's client ID in aud (often done by enabling "Include client audience" or adding this API as an audience/scope).
- Auth0:
  - OIDC_ISSUER_URL=https://<tenant>.auth0.com/
  - OIDC_AUDIENCE=https://api.your-company.internal or a UUID-like API Identifier you configured under Applications → APIs
  - Notes: In Auth0, APIs have an Identifier that becomes the aud claim. Use that value here (not the application's client_id unless you configured it as the API Identifier).
- Azure AD (private tenant):
  - OIDC_ISSUER_URL=https://login.microsoftonline.com/<tenant-id>/v2.0
  - OIDC_AUDIENCE=<Application (client) ID> or api://<app-id>, depending on how you configured Expose an API
- Google Identity Platform / Firebase Auth (OIDC mode):
  - OIDC_ISSUER_URL=https://accounts.google.com (or your federation issuer)
  - OIDC_AUDIENCE=<OAuth client ID>
- Custom OIDC with your own JWKS:
  - OIDC_ISSUER_URL=https://auth.your-domain.com
  - OIDC_AUDIENCE=<API identifier you issue in tokens>
  - Optionally set OIDC_JWKS_URL=https://auth.your-domain.com/.well-known/jwks.json if discovery is not standard.
What if I have a private JWT server that is not OIDC?
- If you cannot expose a standard OIDC discovery document and JWKS, you can either:
- Use locally-signed JWTs with HS256 by configuring JWT_SECRET_KEY and JWT_ALGORITHM=HS256. In this mode, OIDC_* settings are not required and aud is not enforced by this server.
- Or implement an OIDC-compatible JWKS endpoint. Then set OIDC_ENABLED=true, OIDC_ISSUER_URL to your issuer, and optionally OIDC_JWKS_URL to your JWKS if discovery is not available.
Rule of thumb:
- Whatever value ends up in the aud claim of the access token your client presents should match OIDC_AUDIENCE in your .env. That value is usually the API/resource identifier you created in your identity provider for this MCP server.
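If you are unsure what actually ends up in aud, decode the token your client presents without verifying it and compare. A small PyJWT sketch (the token value is a placeholder):
# Inspect the iss/aud claims of a token without verifying it
import jwt

token = "<paste the access token your client presents>"
claims = jwt.decode(token, options={"verify_signature": False})
print("iss:", claims.get("iss"))
print("aud:", claims.get("aud"))   # must match OIDC_AUDIENCE in .env

# For the locally-signed HS256 fallback, a test token can be minted the same way:
# jwt.encode({"sub": "test-user"}, "your-secret-key", algorithm="HS256")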
Security Notes
- Never commit real API keys, JWT secrets, or tokens to version control. Use .env locally and keep only sanitized examples in .env.example.
- If any secrets were ever committed, rotate them immediately.
- In production, set a strong API_KEYS value or use JWT with a strong JWT_SECRET_KEY, and restrict network access to trusted clients.
- Prefer running behind HTTPS (reverse proxy) and monitor logs for unauthorized access attempts.
MCP Client Configuration
Note: The browser version of Claude does not support remote MCP servers unless you have Claude at Work.
Claude Desktop - Option 1: stdio Mode (Simplest for Local)
For local development, run the server directly as a stdio process. This is the simplest approach - no separate server process needed.
Update ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"govdata": {
"command": "python3",
"args": [
"-m",
"govdata_mcp.server"
],
"env": {
"CALCITE_JAR_PATH": "/path/to/calcite/govdata/build/libs/calcite-govdata-1.41.0-SNAPSHOT-all.jar",
"CALCITE_MODEL_PATH": "/path/to/govdata-mcp-server/govdata-model.json",
"FRED_API_KEY": "your-fred-key",
"BLS_API_KEY": "your-bls-key",
"BEA_API_KEY": "your-bea-key",
"CENSUS_API_KEY": "your-census-key",
"AWS_ACCESS_KEY_ID": "minioadmin",
"AWS_SECRET_ACCESS_KEY": "minioadmin",
"AWS_ENDPOINT_OVERRIDE": "http://localhost:9000",
"GOVDATA_PARQUET_DIR": "s3://govdata-parquet",
"GOVDATA_CACHE_DIR": "s3://govdata-production-cache"
}
}
}
}
Important notes:
- Replace paths and API keys with your actual values
- Ensure MinIO is running (see step 0 in Quick Start)
- The server auto-detects stdio mode and runs without HTTP/SSE
- Restart Claude Desktop after editing the config
- Logs appear in Claude Desktop's developer console
Advantages:
- Simplest setup - no separate server process
- No API key authentication needed
- Perfect for local development and testing
Claude Desktop - Option 2: HTTP/SSE with mcp-remote
For running the server as a separate HTTP service (useful if sharing across multiple clients or debugging):
- Start the server separately:
./start-server.sh
- Configure Claude Desktop:
{
"mcpServers": {
"govdata": {
"command": "npx",
"args": [
"mcp-remote",
"http://127.0.0.1:8080/messages/",
"--header",
"X-API-KEY: your-api-key-here",
"--debug",
"--allow-http"
]
}
}
}
Important notes:
- Replace your-api-key-here with one of the keys from API_KEYS in your .env
- The --allow-http flag is required for local development (non-HTTPS)
- The --debug flag provides verbose logging for troubleshooting
- Restart Claude Desktop after editing the config
Advantages:
- Server runs independently - can share across multiple clients
- Easier to monitor with LOG_LEVEL=DEBUG in terminal
- Can test with curl/HTTP tools
Local Development Workflow
To run the MCP server locally and connect with Claude Desktop:
- Start the server:
./start-server.sh
# Or with debug logging: LOG_LEVEL=DEBUG ./start-server.sh
- Configure Claude Desktop with the config shown above (using http://127.0.0.1:8080/messages/)
- Restart Claude Desktop to load the new configuration
- Test the connection by asking Claude: "List the available tools from the govdata MCP server"
Important - Initial Data Download:
⚠️ The first time you start the server, it will download government data:
- Using govdata-model-sample.json (RECOMMENDED for testing): ~5-10 minutes
  - 1 company (Apple) with 2023-2024 data
  - Basic FRED economic series
  - Perfect for getting started and testing
- Using govdata-model.json (full production data): 1-2 days
  - 30 DJIA companies with 2010-2025 data (10s of GB)
  - All economic data sources (FRED, BLS, BEA, Treasury)
  - Census and geographic data
Tips for production configuration:
- Start with the sample model to verify setup
- Edit govdata-model.json to adjust year ranges, CIKs, and data sources
- Set autoDownload: false in the model to manually control what downloads
- Use MinIO or S3 to share downloaded data across instances
Monitoring download progress:
- Server logs show download progress for each data source
- With LOG_LEVEL=DEBUG, you'll see detailed progress for SEC filings
- Data is cached in .aperio/ (local) and s3://govdata-parquet (S3/MinIO)
- Subsequent starts are fast (~1-2 seconds) as cached data is reused
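You can also watch the cache fill up directly in S3/MinIO. A small boto3 sketch that tallies objects and bytes in the Parquet bucket (endpoint and credentials as configured in step 0; boto3 installed separately):
# Rough download-progress check against the Parquet bucket
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # omit for real AWS S3
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

total_objects = 0
total_bytes = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="govdata-parquet"):
    for obj in page.get("Contents", []):
        total_objects += 1
        total_bytes += obj["Size"]

print(f"{total_objects} objects, {total_bytes / 1e9:.2f} GB in s3://govdata-parquet")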
Claude at Work (Remote Deployment)
If you have Claude at Work, you can configure direct HTTP/SSE connection to a remote (publicly accessible) instance:
Requirements:
- Server must be hosted at a public URL (not localhost)
- OIDC authentication must be configured (see Authentication section above)
- HTTPS is strongly recommended for production
Example configuration:
{
"mcpServers": {
"govdata": {
"command": "true",
"url": "https://your-mcp-server.example.com/messages",
"headers": {
"Authorization": "Bearer your-oidc-token"
}
}
}
}
Note:
- The "command": "true" workaround enables remote-only servers in Claude at Work
- You must implement OIDC authentication (set OIDC_ENABLED=true, OIDC_ISSUER_URL, OIDC_AUDIENCE in .env)
- API keys alone are not sufficient for Claude at Work - use proper OIDC tokens
Other MCP Clients
The server implements MCP over HTTP with Server-Sent Events (SSE) and should work with any MCP-compatible client that supports HTTP/SSE transport.
Note: While the server follows the MCP specification and should work with other clients, it has primarily been tested with Claude Desktop. Feedback on compatibility with other MCP clients is welcome.
Endpoints:
- Primary: http://0.0.0.0:8080/messages
- Alias: http://0.0.0.0:8080/sse (same behavior; provided for clarity)
Usage:
- GET to open the SSE read stream.
- POST to the announced endpoint (includes session_id) to send data on the write channel.
- Compatibility: a POST initialize to the base path (without session_id) returns 200 OK so some clients (e.g., mcp-remote) won't mark the server as failed.
Transport and Auth:
- Transport: SSE (Server-Sent Events)
- Authentication: X-API-Key header or Authorization: Bearer token
Direct mode quick test (curl):
- Initialize without session_id (compat path):
  curl -s -H "X-API-Key: <your-api-key>" -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":0,"method":"initialize","params":{}}' \
    http://127.0.0.1:8080/messages | jq .
- Open SSE stream (observe endpoint event):
  curl -N -H "X-API-Key: <your-api-key>" http://127.0.0.1:8080/messages
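The same quick test from Python using requests (already in requirements.txt). The API key is a placeholder; the second half opens the SSE stream and prints the first few lines so you can see the endpoint event:
# Python version of the quick test above
import requests

BASE = "http://127.0.0.1:8080/messages"
HEADERS = {"X-API-Key": "<your-api-key>"}

# 1. Compatibility initialize POST to the base path (no session_id)
init = requests.post(
    BASE,
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"jsonrpc": "2.0", "id": 0, "method": "initialize", "params": {}},
    timeout=30,
)
print("initialize:", init.status_code, init.text[:200])

# 2. Open the SSE read stream and watch for the endpoint event
with requests.get(BASE, headers=HEADERS, stream=True, timeout=30) as stream:
    for i, line in enumerate(stream.iter_lines(decode_unicode=True)):
        if line:
            print(line)
        if i > 10:      # stop after a handful of lines
            break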
FAQ: Is /messages for Streamable HTTP and /sse for SSE?
- No hard split is required. This server uses a single ASGI handler for both and exposes two paths for convenience:
  - /messages is the primary endpoint for MCP over HTTP/SSE.
  - /sse is an alias that behaves identically.
- Both support:
  - SSE GET to establish the read stream (you'll receive an endpoint event with ?session_id=...).
  - POST to the announced endpoint (including session_id) for the write channel.
  - A compatibility path where a base-path POST with { "method": "initialize" } gets a 200 OK response so legacy clients don't fail fast.
- Recommendation: point clients to /messages unless you have a policy or tooling preference for /sse. Both are equivalent in this server.
Example Queries
List Available Schemas
# Via MCP tool call
{
"tool": "list_schemas",
"arguments": {}
}
Query Census Data
{
"tool": "query_data",
"arguments": {
"sql": "SELECT state_fips, population_estimate FROM census.population_estimates WHERE year = 2020 LIMIT 10",
"limit": 100
}
}
Profile a Table
{
"tool": "profile_table",
"arguments": {
"schema": "census",
"table": "acs_income",
"columns": ["median_household_income", "poverty_rate"]
}
}
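On the wire, each of these tool calls is a JSON-RPC request POSTed to the session endpoint announced over SSE, with the result delivered back on the SSE read stream. A sketch of what the query_data example above could look like, assuming the standard MCP tools/call method (in practice Claude Desktop or mcp-remote builds this envelope for you):
# Hypothetical raw tool call; the session URL comes from the SSE endpoint event
import requests

session_url = "http://127.0.0.1:8080/messages?session_id=<from-the-endpoint-event>"
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_data",
        "arguments": {
            "sql": "SELECT state_fips, population_estimate FROM census.population_estimates WHERE year = 2020 LIMIT 10",
            "limit": 100,
        },
    },
}
resp = requests.post(session_url, headers={"X-API-Key": "<your-api-key>"}, json=payload, timeout=60)
print(resp.status_code)   # the tool result itself arrives on the SSE read stream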
Working Without the MCP Server (No Java/Calcite)
If you don’t have access to the Calcite govdata MCP server or don’t want to run Java yet, you can still fetch public employment data directly using the included example script.
What you can do right now
- Query U.S. Census ACS employment profile metrics (DP03) by state
- Query BLS time series (e.g., total nonfarm employment)
- No JVM or Calcite JAR required
Prereqs
- Have Python deps installed: pip install -r requirements.txt (requests is already included)
- Put your API keys in .env (at least CENSUS_API_KEY and/or BLS_API_KEY)
- Export them into your environment when running the script, e.g.:
- export $(grep -E '^(CENSUS_API_KEY|BLS_API_KEY)=' .env | xargs)
Run examples
- Census (ACS 1-year DP03 profile – employment):
- python examples/census_employment_example.py census --state CA --year 2022 --limit 10
- Omitting --state will return all states. Use two-letter state code or FIPS.
- BLS (Current Employment Statistics series):
- python examples/census_employment_example.py bls --series CES0000000001 --start 2024-01 --end 2024-12
Notes
- The script prints JSON to stdout so you can pipe to jq if desired.
- Rate limits apply; see the Census and BLS API docs for details.
- When you’re ready to use Claude/other MCP clients with richer tools and SQL, follow the setup in Quick Start to run the MCP server, then use the Verification Checklist below.
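If you prefer to see the raw API call rather than the wrapper script, the Census query boils down to a single ACS 1-year Data Profile request. A sketch of the direct call; the DP03 variable code used here (DP03_0004E, civilian employed population) is illustrative, so check the ACS variable list for the exact metrics you need:
# Direct ACS 1-year DP03 request (illustrative variable code)
import os
import requests

params = {
    "get": "NAME,DP03_0004E",   # verify against api.census.gov/data/2022/acs/acs1/profile/variables.html
    "for": "state:06",          # 06 = California; use "state:*" for all states
    "key": os.environ["CENSUS_API_KEY"],
}
resp = requests.get("https://api.census.gov/data/2022/acs/acs1/profile", params=params, timeout=30)
resp.raise_for_status()
rows = resp.json()              # first row is the header, remaining rows are data
for row in rows[:6]:
    print(row)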
Development
Project Structure
govdata-mcp-server/
├── src/govdata_mcp/
│ ├── __init__.py
│ ├── server.py # Main MCP server
│ ├── config.py # Configuration management
│ ├── jdbc.py # JDBC connection via JPype
│ ├── auth.py # Authentication middleware
│ └── tools/
│ ├── discovery.py # Schema/table discovery
│ ├── query.py # SQL execution
│ ├── profile.py # Table profiling
│ ├── metadata.py # Metadata search
│ └── vector.py # Vector similarity search
├── tests/
├── .env # Environment configuration
├── .env.example # Environment template
├── log4j.properties # JVM logging configuration
├── pyproject.toml
├── requirements.txt
└── README.md
Running Tests
pytest tests/
Code Formatting
black src/
ruff check src/
mypy src/
Docker Deployment
Build Image
docker build -t govdata-mcp-server .
Run with Docker Compose
docker-compose up
Logging Configuration
The server uses log4j for JVM-side logging (Calcite, AWS SDK) and Python's standard logging for the MCP server.
JVM Logging (log4j.properties)
Configure Java logging in log4j.properties:
# Root logger
log4j.rootLogger=INFO, stdout
# Reduce AWS SDK verbosity
log4j.logger.com.amazonaws=WARN
# Calcite logging
log4j.logger.org.apache.calcite=INFO
# Govdata adapter - DEBUG shows detailed operations (data loading, queries, etc.)
log4j.logger.org.apache.calcite.adapter.govdata=DEBUG
Note: The govdata adapter logging level is also controlled by the JVM system property -Dorg.apache.calcite.adapter.govdata.level=DEBUG which is set in jdbc.py. Both must be configured for detailed logging.
Python Logging
Set log level in .env:
LOG_LEVEL=INFO # Options: DEBUG, INFO, WARN, ERROR
Startup Warnings
You may see SLF4J warnings during startup:
SLF4J(W): No SLF4J providers were found.
SLF4J(W): Defaulting to no-operation (NOP) logger implementation
These warnings are harmless and can be safely ignored. They appear because the Calcite JAR contains SLF4J bindings but no provider. Logging is handled by log4j instead.
Troubleshooting
JVM Not Starting
- Ensure Java 17+ is installed: java -version
- Check the Calcite JAR path is correct
- Verify the JAR file exists and is readable
- Check JVM memory settings in jdbc.py (default: 8GB max, 2GB initial)
Connection Errors
- Check Calcite model JSON path
- Ensure MinIO is running (if using S3 backend)
- Verify environment variables in .env
- Enable debug logging: set log4j.logger.org.apache.calcite.adapter.govdata=DEBUG in log4j.properties
Authentication Failures
- Check the API key matches the .env configuration
- For JWT, verify the secret key and algorithm
- Ensure the header name is correct (X-API-Key or Authorization)
Performance Notes
- JVM Startup: ~1-2 seconds on first connection
- Query Speed: Native JDBC performance after warmup
- Memory: Python process + JVM (allocate ~2GB for Java)
License
Apache License 2.0
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
Support
For issues related to:
- MCP Server: Open issue in this repo
- Calcite/JDBC: Open issue in kenstott/calcite repo
- Data Sources: Check govdata adapter documentation in the kenstott/calcite repo
Related Repositories
- Calcite Fork with Govdata Adapter: github.com/kenstott/calcite
- This MCP Server: github.com/kenstott/govdata-mcp-server
MCP Client (Claude) Verification Checklist
Use these prompts in Claude Desktop to confirm it is actually using this server. Keep this server running with LOG_LEVEL=DEBUG so you can observe requests.
Prereqs
- Claude Desktop config includes:
{
"mcpServers": {
"govdata": {
"command": "npx",
"args": [
"mcp-remote",
"http://127.0.0.1:8080/messages/",
"--header",
"X-API-KEY:
", "--debug", "--allow-http" ] } } } - Restart Claude Desktop after editing its config.
What to ask Claude (copy/paste)
- Initialization & tools
- "List the available tools exposed by the govdata MCP server."
- "What MCP tools are available from the govdata server?"
Expected logs here: /messages GET/POST, an initialize message, and tool discovery.
- Force a simple tool call
- "Using the govdata MCP server, call the 'list_schemas' tool and show me the result."
Expected logs: call_tool name=list_schemas and a JSON array response.
- List tables in a schema
- "From the govdata MCP server, run list_tables with schema=census."
Expected logs: call_tool name=list_tables arguments={"schema":"census"}.
- Describe a table
- "Use the govdata MCP tool describe_table for schema=census and table=acs_income."
Expected logs: call_tool name=describe_table ... with column details in the response.
- Minimal query
- "Using the govdata MCP server, call query_data with sql='SELECT 1 AS one' and limit=1."
Expected logs: call_tool name=query_data with one row { "one": 1 }.
- Real data smoke test
- "With the govdata MCP server, call sample_table for schema=census table=population_estimates limit=5."
Expected logs: call_tool name=sample_table ... and a few rows returned.
- Error-path check
- “Use list_tables with schema=not_a_schema and show the result.” Expected: server logs an error for that tool call; Claude returns an error payload/explanation.
- Authentication confirmation
  Look for one of these during the first connection: Auth: OIDC enabled (issuer=..., audience=..., ...) OR Auth: OIDC disabled. Accepting API keys and local JWT (...). Also on each request: [SSE] /messages auth succeeded ... mode=API Key|Bearer.
- SSE handshake correctness
- After Claude connects: Sent endpoint event: /messages?session_id=... and periodic pings.
- If you ever see a POST /messages without session_id, the server returns 400 with guidance (indicates a misrouted client), but Claude Desktop should post to the session URL automatically.
If Claude doesn't use the server for a natural-language question
- Ask: "Using the govdata MCP server, find 5 table names related to employment in the census schema."
- If no tool call appears in logs, force usage: "You must use the govdata MCP tools to answer. Start by calling list_schemas."
Troubleshooting
- Config name must match (govdata) and the URL must be reachable from Claude.
- API key in Claude must match API_KEYS.
- Try http://127.0.0.1:8080/messages/ if loopback issues arise.
- Keep LOG_LEVEL=DEBUG to see [SSE] and call_tool lines.