Groq Documentation MCP Server
An MCP (Model Context Protocol) server that provides semantic search over Groq's documentation using Cloudflare AI Search (AutoRAG) with R2 as the data source.
Features
- search_documentation Tool: Query Groq's API documentation using natural language
- AI-Powered RAG: Uses Cloudflare AI Search for semantic search and retrieval
- Fast & Scalable: Built on Cloudflare Workers for global edge deployment
- MCP Compatible: Works with Claude Desktop and other MCP clients
Setup Instructions
Prerequisites
- Cloudflare account with Workers enabled
- Wrangler CLI installed:
npm install -g wrangler
- Authenticated with Wrangler:
wrangler login
1. Install Dependencies
npm install
2. Install and Configure Rclone
Install rclone for fast bulk uploads:
brew install rclone # macOS
# Or: curl https://rclone.org/install.sh | sudo bash # Linux
Configure rclone for R2:
rclone config
# Choose 'n' (new remote) and name it 'r2'
# Storage type: S3-compatible storage; provider: Cloudflare R2
# (the numeric menu options, such as 5 and 24, vary between rclone versions)
# Enter the Access Key ID and Secret Access Key from your R2 API token
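For reference, a working R2 remote in rclone's config file usually ends up looking like the sketch below (the access key and secret come from an R2 API token, and the endpoint embeds your account ID; adjust the placeholders to match your setup):
[r2]
type = s3
provider = Cloudflare
access_key_id = <your-r2-access-key-id>
secret_access_key = <your-r2-secret-access-key>
endpoint = https://<your-account-id>.r2.cloudflarestorage.com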
3. Create R2 Bucket
wrangler r2 bucket create groq-docs
4. Scrape Documentation
Set up Browser Rendering API credentials:
export CLOUDFLARE_ACCOUNT_ID="your-account-id"
export CLOUDFLARE_API_TOKEN="your-api-token"
Run the scraper:
npm run scrape
This will:
- Use Browser Rendering API for clean content extraction
- Scrape all pages from https://console.groq.com/docs
- Save locally to ./scraped-docs/
- Bulk upload to R2 using rclone
Note: Takes several minutes depending on page count.
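If the automatic upload fails partway, or you want to re-sync later, you can push the local files to the bucket yourself using the rclone remote configured above (a generic copy command, not something the scraper script requires):
rclone copy ./scraped-docs r2:groq-docs --progress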
5. Configure AI Search (Manual)
In the Cloudflare Dashboard:
- Go to AI > AI Search
- Create a new AI Search instance named groq-docs-ai-search
- Configure the data source:
- Select R2 as the data source
- Choose the groq-docs bucket
- Select embedding and generation models (use defaults)
- Set up AI Gateway for monitoring
- Assign a Service API token
- Wait for indexing to complete (monitor in the AI Search dashboard)
6. Deploy the Worker
Deploy the MCP server to Cloudflare Workers:
npm run deploy
Your server will be available at https://groq-docs-mcp.<your-account>.workers.dev/mcp or https://groq-docs-mcp.<your-account>.workers.dev/sse
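If the Worker talks to AI Search through the Workers AI binding (rather than the REST API), wrangler.jsonc needs an ai binding. The project ships its own configuration; a minimal sketch of the relevant part looks like this, where the binding name AI and the compatibility date are assumptions and any other bindings the server uses are omitted:
{
  "name": "groq-docs-mcp",
  "main": "src/index.ts",
  "compatibility_date": "2024-11-01",
  "ai": {
    "binding": "AI"
  }
}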
Usage
Connect to Claude Desktop
To use this MCP server with Claude Desktop:
- Open Claude Desktop settings
- Go to Settings > Developer > Edit Config
- Add this configuration:
{
  "mcpServers": {
    "groq-docs": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://groq-docs-mcp.<your-account>.workers.dev/sse"
      ]
    }
  }
}
- Restart Claude Desktop
Connect to Cloudflare AI Playground
- Go to https://playground.ai.cloudflare.com/
- Enter your deployed MCP server URL: groq-docs-mcp.<your-account>.workers.dev/sse
- Start using the search_documentation tool!
Example Queries
Try asking:
- "How do I use the Groq API?"
- "What models are available on Groq?"
- "How do I implement streaming with Groq?"
- "What are the rate limits for Groq API?"
- "How do I use OpenAI compatibility with Groq?"
Development
Local Development
Run the server locally:
npm run dev
The server will be available at http://localhost:8787
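To poke at the tool without a full Claude Desktop setup, one option is to run the mcp-remote proxy from the Usage section directly against the local URL; this is only a connectivity check, not part of the project's npm scripts:
npx mcp-remote http://localhost:8787/sse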
Type Checking
Generate types for Cloudflare bindings:
npm run cf-typegen
Check types:
npm run type-check
Code Formatting
Format code with Biome:
npm run format
npm run lint:fix
Project Structure
groq-docs-mcp/
├── src/
│   └── index.ts              # Main MCP server implementation
├── scripts/
│   └── scrape-groq-docs.js   # Documentation scraper script
├── scraped-docs/             # Local cache of scraped docs (git-ignored)
├── package.json              # Dependencies and scripts
├── wrangler.jsonc            # Cloudflare Worker configuration
└── README.md                 # This file
How It Works
- Scraping: Uses Cloudflare Browser Rendering API to extract clean markdown from Groq's documentation
- Storage: Documentation is stored as markdown files in R2 (uploaded via rclone)
- Indexing: Cloudflare AI Search indexes the R2 content using embeddings
- Query: The MCP tool queries the AI Search index and returns relevant documentation snippets
- Results: Formatted results include URLs, titles, content, and relevance scores
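As a rough illustration of the query step, the Worker-side call boils down to Cloudflare's AutoRAG binding. The sketch below assumes a Workers AI binding named AI and the groq-docs-ai-search instance from step 5; the real src/index.ts wraps this in the MCP tool handler and may differ in detail:
// Minimal sketch of the search step, not the actual implementation.
interface Env {
  AI: Ai; // Workers AI binding, which exposes the autorag() helper
}

export async function searchDocumentation(env: Env, query: string) {
  // aiSearch() retrieves matching chunks from the R2-backed index and
  // generates an answer; search() returns only the raw chunks.
  const result = await env.AI.autorag("groq-docs-ai-search").aiSearch({ query });
  return result;
}
The MCP tool then formats the returned matches (URL, title, content, relevance score) into the response sent back to the client.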
Troubleshooting
Scraper Issues
If the scraper fails:
- Check your internet connection
- Verify Groq's documentation site is accessible
- Ensure Wrangler is authenticated:
wrangler whoami
AI Search Not Working
If searches return no results:
- Verify the AI Search instance is created and named groq-docs-ai-search
- Check that indexing is complete in the AI Search dashboard
- Ensure the R2 bucket contains the scraped files:
wrangler r2 object list groq-docs
Worker Deployment Issues
If deployment fails:
- Verify Wrangler is up to date:
npm install -g wrangler@latest
- Check your Cloudflare account has Workers enabled
- Ensure the R2 bucket exists:
wrangler r2 bucket list
License
MIT