dhannusch/groq-docs-mcp

Groq Documentation MCP Server

An MCP (Model Context Protocol) server that provides semantic search over Groq's documentation using Cloudflare AI Search (AutoRAG) with R2 as the data source.

Features

  • search_documentation Tool: Query Groq's API documentation using natural language
  • AI-Powered RAG: Uses Cloudflare AI Search for semantic search and retrieval
  • Fast & Scalable: Built on Cloudflare Workers for global edge deployment
  • MCP Compatible: Works with Claude Desktop and other MCP clients

Setup Instructions

Prerequisites

  1. Cloudflare account with Workers enabled
  2. Wrangler CLI installed: npm install -g wrangler
  3. Authenticated with Wrangler: wrangler login

1. Install Dependencies

npm install

2. Install and Configure Rclone

Install rclone for fast bulk uploads:

brew install rclone  # macOS
# Or: curl https://rclone.org/install.sh | sudo bash  # Linux

Configure rclone for R2:

rclone config
# Choose 'n' (new remote) and name it 'r2'
# Storage type: 's3', provider: 'Cloudflare R2' (the menu numbers vary between rclone versions)
# Enter the access key ID and secret from your R2 API token

3. Create R2 Bucket

wrangler r2 bucket create groq-docs
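
For the Worker to reach AI Search at runtime, wrangler.jsonc needs a Workers AI binding. A minimal sketch of the relevant entries (the repo's actual wrangler.jsonc may differ and include more, e.g. Durable Object bindings for the MCP agent):

```jsonc
{
  "name": "groq-docs-mcp",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  // Workers AI binding; AutoRAG (AI Search) is reached through env.AI
  "ai": {
    "binding": "AI"
  }
}
```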

4. Scrape Documentation

Set up Browser Rendering API credentials:

export CLOUDFLARE_ACCOUNT_ID="your-account-id"
export CLOUDFLARE_API_TOKEN="your-api-token"

Run the scraper:

npm run scrape

This will:

  • Use Browser Rendering API for clean content extraction
  • Scrape all pages from https://console.groq.com/docs
  • Save locally to ./scraped-docs/
  • Bulk upload to R2 using rclone

Note: Takes several minutes depending on page count.
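
To illustrate the "save locally" step, here is a minimal sketch of how a docs URL might be mapped to a flat markdown filename under ./scraped-docs/. The helper name `urlToFilename` is hypothetical, not taken from the repo's scraper script:

```typescript
// Map a docs URL to a flat markdown filename, e.g.
// https://console.groq.com/docs/api-reference -> api-reference.md
function urlToFilename(url: string): string {
  const path = new URL(url).pathname // e.g. "/docs/api-reference"
    .replace(/^\/docs\/?/, "")       // strip the /docs prefix
    .replace(/\/+$/, "");            // drop trailing slashes
  // The docs root becomes index.md; nested paths are flattened with dashes.
  const slug = path === "" ? "index" : path.replace(/\//g, "-");
  return `${slug}.md`;
}
```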

5. Configure AI Search (Manual)

In the Cloudflare Dashboard:

  1. Go to AI > AI Search
  2. Create a new AI Search instance named groq-docs-ai-search
  3. Configure the data source:
    • Select R2 as the data source
    • Choose the groq-docs bucket
  4. Select embedding and generation models (use defaults)
  5. Set up AI Gateway for monitoring
  6. Assign a Service API token
  7. Wait for indexing to complete (monitor in the AI Search dashboard)
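
Once indexing completes, the Worker can query the instance through the Workers AI binding. A sketch of what that call might look like, assuming the standard AutoRAG `env.AI.autorag(...).search()` shape; the interfaces below are simplified assumptions, not the repo's actual types:

```typescript
// Hypothetical, simplified shape of an AutoRAG search hit.
interface SearchResult {
  filename: string;
  score: number;
  content: { type: string; text: string }[];
}

// Minimal Env typing for the Workers AI binding used here.
interface Env {
  AI: {
    autorag(name: string): {
      search(opts: { query: string }): Promise<{ data: SearchResult[] }>;
    };
  };
}

// Query the AI Search instance configured above and return the raw hits.
async function searchDocumentation(env: Env, query: string): Promise<SearchResult[]> {
  const { data } = await env.AI.autorag("groq-docs-ai-search").search({ query });
  return data;
}
```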

6. Deploy the Worker

Deploy the MCP server to Cloudflare Workers:

npm run deploy

Your server will be available at https://groq-docs-mcp.&lt;your-account&gt;.workers.dev/mcp (Streamable HTTP) or https://groq-docs-mcp.&lt;your-account&gt;.workers.dev/sse (SSE).

Usage

Connect to Claude Desktop

To use this MCP server with Claude Desktop:

  1. Open Claude Desktop settings
  2. Go to Settings > Developer > Edit Config
  3. Add this configuration:
{
  "mcpServers": {
    "groq-docs": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://groq-docs-mcp.<your-account>.workers.dev/sse"
      ]
    }
  }
}
  4. Restart Claude Desktop

Connect to Cloudflare AI Playground

  1. Go to https://playground.ai.cloudflare.com/
  2. Enter your deployed MCP server URL: groq-docs-mcp.<your-account>.workers.dev/sse
  3. Start using the search_documentation tool!

Example Queries

Try asking:

  • "How do I use the Groq API?"
  • "What models are available on Groq?"
  • "How do I implement streaming with Groq?"
  • "What are the rate limits for Groq API?"
  • "How do I use OpenAI compatibility with Groq?"

Development

Local Development

Run the server locally:

npm run dev

The server will be available at http://localhost:8787

Type Checking

Generate types for Cloudflare bindings:

npm run cf-typegen

Check types:

npm run type-check

Code Formatting

Format code with Biome:

npm run format
npm run lint:fix

Project Structure

groq-docs-mcp/
├── src/
│   └── index.ts            # Main MCP server implementation
├── scripts/
│   └── scrape-groq-docs.js # Documentation scraper script
├── scraped-docs/           # Local cache of scraped docs (git-ignored)
├── package.json            # Dependencies and scripts
├── wrangler.jsonc          # Cloudflare Worker configuration
└── README.md               # This file

How It Works

  1. Scraping: Uses Cloudflare Browser Rendering API to extract clean markdown from Groq's documentation
  2. Storage: Documentation is stored as markdown files in R2 (uploaded via rclone)
  3. Indexing: Cloudflare AI Search indexes the R2 content using embeddings
  4. Query: The MCP tool queries the AI Search index and returns relevant documentation snippets
  5. Results: Formatted results include URLs, titles, content, and relevance scores
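
The final formatting step (URLs, titles, content, scores) can be sketched as a pure function. The `Hit` shape and `formatHits` name are illustrative assumptions, not the repo's exact implementation:

```typescript
// A flattened search hit after extracting text from the raw result.
interface Hit {
  filename: string;
  score: number;
  text: string;
}

// Render hits as the markdown snippet the MCP tool might return.
function formatHits(hits: Hit[]): string {
  if (hits.length === 0) return "No matching documentation found.";
  return hits
    .map(
      (h, i) =>
        `## Result ${i + 1}: ${h.filename} (score: ${h.score.toFixed(2)})\n\n${h.text}`,
    )
    .join("\n\n---\n\n");
}
```

Keeping the formatting separate from the AI Search call makes the response shape easy to adjust without touching the retrieval logic.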

Troubleshooting

Scraper Issues

If the scraper fails:

  • Check your internet connection
  • Verify Groq's documentation site is accessible
  • Ensure Wrangler is authenticated: wrangler whoami

AI Search Not Working

If searches return no results:

  • Verify the AI Search instance is created and named groq-docs-ai-search
  • Check that indexing is complete in the AI Search dashboard
  • Ensure the R2 bucket contains the scraped files: wrangler r2 object list groq-docs

Worker Deployment Issues

If deployment fails:

  • Verify Wrangler is up to date: npm install -g wrangler@latest
  • Check your Cloudflare account has Workers enabled
  • Ensure the R2 bucket exists: wrangler r2 bucket list

License

MIT