Grok MCP Server 🚀
A production-ready Model Context Protocol (MCP) server for Grok AI models, built with FastMCP for a clean, Pythonic implementation. Developed using my own Superprompt System.
Created by Igor Warzocha | LinkedIn | GitHub
☕ Support This Project
If this has helped you, consider buying me a coffee so I can finance my Opus credits!
Your support helps me create more AI optimization resources and keep this project updated!
Features
- Full Model Support: Access to all Grok models (grok-2-latest, grok-3, grok-3-reasoner, grok-3-deepsearch, grok-3-mini-beta)
- Intelligent Model Selection: Automatically choose the best model based on task complexity
- Vision Capabilities: Analyze images with Grok's visual understanding
- Text Embeddings: Generate embeddings for semantic search and analysis
- Retry Logic: Built-in exponential backoff for robust API interactions (see the sketch after this list)
- Comprehensive Error Handling: Detailed error messages and graceful degradation
- FastMCP Integration: Clean, decorator-based API for easy extension
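The retry logic can be pictured as a thin wrapper that re-issues a failed call after exponentially growing waits. A minimal sketch of the idea (the real implementation lives in `src/grok_client.py`; the function name, attempt count, and delays below are illustrative):

```python
import asyncio
import random

async def with_retries(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry an async API call, doubling the wait after each failure (illustrative)."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:  # the real client narrows this to rate-limit/transient errors
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with a little jitter to avoid synchronized retries.
            await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```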
Quick Start
1. Installation
# Clone the repository
git clone https://github.com/IgorWarzocha/TheGrokMCP.git
cd TheGrokMCP
# Install dependencies
pip install -r requirements.txt
2. Configuration
# Copy the example environment file
cp .env.example .env
# Edit .env and add your Grok API key
# XAI_API_KEY=your_actual_api_key_here
3. Run the Server
# Run the MCP server
python -m src.server
# Or with debug logging
DEBUG=true python -m src.server
4. Connect with Claude Desktop
Add to your Claude Desktop configuration (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "grok": {
      "command": "python",
      "args": ["-m", "src.server"],
      "cwd": "/path/to/TheGrokMCP",
      "env": {
        "XAI_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
Available Tools
1. chat_completion
Send chat messages to Grok with intelligent model selection.
# Simple chat
await chat_completion([{"role": "user", "content": "Hello!"}])
# Auto-select model based on complexity
await chat_completion(
messages=[{"role": "user", "content": "Explain quantum computing"}],
task_complexity="complex" # Options: simple, complex, reasoning, research
)
# Use specific model
await chat_completion(
messages=[{"role": "user", "content": "Solve this puzzle"}],
model="grok-3-reasoner"
)
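Auto-selection boils down to mapping `task_complexity` onto one of the models listed above. A rough sketch of that idea, assuming a simple lookup table (not necessarily the exact table the server uses):

```python
from typing import Optional

# Illustrative complexity-to-model mapping; the server's actual table may differ.
COMPLEXITY_TO_MODEL = {
    "simple": "grok-3-mini-beta",
    "complex": "grok-3",
    "reasoning": "grok-3-reasoner",
    "research": "grok-3-deepsearch",
}

def pick_model(task_complexity: Optional[str] = None, model: Optional[str] = None) -> str:
    """An explicitly requested model always wins; otherwise map complexity to a default."""
    if model:
        return model
    return COMPLEXITY_TO_MODEL.get(task_complexity or "simple", "grok-3-mini-beta")
```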
2. image_understanding
Analyze images using Grok's vision capabilities.
# From file path
await image_understanding(
image_path="/path/to/image.jpg",
prompt="What's in this image?"
)
# From base64 data
await image_understanding(
image_base64="base64_encoded_data",
prompt="Describe the scene"
)
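If you need the `image_base64` form, the standard library is enough to produce it. A minimal sketch (the helper name is just for illustration):

```python
import base64

def encode_image(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# await image_understanding(image_base64=encode_image("photo.jpg"), prompt="Describe the scene")
```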
3. create_embeddings
Generate text embeddings for semantic analysis.
# Single text
await create_embeddings("Hello world")
# Multiple texts
await create_embeddings([
"First document",
"Second document",
"Third document"
])
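Embeddings are typically compared with cosine similarity for semantic search or clustering. A small sketch, assuming each embedding comes back as a plain list of floats:

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```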
4. list_models
Get information about all available Grok models.
models = await list_models()
# Returns model capabilities, context windows, and specifications
Model Comparison
| Model | Best For | Context Window (tokens) | Max Output (tokens) |
|---|---|---|---|
| grok-3-mini-beta | Quick responses, simple tasks | 64,000 | 4,096 |
| grok-2-latest | General purpose, image analysis | 128,000 | 4,096 |
| grok-3 | Complex tasks, advanced features | 128,000 | 8,192 |
| grok-3-reasoner | Multi-step reasoning, analysis | 128,000 | 8,192 |
| grok-3-deepsearch | Research, fact-checking | 128,000 | 8,192 |
Configuration Options
Environment Variables
- `XAI_API_KEY` (required): Your Grok API key
- `DEFAULT_MODEL` (optional): Default model to use (default: `grok-3-mini-beta`)
- `DEBUG` (optional): Enable debug logging (default: `false`)
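A minimal `.env` using these variables might look like this (values are placeholders):

```
XAI_API_KEY=your_actual_api_key_here
DEFAULT_MODEL=grok-3-mini-beta
DEBUG=false
```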
Troubleshooting
Common Issues
- API Key Not Found
  - Error: `XAI_API_KEY not found in environment variables`
  - Solution: Ensure your `.env` file contains `XAI_API_KEY=your_key_here`
- Model Not Available
  - Error: `Invalid model: grok-4`
  - Solution: Use `list_models()` to see available models
- Rate Limiting
  - Error: `Rate limit exceeded`
  - Solution: The client includes automatic retry with exponential backoff
- Image Analysis Fails
  - Error: `Image file not found`
  - Solution: Ensure the image path is absolute or use base64 encoding
Debug Mode
Enable detailed logging:
DEBUG=true python -m src.server
Development
Project Structure
TheGrokMCP/
├── src/
│ ├── __init__.py
│ ├── server.py # Main MCP server
│ ├── grok_client.py # Grok API client
│ └── utils/ # Helper utilities
├── tests/ # Test suite
├── docs/ # Additional documentation
├── requirements.txt # Python dependencies
├── .env.example # Environment template
├── README.md # This file
└── TODO.md # Development roadmap
Running Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=src
Adding New Tools
- Add the tool to `server.py` using the `@mcp.tool` decorator:
```python
from typing import Any, Dict  # needed for the return annotation

@mcp.tool
async def my_new_tool(param1: str, param2: int) -> Dict[str, Any]:
    """Tool description here."""
    # Implementation goes here; return a JSON-serializable dict
    return {"param1": param1, "param2": param2}
```
- Update the README with usage examples
- Add tests for the new functionality (see the test sketch below)
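For the test step, a sketch of what a test for the hypothetical `my_new_tool` above could look like, assuming `pytest-asyncio` is installed for async tests:

```python
import pytest

from src.server import my_new_tool  # the hypothetical tool from the example above

@pytest.mark.asyncio  # requires pytest-asyncio
async def test_my_new_tool_returns_dict():
    # Depending on the FastMCP version, the decorator may wrap the coroutine in a
    # tool object; if so, call the underlying function instead of the wrapper.
    result = await my_new_tool("example", 42)
    assert isinstance(result, dict)
```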
License
MIT License - see LICENSE file for details
Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Support
- GitHub Issues: Report bugs or request features
- Documentation: See the `docs/` directory for detailed guides
- API Reference: Grok API Documentation