nicoferdi96/CrewAIDocsMCP
The CrewAI Documentation MCP Server gives developers optimized, searchable access to the CrewAI documentation.
crewai/search-docs
Search across all CrewAI documentation with relevance scoring.
crewai/get-section
Retrieve specific documentation sections.
crewai/get-example
Get code examples for specific CrewAI features.
crewai/get-api-reference
Get detailed API documentation for CrewAI classes and methods.
crewai/list-sections
List all available documentation sections.
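For reference, calling any of these tools over the streamable HTTP transport is a plain JSON-RPC `tools/call` request. Below is a minimal sketch of the request body; the `query` and `limit` argument names are assumptions for illustration — the authoritative schema comes from the server's `tools/list` response.

```python
import json

# Build an MCP "tools/call" JSON-RPC request for the search tool.
# The "query"/"limit" argument names are hypothetical; check the
# server's tools/list response for the actual input schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search-docs",
        "arguments": {"query": "how do I create an agent?", "limit": 5},
    },
}

payload = json.dumps(request)
print(payload)
```

This body is POSTed to the server's `/mcp/` endpoint by an MCP client.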
MCP Servers over Streamable HTTP - Complete Guide
Read the full article here: MCP Servers over Streamable HTTP (Step-by-Step)
This repository provides a complete, production-ready example of building and deploying MCP (Model Context Protocol) servers using Python, the mcp SDK, FastAPI, and uvicorn. You'll learn how to:
- Build MCP servers with custom tools and functions
- Expose tools over HTTP using streamable transport
- Test MCP servers locally with the MCP Inspector
- Deploy MCP servers to production (e.g., Render)
- Connect MCP servers to AI assistants like Cursor
- Mount multiple MCP servers in a single FastAPI application
Project Structure
.
├── docs/                     # Documentation assets and diagrams
│   └── mcp-client-server.png # MCP architecture diagram
├── fast_api/                 # Multi-server FastAPI setup
│   ├── crewai_docs_server.py # CrewAI documentation MCP server
│   ├── echo_server.py        # Simple echo tool MCP server
│   ├── math_server.py        # Math operations MCP server
│   ├── server.py             # FastAPI app mounting all servers
│   └── tavily_server.py      # Tavily web search MCP server
├── services/                 # Shared services and clients
│   ├── __init__.py
│   ├── github_client.py      # GitHub API client for docs
│   └── search_engine.py      # Documentation search engine
├── utils/                    # Utility functions
│   ├── __init__.py
│   └── doc_parser.py         # MDX parsing utilities
├── .gitignore
├── .python-version           # Python 3.11.0
├── CLAUDE.md                 # Codebase documentation for AI assistants
├── pyproject.toml            # Project dependencies and metadata
├── README.md                 # This file
├── runtime.txt               # Python runtime specification for deployment
├── server.py                 # Standalone Tavily search server
└── uv.lock                   # Dependency lockfile for uv
Quick Start
Prerequisites
- Python 3.11+ (3.12+ recommended)
- uv package manager (recommended)
- Tavily API key for web search functionality (get one at tavily.com)
- OpenAI API key for semantic search (get one at platform.openai.com)
Installation
- Install uv (if not already installed):
curl -LsSf https://astral.sh/uv/install.sh | sh
- Clone the repository and install dependencies:
git clone https://github.com/yourusername/CrewAIDocsMCP.git
cd CrewAIDocsMCP
uv sync
- Set up environment variables:
echo "TAVILY_API_KEY=your_tavily_api_key_here" > .env
echo "OPENAI_API_KEY=your_openai_api_key_here" >> .env
Building MCP Servers
Basic MCP Server
The simplest way to create an MCP server is with the FastMCP class:

from mcp.server.fastmcp import FastMCP

# Create a server instance
mcp = FastMCP("my-server", host="0.0.0.0", port=10000)

# Define tools using decorators
@mcp.tool()
async def my_tool(query: str) -> str:
    """Tool description shown to the AI"""
    return f"Processed: {query}"

# Run the server
mcp.run(transport="streamable-http")
Running the Servers
Single MCP server (Tavily search):
uv run server.py
CrewAI Documentation server:
PYTHONPATH=. uv run python fast_api/crewai_docs_server.py
Multiple MCP servers via FastAPI:
PYTHONPATH=. uv run python fast_api/server.py
This mounts:
- Echo server at http://localhost:8000/echo/mcp/
- Math server at http://localhost:8000/math/mcp/
- Tavily search at http://localhost:8000/tavily/mcp/
- CrewAI docs at http://localhost:8000/crewai/mcp/
Testing MCP Servers
Using MCP Inspector
The MCP Inspector is the recommended tool for testing MCP servers during development.
- Install the MCP Inspector globally:
npm install -g @modelcontextprotocol/inspector
- Launch the inspector for a single server:
npx @modelcontextprotocol/inspector http://localhost:10000/mcp/
⚠️ Important: For streamable HTTP transport, you MUST append /mcp/ to your server URL.
- Testing multiple servers mounted on FastAPI:
When testing servers mounted on different paths, modify the URL accordingly:
# Test the echo server
npx @modelcontextprotocol/inspector http://localhost:8000/echo/mcp/
# Test the math server
npx @modelcontextprotocol/inspector http://localhost:8000/math/mcp/
# Test the CrewAI documentation server
npx @modelcontextprotocol/inspector http://localhost:8000/crewai/mcp/
# Test the Tavily search server
npx @modelcontextprotocol/inspector http://localhost:8000/tavily/mcp/
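All of these inspector URLs follow the same pattern: the server's mount point plus /mcp/ with a trailing slash. A small stdlib helper (hypothetical, not part of this repo) that normalizes a base URL into that form:

```python
def mcp_url(base: str) -> str:
    """Normalize a server URL so it ends with the /mcp/ path required
    by the streamable HTTP transport, trailing slash included."""
    base = base.rstrip("/")
    if not base.endswith("/mcp"):
        base += "/mcp"
    return base + "/"

print(mcp_url("http://localhost:10000"))          # http://localhost:10000/mcp/
print(mcp_url("http://localhost:8000/echo"))      # http://localhost:8000/echo/mcp/
print(mcp_url("http://localhost:8000/echo/mcp/")) # unchanged
```

Useful when scripting inspector launches or client configs against several mounted servers.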
Alternative: Using uv's built-in MCP dev tools
# Add MCP CLI support to the project
uv add 'mcp[cli]'
# Run the inspector via uv
uv run mcp dev server.py
Then navigate to the URL shown (e.g., http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=...).
Deployment
Deploying to Render
This project is configured for easy deployment to Render.
1. Create a new Web Service on Render
2. Connect your GitHub repository
3. Configure the service:
   - Build Command: uv sync
   - Start Command: PYTHONPATH=. uv run python fast_api/server.py
   - Environment: Python 3
   - Instance Type: Free or paid tier based on your needs
4. Add environment variables:
   - TAVILY_API_KEY: Your Tavily API key
   - OPENAI_API_KEY: Your OpenAI API key for embeddings
   - PORT: Set by Render automatically
   - Any other required secrets
5. Deploy: Render will automatically deploy your service
Environment Variables for Production
The FastAPI server automatically uses the PORT environment variable:
port = int(os.getenv("PORT", 8000))
Other Deployment Options
Docker
FROM python:3.11-slim
# Install uv
RUN pip install uv
WORKDIR /app
COPY . .
# Install dependencies
RUN uv sync
# Expose port
EXPOSE 8000
# Run the server
CMD ["sh", "-c", "PYTHONPATH=. uv run python fast_api/server.py"]
Heroku
Create a Procfile:
web: PYTHONPATH=. uv run python fast_api/server.py
Railway/Fly.io
Use a similar configuration: uv sync as the build command and PYTHONPATH=. uv run python fast_api/server.py as the start command.
Connecting to AI Assistants
Cursor Configuration
- Open Cursor Settings → MCP Servers
- Add your server configuration:
For local development:
{
  "mcpServers": {
    "tavily-search": {
      "url": "http://localhost:10000/mcp/"
    }
  }
}
For deployed servers:
{
  "mcpServers": {
    "tavily-search": {
      "url": "https://your-app.onrender.com/mcp/"
    }
  }
}
Multiple servers configuration:
{
  "mcpServers": {
    "echo-server": {
      "url": "http://localhost:8000/echo/mcp/"
    },
    "math-server": {
      "url": "http://localhost:8000/math/mcp/"
    }
  }
}
⚠️ Important: Always include the trailing / in the URL.
Available MCP Servers
1. Tavily Web Search Server
- Tool: web_search - Search the web using the Tavily API
- Port: 10000 (standalone)
- Requires: TAVILY_API_KEY environment variable
2. CrewAI Documentation Server (AI-Powered Vector Search)
- Tools:
  - search_crewai_docs - AI-powered semantic search using OpenAI embeddings
  - get_search_suggestions - Example queries for semantic search
  - get_search_status - Check indexing status and progress
  - list_available_concepts - Dynamically discovered concept list
  - get_concept_docs - Get documentation for specific concepts (auto-discovered)
  - get_code_examples - Extract code examples with semantic relevance
  - get_doc_file - Retrieve full documentation files
  - refresh_search_index - Force refresh of search index
- Port: 10001 (standalone)
- Features:
- AI-powered search: Semantic search using OpenAI's text-embedding-3-small model
- Natural language queries: Ask questions like "How do I create an agent?"
- No timeouts: Background embedding generation with status tracking
- Auto-discovery: Dynamic concept mapping using pathlib
- Persistent embeddings: Fast server restarts with cached vectors
- Smart chunking: Documents split into ~500 token chunks for granular search
- Once-per-day indexing: Automatic refresh every 24 hours
- Category filtering: Search within specific documentation categories
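Under the hood, this style of semantic search boils down to embedding the query and ranking document chunks by cosine similarity. A toy, stdlib-only sketch of the ranking step follows; the real server uses OpenAI text-embedding-3-small vectors (1536 dimensions), and the 3-d vectors here are stand-ins for illustration only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, chunks):
    """Return (score, text) pairs sorted best-first.
    chunks is a list of (embedding, text) tuples."""
    scored = [(cosine(query_vec, emb), text) for emb, text in chunks]
    return sorted(scored, key=lambda s: s[0], reverse=True)

# Toy 3-d "embeddings" standing in for real model output.
chunks = [
    ([1.0, 0.0, 0.0], "Agents define role, goal, and backstory."),
    ([0.0, 1.0, 0.0], "Tasks are assigned to agents in a Crew."),
]
best = rank([0.9, 0.1, 0.0], chunks)[0]
print(best[1])  # the agent chunk scores highest
```

Persisting the chunk embeddings to disk (as this server does) means only the query needs embedding at search time.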
3. Echo Server (Example)
- Tools:
  - echo - Echo back messages
  - reverse_echo - Echo messages in reverse
- Port: 9001 (standalone)
4. Math Server (Example)
- Tools:
  - add - Add two numbers
  - multiply - Multiply two numbers
  - calculate - Evaluate mathematical expressions
- Port: 9002 (standalone)
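A calculate tool that evaluates arbitrary expressions is a classic injection risk if built on eval(). One safe approach, shown below as a hypothetical sketch (not the repo's actual implementation), walks the parsed AST and permits only arithmetic operators:

```python
import ast
import operator

# Whitelist of allowed AST operator nodes -> functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Evaluate a pure-arithmetic expression; reject anything else."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return ev(ast.parse(expression, mode="eval"))

print(calculate("2 + 3 * 4"))  # 14
print(calculate("-(2 ** 3)"))  # -8
```

Function calls, attribute access, and names all fall through to the ValueError branch, so inputs like `__import__('os')` are rejected.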
Development
Local Development Setup
# Clone and setup
git clone <repository>
cd CrewAIDocsMCP
# Install dependencies
uv sync
# Run single server
uv run server.py
# Or run multi-server FastAPI app
PYTHONPATH=. uv run python fast_api/server.py
Creating New Tools
- Create a new MCP server file:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-tools")

@mcp.tool()
async def my_custom_tool(param: str) -> dict:
    """Description of what this tool does"""
    # Tool implementation
    return {"result": "processed"}

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
- Test with MCP Inspector:
npx @modelcontextprotocol/inspector http://localhost:10000/mcp/
- Add to FastAPI app (optional):
# In fast_api/server.py
from my_tools_server import mcp as my_tools_mcp
# Mount the server
app.mount("/my-tools", my_tools_mcp.get_app(with_lifespan=False))
Managing Dependencies
# Add a new dependency
uv add package-name
# Add development dependency
uv add --dev pytest
# Update all dependencies
uv sync --upgrade
# Lock dependencies
uv lock
Environment Variables
Create a .env
file in the project root:
TAVILY_API_KEY=your_tavily_api_key
OPENAI_API_KEY=your_openai_api_key
PORT=10000
HOST=0.0.0.0
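If you'd rather not depend on python-dotenv, a minimal stdlib loader for this file can look like the sketch below. It is an assumption-laden simplification: it ignores quoting, export prefixes, and multi-line values.

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments allowed.
    Existing environment variables are never overwritten."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage: call load_env() near the top of server.py,
# before any os.getenv(...) lookups.
```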
Resources
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.