LlamaIndex Documentation MCP Server
A Model Context Protocol (MCP) server that fetches and serves LlamaIndex documentation for VS Code Copilot integration. This server now runs as an HTTP service (using FastAPI) and provides searchable access to LlamaIndex documentation.
Features
- 🔍 Search through LlamaIndex documentation
- 📚 Fetch specific documentation resources
- 🐳 Containerized for easy deployment
- 🔧 VS Code Copilot integration
- ⚡ Async HTTP client for fast fetching
- 💾 Content caching for improved performance
- 🌐 HTTP API for multi-client (multi-VS Code) support
Table of Contents
- Prerequisites
- Quick Start
- Healthcheck
- VS Code Integration
- MCP Server Types
- API Usage
- Environment Variables
- Local Development
- Troubleshooting
- Architecture
- MCP Protocol Guide
- Contributing
- License
- Support
Prerequisites
- Docker and Docker Compose installed
- VS Code with Copilot extension
- Python 3.11+ (for local development)
Quick Start
1. Clone the repository (or create the project structure):

   ```bash
   git clone <repo-url>
   cd llamaindex-mcp-server
   ```

2. Build and run with Docker Compose:

   ```bash
   docker-compose up -d --build
   ```

After startup:

- The service will be available at `http://localhost:8000`.
- The container will be named `mcp-server`.
- The Docker image will be named `liteobject/llamaindex-mcp-server`.
Healthcheck
The container exposes a healthcheck endpoint:

```
GET http://localhost:8000/rpc
```

Response:

```json
{"status": "ok", "method": "GET /rpc healthcheck"}
```
VS Code Integration
To use this MCP server with VS Code Copilot or compatible extensions, add the following to your VS Code `settings.json`:

```json
"mcp": {
  "inputs": [],
  "servers": {
    "llamaindex-docs": {
      "type": "http",
      "url": "http://localhost:8000/rpc"
    }
  }
}
```

- Make sure your server is running and accessible at the specified URL.
- You can add this block to your global or workspace `settings.json`.
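For context, the same block embedded in a complete `settings.json` might look like the following (a minimal sketch; your file will usually contain other settings alongside it):

```json
{
  "mcp": {
    "inputs": [],
    "servers": {
      "llamaindex-docs": {
        "type": "http",
        "url": "http://localhost:8000/rpc"
      }
    }
  }
}
```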
MCP Server Types
When configuring MCP servers in VS Code or other clients, you may encounter different server types. Here is a brief explanation of each:
- http: Communicates with the MCP server over HTTP(S) using a URL (e.g., `http://localhost:8000/rpc`). This is the type used by this project.
- stdio: Communicates with the MCP server via standard input/output (stdin/stdout). Typically used for local processes started by the client.
- sse: Uses Server-Sent Events (SSE) over HTTP for streaming responses from the server. Useful for real-time updates or long-running operations.
- websocket: Uses a WebSocket connection for bidirectional communication between client and server.
For this server, use the `http` type as shown in the VS Code Integration section above.
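For contrast, a stdio server entry in the same `servers` block might look like this (a hypothetical example with a made-up server name and command; this project uses `http`):

```json
"my-local-server": {
  "type": "stdio",
  "command": "python",
  "args": ["-m", "my_mcp_server"]
}
```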
API Usage
Once configured, the MCP server provides the following tools to VS Code Copilot via HTTP:
Tools Available
- search_llamaindex_docs: Search through LlamaIndex documentation
  - Parameters: `query` (string), `limit` (integer, optional)
- get_llamaindex_resource: Get full content of a specific documentation resource
  - Parameters: `uri` (string)
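Assuming the server follows the standard MCP `tools/call` method for tool invocation, a search request might look like this (the query values are illustrative):

```bash
curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "search_llamaindex_docs", "arguments": {"query": "vector store index", "limit": 5}}}'
```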
Resources Available
The server automatically discovers and provides access to:
- Getting Started guides
- Module guides (loading, indexing, querying)
- Agent documentation
- API references
- Examples and tutorials
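To see what the server has discovered, you can list its resources, assuming it implements the standard MCP `resources/list` method:

```bash
curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 3, "method": "resources/list"}'
```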
Example: JSON-RPC Request
Send a JSON-RPC 2.0 request to the server:
```bash
curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "initialize"}'
```
Example response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "resources": {"subscribe": true, "listChanged": true},
      "tools": {"listChanged": true}
    },
    "serverInfo": {
      "name": "llamaindex-docs-server",
      "version": "1.0.0"
    }
  }
}
```
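After `initialize`, a client typically discovers the available tools; assuming the standard MCP method is supported:

```bash
curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 4, "method": "tools/list"}'
```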
Environment Variables
```
PYTHONUNBUFFERED=1
PYTHONPATH=/app
UVICORN_LOG_LEVEL=warning
```
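A minimal `docker-compose.yml` sketch that wires these up, using the image, container name, and port documented above (an illustration, not necessarily the repository's actual file):

```yaml
services:
  mcp-server:
    image: liteobject/llamaindex-mcp-server
    container_name: mcp-server
    build: .
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONPATH=/app
      - UVICORN_LOG_LEVEL=warning
```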
Local Development
To run locally without Docker:
```bash
pip install -r requirements.txt
uvicorn app.main:app --host 0.0.0.0 --port 8000
```
Troubleshooting
Common Issues
- Container fails to start:
  - Check that Docker is running
  - Verify the image was built successfully
  - Check the container logs: `docker logs mcp-server`
- VS Code doesn't recognize the MCP server:
  - Ensure the configuration is in the correct `settings.json`
  - Restart VS Code completely
  - Check the VS Code developer console for errors
- Documentation fetching fails:
  - Check internet connectivity from the container (see the example below)
  - Verify the LlamaIndex docs are accessible
  - Check the container logs for HTTP errors
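For the connectivity check, you can run a one-liner inside the container, assuming the image includes Python (it runs a FastAPI app, so it should). The docs URL here is illustrative:

```bash
docker exec mcp-server python -c "import urllib.request; print(urllib.request.urlopen('https://docs.llamaindex.ai').status)"
```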
Architecture
```
VS Code Copilot
      ↓
MCP Protocol (HTTP)
      ↓
Docker Container / Python MCP Server (FastAPI)
      ↓
LlamaIndex Docs API
```
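To make the diagram concrete, here is a minimal sketch of how the `/rpc` endpoint of such a FastAPI server could be structured. This is an illustration consistent with the examples above, not the project's actual code:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/rpc")
async def healthcheck():
    # Matches the documented healthcheck response.
    return {"status": "ok", "method": "GET /rpc healthcheck"}

@app.post("/rpc")
async def rpc(request: Request):
    # Minimal JSON-RPC 2.0 dispatch: only "initialize" is sketched here;
    # a real server would also handle tools/list, tools/call, resources/*, etc.
    body = await request.json()
    if body.get("method") == "initialize":
        return {
            "jsonrpc": "2.0",
            "id": body.get("id"),
            "result": {
                "protocolVersion": "2024-11-05",
                "capabilities": {
                    "resources": {"subscribe": True, "listChanged": True},
                    "tools": {"listChanged": True},
                },
                "serverInfo": {"name": "llamaindex-docs-server", "version": "1.0.0"},
            },
        }
    return {
        "jsonrpc": "2.0",
        "id": body.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    }
```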
MCP Protocol Guide
For a comprehensive guide to the Model Context Protocol (MCP), including protocol architecture, message formats, implementation patterns, and best practices, see the official MCP specification (modelcontextprotocol.io).
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Test with Docker
- Submit a pull request
License
MIT License - feel free to use and modify as needed.
Support
For issues related to:
- MCP Protocol: Check the MCP specification
- VS Code Integration: Check VS Code Copilot documentation
- LlamaIndex Docs: Check LlamaIndex documentation
- This Server: Create an issue in the repository