mcp-ragdocs
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
The RAG Documentation MCP Server is designed to enhance AI responses by integrating relevant documentation context through vector search. It allows AI assistants to access and process documentation efficiently, making them more context-aware and capable of providing accurate information. The server supports various tools for managing documentation sources, processing queues, and embedding configurations. It is particularly useful for developers looking to build documentation-aware AI systems, implement semantic search capabilities, and augment existing knowledge bases. The server can be deployed using Docker Compose, and it includes a web interface for real-time monitoring and management. The system prioritizes local processing with Ollama as the default embedding provider, while OpenAI serves as a reliable fallback option.
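The "Ollama first, OpenAI as fallback" provider strategy described above can be sketched as a small helper. This is a minimal illustration, not the server's actual implementation: the function takes two embedder callables (standing in for a local Ollama client and a cloud OpenAI client) and returns the first result that succeeds.

```python
from typing import Callable, List

def embed_with_fallback(
    text: str,
    primary: Callable[[str], List[float]],
    fallback: Callable[[str], List[float]],
) -> List[float]:
    """Try the primary embedder (e.g. a local Ollama model) first;
    on any failure, fall back to the secondary provider (e.g. OpenAI)."""
    try:
        return primary(text)
    except Exception:
        return fallback(text)
```

In practice the two callables would wrap real API clients; keeping the fallback behind the same signature means the rest of the pipeline never needs to know which provider produced an embedding.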
Features
- Vector search for documentation retrieval
- Documentation source management
- Queue processing and management
- Local and cloud-based embedding support
- Web interface for real-time monitoring
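The vector search feature listed above boils down to ranking stored document chunks by similarity to a query embedding. Here is a minimal, dependency-free sketch using cosine similarity over an in-memory index; a production setup would use a vector database rather than a Python list, and the index layout shown is an assumption for illustration.

```python
import math
from typing import List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(
    query_vec: List[float],
    index: List[Tuple[str, List[float]]],
    top_k: int = 3,
) -> List[Tuple[float, str]]:
    """Return the top_k (score, chunk_text) pairs for a query embedding.

    index is a list of (chunk_text, embedding) pairs."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

The same shape (embed the query, score it against stored embeddings, return the best chunks with their sources) underlies the `search_documentation` tool below.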
Tools
search_documentation
Search documentation using vector similarity and return relevant excerpts along with their source information
list_sources
List all available document sources and their metadata
extract_urls
Extract URLs from text and check whether each already exists in the documentation sources
remove_documentation
Delete documents from specific sources
list_queue
List all items in the processing queue and their status
run_queue
Process all items in the queue and automatically add new documents to vector storage
clear_queue
Clear all items in the processing queue
add_documentation
Add new documents to the processing queue
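As with any MCP server, these tools are invoked via JSON-RPC `tools/call` requests from the client. The sketch below builds such a request for `search_documentation`; the `query` argument name is an assumption for illustration, since the tool's exact parameter schema is not shown here.

```python
import json

def make_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, as an MCP client would
    send it to the server over stdio or another transport."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical invocation of the search tool (argument name assumed):
request = make_tool_call("search_documentation", {"query": "how to configure embeddings"})
```

The server responds with the tool's result in the matching JSON-RPC response; an AI assistant then folds the returned documentation excerpts into its answer.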