MCP-Server


The Model Context Protocol (MCP) server extends AI assistants' capabilities by allowing them to access and retrieve information from custom document collections, enhancing their knowledge base with specific content.

The MCP server for document processing addresses a key limitation of large language models: stale training data. It gives AI assistants access to up-to-date framework documentation, private codebases, and technical specifications, effectively extending their knowledge base. The server processes Markdown and text files, generates embeddings, stores them in a vector database, and exposes that database to AI assistants through MCP tools for search and context retrieval.
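The process-embed-store-query pipeline described above can be sketched as follows. This is a minimal illustration, not the server's actual code: it uses a toy bag-of-words "embedding" in place of a real embedding model, and an in-memory list in place of a vector database; all names are hypothetical.

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real server would call an
    # embedding model (e.g. a local sentence-transformer) here.
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class VectorStore:
    """In-memory stand-in for the server's vector database."""

    def __init__(self):
        self.entries = []  # list of (chunk_text, embedding) pairs

    def add_document(self, text: str, chunk_size: int = 40):
        # Split the document into word-based chunks before embedding,
        # analogous to how the server processes Markdown/text files.
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.entries.append((chunk, embed(chunk)))

    def search(self, query: str, top_k: int = 3):
        # Rank stored chunks by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:top_k]]


store = VectorStore()
store.add_document("The framework's new router API replaces the legacy routing table.")
store.add_document("Deployment requires a vector database and an embedding model.")
print(store.search("router API", top_k=1)[0])
```

An assistant-facing tool like search_content would wrap exactly this kind of query, returning the top-ranked chunks as context.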

Features

  • Processes and retrieves information from custom document collections.
  • Generates embeddings and stores them in a vector database.
  • Supports multiple embedding models, including free local models.
  • Operates in full processing and context retrieval modes.
  • Exposes tools for reading files, searching content, and retrieving context.

Tools

  1. read_md_files

    Process and retrieve Markdown/text files. Parameters: file_path (optional path to a specific file or directory)

  2. search_content

    Search across processed content. Parameters: query (required search query)

  3. get_context

    Retrieve contextual information. Parameters: query (required context query), window_size (optional number of context items to retrieve)

  4. project_structure

    Provide project structure information. No parameters.

  5. suggest_implementation

    Generate implementation suggestions. Parameters: description (required description of what to implement)
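A client invokes any of the tools above through the Model Context Protocol's JSON-RPC interface. The sketch below builds a tools/call request for search_content; the method name follows the MCP specification, but the exact wire format and transport depend on the client SDK in use, and the query value is illustrative.

```python
import json

# JSON-RPC 2.0 request invoking the search_content tool, as an MCP
# client would send it; "router API" is an example query.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_content",
        "arguments": {"query": "router API"},
    },
}
print(json.dumps(request, indent=2))
```

The other tools follow the same shape, differing only in the "name" field and the "arguments" object (e.g. get_context also accepts an optional window_size).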