
LlamaIndex Parser MCP Server

This project provides an MCP server that exposes LlamaIndex's parsing capabilities through a simple interface. It is built with the FastMCP framework.

The server exposes a tool, load_markdown_data, that loads and parses Markdown files from a specified directory and returns the parsed nodes as a JSON string.
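
For illustration, a tool with this behavior could be defined along the following lines with FastMCP and LlamaIndex's Markdown node parser. This is a sketch, not the project's actual implementation; in particular, the directory_path parameter name is an assumption:

from fastmcp import FastMCP
from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import MarkdownNodeParser
import json

mcp = FastMCP("llamaindex-parser")

@mcp.tool()
def load_markdown_data(directory_path: str) -> str:
    """Parse the .md files under directory_path and return the nodes as a JSON string."""
    # Read only markdown files from the given directory.
    documents = SimpleDirectoryReader(input_dir=directory_path, required_exts=[".md"]).load_data()
    # Split the documents into nodes along their markdown structure (headers, sections).
    nodes = MarkdownNodeParser().get_nodes_from_documents(documents)
    # Serialize each node and return the list as a JSON string.
    return json.dumps([node.to_dict() for node in nodes])

if __name__ == "__main__":
    mcp.run()  # FastMCP defaults to the stdio transport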

Installation

To install the project and its dependencies, run the following commands:

uv sync
pip install -e .

Usage

To run the server, execute the following command from the root of the repository:

python src/mcp_llamaindex/server.py

The server will start and listen for requests on stdio.
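
As a sketch of how a client could exercise the tool over stdio, using the official mcp Python SDK (the "./docs" path and the directory_path argument name are assumptions for illustration):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and connect to it over stdio.
    params = StdioServerParameters(command="python", args=["src/mcp_llamaindex/server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("load_markdown_data", {"directory_path": "./docs"})
            print(result.content)

asyncio.run(main())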

Gradio Interface

This project also includes a Gradio interface for interacting with the RAG pipeline.

Running the Gradio App

To start a local web server and open the Gradio interface in your browser, run:

gradio ./src/mcp_llamaindex/app.py
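
The app itself is not shown here, but a minimal Gradio front end for a RAG pipeline typically looks something like the sketch below; answer_question is a hypothetical stand-in for the project's actual query function:

import gradio as gr

def answer_question(question: str) -> str:
    # Hypothetical stand-in: the real app would run the question through the RAG pipeline.
    return f"(answer to: {question})"

# The gradio CLI's reload mode looks for a module-level variable named "demo".
demo = gr.Interface(fn=answer_question, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()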

Dev mode

To run the server in dev mode, which launches it alongside the MCP Inspector for interactive testing:

fastmcp dev src/mcp_llamaindex/mcp_server.py

Configuration

The application's configuration is managed using Pydantic and environment files. The environment can be set to either dev or prod.

To select an environment, set the ENV_TYPE environment variable:

export ENV_TYPE=prod

The application will then load its settings from the corresponding .env file (.prod.env in this case).

Note: The .env files are not committed to version control. You should create your own .dev.env and .prod.env files based on the .example.env file.
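
A settings class along the following lines would implement this behavior. This is a sketch assuming pydantic-settings; the project's actual field names are defined in .example.env, and data_dir here is purely hypothetical:

import os

from pydantic_settings import BaseSettings, SettingsConfigDict

# Select the env file from ENV_TYPE, defaulting to the dev environment.
ENV_TYPE = os.getenv("ENV_TYPE", "dev")

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=f".{ENV_TYPE}.env")

    # Hypothetical setting, for illustration only.
    data_dir: str = "./data"

settings = Settings()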

Contributing

We welcome contributions to this project! Please read our contributing guidelines to learn how you can contribute.