
🧠 MCP Server – DocuMind Backend AI & Embedding Service

This repository powers the core AI functionalities of DocuMind, handling document parsing, vector embeddings, and Retrieval-Augmented Generation (RAG) via OpenAI and Pinecone.

It acts as a microservice responsible for ingesting documents, embedding their content, and answering user queries with responses grounded in the context of those documents.
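To make the query flow concrete, here is a minimal sketch of what the retrieval-and-answer side of such a service could look like, using Express together with the official OpenAI and Pinecone SDKs. The /query route, model names, topK value, and the text metadata field are illustrative assumptions rather than details taken from this repository; only the environment variable names come from the .env section below.

```ts
import express from "express";
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index(process.env.PINECONE_INDEX!);

const app = express();
app.use(express.json());

// Hypothetical RAG endpoint: embed the question, retrieve the closest
// document chunks from Pinecone, then answer from that context.
app.post("/query", async (req, res) => {
  const { question } = req.body as { question: string };

  // 1. Embed the user question (the model name is an assumption).
  const embedded = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });

  // 2. Retrieve the top-matching chunks from the vector index.
  const results = await index.query({
    vector: embedded.data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });
  const context = results.matches
    .map((m) => m.metadata?.text ?? "")
    .join("\n---\n");

  // 3. Ask the chat model to answer using only the retrieved context.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });

  res.json({ answer: completion.choices[0].message.content });
});

app.listen(Number(process.env.PORT) || 5002);
```

A client would then POST a JSON body such as { "question": "..." } to /query, for example against the live deployment listed below.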

๐ŸŒ Live Deployment MCP Server (RAG API): https://mcp-server-thci.onrender.com

โš™๏ธ How to Run Locally

  1. Clone and Install

```bash
git clone https://github.com/MM-27-dev/MCP_Server.git
cd MCP_Server
npm install
```

  2. Configure .env

Create a .env file using the provided .env.example and fill in the values below (a sketch of how these variables might be loaded follows the steps):

```env
PORT=5002

# OpenAI
OPENAI_API_KEY=your_openai_key

# Pinecone
PINECONE_API_KEY=your_pinecone_key
PINECONE_ENVIRONMENT=your_pinecone_env
PINECONE_INDEX=your_pinecone_index

# Redis
REDIS_HOST=localhost
REDIS_PORT=6379

# Queue
RAG_BUILDER_QUEUE_NAME=rag-builder-queue
```

  3. Start the Server

```bash
npm run dev
```

It will run both processes: the API server (src/server.ts) and the background worker (mcpServer.ts, described below).
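Both entry points need the variables from step 2 at startup. Below is a minimal config-loading sketch, assuming the project uses dotenv (a common choice for Node services); the config module and the exact validation are illustrative, not taken from the repo.

```ts
import "dotenv/config"; // load .env into process.env at startup

// Fail fast if a required variable is missing, so misconfiguration
// surfaces at boot rather than deep inside a request or ingestion job.
const required = [
  "OPENAI_API_KEY",
  "PINECONE_API_KEY",
  "PINECONE_ENVIRONMENT",
  "PINECONE_INDEX",
  "REDIS_HOST",
  "REDIS_PORT",
];

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

// Typed view of the validated configuration, with the defaults
// suggested by .env.example.
export const config = {
  port: Number(process.env.PORT ?? 5002),
  redis: {
    host: process.env.REDIS_HOST!,
    port: Number(process.env.REDIS_PORT),
  },
  queueName: process.env.RAG_BUILDER_QUEUE_NAME ?? "rag-builder-queue",
};
```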

🧾 What is mcpServer.ts?

The mcpServer.ts file is the background worker entry point. It performs the following tasks:

- Connects to Redis and initializes the RAG Builder Queue
- Listens for ingestion jobs added when a user uploads a document or connects Google Drive
- Extracts text from files, chunks it, and creates embeddings using OpenAI
- Stores the resulting vectors in Pinecone for retrieval during AI responses

This allows the ingestion process to run asynchronously in the background, keeping the API responsive and scalable.
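As a rough illustration of that pipeline, here is a hedged sketch of what such a worker could look like. It assumes BullMQ as the Redis-backed queue library (the repo names Redis and a queue, but not the library), a job payload of pre-extracted text, and a naive fixed-size chunker; the embedding model and metadata shape are likewise illustrative.

```ts
import { Worker } from "bullmq";
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! })
  .index(process.env.PINECONE_INDEX!);

// Naive fixed-size chunking; the real extraction/chunking logic will differ.
function chunkText(text: string, size = 1000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Background worker: consumes ingestion jobs from the RAG Builder Queue.
const worker = new Worker(
  process.env.RAG_BUILDER_QUEUE_NAME ?? "rag-builder-queue",
  async (job) => {
    // Assumed payload shape: a document id plus its already-extracted text.
    const { documentId, text } = job.data as { documentId: string; text: string };

    const chunks = chunkText(text);

    // Embed every chunk in one OpenAI call (model name is an assumption).
    const embeddings = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: chunks,
    });

    // Upsert the vectors into Pinecone, keeping each chunk's text as metadata
    // so the API server can rebuild context at query time.
    await index.upsert(
      embeddings.data.map((e, i) => ({
        id: `${documentId}-${i}`,
        values: e.embedding,
        metadata: { documentId, text: chunks[i] },
      }))
    );
  },
  {
    connection: {
      host: process.env.REDIS_HOST!,
      port: Number(process.env.REDIS_PORT),
    },
  }
);

worker.on("completed", (job) => console.log(`Ingestion job ${job.id} done`));
```

Because the worker runs as a separate process from the API server, slow embedding jobs never block HTTP requests, which is the scalability property described above.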