AI-Driven Research Assistant MCP Server
Overview
This project is an AI-powered research assistant platform that aggregates, summarizes, synthesizes, and answers questions about academic literature from arXiv, PubMed, and Semantic Scholar. It provides both a robust backend API (FastAPI) and a modern frontend (Streamlit) for searching, summarizing, synthesizing, Q&A, and citing academic papers.
Features
- Unified academic search across arXiv, PubMed, and Semantic Scholar
- Summarization of academic texts using state-of-the-art models
- Synthesis of multiple paper abstracts into a single summary
- Q&A over a set of papers using Retrieval-Augmented Generation (RAG)
- Citation generation in APA style
- Redis caching for efficient repeated queries
- Professional logging and error handling
- Streamlit frontend: User-friendly web app for search, citation, summarization, and Q&A
- Configurable backend URL in the frontend
- Backend warnings (e.g., rate limits) are suppressed in the UI for a cleaner experience
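The Redis-caching feature works best when a repeated query always maps to the same key. The key scheme below is a sketch of one plausible approach, not the server's actual implementation:

```python
import hashlib
import json

def cache_key(endpoint: str, payload: dict) -> str:
    """Deterministic cache key for a repeated query (illustrative scheme)."""
    # Canonicalize the payload so dict insertion order does not change the key.
    blob = json.dumps(payload, sort_keys=True)
    return f"{endpoint}:{hashlib.sha256(blob.encode()).hexdigest()}"
```

With a key like this, the backend can check Redis before querying arXiv, PubMed, or Semantic Scholar again for the same request.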
Quickstart
1. Clone the Repository
git clone <your-repo-url>
cd research-mcp-server
2. Install Dependencies
It is recommended to use a virtual environment:
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
3. Environment Variables
Create a .env file in the project root:
SEMANTIC_SCHOLAR_API_KEY=your_actual_api_key_here  # Optional, for higher rate limits
REDIS_URL=redis://localhost:6379  # Or your Redis instance URL
- The server will work without a Semantic Scholar API key, but at a lower rate limit.
- .env is loaded automatically using python-dotenv.
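A minimal sketch of how the backend might read these variables once python-dotenv has populated the environment (variable names are the ones above; the exact code in main.py may differ):

```python
import os

# After load_dotenv() from python-dotenv has run, values from .env are
# available through the normal process environment.
SEMANTIC_SCHOLAR_API_KEY = os.getenv("SEMANTIC_SCHOLAR_API_KEY")      # may be None
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")          # falls back to local Redis
```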
4. Start Redis (if not already running)
Make sure you have a Redis server running locally, or update REDIS_URL in your .env.
5. Run the Backend API
uvicorn main:app --reload
6. Run the Streamlit Frontend
streamlit run app.py
- Open your browser to http://localhost:8501.
- Enter your backend API URL (e.g., http://localhost:8000 for local development).
Streamlit App Features
- Configurable Backend URL: Easily switch between local and deployed backends.
- Search: Enter a research query and view results from all sources.
- Cite: Generate APA-style citations for any paper.
- Summarize: Get concise summaries of paper abstracts.
- Q&A: Ask natural language questions about the search results using RAG.
- Clean UI: Backend warnings (e.g., rate limits) are not shown to users; only critical errors are displayed.
API Endpoints
POST /search
Search for academic papers across all sources.
- Request Body:
{ "query": "deep learning", "max_results": 5 }
- Response:
{ "results": [ ... ], "errors": [ ... ] }
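Assuming the backend is running locally on port 8000, the /search endpoint can be called from Python with only the standard library. The request is shown without being sent; uncomment the urlopen lines once the server is up:

```python
import json
import urllib.request

payload = {"query": "deep learning", "max_results": 5}
req = urllib.request.Request(
    "http://localhost:8000/search",               # local backend URL assumed
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:       # requires the backend to be running
#     results = json.load(resp)["results"]
```

The same pattern works for every POST endpoint below; only the path and payload change.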
POST /summarize
Summarize a given text.
- Request Body:
{ "text": "...", "max_length": 150 }
- Response:
{ "summary": "..." }
POST /search_and_summarize
Search and summarize abstracts from all sources.
- Request Body:
{ "query": "transformer models", "max_results": 3, "summary_max_length": 50, "summary_min_length": 25 }
- Response:
{ "results": [ { "source": "arXiv", "title": "...", "original_abstract": "...", "summary": "..." } ], "errors": [ ... ] }
POST /synthesize
Synthesize information from multiple papers into a single summary.
- Request Body:
{ "papers": [ { "title": "...", "authors": ["..."], "abstract": "...", ... } ] }
- Response:
{ "synthesis": "..." }
POST /qa
Q&A over a set of papers using Retrieval-Augmented Generation (RAG).
- Request Body:
{ "papers": [ { "title": "Paper 1", "authors": ["Alice"], "abstract": "This study explores...", "source": "arXiv", "publication_date": "2023-01-01", "url": "..." }, { "title": "Paper 2", "authors": ["Bob"], "abstract": "The main limitation was...", "source": "PubMed", "publication_date": "2022-12-01", "url": "..." } ], "question": "What was the primary limitation mentioned in these studies?" }
- Response:
{ "answer": "The main limitation mentioned was..." }
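The retrieval step of a RAG pipeline like this one can be illustrated with a toy ranker that scores each abstract by word overlap with the question. The real endpoint presumably uses embeddings; this is only a sketch:

```python
def retrieve(papers: list[dict], question: str, k: int = 2) -> list[dict]:
    """Rank papers by naive word overlap between question and abstract (toy retriever)."""
    q_words = set(question.lower().split())
    def score(paper: dict) -> int:
        return len(q_words & set(paper["abstract"].lower().split()))
    return sorted(papers, key=score, reverse=True)[:k]
```

The top-k abstracts would then be passed to the language model as context for answering the question.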
POST /cite
Generate an APA-style citation for a paper.
- Request Body:
{ "title": "...", "authors": ["..."], "publication_date": "2023-01-01", "source": "arXiv", "abstract": "...", "url": "..." }
- Response:
{ "citation": "..." }
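For illustration, an APA-style citation can be assembled from the same fields the request body uses. This sketch is not the server's exact formatter:

```python
def apa_citation(paper: dict) -> str:
    """Build a rough APA-style citation from the /cite request fields (illustrative)."""
    authors = ", ".join(paper["authors"])
    year = paper["publication_date"][:4]  # assumes an ISO YYYY-MM-DD date
    return f"{authors} ({year}). {paper['title']}. {paper['source']}. {paper['url']}"
```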
Environment Variables
- SEMANTIC_SCHOLAR_API_KEY (optional): For higher rate limits on the Semantic Scholar API.
- REDIS_URL: Redis connection string (default: redis://localhost:6379).
Logging
- Logs are output to the console with timestamps, log level, and message.
- Info, warning, and error logs are included for observability and debugging.
Contribution Guidelines
- Fork the repository and create a feature branch.
- Write clear, well-documented code.
- Submit a pull request with a description of your changes.
- Please do not commit secrets or .env files.
License
MIT License (or specify your license here)
MCP (Model-Context Protocol) Tool Integration
This project is now fully compliant with the Model-Context Protocol (MCP), making it directly usable as a tool by AI hosts like Claude Desktop and other MCP-compatible systems.
How to Use as an MCP Tool
1. Install dependencies (if not already done):
pip install -r requirements.txt
2. Start the MCP server:
python mcp_server.py
- By default, this uses the stdio transport, which is required for Claude Desktop integration.
- The server exposes the following tools: search, summarize, search_and_summarize, synthesize, cite, and qa.
3. Connect to Claude Desktop or another MCP host:
- In Claude Desktop, add a new MCP tool in your config (e.g., claude_desktop_config.json) pointing to your mcp_server.py script.
- Example config snippet:
{ "mcpServers": { "research-assistant": { "command": "/usr/bin/python3", "args": ["/absolute/path/to/mcp_server.py"] } } }
- Restart Claude Desktop and enable the tool from the hammer icon in the chat UI.
4. Usage:
- Claude (or any MCP host) will now be able to call your research tools directly, with full support for tool invocation, argument passing, and structured results.
Dual API and MCP Support
- You can continue to use the FastAPI server (main.py) for HTTP API access and the Streamlit frontend (app.py).
- The MCP server (mcp_server.py) is for direct tool integration with AI hosts.