DISCLAIMER
This repo is an experiment in agent coding. 95% of the code was written by LLMs.
Open Deep Research MCP Server
An AI-powered research assistant that performs deep, iterative research on any topic. It combines search engines, web scraping, and AI to explore topics in depth and generate comprehensive reports. Available as a Model Context Protocol (MCP) tool or standalone CLI. Look at exampleout.md to see what a report might look like.
Quick Start
- Clone and install:
git clone https://github.com/Ozamatash/deep-research
cd deep-research
npm install
- Set up environment in .env.local:
# Copy the example environment file
cp .env.example .env.local
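Then fill in your keys. At minimum you need a key for the search backend and for at least one model provider; the variable names below are illustrative, and .env.example has the authoritative list:
# Illustrative minimum; confirm the exact names against .env.example
FIRECRAWL_KEY="your_firecrawl_api_key"
OPENAI_API_KEY="your_openai_api_key"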
- Build:
# Build the server
npm run build
- Run the CLI version:
npm run start
- Test MCP Server with Claude Desktop:
Follow the guide at the bottom of the server quickstart to add the server to Claude Desktop:
https://modelcontextprotocol.io/quickstart/server
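A minimal sketch of the Claude Desktop entry, assuming the build output lands at dist/index.js (adjust the placeholder path to your clone; the quickstart above shows where claude_desktop_config.json lives):
{
  "mcpServers": {
    "deep-research": {
      "command": "node",
      "args": ["/absolute/path/to/deep-research/dist/index.js"]
    }
  }
}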
For remote servers, use the Streamable HTTP transport:
npm run start:http
Server runs on http://localhost:3000/mcp without session management.
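You can smoke-test the endpoint with curl. This is a sketch: the request body is a standard MCP initialize message, and Streamable HTTP expects an Accept header that allows both JSON and SSE responses:
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"0.0.0"}}}'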
Features
- Performs deep, iterative research by generating targeted search queries
- Controls research scope with depth (how deep) and breadth (how wide) parameters; see the example call after this list
- Evaluates source reliability with detailed scoring (0-1) and reasoning
- Prioritizes high-reliability sources (≥0.7) and verifies less reliable information
- Generates follow-up questions to better understand research needs
- Produces detailed markdown reports with findings, sources, and reliability assessments
- Available as a Model Context Protocol (MCP) tool for AI agents
- For now, the MCP version doesn't ask follow-up questions
- Natural-language source preferences (avoid listicles, forums, affiliate reviews, specific domains)
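For a feel of the parameters, a tools/call to the research tool might carry arguments like these. The tool and argument names are assumptions for illustration only; list the server's tools to see the real schema:
{
  "name": "deep-research",
  "arguments": {
    "query": "State of solid-state battery manufacturing",
    "depth": 2,
    "breadth": 4,
    "sourcePreferences": "avoid listicles, forums, and affiliate reviews"
  }
}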
Model Selection (OpenAI, Anthropic, Google, xAI)
Pick a provider and model per run.
- CLI: you will be prompted for provider and model. Example: openai+gpt-5.2.
- MCP/HTTP: pass model, e.g. openai:gpt-5.2 (also accepts openai/gpt-5.2); see the parsing sketch below.
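A minimal TypeScript sketch of how such a provider-qualified model string can be split. This is illustrative only, not the server's actual parser:
// Split "openai:gpt-5.2" or "openai/gpt-5.2" into provider and model.
// Illustrative helper, not the repository's implementation.
function parseModelSpec(spec: string): { provider: string; model: string } {
  const idx = spec.search(/[:/]/); // first ":" or "/" separates the two parts
  if (idx === -1) throw new Error(`Expected provider:model, got "${spec}"`);
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}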
Set the corresponding API key in .env.local:
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GOOGLE_API_KEY=...
XAI_API_KEY=...
Optionally set default models per provider:
OPENAI_MODEL=gpt-5.2
ANTHROPIC_MODEL=claude-opus-4-5
GOOGLE_MODEL=gemini-3-pro-preview
XAI_MODEL=grok-4-1-fast-reasoning
If you use a non-default OpenAI endpoint:
OPENAI_ENDPOINT=https://api.openai.com/v1
How It Works
flowchart TB
subgraph Input
Q[User Query]
B[Breadth Parameter]
D[Depth Parameter]
FQ[Feedback Questions]
end
subgraph Research[Deep Research]
direction TB
SQ[Generate SERP Queries]
SR[Search]
RE[Source Reliability Evaluation]
PR[Process Results]
end
subgraph Results[Research Output]
direction TB
L((Learnings with
Reliability Scores))
SM((Source Metadata))
ND((Next Directions:
Prior Goals,
New Questions))
end
%% Main Flow
Q & FQ --> CQ[Combined Query]
CQ & B & D --> SQ
SQ --> SR
SR --> RE
RE --> PR
%% Results Flow
PR --> L
PR --> SM
PR --> ND
%% Depth Decision and Recursion
L & ND --> DP{depth > 0?}
DP -->|Yes| SQ
%% Final Output
DP -->|No| MR[Markdown Report]
%% Styling
classDef input fill:#7bed9f,stroke:#2ed573,color:black
classDef process fill:#70a1ff,stroke:#1e90ff,color:black
classDef output fill:#ff4757,stroke:#ff6b81,color:black
classDef results fill:#a8e6cf,stroke:#3b7a57,color:black,width:150px,height:150px
class Q,B,D,FQ input
class SQ,SR,RE,PR process
class MR output
class L,SM,ND results
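In code, the recursion above looks roughly like this. It is a simplified sketch of the control flow with placeholder stubs, not the repository's actual implementation:
// Sketch of the research loop from the flowchart; all helpers are stubs.
type Learning = { text: string; reliability: number }; // reliability in [0, 1]
type ProcessOutput = { newLearnings: Learning[]; nextDirections: string[] };

async function generateSerpQueries(q: string, n: number, prior: Learning[]): Promise<string[]> {
  return [q]; // stub: the real step asks an LLM for up to n targeted queries
}
async function searchAndScrape(q: string): Promise<string[]> {
  return []; // stub: the real step calls the search/scraping backend
}
async function processResults(docs: string[]): Promise<ProcessOutput> {
  return { newLearnings: [], nextDirections: [] }; // stub: extraction + reliability scoring
}

async function deepResearch(
  query: string, breadth: number, depth: number, learnings: Learning[] = [],
): Promise<Learning[]> {
  for (const serpQuery of await generateSerpQueries(query, breadth, learnings)) {
    const docs = await searchAndScrape(serpQuery);
    const { newLearnings, nextDirections } = await processResults(docs);
    learnings.push(...newLearnings);
    if (depth > 0 && nextDirections.length > 0) {
      // Recurse: narrower breadth, one less level of depth, steered by new directions.
      await deepResearch(nextDirections.join("\n"), Math.ceil(breadth / 2), depth - 1, learnings);
    }
  }
  return learnings; // once depth is exhausted, learnings feed the markdown report
}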
Advanced Setup
Using Local Firecrawl (Free Option)
Instead of using the Firecrawl API, you can run a local instance. Use the official repo, or my fork, which uses SearXNG as the search backend so you don't need a search API key:
- Set up local Firecrawl:
git clone https://github.com/Ozamatash/localfirecrawl
cd localfirecrawl
# Follow setup in localfirecrawl README
- Update .env.local:
FIRECRAWL_BASE_URL="http://localhost:3002"
Optional: Observability
Add observability to track research flows, queries, and results using Langfuse:
# Add to .env.local
LANGFUSE_PUBLIC_KEY="your_langfuse_public_key"
LANGFUSE_SECRET_KEY="your_langfuse_secret_key"
The app works normally without observability if no Langfuse keys are provided.
License
MIT License