Deep Research MCP Server
Multi-agent research system with configurable depth for Claude Desktop
Deep Research MCP is an exportable, standalone MCP server that implements parallel multi-agent research with configurable depth modes. Inspired by the /deep-research slash command pattern, it addresses key industry gaps identified in 2024-2025 AI tool research.
Key Features
- Multi-Agent Parallel Research: 2-10 agents simultaneously (not sequential)
- Configurable Depth Modes: Quick (5min, ~$0.30) | Standard (15min, ~$1.00) | Deep (45min, ~$3.00)
- Provider Flexibility: Works with Anthropic Claude, OpenAI GPT, or Google Gemini
- Evidence-Based: Confidence scores + citations for all findings
- Industry Gap-Filling: Addresses context rot, verification overhead, ROI measurement
- Export-Ready: Standalone server, anyone can install with their API key
Quick Start (5 Minutes)
See QUICKSTART.md for detailed setup instructions.
```bash
# 1. Clone and install
git clone https://github.com/your-org/deep-research-mcp.git
cd deep-research-mcp
npm install

# 2. Configure API key
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY

# 3. Build
npm run build

# 4. Test
npm run dev

# 5. Add to Claude Desktop config
# See QUICKSTART.md for claude_desktop_config.json setup
```
Installation
Prerequisites
- Node.js >= 18.0.0
- At least one API key:
  - Anthropic Claude (recommended): `ANTHROPIC_API_KEY`
  - OpenAI GPT (alternative): `OPENAI_API_KEY`
  - Google Gemini (cost-effective): `GOOGLE_API_KEY`
Step-by-Step Installation
1. Clone Repository

   ```bash
   git clone https://github.com/your-org/deep-research-mcp.git
   cd deep-research-mcp
   ```

2. Install Dependencies

   ```bash
   npm install
   ```

3. Configure Environment

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and add your API key:

   ```bash
   # Minimum required: ONE of these
   ANTHROPIC_API_KEY=sk-ant-api03-...
   # OPENAI_API_KEY=sk-...
   # GOOGLE_API_KEY=...

   # Optional: Configure defaults
   DEFAULT_RESEARCH_MODE=standard
   MAX_AGENTS=10
   ```

4. Build

   ```bash
   npm run build
   ```

5. Verify Installation

   ```bash
   npm run dev
   ```

   Should output: `[DeepResearch] ✓ Server ready`
Claude Desktop Integration
Add to your claude_desktop_config.json:
Location:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Configuration:
```json
{
  "mcpServers": {
    "deep-research": {
      "command": "node",
      "args": ["/absolute/path/to/deep-research-mcp/dist/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-api03-..."
      }
    }
  }
}
```
Restart Claude Desktop and verify the server appears in the MCP tools list.
Usage
Basic Research
Use deep-research MCP:
```
start_research(
  query: "What are the best practices for React hooks in 2025?",
  mode: "standard"
)
```
Output (after 10-15 minutes):
- Research ID for tracking
- Estimated cost (~$1.00)
- 4 agent reports (Industry, Docs, Community, Tools)
- 40+ sources with citations
- Confidence-scored findings
Research Modes
| Mode | Agents | Time | Tokens | Cost | Use Case |
|---|---|---|---|---|---|
| quick | 2 | 3-5 min | 20-30K | $0.30-0.45 | Quick validation, sanity check |
| standard | 4 | 10-15 min | 60-80K | $0.90-1.20 | Normal research (matches /deep-research) |
| deep | 8 | 30-45 min | 150-250K | $2.25-3.75 | Architecture decisions, complex problems |
| custom | 1-10 | Variable | Variable | Variable | User-defined flags |
Advanced Usage
Custom Flags:
```
start_research(
  query: "Should we use microservices?",
  mode: "custom",
  flags: {
    agentCount: 6,
    ultrathink: true,
    minSources: 30,
    timeout: 30
  }
)
```
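The flags object is open-ended; as a rough TypeScript sketch, the fields used in the example above could be typed as follows (the exact shape in the server's source is an assumption):

```typescript
// Sketch of the custom-mode flags shown above; types are inferred from the
// example values, not taken from the server's source.
interface ResearchFlags {
  agentCount?: number;  // number of parallel agents (1-10)
  ultrathink?: boolean; // enable deeper per-agent reasoning
  minSources?: number;  // minimum sources for the session
  timeout?: number;     // soft time limit in minutes
}
```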
Check Status:
ping_research(researchId: "uuid-here")
Get Report:
get_report(researchId: "uuid-here", format: "markdown")
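A typical session is fire-and-forget: start the research, poll its status, then fetch the report once it completes. The TypeScript sketch below shows that loop; `callTool` here is a stand-in for whatever MCP client helper you use to invoke the server's tools, not an API provided by this project.

```typescript
// Illustrative workflow: start_research -> ping_research (poll) -> get_report.
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

async function runResearch(query: string): Promise<string> {
  const { researchId } = await callTool("start_research", { query, mode: "standard" });

  // Poll once a minute until the session leaves its active states.
  let status = "running";
  while (status === "running" || status === "consolidating") {
    await new Promise((resolve) => setTimeout(resolve, 60_000));
    ({ status } = await callTool("ping_research", { researchId }));
  }

  if (status === "failed") throw new Error(`Research ${researchId} failed`);
  return callTool("get_report", { researchId, format: "markdown" });
}
```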
MCP Tools Reference
1. start_research
Start multi-agent research session.
Inputs:
- `query` (required): Research question
- `mode` (optional): `quick` | `standard` | `deep` | `custom` (default: `standard`)
- `flags` (optional): Configuration overrides
Returns:
- `researchId`: UUID for tracking
- `estimatedTime`: Minutes
- `estimatedTokens`: Token budget
- `estimatedCost`: USD
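As a rough TypeScript sketch (field names from the list above; the exact types are an assumption):

```typescript
// Assumed shape of the start_research result, based on the fields listed above.
interface StartResearchResult {
  researchId: string;      // UUID used by ping_research and get_report
  estimatedTime: number;   // minutes
  estimatedTokens: number; // token budget for the session
  estimatedCost: number;   // USD
}
```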
2. ping_research
Check status of active research.
Inputs:
- `researchId` (required): UUID from `start_research`
Returns:
- `status`: `running` | `consolidating` | `completed` | `failed`
- `progress`: 0-100%
- `agentsCompleted`: Count
- `findingsCount`: Total findings
3. get_report
Get full research report.
Inputs:
- `researchId` (required): UUID
- `format` (optional): `markdown` | `json` (default: `markdown`)
Returns:
- Executive summary
- Recommendation
- Agent reports (4-10)
- Quality metrics
- Citations
- Implementation steps
4. get_suggestions
Get proactive research suggestions.
Inputs:
- `context` (optional): Additional context
- `limit` (optional): Max suggestions (default: 5)
Returns:
- List of suggested research topics
Architecture
Multi-Agent System
```
Orchestrator
├─ Agent 1: Industry Best Practices
│  └─ Web search, GitHub trending, Academic papers
├─ Agent 2: Official Documentation
│  └─ Claude Code docs, Framework docs, APIs
├─ Agent 3: Community Insights
│  └─ Reddit, Discord, HackerNews, Twitter
├─ Agent 4: Tools & Repositories
│  └─ GitHub repos (>500 stars), npm, PyPI
└─ Consolidation Engine
   └─ Cross-reference validation → Unified report
```
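Conceptually, the orchestrator fans the query out to every agent at once, each with its own fresh context, then fans their reports back in for consolidation. A simplified TypeScript sketch of that pattern (the names here are illustrative, not the server's actual internals):

```typescript
// Illustrative fan-out/fan-in: agents run in parallel with independent contexts,
// then a consolidation step merges their findings into one report.
type AgentReport = { role: string; findings: string[]; sources: string[] };

declare function consolidate(reports: AgentReport[]): Promise<string>;

async function orchestrate(
  query: string,
  agents: Array<(query: string) => Promise<AgentReport>>
): Promise<string> {
  // Fan out: every agent starts simultaneously; none shares context with another.
  const reports = await Promise.all(agents.map((run) => run(query)));

  // Fan in: cross-reference findings and produce the unified report.
  return consolidate(reports);
}
```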
Provider Abstraction
- Anthropic Claude: Best results, recommended
- OpenAI GPT: Alternative provider
- Google Gemini: Cost-effective option
- Fallback Chain: Automatic retry if provider fails
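The fallback chain amounts to trying providers in order of preference and returning the first successful completion; a minimal sketch, assuming a simple provider interface (the real retry logic may differ):

```typescript
// Minimal provider-fallback sketch: try each configured provider in order.
type Provider = { name: string; complete: (prompt: string) => Promise<string> };

async function completeWithFallback(providers: Provider[], prompt: string): Promise<string> {
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      console.warn(`[DeepResearch] ${provider.name} failed, trying next provider`, err);
    }
  }
  throw new Error("All configured providers failed");
}
```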
Key Design Principles
- Fresh Context Per Agent: Prevents context rot
- Evidence-Based Everything: Confidence scores + citations
- Token Budget Control: Hard limits on spend
- Parallel Execution: 4-10 agents simultaneously
- Configurable Depth: User decides time/cost tradeoff
Configuration
Environment Variables
See .env.example for all available options.
Essential:
```bash
ANTHROPIC_API_KEY=sk-ant-...
DEFAULT_RESEARCH_MODE=standard
MAX_AGENTS=10
```
Advanced:
```bash
RESEARCH_TIMEOUT=900     # seconds
SOURCE_FILTER=all        # all | trusted | recent
RECENCY_FILTER=6months   # 6months | 1year | 2years
ENABLE_ULTRATHINK=true
MIN_SOURCES_PER_AGENT=10
```
Budget Control:
```bash
MAX_TOKENS_PER_SESSION=100000
MAX_COST_PER_SESSION=5.00    # USD
```
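These limits act as hard stops on a session's spend. A rough sketch of how such a guard could be enforced from the variables above (the server's actual enforcement logic may differ):

```typescript
// Illustrative budget guard driven by the env vars above.
const MAX_TOKENS = Number(process.env.MAX_TOKENS_PER_SESSION ?? 100_000);
const MAX_COST = Number(process.env.MAX_COST_PER_SESSION ?? 5.0);

function assertWithinBudget(tokensUsed: number, costUsd: number): void {
  if (tokensUsed > MAX_TOKENS || costUsd > MAX_COST) {
    throw new Error(
      `Session budget exceeded: ${tokensUsed} tokens, $${costUsd.toFixed(2)}`
    );
  }
}
```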
Mode Presets
Presets are defined in `src/config/modes.ts`:
- QUICK_MODE: Fast validation
- STANDARD_MODE: Matches `/deep-research` behavior
- DEEP_MODE: Comprehensive analysis
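A preset simply bundles the per-mode defaults from the research-modes table above; roughly (the exact shape of `src/config/modes.ts` is an assumption, and the numbers echo the standard row of that table):

```typescript
// Assumed shape of a mode preset; values mirror the "standard" row of the
// research-modes table in the Usage section.
interface ModePreset {
  agentCount: number;         // parallel agents to spawn
  timeoutMinutes: number;     // soft time limit
  tokenBudget: number;        // upper end of the expected token range
  minSourcesPerAgent: number;
}

const STANDARD_MODE: ModePreset = {
  agentCount: 4,
  timeoutMinutes: 15,
  tokenBudget: 80_000,
  minSourcesPerAgent: 10,
};
```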
Examples
Example 1: Technology Selection
Query: "Should I use LangChain, AutoGen, or CrewAI?"
Mode: standard
Result: Comparison table + recommendation + tradeoffs
Cost: ~$1.00
Example 2: Security Audit
Query: "Is express-session safe for production auth?"
Mode: quick
Result: Risk level + CVEs + mitigations
Cost: ~$0.35
Example 3: Migration Planning
Query: "How to migrate from Webpack to Vite?"
Mode: deep
Result: Step-by-step guide + gotchas + code examples
Cost: ~$3.00
Industry Gaps Addressed
Based on 2024-2025 research, this tool fills these gaps:
| Gap | Our Solution |
|---|---|
| Context Rot | Fresh context per agent (parallel, not additive) |
| Verification Overhead | Confidence scores + citations built-in |
| Measurement Problems | Token usage + cost tracking |
| Security Issues | Source filtering + validation |
| Context Switching | Fire-and-forget background research |
Troubleshooting
"No LLM providers initialized"
Cause: No API key configured
Solution: Add at least one API key to .env:
ANTHROPIC_API_KEY=sk-ant-...
"Research session not found"
Cause: Invalid research ID or session expired
Solution: Use the `researchId` returned by `start_research`
"Provider failed"
Cause: API key invalid or network issue
Solution:
- Check API key format
- Verify network connectivity
- Check provider status page
High Costs
Cause: Using deep mode frequently
Solution:
- Use `quick` or `standard` mode for most tasks
- Set budget limits in `.env`:
  MAX_TOKENS_PER_SESSION=50000
  MAX_COST_PER_SESSION=1.00
Development
Build
npm run build
Watch Mode
npm run dev
Type Check
npm run type-check
Lint
npm run lint
Contributing
Contributions welcome! Please:
- Fork the repository
- Create feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open Pull Request
License
MIT License - see the license file for details.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: see QUICKSTART.md
Roadmap
- Core multi-agent system
- Anthropic/OpenAI/Gemini support
- Configurable modes (quick/standard/deep)
- Security validation tool
- Comparison tool
- Persistent research cache
- Team knowledge sharing
- Web UI dashboard
Acknowledgments
- Inspired by the `/deep-research` slash command pattern
- Built on Model Context Protocol (MCP)
- Powered by LangChain
Made with ❤️ by the AI Lecture MCP Team