
🔬 Research Powerpack MCP 🔬

Stop tab-hopping for research. Start getting god-tier context.

The ultimate research toolkit for your AI coding assistant. It searches the web, mines Reddit, scrapes any URL, and synthesizes everything into perfectly structured context your LLM actually understands.


research-powerpack-mcp is the research assistant your AI wishes it had. Stop asking your LLM to guess about things it doesn't know. This MCP server acts like a senior researcher, searching the web, mining Reddit discussions, scraping documentation, and synthesizing everything into perfectly structured context so your AI can actually give you answers worth a damn.

🔍

Batch Web Search
100 keywords in parallel

💬

Reddit Mining
Real opinions, not marketing

🌐

Universal Scraping
JS rendering + geo-targeting

🧠

Deep Research
AI synthesis with citations

How it slaps:

  • You: "What's the best database for my use case?"
  • AI + Powerpack: Searches Google, mines Reddit threads, scrapes docs, synthesizes findings.
  • You: Get an actually informed answer with real community opinions and citations.
  • Result: Ship better decisions. Skip the 47 browser tabs.

💥 Why This Slaps Other Methods

Manually researching is a vibe-killer. research-powerpack-mcp makes other methods look ancient.

| ❌ The Old Way (Pain) | ✅ The Powerpack Way (Glory) |
| --- | --- |
| 1. Open 15 browser tabs. | 1. Ask your AI to research it. |
| 2. Skim Stack Overflow answers from 2019. | 2. AI searches, scrapes, mines Reddit automatically. |
| 3. Search Reddit, get distracted by drama. | 3. Receive synthesized insights with sources. |
| 4. Copy-paste random snippets to your AI. | 4. Make an informed decision. |
| 5. Get a mediocre answer from confused context. | 5. Go grab a coffee. ☕ |

We're not just fetching random pages. We're building high-signal, low-noise context with CTR-weighted ranking, smart comment allocation, and intelligent token distribution that prevents massive responses from breaking your LLM's context window.
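The CTR-weighted ranking idea can be pictured with a short sketch. This is illustrative only, not the package's actual code: the `CTR_BY_RANK` weights and the `rankUrls` helper are invented for the example, which simply rewards URLs that show up near the top of several searches at once.

```typescript
// Approximate click-through-rate weights for result positions 1-5 (illustrative values).
const CTR_BY_RANK = [0.28, 0.15, 0.11, 0.08, 0.06];

// Score each URL by summing a position-based weight across every search it appears in,
// so a URL ranked highly in multiple searches beats one that appears only once.
function rankUrls(resultSets: string[][]): [string, number][] {
  const scores = new Map<string, number>();
  for (const results of resultSets) {
    results.forEach((url, i) => {
      const weight = CTR_BY_RANK[i] ?? 0.03; // flat long-tail weight past position 5
      scores.set(url, (scores.get(url) ?? 0) + weight);
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```

A URL that ranks #1 in two different searches (score 0.56 here) outranks any URL seen only once, which is the "high-consensus sources float to the top" behavior described above.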


🚀 Get Started in 60 Seconds

1. Install

npm install research-powerpack-mcp

2. Configure Your MCP Client

| Client | Config File | Docs |
| --- | --- | --- |
| 🖥️ Claude Desktop | claude_desktop_config.json | Setup |
| ⌨️ Claude Code | ~/.claude.json or CLI | Setup |
| 🎯 Cursor | .cursor/mcp.json | Setup |
| 🏄 Windsurf | MCP settings | Setup |

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "research-powerpack": {
      "command": "npx",
      "args": ["research-powerpack-mcp"],
      "env": {
        "SERPER_API_KEY": "your_key",
        "REDDIT_CLIENT_ID": "your_id",
        "REDDIT_CLIENT_SECRET": "your_secret",
        "SCRAPEDO_API_KEY": "your_key",
        "OPENROUTER_API_KEY": "your_key"
      }
    }
  }
}
Claude Code (CLI)

One command to rule them all:

claude mcp add research-powerpack \
  --scope user \
  --env SERPER_API_KEY=your_key \
  --env REDDIT_CLIENT_ID=your_id \
  --env REDDIT_CLIENT_SECRET=your_secret \
  --env OPENROUTER_API_KEY=your_key \
  --env OPENROUTER_BASE_URL=https://openrouter.ai/api/v1 \
  --env RESEARCH_MODEL=x-ai/grok-4.1-fast \
  -- npx research-powerpack-mcp

Or manually add to ~/.claude.json:

{
  "mcpServers": {
    "research-powerpack": {
      "command": "npx",
      "args": ["research-powerpack-mcp"],
      "env": {
        "SERPER_API_KEY": "your_key",
        "REDDIT_CLIENT_ID": "your_id",
        "REDDIT_CLIENT_SECRET": "your_secret",
        "OPENROUTER_API_KEY": "your_key",
        "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
        "RESEARCH_MODEL": "x-ai/grok-4.1-fast"
      }
    }
  }
}
Cursor/Windsurf

Add to .cursor/mcp.json or equivalent:

{
  "mcpServers": {
    "research-powerpack": {
      "command": "npx",
      "args": ["research-powerpack-mcp"],
      "env": {
        "SERPER_API_KEY": "your_key"
      }
    }
  }
}

✨ Zero Crash Promise: Missing API keys? No problem. The server always starts. Tools just return helpful setup instructions instead of exploding.
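The zero-crash behavior boils down to checking credentials at call time rather than at startup. The sketch below is a hypothetical illustration of that pattern (the `webSearchHandler` function and its message text are invented, and the environment is passed in explicitly for clarity):

```typescript
// A tool handler that degrades gracefully: with no API key it returns setup
// instructions as a normal result instead of throwing and crashing the server.
function webSearchHandler(
  keywords: string[],
  env: Record<string, string | undefined>
): { text: string } {
  if (!env.SERPER_API_KEY) {
    return {
      text:
        "SERPER_API_KEY is not configured. Get a free key at serper.dev " +
        "and add it to the env block of your MCP config.",
    };
  }
  // Real implementation would call the Serper API here.
  return { text: `Searching ${keywords.length} keywords...` };
}
```

Because every tool follows this shape, the server process itself never depends on any key being present.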


✨ Feature Breakdown: The Secret Sauce

| Feature | What It Does | Why You Care |
| --- | --- | --- |
| 🔍 Batch Search (100 keywords in parallel) | Search Google for up to 100 queries simultaneously | Cover every angle of a topic in one shot |
| 📊 CTR Ranking (smart URL scoring) | Identifies URLs that appear across multiple searches | Surfaces high-consensus, authoritative sources |
| 💬 Reddit Mining (real human opinions) | Google-powered Reddit search + native API fetching | Get actual user experiences, not marketing fluff |
| 🎯 Smart Allocation (token-aware budgets) | 1,000-comment budget distributed across posts | Deep dive on 2 posts or quick scan on 50 |
| 🌐 Universal Scraping (works on everything) | Auto-fallback: basic → JS render → geo-targeting | Handles SPAs, paywalls, and geo-restricted content |
| 🧠 Deep Research (AI-powered synthesis) | Batch research with web search and citations | Get comprehensive answers to complex questions |
| 🧩 Modular Design (use what you need) | Each tool works independently | Pay only for the APIs you actually use |

🎮 Tool Reference

| Tool | Purpose |
| --- | --- |
| 🔍 web_search | Batch Google search |
| 💬 search_reddit | Find Reddit discussions |
| 📖 get_reddit_post | Fetch posts + comments |
| 🌐 scrape_links | Extract any URL |
| 🧠 deep_research | AI synthesis |

web_search

Batch web search using Google via Serper API. Search up to 100 keywords in parallel.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| keywords | string[] | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |

Supports Google operators: site:, -exclusion, "exact phrase", filetype:

{
  "keywords": [
    "best IDE 2025",
    "VS Code alternatives",
    "Cursor vs Windsurf comparison"
  ]
}

search_reddit

Search Reddit via Google with automatic site:reddit.com filtering.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| queries | string[] | Yes | Search queries (max 10) |
| date_after | string | No | Filter results after date (YYYY-MM-DD) |

Search operators: intitle:keyword, "exact phrase", OR, -exclude

{
  "queries": [
    "best mechanical keyboard 2025",
    "intitle:keyboard recommendation"
  ],
  "date_after": "2024-01-01"
}

get_reddit_post

Fetch Reddit posts with smart comment allocation (a 1,000-comment budget distributed automatically).

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| urls | string[] | Yes | - | Reddit post URLs (2-50) |
| fetch_comments | boolean | No | true | Whether to fetch comments |
| max_comments | number | No | auto | Override comment allocation |

Smart Allocation:

  • 2 posts → ~500 comments/post (deep dive)
  • 10 posts → ~100 comments/post
  • 50 posts → ~20 comments/post (quick scan)
{
  "urls": [
    "https://reddit.com/r/programming/comments/abc123/post_title",
    "https://reddit.com/r/webdev/comments/def456/another_post"
  ]
}
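The smart allocation above is, at its core, an even split of a fixed budget. A minimal sketch of that idea (the `commentsPerPost` helper is invented for illustration; the real allocator may round or weight posts differently):

```typescript
// Split a fixed comment budget evenly across the requested posts:
// few posts -> deep dives, many posts -> quick scans.
function commentsPerPost(urlCount: number, budget = 1000): number {
  return Math.floor(budget / urlCount);
}
```

The same budget-splitting pattern applies to deep_research, which divides a 32,000-token budget across questions.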

scrape_links

Universal URL content extraction with automatic fallback modes.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| urls | string[] | Yes | - | URLs to scrape (3-50) |
| timeout | number | No | 30 | Timeout per URL (seconds) |
| use_llm | boolean | No | false | Enable AI extraction |
| what_to_extract | string | No | - | Extraction instructions for AI |

Automatic Fallback: Basic → JS rendering → JS + US geo-targeting

{
  "urls": ["https://example.com/article1", "https://example.com/article2"],
  "use_llm": true,
  "what_to_extract": "Extract the main arguments and key statistics"
}
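The escalating fallback is essentially a loop that tries the cheapest mode first and only escalates when a fetch fails or comes back nearly empty. A hypothetical sketch (the `scrapeWithFallback` helper, the mode names, and the 200-character "good enough" threshold are invented; real fetchers would be async HTTP calls):

```typescript
type Mode = "basic" | "js" | "js+geo";

// Try each scraping mode in order of cost; accept the first non-trivial result.
function scrapeWithFallback(
  url: string,
  fetchers: Record<Mode, (u: string) => string | null>
): { mode: Mode; body: string } | null {
  for (const mode of ["basic", "js", "js+geo"] as Mode[]) {
    const body = fetchers[mode](url);
    if (body && body.length > 200) return { mode, body }; // heuristic: tiny bodies mean a blocked/empty page
  }
  return null;
}
```

This is why credit usage varies per URL: a static page stops at the cheap basic mode, while an SPA escalates to JS rendering.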

deep_research

AI-powered batch research with web search and citations.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| questions | object[] | Yes | Research questions (2-10) |
| questions[].question | string | Yes | The research question |
| questions[].file_attachments | object[] | No | Files to include as context |

Token Allocation: 32,000 tokens distributed across questions:

  • 2 questions → 16,000 tokens/question (deep dive)
  • 10 questions → 3,200 tokens/question (rapid multi-topic)
{
  "questions": [
    { "question": "What are the current best practices for React Server Components in 2025?" },
    { "question": "Compare Bun vs Node.js for production workloads with benchmarks." }
  ]
}

⚙️ Environment Variables & Tool Availability

Research Powerpack uses a modular architecture. Tools are automatically enabled based on which API keys you provide:

| ENV Variable | Tools Enabled | Free Tier / Default |
| --- | --- | --- |
| SERPER_API_KEY | web_search, search_reddit | 2,500 queries/mo |
| REDDIT_CLIENT_ID + REDDIT_CLIENT_SECRET | get_reddit_post | Unlimited |
| SCRAPEDO_API_KEY | scrape_links | 1,000 credits/mo |
| OPENROUTER_API_KEY | deep_research + AI in scrape_links | Pay-as-you-go |
| RESEARCH_MODEL | Model for deep_research | Default: perplexity/sonar-deep-research |
| LLM_EXTRACTION_MODEL | Model for AI extraction in scrape_links | Default: openrouter/gpt-oss-120b:nitro |
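The key-to-tool mapping above can be expressed as a small lookup. This sketch is illustrative (the `enabledTools` helper is invented; the actual server registers MCP tools rather than returning names), but it mirrors the enablement rules:

```typescript
// Decide which tools to register based on which API keys are present.
function enabledTools(env: Record<string, string | undefined>): string[] {
  const tools: string[] = [];
  if (env.SERPER_API_KEY) tools.push("web_search", "search_reddit");
  if (env.REDDIT_CLIENT_ID && env.REDDIT_CLIENT_SECRET) tools.push("get_reddit_post");
  if (env.SCRAPEDO_API_KEY) tools.push("scrape_links");
  if (env.OPENROUTER_API_KEY) tools.push("deep_research");
  return tools;
}
```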

Configuration Examples

# Search-only mode (just web_search and search_reddit)
SERPER_API_KEY=xxx

# Reddit research mode (search + fetch posts)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx

# Full research mode (all 5 tools)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx
SCRAPEDO_API_KEY=xxx
OPENROUTER_API_KEY=xxx

🔑 API Key Setup Guides

🔍 Serper API (Google Search) — FREE: 2,500 queries/month

What you get

  • Fast Google search results via API
  • Enables web_search and search_reddit tools

Setup Steps

  1. Go to serper.dev
  2. Click "Get API Key" (top right)
  3. Sign up with email or Google
  4. Copy your API key from the dashboard
  5. Add to your config:
    SERPER_API_KEY=your_key_here
    

Pricing

  • Free: 2,500 queries/month
  • Paid: $50/month for 50,000 queries
🤖 Reddit OAuth — FREE: Unlimited access

What you get

  • Full Reddit API access
  • Fetch posts and comments with upvote sorting
  • Enables get_reddit_post tool

Setup Steps

  1. Go to reddit.com/prefs/apps
  2. Scroll down and click "create another app..."
  3. Fill in:
    • Name: research-powerpack (or any name)
    • App type: Select "script" (important!)
    • Redirect URI: http://localhost:8080
  4. Click "create app"
  5. Copy your credentials:
    • Client ID: The string under your app name
    • Client Secret: The "secret" field
  6. Add to your config:
    REDDIT_CLIENT_ID=your_client_id
    REDDIT_CLIENT_SECRET=your_client_secret
    
🌐 Scrape.do (Web Scraping) — FREE: 1,000 credits/month

What you get

  • JavaScript rendering support
  • Geo-targeting and CAPTCHA handling
  • Enables scrape_links tool

Setup Steps

  1. Go to scrape.do
  2. Click "Start Free"
  3. Sign up with email
  4. Copy your API key from the dashboard
  5. Add to your config:
    SCRAPEDO_API_KEY=your_key_here
    

Credit Usage

  • Basic scrape: 1 credit
  • JavaScript rendering: 5 credits
  • Geo-targeting: +25 credits
🧠 OpenRouter (AI Models) — Pay-as-you-go

What you get

  • Access to 100+ AI models via one API
  • Enables deep_research tool
  • Enables AI extraction in scrape_links

Setup Steps

  1. Go to openrouter.ai
  2. Sign up with Google/GitHub/email
  3. Go to openrouter.ai/keys
  4. Click "Create Key"
  5. Copy the key (starts with sk-or-...)
  6. Add to your config:
    OPENROUTER_API_KEY=sk-or-v1-xxxxx
    

Recommended Models for Deep Research

# Default (optimized for research)
RESEARCH_MODEL=perplexity/sonar-deep-research

# Fast and capable
RESEARCH_MODEL=x-ai/grok-4.1-fast

# High quality
RESEARCH_MODEL=anthropic/claude-3.5-sonnet

# Budget-friendly
RESEARCH_MODEL=openai/gpt-4o-mini

Recommended Models for AI Extraction (use_llm in scrape_links)

# Default (fast and cost-effective for extraction)
LLM_EXTRACTION_MODEL=openrouter/gpt-oss-120b:nitro

# High quality extraction
LLM_EXTRACTION_MODEL=anthropic/claude-3.5-sonnet

# Budget-friendly
LLM_EXTRACTION_MODEL=openai/gpt-4o-mini

Note: RESEARCH_MODEL and LLM_EXTRACTION_MODEL are independent. You can use a powerful model for deep research and a faster/cheaper model for content extraction, or vice versa.


🔥 Recommended Workflows

Research a Technology Decision

1. web_search → ["React vs Vue 2025", "Next.js vs Nuxt comparison"]
2. search_reddit → ["best frontend framework 2025", "Next.js production experience"]
3. get_reddit_post → [URLs from step 2]
4. scrape_links → [Documentation and blog URLs from step 1]
5. deep_research → [Synthesize findings into specific questions]

Competitive Analysis

1. web_search → ["competitor name review", "competitor vs alternatives"]
2. scrape_links → [Competitor websites, review sites]
3. search_reddit → ["competitor name experience", "switching from competitor"]
4. get_reddit_post → [URLs from step 3]

Debug an Obscure Error

1. web_search → ["exact error message", "error + framework name"]
2. search_reddit → ["error message", "framework + error type"]
3. get_reddit_post → [URLs with solutions]
4. scrape_links → [Stack Overflow answers, GitHub issues]

🔥 Enable Full Power Mode

For the best research experience, configure all four API keys:

SERPER_API_KEY=your_serper_key       # Free: 2,500 queries/month
REDDIT_CLIENT_ID=your_reddit_id       # Free: Unlimited
REDDIT_CLIENT_SECRET=your_reddit_secret
SCRAPEDO_API_KEY=your_scrapedo_key   # Free: 1,000 credits/month
OPENROUTER_API_KEY=your_openrouter_key # Pay-as-you-go

This unlocks:

  • 5 research tools working together
  • AI-powered content extraction in scrape_links
  • Deep research with web search and citations
  • Complete Reddit mining (search → fetch → analyze)

Total setup time: ~10 minutes. Total free tier value: ~$50/month equivalent.


🛠️ Development

# Clone
git clone https://github.com/yigitkonur/research-powerpack-mcp.git
cd research-powerpack-mcp

# Install
npm install

# Development
npm run dev

# Build
npm run build

# Type check
npm run typecheck

🔥 Common Issues & Quick Fixes

| Problem | Solution |
| --- | --- |
| Tool returns "API key not configured" | Add the required ENV variable to your MCP config. The error message tells you exactly which key is missing. |
| Reddit posts returning empty | Check your REDDIT_CLIENT_ID and REDDIT_CLIENT_SECRET. Make sure you created a "script" type app. |
| Scraping fails on JavaScript sites | Expected on the first attempt. The tool auto-retries with JS rendering. If it still fails, the site may be blocking scrapers. |
| Deep research taking too long | Use a faster model like x-ai/grok-4.1-fast instead of perplexity/sonar-deep-research. |
| Token limit errors | Reduce the number of URLs/questions per request. The tool distributes a fixed token budget. |

Built with 🔥 because manually researching for your AI is a soul-crushing waste of time.

MIT © Yiğit Konur