
FetchV2 MCP Server


Model Context Protocol (MCP) server for web content fetching and extraction.

This MCP server provides tools to fetch webpages, extract clean content using Trafilatura, and discover links for batch processing.

Features

  • Fetch Webpages: Extract clean markdown content from any URL
  • Batch Fetching: Fetch up to 10 URLs in a single request
  • Link Discovery: Find and filter links on any webpage
  • llms.txt Support: Parse and fetch LLM-friendly documentation indexes
  • Smart Extraction: Trafilatura removes boilerplate (navbars, ads, footers); see the sketch after this list
  • Robots.txt Compliance: Respects robots.txt with graceful timeout handling
  • Pagination Support: Handle large pages with start_index parameter
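
The Smart Extraction feature is built on Trafilatura. As a rough illustration of what happens under the hood, here is a minimal standalone sketch using Trafilatura's public API directly; the server's actual options and post-processing may differ, and markdown output requires a recent Trafilatura version:

import trafilatura

# Download the page, then extract the main content while dropping
# boilerplate such as navigation bars, ads, and footers.
downloaded = trafilatura.fetch_url("https://example.com/article")
text = trafilatura.extract(
    downloaded,
    output_format="markdown",  # emit markdown rather than plain text
    include_tables=True,       # keep tables, matching the fetch tool default
    include_links=False,       # drop hyperlinks, matching the fetch tool default
)
print(text)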

Prerequisites

  1. Install uv from Astral
  2. Install Python 3.10 or newer using uv python install 3.10

Installation

Configure the server in your MCP client to run it with uvx:

{
  "mcpServers": {
    "fetchv2": {
      "command": "uvx",
      "args": ["fetchv2-mcp-server@latest"],
      "disabled": false,
      "autoApprove": []
    }
  }
}

Config file locations:

  • Claude Desktop (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
  • Claude Desktop (Windows): %APPDATA%\Claude\claude_desktop_config.json
  • Windsurf: ~/.codeium/windsurf/mcp_config.json
  • Kiro: .kiro/settings/mcp.json in your project

Install from PyPI

# Using uv
uv add fetchv2-mcp-server

# Using pip
pip install fetchv2-mcp-server
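
If you install from PyPI instead of using uvx, you can point your MCP client at the installed console script. This is a sketch assuming the package installs an executable named fetchv2-mcp-server, the same name the uvx configuration above invokes:

{
  "mcpServers": {
    "fetchv2": {
      "command": "fetchv2-mcp-server",
      "disabled": false,
      "autoApprove": []
    }
  }
}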

Basic Usage

Example prompts to try:

  • "Fetch the documentation from <URL>"
  • "Find all links on <docs URL> that contain 'tutorial'"
  • "Read these three pages and summarize the differences: [url1, url2, url3]"

Available Tools

fetch

Fetches a webpage and extracts its main content as clean markdown.

fetch(url: str, max_length: int = 5000, start_index: int = 0,
      get_raw_html: bool = False, include_metadata: bool = True,
      include_tables: bool = True, include_links: bool = False,
      bypass_robots_txt: bool = False) -> str

Parameter          Type  Default   Description
url                str   required  The webpage URL to fetch
max_length         int   5000      Maximum characters to return
start_index        int   0         Character offset for pagination
get_raw_html       bool  false     Skip extraction, return raw HTML
include_metadata   bool  true      Include title, author, date
include_tables     bool  true      Preserve tables in markdown
include_links      bool  false     Preserve hyperlinks
bypass_robots_txt  bool  false     Skip robots.txt check
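
Example calls (illustrative parameter combinations; URLs are placeholders):

# Default: clean markdown with metadata and tables
fetch(url="https://example.com/article")

# Keep hyperlinks in the markdown output
fetch(url="https://example.com/article", include_links=True)

# Raw HTML, skipping extraction entirely
fetch(url="https://example.com/article", get_raw_html=True)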

fetch_batch

Fetches multiple webpages in a single request.

fetch_batch(urls: list[str], max_length_per_url: int = 2000,
            get_raw_html: bool = False) -> str

Parameter           Type       Default   Description
urls                list[str]  required  List of URLs (max 10)
max_length_per_url  int        2000      Character limit per URL
get_raw_html        bool       false     Skip extraction for all URLs
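
Example (placeholder URLs):

# Fetch three pages at once, capped at 2000 characters each
fetch_batch(urls=[
    "https://docs.example.com/guide/intro",
    "https://docs.example.com/guide/setup",
    "https://docs.example.com/guide/deploy",
])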

discover_links

Discovers all links on a webpage with optional filtering.

discover_links(url: str, filter_pattern: str = "") -> str
Parameter       Type  Default   Description
url             str   required  The webpage URL to scan
filter_pattern  str   ""        Regex to filter links (e.g., /docs/)
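
Example (placeholder URL; the pattern is a regular expression):

# List every link under /docs/ on the page
discover_links(url="https://docs.example.com/", filter_pattern="/docs/")

# Only links whose URL mentions tutorials
discover_links(url="https://docs.example.com/", filter_pattern="tutorial")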

fetch_llms_txt

Fetch and parse an llms.txt file to discover LLM-friendly documentation.

fetch_llms_txt(url: str, include_content: bool = False,
               max_length_per_url: int = 2000) -> str

Parameter           Type  Default   Description
url                 str   required  URL to an llms.txt file
include_content     bool  false     Also fetch content of all linked pages
max_length_per_url  int   2000      When include_content=True, max chars per page

⚠️ Important: By default, only the llms.txt index is fetched — the linked markdown files are NOT downloaded to context. Set include_content=True to explicitly fetch all linked pages.

Example:

# DEFAULT: Only fetches the index (lightweight, ~1KB)
fetch_llms_txt(url="https://docs.example.com/llms.txt")
# Returns: title + list of links with descriptions

# EXPLICIT: Fetches index + all linked .md files (can be large)
fetch_llms_txt(url="https://docs.example.com/llms.txt", include_content=True)
# Returns: structure + content of all linked pages

Note: Relative URLs (e.g., /docs/guide.md) are automatically resolved to absolute URLs.
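
Resolution follows standard URL-joining rules. As a rough illustration using Python's urllib.parse.urljoin (the server's internals may differ):

from urllib.parse import urljoin

base = "https://docs.example.com/llms.txt"

# A root-relative link replaces the path of the base URL
urljoin(base, "/docs/guide.md")  # -> "https://docs.example.com/docs/guide.md"

# A plain relative link resolves against the base URL's directory
urljoin(base, "guide.md")        # -> "https://docs.example.com/guide.md"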

Workflow Example

Step 1: Discover relevant documentation pages

discover_links(url="https://docs.example.com/", filter_pattern="/guide/")

Step 2: Batch fetch the pages you need

fetch_batch(urls=["https://docs.example.com/guide/intro", "https://docs.example.com/guide/setup"])
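
Step 3: If a page is truncated at the per-URL limit, paginate it with fetch. This sketch uses the start_index parameter described above; the offsets are illustrative:

# First chunk (characters 0-4999)
fetch(url="https://docs.example.com/guide/intro", max_length=5000)

# Continue from where the first chunk ended
fetch(url="https://docs.example.com/guide/intro", max_length=5000, start_index=5000)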

Prompts

  • fetch_manual - User-initiated fetch that bypasses robots.txt
  • research_topic - Research a topic by fetching multiple relevant URLs

Development

# Clone and install
git clone https://github.com/praveenc/fetchv2-mcp-server.git
cd fetchv2-mcp-server
uv sync --dev
source .venv/bin/activate

# Run tests
uv run pytest

# Run with MCP Inspector
mcp dev src/fetchv2_mcp_server/server.py

# Linting and type checking
uv run ruff check .
uv run pyright

License

MIT - see the LICENSE file for details.

Contributing

Contributions welcome! Please see the contributing guidelines in the repository.

Support

For issues and questions, use the GitHub issue tracker.