CS Crawler MCP

A Model Context Protocol (MCP) server that provides web crawling functionality via the crawl4ai library.
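
The server builds on the crawl4ai library. As a rough illustration of what that library does, here is a minimal sketch of crawl4ai's basic usage (independent of this server; the URL is only an example):

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # AsyncWebCrawler drives the headless Chromium installed via Playwright.
    async with AsyncWebCrawler() as crawler:
        # arun() fetches and renders a single page; the result object exposes
        # several output formats (markdown, cleaned HTML, links, metadata).
        result = await crawler.arun(url="https://example.com")
        print(result.markdown)

asyncio.run(main())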

Features

  • Web crawling with multiple output formats
  • Metadata extraction
  • MCP integration
  • Easy deployment

Quick Start

  1. Clone the repository
  2. Install dependencies: pip install -r requirements.txt
  3. Install Playwright: python -m playwright install chromium
  4. Configure your MCP client

Installation

git clone https://github.com/CoachSteff/cs-crawler-mcp.git
cd cs-crawler-mcp
pip install -r requirements.txt
python -m playwright install chromium

Usage

Add the server to your Claude Desktop configuration:

{
  "mcpServers": {
    "cs-crawler": {
      "command": "/path/to/cs-crawler-mcp/cs-crawler-mcp",
      "args": [],
      "env": {}
    }
  }
}
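
Adjust /path/to/cs-crawler-mcp to wherever you cloned the repository. On macOS the Claude Desktop configuration file is typically ~/Library/Application Support/Claude/claude_desktop_config.json (on Windows, %APPDATA%\Claude\claude_desktop_config.json); restart Claude Desktop after saving so it picks up the new server.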

Available Tools

  • crawl_url - Crawl a single URL
  • get_page_metadata - Extract page metadata
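
For quick testing outside Claude Desktop, the server can also be exercised with the official MCP Python SDK over stdio. The sketch below is illustrative only: it reuses the command path from the configuration above, and the "url" argument name for crawl_url is an assumption; check the actual input schema returned by list_tools.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and talk to it over stdio,
    # mirroring the Claude Desktop configuration above.
    params = StdioServerParameters(
        command="/path/to/cs-crawler-mcp/cs-crawler-mcp", args=[]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the exposed tools and their input schemas.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Call crawl_url; confirm the argument name against the schema above.
            result = await session.call_tool(
                "crawl_url", arguments={"url": "https://example.com"}
            )
            print(result.content)

asyncio.run(main())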

License

MIT License - see LICENSE file for details.