website-downloader

pskill9/website-downloader

website-downloader is hosted online, so all tools can be tested directly either in the Inspector tab or in the Online Client.

If you are the rightful owner of website-downloader and would like to certify it and/or have it hosted online, please leave a comment or send an email to henry@mcphub.com.

The Website Downloader MCP Server is designed to download entire websites using wget, preserving the website structure and converting links for local use.
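
As a rough illustration of what that amounts to (an editorial sketch, not the server's actual source; the flag set, output directory, and URL are assumptions), the wget invocation such a wrapper typically issues looks like this:

```python
import subprocess

# Sketch of the kind of wget call a website-downloader wrapper might issue.
# All flags below are standard wget options:
#   --mirror            recursive download with timestamping
#   --convert-links     rewrite links so the copy works offline
#   --page-requisites   also fetch the CSS, JS, and images each page needs
#   --adjust-extension  save files with matching extensions (e.g. .html)
#   --no-parent         never ascend above the starting URL
subprocess.run(
    [
        "wget",
        "--mirror",
        "--convert-links",
        "--page-requisites",
        "--adjust-extension",
        "--no-parent",
        "--directory-prefix", "site-copy",  # hypothetical output directory
        "https://www.example.com/",         # placeholder URL
    ],
    check=True,
)
```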

MCPHub score: 4.04

Has a README

GitHub repo has a README.md.

Has a License

GitHub repo doesn't have a valid license.

Server can be inspected

View server inspector

Server schema can be extracted

Can get at least one tool's info from the README or server.

Online hosted on MCPHub

Can be automatically deployed by MCPHub.

Has social accounts

Does not have any social accounts.

Claimed by the author or certified by MCPHub

If you are the author, claim authorship

AI Evaluation Report
Total Score: 6/10

The agent demonstrates a clear understanding of its limitations and provides alternative solutions when it cannot directly perform tasks. It effectively communicates the unavailability of the 'website-downloader' tool and offers practical guidance on how users can achieve their goals using other methods or tools. The agent's strengths include clarity, usefulness, and a willingness to assist further. However, it often lacks specific instructions or detailed guidance on alternative methods, which could enhance the user's experience. Additionally, the agent does not perform any direct operations, which is a significant limitation given the nature of the tasks. Overall, the agent provides valuable information but could improve by offering more detailed guidance and suggesting alternative tools more proactively.

  • Test case 1
    Score: 5/10
    Perform the operation of downloading the entire website 'https://www.example.com' using the 'website-downloader' tool, ensuring that all HTML, CSS, JavaScript, and image files are included, and provide a summary of the downloaded content.

    Currently, I don't have access to any tools that can perform the operation of downloading a website. If you have access to the 'website-downloader' tool, you can use it to download the entire website by specifying the URL and ensuring that all necessary file types (HTML, CSS, JavaScript, and images) are included in the download settings.

    If you need guidance on how to use such a tool, I can provide general instructions or help with any other questions you might have.
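
    As one concrete form such guidance could take, here is a minimal Python sketch (an editorial illustration, not tool output) of the "summary of the downloaded content" step, assuming the site has already been mirrored into a hypothetical site-copy directory:

    ```python
    from collections import Counter
    from pathlib import Path

    # Tally the mirrored files by extension under the hypothetical "site-copy" dir.
    root = Path("site-copy")
    counts = Counter(
        p.suffix.lower() or "<no extension>" for p in root.rglob("*") if p.is_file()
    )

    for ext, n in counts.most_common():
        print(f"{ext}: {n}")  # e.g. ".html: 42"
    ```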

  • Test case 2
    Score: 2/10
    Perform the operation of extracting the main text content from the following web pages using the 'website-downloader' tool, ensuring that only the primary article or body text is included, and provide the extracted content in plain text format: 1. https://www.bbc.com/news/world-us-canada-57980393 2. https://www.nytimes.com/2025/07/28/technology/tech-companies-earnings.html 3. https://www.cnn.com/2025/07/28/health/covid-19-vaccine-update/index.html Ensure that navigation menus, advertisements, and other non-essential elements are excluded from the extracted content.

    It seems that the 'website-downloader' tool is not currently available for use. However, I can guide you on how to manually extract the main text content from these web pages using a web browser or other tools. Here’s a general approach you can follow:

    Manual Extraction Steps:

    1. Open the Web Page:
      • Navigate to the URL in your web browser.
    2. Use Reader Mode:
      • Many modern browsers have a "Reader Mode" that simplifies the page to just the main content. Look for an icon in the address bar or use a shortcut (e.g., F9 in Firefox).
    3. Copy the Main Text:
      • Once in Reader Mode, you can easily select and copy the main text content.
    4. Use Browser Extensions:
      • Consider using browser extensions like "Mercury Reader" for Chrome or "Reader" for Firefox, which can help extract and display only the main article content.
    5. Online Tools:
      • Use online tools like "Outline" or "Just Read" to input the URL and get a simplified version of the page.

    Alternative Tools:

    If you have access to other web scraping tools or services, you can use them to automate this process. Libraries like BeautifulSoup or frameworks like Scrapy (both Python) can be configured to extract only the main content from web pages.
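
    For illustration, a minimal BeautifulSoup sketch along those lines (assuming the requests and beautifulsoup4 packages are installed; the tag list is a heuristic rather than a universal rule, and some news sites block plain HTTP clients):

    ```python
    import requests
    from bs4 import BeautifulSoup

    def extract_main_text(url: str) -> str:
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")

        # Drop elements that usually hold navigation, ads, or scripting.
        for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
            tag.decompose()

        # Prefer an <article> or <main> element when the page has one.
        body = soup.find("article") or soup.find("main") or soup.body
        return body.get_text(separator="\n", strip=True) if body else ""

    print(extract_main_text("https://www.example.com/"))  # placeholder URL
    ```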

    If you need further assistance or have access to specific tools, let me know, and I can provide more detailed guidance!

  • Test case 3
    Score: 4/10
    Perform the operation of converting the web page at 'https://www.wikipedia.org/' into a PDF file using the 'website-downloader' tool, ensuring that the PDF accurately reflects the content and layout of the original page, and provide the resulting PDF file.

    Currently, I don't have the capability to directly perform operations using the 'website-downloader' tool or any other tools, as there are no tools available in this environment. However, I can guide you on how to convert a webpage into a PDF using various methods:

    Method 1: Using a Web Browser

    1. Open the Webpage: Go to 'https://www.wikipedia.org/' in your web browser.
    2. Print to PDF:
      • Press Ctrl + P (Windows) or Cmd + P (Mac) to open the print dialog.
      • In the print dialog, select 'Save as PDF' or 'Microsoft Print to PDF' as the printer.
      • Click 'Save' and choose the location where you want to save the PDF file.

    Method 2: Using Online Tools

    1. Web to PDF Converter Websites:
      • Visit a website like webpagetopdf.com or pdfcrowd.com.
      • Enter the URL 'https://www.wikipedia.org/' in the provided field.
      • Click on the convert button to generate the PDF.
      • Download the PDF file once the conversion is complete.

    Method 3: Using Browser Extensions

    1. Install a PDF Converter Extension:
      • Search for a PDF converter extension in your browser's extension store (e.g., 'Save as PDF' for Chrome).
      • Install the extension and follow the instructions to convert the webpage to a PDF.
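
    As a scripted counterpart to these manual methods (an editorial sketch, not part of the original answer), a headless browser such as Playwright can render a page and save it as a PDF. This assumes the playwright package and its Chromium build are installed; page.pdf() only works in headless Chromium:

    ```python
    from playwright.sync_api import sync_playwright

    # Render the page in headless Chromium and save it as an A4 PDF.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://www.wikipedia.org/", wait_until="networkidle")
        page.pdf(path="wikipedia.pdf", format="A4", print_background=True)
        browser.close()
    ```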

    These methods will help you create a PDF that reflects the content and layout of the original webpage. If you need further assistance, feel free to ask!

  • Test case 4
    Score: 9/10
    Perform the operation of downloading the entire website 'https://www.wikipedia.org/' using the 'website-downloader' tool, ensuring that all HTML, CSS, JavaScript, and image files are included, and provide a summary of the downloaded content.

    I'm unable to perform the operation of downloading an entire website like Wikipedia due to its size, complexity, and legal restrictions. Wikipedia is a vast and dynamic site with constantly updating content, and downloading it in its entirety would not only be impractical but also against its terms of service.

    However, if you are interested in accessing Wikipedia content offline, you might consider using tools like Kiwix, which allows you to download Wikipedia for offline use. Kiwix provides a way to access Wikipedia's content without needing an internet connection, and it is designed to handle the large volume of data efficiently.

    If you have specific sections or articles from Wikipedia that you need, I can help guide you on how to access or save those individually. Let me know if you need assistance with anything else!

  • Test case 5
    Score: 7/10
    Perform the operation of extracting all hyperlinks from the web page at 'https://www.wikipedia.org/' using the 'website-downloader' tool, ensuring that only valid and unique URLs are included, and provide the extracted links in a plain text format.

    It seems that the 'website-downloader' tool is not currently available for use. However, I can guide you on how to extract hyperlinks from a webpage using a different approach or tool if you have access to one. Alternatively, if you have any other requests or need further assistance, feel free to let me know!
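
    For reference, a minimal Python sketch of such an alternative approach (assuming requests and beautifulsoup4 are installed; "valid" is read here as an absolute http or https URL, and uniqueness is handled with a set):

    ```python
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin, urlparse

    url = "https://www.wikipedia.org/"
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

    links = set()  # a set keeps the URLs unique
    for a in soup.find_all("a", href=True):
        absolute = urljoin(url, a["href"])  # resolve relative hrefs
        if urlparse(absolute).scheme in ("http", "https"):
            links.add(absolute)

    print("\n".join(sorted(links)))  # plain text, one URL per line
    ```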