mcp-cro-analyzer

amuyakkala/mcp-cro-analyzer


CRO MCP Server

Asynchronous conversion-rate-optimization (CRO) analyzer that fetches a web page, builds structural metrics with BeautifulSoup, and feeds them into an Ollama-hosted LLM to return three prioritized issues with recommendations and estimated impact. Designed to run as a simple CLI-driven MCP (Model Context Protocol) helper or to be embedded inside a larger automation flow.

Highlights

  • Async HTTP fetching with httpx (redirect support and per-request timeouts)
  • Deterministic DOM parsing via BeautifulSoup plus lightweight heuristics for CTAs, forms, and trust signals
  • Structured metrics layer (PageMetrics) keeps parsing separate from reasoning logic
  • Ollama client (llama3 by default) coerced into strict JSON output for reproducible issue lists
  • MCP tools for single or batch URL analysis
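The CTA, form, and trust-signal heuristics can be sketched roughly as below; the function name and keyword lists are illustrative, not the project's actual rules (those live in src/analyzer.py):

```python
from bs4 import BeautifulSoup

# Illustrative keyword lists -- the real heuristics in src/analyzer.py may differ.
CTA_WORDS = ("buy", "sign up", "subscribe", "get started", "try")
TRUST_WORDS = ("guarantee", "secure", "testimonial", "refund")

def count_page_signals(html: str) -> dict:
    """Count CTA-like elements, forms, and trust keywords in raw HTML."""
    soup = BeautifulSoup(html, "html.parser")
    texts = [el.get_text(strip=True).lower() for el in soup.find_all(["a", "button"])]
    ctas = sum(any(w in t for w in CTA_WORDS) for t in texts)
    forms = len(soup.find_all("form"))
    body_text = soup.get_text(" ", strip=True).lower()
    trust = sum(w in body_text for w in TRUST_WORDS)
    return {"ctas": ctas, "forms": forms, "trust_signals": trust}
```

Keeping the heuristics keyword-based and deterministic means the same page always yields the same metrics, so only the LLM step introduces variability.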

Project Layout

  • src/api.py – Orchestrates fetch → parse → metrics → reasoning flow
  • src/analyzer.py – HTML parsing utilities and MetricsBuilder
  • src/metrics.py – Dataclasses (PageMetrics, CROIssue)
  • src/reasoning.py – CROReasoningEngine that talks to Ollama and enforces the JSON contract
  • src/utils.py – Logging, URL validation, safe JSON helpers
  • examples/sample_output.json – Example response payload
  • tests/test_analyzer.py – Pytest coverage for parser + metrics builder
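The dataclasses in src/metrics.py might look roughly like the following sketch; the exact field names are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class CROIssue:
    """One prioritized finding returned by the reasoning engine."""
    title: str
    recommendation: str
    estimated_impact: str  # e.g. "high" / "medium" / "low"

@dataclass
class PageMetrics:
    """Structural metrics extracted from a page, kept separate from reasoning."""
    url: str
    cta_count: int = 0
    form_count: int = 0
    trust_signal_count: int = 0
    issues: list[CROIssue] = field(default_factory=list)

    def to_dict(self) -> dict:
        # asdict recurses into nested dataclasses, so issues serialize too.
        return asdict(self)
```

Keeping metrics in plain dataclasses makes the parse layer testable without any LLM in the loop.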

Prerequisites

  • Python 3.12+
  • Ollama running locally or remotely with an accessible model (defaults to llama3)

Setup

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Create a .env file (or export the variables) to tweak runtime behavior:

OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3
MCP_PORT=3000
DEBUG=false

Configuration is loaded in src/config.py, which also applies sensible timeout defaults.
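A minimal version of that pattern looks like this (variable names match the .env example above; the FETCH_TIMEOUT default is an assumption):

```python
import os

# Defaults mirror the .env example; FETCH_TIMEOUT is the knob mentioned
# under Troubleshooting, with an assumed default of 15 seconds.
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3")
MCP_PORT = int(os.getenv("MCP_PORT", "3000"))
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
FETCH_TIMEOUT = float(os.getenv("FETCH_TIMEOUT", "15"))
```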

Using as an MCP Server

Run the stdio-based MCP server:

python server.py
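Under the hood, a stdio MCP server reads JSON-RPC messages on stdin and writes responses to stdout. The heavily simplified dispatcher below illustrates the shape of a tools/call round trip; real servers should use the official MCP SDK, and everything here except the tool name is illustrative:

```python
import json
import sys

def handle_message(msg: dict) -> dict:
    """Route a single JSON-RPC request to a tool (simplified sketch)."""
    if msg.get("method") == "tools/call":
        name = msg["params"]["name"]
        args = msg["params"].get("arguments", {})
        if name == "analyze_url":
            # Real code would run fetch -> parse -> metrics -> reasoning here.
            result = {"url": args.get("url"), "issues": []}
        else:
            result = {"error": f"unknown tool: {name}"}
        return {"jsonrpc": "2.0", "id": msg.get("id"), "result": result}
    return {"jsonrpc": "2.0", "id": msg.get("id"), "error": "unsupported method"}

def main() -> None:
    # Simplified framing: one JSON-RPC message per line.
    for line in sys.stdin:
        print(json.dumps(handle_message(json.loads(line))), flush=True)
```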

Connect from Cursor

  1. Open Settings → Features → MCP.
  2. Click + Add New MCP Server.
  3. Choose stdio and set the command to python /absolute/path/to/server.py.
  4. Save, then invoke the analyze_url or analyze_urls tools from Composer/Chat.
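Equivalently, you can register the server in Cursor's mcp.json. The entry below is a sketch: adjust the path, and add any environment variables (such as OLLAMA_HOST) your shell would otherwise provide:

```json
{
  "mcpServers": {
    "cro-analyzer": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": { "OLLAMA_HOST": "http://localhost:11434" }
    }
  }
}
```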

Connect from ChatGPT Desktop (or other MCP clients)

  • Add a custom MCP source and point it at the same stdio command above.
  • Ensure the environment has access to Ollama (OLLAMA_HOST) before launching the client.
  • Once registered, call the exposed tools just like any other MCP integration—the server returns JSON with metrics plus three issues.

Testing

pytest

Current tests cover the DOM parser and metrics builder. Extend tests/ with reasoning-engine mocks before changing prompt formats or dataclasses.
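A reasoning-engine mock could follow this shape; the engine interface shown here (an analyze(metrics) method returning issue dicts) is an assumption about the real class, and analyze_page is a tiny stand-in for the orchestration in src/api.py:

```python
from unittest import mock

def analyze_page(metrics: dict, engine) -> dict:
    """Illustrative stand-in for the fetch -> metrics -> reasoning flow."""
    return {"metrics": metrics, "issues": engine.analyze(metrics)}

def test_analyze_page_with_mocked_engine():
    # Patch the engine so the test never talks to Ollama.
    engine = mock.Mock()
    engine.analyze.return_value = [{"title": "Weak CTA", "estimated_impact": "high"}]
    out = analyze_page({"cta_count": 0}, engine)
    assert out["issues"][0]["title"] == "Weak CTA"
    engine.analyze.assert_called_once_with({"cta_count": 0})
```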

Development Notes

  • All logging goes through src/utils.logger; adjust the log level there if you need debug traces.
  • CROReasoningEngine.safe_json_parse defensively trims malformed outputs, but keeping prompts compliant with the “JSON only” rule yields the best reliability.
  • If you add new metrics, update PageMetrics, the builder, and the response serializer inside src/api.py together.
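The defensive parse might work along these lines, stripping Markdown fences and trimming to the outermost braces; the actual implementation in src/reasoning.py may differ:

```python
import json

def safe_json_parse(raw: str):
    """Best-effort JSON extraction from an LLM reply (illustrative sketch)."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop a code fence the model may have wrapped around the JSON.
        text = text.strip("`")
        if text.lower().startswith("json"):
            text = text[4:]
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        return None  # caller falls back to a single "Unable to analyze page" issue
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None
```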

Troubleshooting

  • Timeouts fetching pages: raise FETCH_TIMEOUT in src/config.py or verify the target URL is reachable.
  • Ollama connection errors: ensure ollama serve is running and that OLLAMA_HOST is reachable from this machine.
  • Invalid JSON from the model: the engine falls back to a single “Unable to analyze page” issue; inspect the logs and iterate on the prompt.