amuyakkala/mcp-cro-analyzer
CRO MCP Server
Asynchronous conversion-rate-optimization (CRO) analyzer that fetches a web page, builds structural metrics with BeautifulSoup, and feeds them into an Ollama-hosted LLM to return three prioritized issues with recommendations and estimated impact. Designed to run as a simple CLI-driven MCP (Model Context Protocol) helper or to be embedded inside a larger automation flow.
Highlights
- Async HTTP fetching with `httpx` (redirect support and per-request timeouts)
- Deterministic DOM parsing via `BeautifulSoup` plus lightweight heuristics for CTAs, forms, and trust signals
- Structured metrics layer (`PageMetrics`) keeps parsing separate from reasoning logic
- Ollama client (`llama3` by default) coerced into strict JSON output for reproducible issue lists
- MCP tools for single or batch URL analysis
Project Layout
- `src/api.py` – Orchestrates fetch → parse → metrics → reasoning flow
- `src/analyzer.py` – HTML parsing utilities and `MetricsBuilder`
- `src/metrics.py` – Dataclasses (`PageMetrics`, `CROIssue`)
- `src/reasoning.py` – `CROReasoningEngine` that talks to Ollama and enforces JSON contract
- `src/utils.py` – Logging, URL validation, safe JSON helpers
- `examples/sample_output.json` – Example response payload
- `tests/test_analyzer.py` – Pytest coverage for parser + metrics builder
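The dataclasses in `src/metrics.py` might look roughly like this; the exact field names are assumptions for illustration, not the project's real definitions:

```python
from dataclasses import dataclass, field

@dataclass
class CROIssue:
    # One prioritized finding returned by the reasoning engine.
    title: str
    recommendation: str
    estimated_impact: str  # e.g. "high" / "medium" / "low"

@dataclass
class PageMetrics:
    # Structural facts extracted by the parser; no reasoning lives here.
    url: str
    cta_count: int = 0
    form_count: int = 0
    trust_signal_count: int = 0
    issues: list[CROIssue] = field(default_factory=list)
```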
Prerequisites
- Python 3.12+
- Ollama running locally or remotely with an accessible model (defaults to `llama3`)
Setup
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Create a .env file (or export the variables) to tweak runtime behavior:
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3
MCP_PORT=3000
DEBUG=false
Configuration is loaded in `src/config.py`, which also applies sensible timeout defaults.
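A hedged sketch of what the `src/config.py` loading might look like, reading the variables from the `.env` block above with defaults (the `FETCH_TIMEOUT` default value is an assumption):

```python
import os

# Hypothetical shape of the config module: environment variables win,
# otherwise fall back to the defaults shown in the .env example.
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3")
MCP_PORT = int(os.getenv("MCP_PORT", "3000"))
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
FETCH_TIMEOUT = float(os.getenv("FETCH_TIMEOUT", "10"))  # seconds per request
```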
Using as an MCP Server
Run the stdio-based MCP server:
python server.py
Connect from Cursor
- Open `Settings → Features → MCP`.
- Click `+ Add New MCP Server`.
- Choose `stdio` and set the command to `python /absolute/path/to/server.py`.
- Save, then invoke the `analyze_url` or `analyze_urls` tools from Composer/Chat.
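The same registration can usually be expressed declaratively; many MCP clients (including recent Cursor versions) read an `mcpServers` map from a JSON file such as `.cursor/mcp.json`. The server name `cro-analyzer` and the env entry below are illustrative:

```json
{
  "mcpServers": {
    "cro-analyzer": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": { "OLLAMA_HOST": "http://localhost:11434" }
    }
  }
}
```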
Connect from ChatGPT Desktop (or other MCP clients)
- Add a custom MCP source and point it at the same stdio command above.
- Ensure the environment has access to Ollama (`OLLAMA_HOST`) before launching the client.
- Once registered, call the exposed tools just like any other MCP integration; the server returns JSON with metrics plus three issues.
Testing
pytest
Current tests cover the DOM parser and metrics builder. Extend `tests/` with reasoning-engine mocks before changing prompt formats or dataclasses.
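A reasoning-engine mock for such tests could look like the following; the wrapper and the `generate_issues` method name are hypothetical stand-ins, not the project's real signatures:

```python
from unittest.mock import MagicMock

def analyze_with_engine(metrics: dict, engine) -> list[dict]:
    # Thin stand-in for the orchestration in src/api.py: hand the
    # structural metrics to the reasoning engine and return its issues.
    return engine.generate_issues(metrics)

def test_reasoning_engine_is_mocked():
    # Stub the engine so the test never talks to Ollama.
    engine = MagicMock()
    engine.generate_issues.return_value = [
        {"title": "Weak CTA", "recommendation": "Use action verbs", "impact": "high"}
    ]
    issues = analyze_with_engine({"cta_count": 0}, engine)
    assert issues[0]["impact"] == "high"
    engine.generate_issues.assert_called_once_with({"cta_count": 0})
```

Mocking at this boundary lets prompt-format changes be covered without a running model.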
Development Notes
- All logging goes through `src/utils.logger`; adjust the log level there if you need debug traces.
- `CROReasoningEngine.safe_json_parse` defensively trims malformed outputs, but keeping prompts compliant with the "JSON only" rule yields the best reliability.
- If you add new metrics, update `PageMetrics`, the builder, and the response serializer inside `src/api.py` together.
Troubleshooting
- Timeouts fetching pages: raise `FETCH_TIMEOUT` in `src/config.py` or verify the target URL is reachable.
- Ollama connection errors: ensure `ollama serve` is running and that `OLLAMA_HOST` is reachable from this machine.
- Invalid JSON from the model: the engine falls back to a single "Unable to analyze page" issue; inspect the logs and iterate on the prompt.
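The JSON fallback described in the last bullet could be sketched like this; it is a guess at the shape of `safe_json_parse`, not the actual implementation:

```python
import json

def safe_json_parse(raw: str) -> dict:
    # Trim any model chatter around the first {...} block before parsing;
    # if nothing parses, fall back to the single placeholder issue.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    return {"issues": [{"title": "Unable to analyze page",
                        "recommendation": "Inspect logs and retry",
                        "estimated_impact": "unknown"}]}
```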