HTTP MCP Server
A Model Context Protocol (MCP) server that provides HTTP debugging and testing capabilities for AI-driven web scraping workflows.
Features
- HTTP Requests: Make GET, POST, PUT, DELETE requests with full control over headers, data, and timeouts
- Response Analysis: Deep analysis of response headers, status codes, content types, and performance metrics
- Performance Testing: Profile request performance with multiple iterations and statistical analysis
- Debug Workflows: Compare responses, validate endpoints, and debug HTTP interactions
- Integration Ready: Designed to work seamlessly with Debug MCP and Playwright MCP servers
Installation
From GitHub (Recommended)

```shell
pip install git+https://github.com/alexwilliamson/http-mcp-server.git
```

For Development

```shell
git clone https://github.com/alexwilliamson/http-mcp-server.git
cd http-mcp-server
pip install -e .
```
Quick Start
1. Start the Server

```shell
# HTTP transport (recommended for AI agents)
http-mcp http --port 8933

# Or stdio transport (for direct MCP clients)
http-mcp stdio
```
2. Connect from AI Agent

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client
from langchain_mcp_adapters.tools import load_mcp_tools

async def main():
    async with sse_client("http://localhost:8933/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            http_tools = await load_mcp_tools(session)

            # Now you have 8 powerful HTTP debugging tools available!
            for tool in http_tools:
                print(f"Available: {tool.name}")

asyncio.run(main())
```
3. Use in Scraper Testing
This server enables AI agents to test and debug HTTP aspects of web scraping:
- Test API endpoints directly
- Compare browser vs direct HTTP responses
- Analyze headers and response structure
- Profile performance and identify bottlenecks
- Debug authentication and cookies
Available Tools
Core HTTP Operations

| Tool | Description | Example Use |
|---|---|---|
| `make_request` | Make HTTP requests with full control | Test API endpoints, download pages |
| `analyze_response` | Deep response analysis | Understand content type, encoding, structure |
| `extract_headers` | Categorize and analyze headers | Check security headers, caching rules |
| `validate_status` | Validate response status codes | Ensure requests succeed as expected |
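As a rough illustration of the kind of check `validate_status` performs, the logic can be sketched as follows (a conceptual sketch only; the tool's actual parameters and return shape are not documented in this README):

```python
def check_status(status_code: int, expected: tuple[int, ...] = (200,)) -> dict:
    """Sketch of a status-code check: was the response code among those expected?"""
    return {
        "status_code": status_code,
        "expected": list(expected),
        "valid": status_code in expected,
        "category": {1: "informational", 2: "success", 3: "redirect",
                     4: "client_error", 5: "server_error"}[status_code // 100],
    }

print(check_status(404))              # a 404 against the default expectation of 200
print(check_status(301, (200, 301)))  # a redirect that was explicitly allowed
```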
Advanced Debug Tools

| Tool | Description | Example Use |
|---|---|---|
| `debug_request` | Request with session logging | Debug failing requests with artifacts |
| `compare_responses` | Compare two HTTP responses | Browser vs API response differences |
| `profile_performance` | Multi-iteration performance testing | Find fastest endpoints, identify slowdowns |
Utility Tools

| Tool | Description | Example Use |
|---|---|---|
| `close_http_client` | Clean shutdown of HTTP client | Proper cleanup in workflows |
Architecture Integration
This server is part of a complete AI scraper debugging stack:

```
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│ Playwright MCP  │   │    Debug MCP    │   │    HTTP MCP     │
│  Browser Auto   │   │  File/Terminal  │   │   This Server   │
└─────────────────┘   └─────────────────┘   └─────────────────┘
         │                     │                     │
         └─────────────────────┼─────────────────────┘
                               │
                   ┌──────────────────────┐
                   │       AI Agent       │
                   │                      │
                   │ 1. Plan Scraping     │
                   │ 2. Test HTTP First   │
                   │ 3. Generate Code     │
                   │ 4. Debug & Fix       │
                   └──────────────────────┘
```
Usage Examples
Basic HTTP Request

```python
# Make a simple GET request
response = await make_request("GET", "https://api.example.com/data")

# Analyze the response
analysis = await analyze_response(response)
print(f"Content type: {analysis.content_type}")
print(f"Response time: {analysis.performance_metrics['response_time_ms']}ms")
```
Debug API Endpoint

```python
# Debug request with session logging
debug_result = await debug_request({
    "method": "POST",
    "url": "https://api.example.com/search",
    "headers": {"Authorization": "Bearer token"},
    "data": {"query": "test"}
}, session_id="debug-123")

# Check if request succeeded
if debug_result["success"]:
    response = debug_result["response"]
    analysis = debug_result["analysis"]
    print(f"API returned {len(response['content'])} bytes")
else:
    print(f"Request failed: {debug_result['error']}")
```
Compare Browser vs API

```python
# Get response from browser (via Playwright MCP)
browser_response = await playwright_get_page_content(url)

# Get same content via direct HTTP
api_response = await make_request("GET", url)

# Compare responses
comparison = await compare_responses(browser_response, api_response)
print(f"Content identical: {comparison['content_similarity']['identical']}")
print(f"Header differences: {len(comparison['header_differences']['value_differences'])}")
```
Performance Profiling

```python
# Profile endpoint performance
profile = await profile_performance("https://api.example.com/data", iterations=5)
stats = profile["statistics"]
print(f"Average response time: {stats['avg_response_time_ms']:.2f}ms")
print(f"Success rate: {stats['success_rate']:.1f}%")
```
Response Analysis Features
Content Analysis
- Content type detection (JSON, HTML, XML, etc.)
- Encoding detection and validation
- Size metrics and compression analysis
Header Analysis
- Security headers audit (CSP, HSTS, X-Frame-Options)
- Caching headers analysis (Cache-Control, ETag)
- Server information extraction
- Custom header categorization
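To illustrate what header categorization might look like, here is a stdlib sketch that buckets headers into the groups listed above (the header sets and bucket names are illustrative assumptions, not the server's actual categories):

```python
# Illustrative header groups; the server's real categories may differ
SECURITY = {"content-security-policy", "strict-transport-security", "x-frame-options"}
CACHING = {"cache-control", "etag", "expires", "last-modified"}
SERVER = {"server", "x-powered-by"}

def categorize_headers(headers: dict[str, str]) -> dict[str, dict[str, str]]:
    """Bucket response headers into security / caching / server / other groups."""
    buckets = {"security": {}, "caching": {}, "server": {}, "other": {}}
    for name, value in headers.items():
        key = name.lower()
        if key in SECURITY:
            buckets["security"][name] = value
        elif key in CACHING:
            buckets["caching"][name] = value
        elif key in SERVER:
            buckets["server"][name] = value
        else:
            buckets["other"][name] = value
    return buckets

result = categorize_headers({
    "Cache-Control": "max-age=3600",
    "Server": "nginx",
    "X-Frame-Options": "DENY",
    "Content-Type": "text/html",
})
print(result["security"])  # {'X-Frame-Options': 'DENY'}
```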
Performance Metrics
- Response time measurement
- Content size analysis
- Transfer speed calculation
- Statistical analysis across multiple requests
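The statistics reported by `profile_performance` presumably resemble an aggregation like the following (a stdlib sketch over hypothetical timing samples; field names mirror the profiling example above but are otherwise assumptions):

```python
import statistics

def summarize(timings_ms: list[float], failures: int = 0) -> dict:
    """Aggregate per-iteration response times into summary statistics."""
    total = len(timings_ms) + failures
    return {
        "iterations": total,
        "avg_response_time_ms": statistics.mean(timings_ms),
        "min_response_time_ms": min(timings_ms),
        "max_response_time_ms": max(timings_ms),
        "stdev_ms": statistics.stdev(timings_ms) if len(timings_ms) > 1 else 0.0,
        "success_rate": 100.0 * len(timings_ms) / total,
    }

# Four successful iterations and one failure
stats = summarize([120.0, 95.0, 110.0, 130.0], failures=1)
print(f"Average: {stats['avg_response_time_ms']:.2f}ms")  # Average: 113.75ms
print(f"Success rate: {stats['success_rate']:.1f}%")      # Success rate: 80.0%
```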
Testing

```shell
# Test the server directly
python -m http_mcp.server http --port 8933

# In another terminal, test basic functionality:
curl -X POST http://localhost:8933/mcp \
  -H "Content-Type: application/json" \
  -d '{"method": "tools/list"}'
```
Configuration
Request Defaults
The server uses sensible defaults:
- Timeout: 30 seconds
- Max Redirects: 10
- User Agent: Standard HTTP client
- SSL Verification: Enabled
Debug Artifacts
When using `debug_request` with session management, artifacts are written to:

```
debug_artifacts/sessions/{session_id}/responses/
├── http_response_1234567890.json   # Full response data
├── http_content_1234567890.html    # Response content (if HTML)
└── http_analysis_1234567890.json   # Analysis results
```
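Because artifacts are plain files, a downstream workflow can pick up the latest analysis with the standard library. A sketch, assuming only the directory layout shown above (the `latest_analysis` helper is hypothetical, not part of the server):

```python
import json
import tempfile
from pathlib import Path

def latest_analysis(session_dir: Path) -> dict:
    """Load the most recent http_analysis_*.json artifact for a session.

    Epoch-timestamped names of equal length sort correctly as strings.
    """
    candidates = sorted((session_dir / "responses").glob("http_analysis_*.json"))
    return json.loads(candidates[-1].read_text())

# Demo against a throwaway session directory
with tempfile.TemporaryDirectory() as tmp:
    responses = Path(tmp) / "responses"
    responses.mkdir()
    (responses / "http_analysis_1234567890.json").write_text('{"content_type": "text/html"}')
    print(latest_analysis(Path(tmp)))  # {'content_type': 'text/html'}
```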
Troubleshooting
Common Issues
- SSL Errors: When developing against self-signed certificates, SSL verification may need to be relaxed; keep it enabled for production targets
- Timeout Issues: Adjust timeout for slow endpoints
- Memory Usage: Large responses are handled efficiently with streaming
- Rate Limiting: Built-in delays between performance test iterations
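For timeout-prone endpoints, a caller-side retry wrapper is one option. A generic asyncio sketch (not a feature of this server; the `with_retries` helper and its defaults are illustrative):

```python
import asyncio

async def with_retries(coro_factory, attempts: int = 3,
                       timeout_s: float = 30.0, backoff_s: float = 1.0):
    """Retry an awaitable factory with a per-attempt timeout and linear backoff."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return await asyncio.wait_for(coro_factory(), timeout_s)
        except (asyncio.TimeoutError, OSError) as exc:
            last_error = exc
            await asyncio.sleep(backoff_s * attempt)
    raise last_error

# Demo: a flaky coroutine that fails twice, then succeeds
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return "ok"

print(asyncio.run(with_retries(flaky, backoff_s=0.01)))  # ok
```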
Debug Mode
Enable detailed logging:

```shell
http-mcp http --port 8933 --log-level DEBUG
```
Integration Examples
With Debug MCP

```python
# Combined HTTP testing and file operations
http_response = await make_request("GET", target_url)
await write_file("debug_response.html", http_response["content"])
await search_file("debug_response.html", "error")
```
With LangGraph Workflows

```python
# HTTP testing in scraper debug workflow
if strategy == "api_direct":
    # Test API endpoint first
    api_response = await make_request("GET", api_url)
    api_analysis = await analyze_response(api_response)

    if api_analysis.is_json:
        # Generate API scraper
        scraper_code = generate_api_scraper(api_response)
    else:
        # Fall back to HTML scraping
        scraper_code = generate_html_scraper(api_response)
```
Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
License
This project is licensed under the MIT License; see the LICENSE file for details.
Acknowledgments
- Built for AI-driven development workflows
- Integrates with Model Context Protocol (MCP)
- Designed for LangGraph agent workflows
- HTTP client powered by httpx
- Part of automated scraper development pipeline