# HTTP-API-To-MCP-Server

Transform any HTTP/REST API into a Model Context Protocol (MCP) server, enabling AI assistants like Cursor, VS Code Copilot, Windsurf, and Claude Desktop to interact with your existing APIs without any code modifications.
## What is This?

HTTP-API-To-MCP-Server is a lightweight protocol gateway that bridges the gap between traditional HTTP/REST APIs and the emerging Model Context Protocol (MCP) ecosystem. It acts as a transparent proxy that:
- Converts HTTP requests/responses to MCP protocol format
- Enables AI assistants to discover and call your APIs automatically
- Requires zero modifications to your existing backend services
- Supports both Server-Sent Events (SSE) and Streamable HTTP transports
- Scales to handle production workloads with Cloudflare's Pingora engine
Built on Pingora, the same ultra-high-performance proxy library that powers Cloudflare's infrastructure (40M+ requests/second).
## Quick Start

### Option 1: Deploy to Railway (Recommended)

The fastest way to get started: one-click deployment to Railway with automatic HTTPS and persistent storage.
1. Click the Deploy on Railway button above
2. Configure your environment variables (see Railway Deployment)
3. Add your OpenAPI specification
4. Connect your MCP client

Your MCP server will be live at `https://your-app.railway.app`.
### Option 2: Run with Docker

```bash
# Pull the pre-built image
docker pull ghcr.io/arj999/http-api-to-mcp-server:latest

# Run with your configuration
docker run -d \
  --name mcp-server \
  -p 8080:8080 \
  -v $(pwd)/config.yaml:/app/config/config.yaml \
  ghcr.io/arj999/http-api-to-mcp-server:latest
```
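If you prefer Compose, the same container can be described declaratively. This is a sketch equivalent to the `docker run` command above; the service name `mcp-server` is just an illustrative choice.

```yaml
# docker-compose.yml — equivalent of the `docker run` command above
services:
  mcp-server:
    image: ghcr.io/arj999/http-api-to-mcp-server:latest
    ports:
      - "8080:8080"
    volumes:
      - ./config.yaml:/app/config/config.yaml
```

Start it with `docker compose up -d`.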
### Option 3: Run Locally

```bash
# Clone the repository
git clone https://github.com/ARJ999/HTTP-API-To-MCP-Server.git
cd HTTP-API-To-MCP-Server

# Edit the configuration
cp config/config.example.yaml config/config.yaml
nano config/config.yaml

# Run the server
./mcp-access-point -c config/config.yaml
```
## Prerequisites
- For Railway: Railway account (free tier available)
- For Docker: Docker installed
- For Local: Linux/macOS system (Ubuntu 20.04+ recommended)
- OpenAPI Spec: Your API's OpenAPI/Swagger specification (JSON or YAML)
## Architecture

```mermaid
graph LR
    A[AI Assistant<br/>Cursor/VS Code] -->|MCP Protocol<br/>SSE or HTTP| B[HTTP-API-To-MCP-Server<br/>Railway/Docker]
    B -->|Standard HTTP| C1[Your REST API<br/>api.example.com]
    B -->|Standard HTTP| C2[Another API<br/>internal-service:8080]
    B -->|Standard HTTP| C3[Third-party API<br/>api.weather.gov]
    style A fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style B fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style C1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style C2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style C3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
```
**How it works:**

1. The AI assistant sends MCP-formatted requests via SSE or Streamable HTTP
2. HTTP-API-To-MCP-Server receives the request and parses the MCP payload
3. The server translates the request to standard HTTP based on the OpenAPI spec
4. The backend API processes the HTTP request normally (no changes needed)
5. The server converts the HTTP response back to MCP format
6. The AI assistant receives the structured MCP response
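Conceptually, the translation step maps an MCP `tools/call` payload onto a plain HTTP request using metadata taken from the OpenAPI spec. The following is a hypothetical Python sketch of that mapping, not the server's actual Rust internals; the `getForecast` operation and its fields are illustrative.

```python
# Hypothetical sketch of the translation step: an MCP "tools/call"
# request is mapped to a plain HTTP request using the OpenAPI spec.
# The operation table and payload shapes are illustrative only.

def mcp_call_to_http(mcp_request, operations):
    """Translate an MCP tools/call payload into (method, url, arguments)."""
    tool_name = mcp_request["params"]["name"]        # MCP tool == operationId
    arguments = mcp_request["params"]["arguments"]   # tool-call arguments
    op = operations[tool_name]                       # looked up from the spec
    # Substitute path parameters such as {city} from the tool arguments
    path = op["path"].format(**arguments)
    return op["method"], op["base_url"] + path, arguments

# A tiny "parsed OpenAPI spec": one GET operation with a path parameter
operations = {
    "getForecast": {
        "method": "GET",
        "path": "/forecast/{city}",
        "base_url": "https://api.example.com",
    },
}

mcp_request = {
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "getForecast", "arguments": {"city": "london"}},
}

print(mcp_call_to_http(mcp_request, operations))
# ('GET', 'https://api.example.com/forecast/london', {'city': 'london'})
```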
## Configuration

### Basic Configuration Structure

Create a `config.yaml` file with the following structure:

```yaml
# Server Configuration
pingora:
  version: 1
  threads: 4
  pid_file: /tmp/pingora.pid
  upgrade_sock: /tmp/pingora_upgrade.sock

access_point:
  listeners:
    - address: 0.0.0.0:8080

admin:
  address: "127.0.0.1:8081"
  api_key: "your-secure-api-key-here"

# MCP Services
mcps:
  - id: my-api-service
    upstream_id: 1
    path: config/openapi.json  # Local file or https:// URL

# Backend Services
upstreams:
  - id: 1
    nodes:
      "api.example.com": 1
    type: roundrobin
    scheme: https
    pass_host: rewrite
    upstream_host: api.example.com
```
### Configuration Examples

See the directory for complete configuration examples:

- Basic REST API proxy
- Multiple APIs with different auth
- External API integration
- Private network APIs
### OpenAPI Specification

Your OpenAPI spec can be:

- A local file: `config/openapi.json` or `config/openapi.yaml`
- A remote URL: `https://api.example.com/openapi.json`

Both Swagger 2.0 and OpenAPI 3.0+ formats are supported.

The server automatically:

- Parses operation IDs, paths, methods, and parameters
- Generates MCP tool definitions from your API endpoints
- Validates requests against your schema
- Maps responses to MCP format
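As a concrete reference point, here is a minimal OpenAPI 3.0 document of the kind the server consumes. It is an illustrative example (the `getUser` operation is made up); per the behavior described above, the `operationId` becomes the name of the generated MCP tool.

```yaml
# config/openapi.yaml — a minimal OpenAPI 3.0 spec; the operationId
# ("getUser") becomes the name of the generated MCP tool
openapi: "3.0.0"
info:
  title: Example API
  version: "1.0"
paths:
  /users/{id}:
    get:
      operationId: getUser
      summary: Fetch a user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single user
```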
## Railway Deployment

### Step 1: Prepare Your Repository

1. Fork this repository to your GitHub account
2. Add your `config.yaml` to the `config/` directory
3. Add your OpenAPI spec to `config/openapi.json`
### Step 2: Deploy to Railway

1. Go to Railway
2. Click New Project → Deploy from GitHub repo
3. Select your forked repository
4. Railway will automatically detect the configuration
### Step 3: Configure Environment Variables

In the Railway dashboard, add these variables:

| Variable | Description | Example |
|---|---|---|
| `PORT` | Server listening port | `8080` |
| `ADMIN_API_KEY` | Admin API authentication key | `your-secure-key-123` |
| `CONFIG_PATH` | Path to config file | `/app/config/config.yaml` |
### Step 4: Configure Networking

1. In the Railway dashboard, go to Settings → Networking
2. Click Generate Domain to get a public URL
3. Your MCP server will be available at `https://your-app.railway.app`
### Step 5: Verify Deployment

```bash
# Test the SSE endpoint
curl https://your-app.railway.app/sse

# Test the Streamable HTTP endpoint
curl https://your-app.railway.app/mcp
```
### Railway Configuration Files

The repository includes:

- `railway.toml`: Railway build and deployment configuration
- `Dockerfile`: Container image definition
- `docker-entrypoint.sh`: Container startup script
## Connecting MCP Clients

### Cursor Desktop

1. Open Cursor Settings (Cmd/Ctrl + ,)
2. Navigate to Features → Model Context Protocol
3. Add a new MCP server:

```json
{
  "mcpServers": {
    "my-api": {
      "url": "https://your-app.railway.app/sse",
      "transport": "sse"
    }
  }
}
```

4. Restart Cursor
5. Your API tools will now appear in the AI chat context
### VS Code (GitHub Copilot)

Add to `.vscode/settings.json`:

```json
{
  "github.copilot.chat.mcpServers": {
    "my-api": {
      "url": "https://your-app.railway.app/mcp",
      "transport": "http"
    }
  }
}
```
### Windsurf IDE

Configure in Windsurf settings:

```json
{
  "cascade.mcp.servers": {
    "my-api": {
      "url": "https://your-app.railway.app/sse",
      "transport": "sse"
    }
  }
}
```
### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "my-api": {
      "url": "https://your-app.railway.app/sse",
      "transport": "sse"
    }
  }
}
```
### MCP Inspector (Testing)

```bash
npx @modelcontextprotocol/inspector
# Navigate to http://localhost:6274/
# Select transport: SSE or Streamable HTTP
# Enter URL: https://your-app.railway.app/sse
```
## Security Best Practices

### 1. API Key Management

- Generate strong keys: `openssl rand -hex 32`
- Use environment variables for sensitive values
- Rotate keys regularly (every 90 days recommended)
- Use different keys for dev/staging/production
### 2. Network Security

- Enable HTTPS (automatic with Railway)
- Restrict the Admin API to localhost only (`127.0.0.1:8081`)
- Use firewall rules to limit access
- Enable rate limiting in production
### 3. Configuration Security

```yaml
# Good: read secrets from environment variables
upstreams:
  - id: 1
    headers:
      Authorization: "${API_KEY}"  # Read from environment

# Bad: hardcoded secrets
upstreams:
  - id: 1
    headers:
      Authorization: "Bearer secret-token-123"
```
### 4. Railway-Specific Security
- Use Railway's Private Networking for internal services
- Enable Deploy Protection for production environments
- Use Environment Variables for all secrets
- Enable Health Checks for automatic recovery
## Monitoring & Troubleshooting

### Health Checks

```bash
# Check whether the server is running
curl https://your-app.railway.app/sse
# Expected response: 405 Method Not Allowed (normal for GET on the SSE endpoint)
```
### View Logs (Railway)

1. Go to the Railway dashboard
2. Select your project
3. Click Deployments → View Logs
4. Filter by severity: Info, Warning, Error
### Common Issues

| Issue | Cause | Solution |
|---|---|---|
| Connection refused | Server not running | Check Railway logs, verify deployment |
| 404 Not Found | Wrong endpoint URL | Use `/sse` or `/mcp`, not the root `/` |
| 401 Unauthorized | Missing/invalid API key | Check the upstream headers configuration |
| OpenAPI spec not found | Incorrect file path | Verify `path:` in `config.yaml` |
| Timeout errors | Slow upstream service | Increase the timeout in the config |
### Debug Mode

Enable verbose logging in `config.yaml`:

```yaml
pingora:
  error_log: /tmp/error.log
  log_level: debug
```
## Testing

### Local Testing

```bash
# Start the server
./mcp-access-point -c config/config.yaml

# Test with MCP Inspector
npx @modelcontextprotocol/inspector

# Test with curl
curl -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
```
### Integration Testing

See the repository for a Python MCP client example.
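The steps above can be exercised programmatically. The following is a minimal, stdlib-only Python sketch (not the repository's client) that sends the same `tools/list` JSON-RPC request to the Streamable HTTP endpoint and extracts the generated tool names; the endpoint URL is a placeholder for your deployment.

```python
# Minimal integration-test sketch for the /mcp Streamable HTTP endpoint.
# Uses only the standard library; the endpoint URL is a placeholder.
import json
import urllib.request


def tools_list_payload():
    """JSON-RPC 2.0 request body for the MCP tools/list method."""
    return {"jsonrpc": "2.0", "method": "tools/list", "id": 1}


def parse_tool_names(response_body):
    """Extract tool names from a JSON-RPC tools/list response."""
    return [tool["name"] for tool in response_body["result"]["tools"]]


def list_tools(endpoint):
    """POST tools/list to the given /mcp endpoint and return tool names."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(tools_list_payload()).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_tool_names(json.load(resp))
```

With a server running, `list_tools("http://localhost:8080/mcp")` should return the tool names generated from your OpenAPI spec.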
## Documentation

- Detailed Railway setup
- Complete config options
- Connecting various MCP clients
- Admin API documentation
- Common issues and solutions
## Contributing

Contributions are welcome! Please read our contributing guidelines for details.
### Development Setup

```bash
# Clone the repository
git clone https://github.com/ARJ999/HTTP-API-To-MCP-Server.git
cd HTTP-API-To-MCP-Server

# Build from source (requires Rust)
cargo build --release

# Run tests
cargo test
```
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Acknowledgments
- Built on Pingora by Cloudflare
- Inspired by mcp-access-point by sxhxliang
- Model Context Protocol by Anthropic
## Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
## Roadmap

- [x] SSE transport support
- [x] Streamable HTTP transport support
- [x] Railway deployment template
- [ ] WebSocket transport support
- [ ] Built-in authentication plugins
- [ ] Prometheus metrics export
- [ ] GraphQL API support
- [ ] gRPC protocol support
Made with ❤️ for the MCP community