Zyla API Hub MCP Server
An MCP (Model Context Protocol) server that gives any AI agent the ability to call any API on the Zyla API Hub with a single tool.
Zyla API Hub • Browse APIs
You connect this server to your AI agent (Claude, Cursor, OpenClaw, OpenAI Agents, etc.), and the agent automatically learns how to make HTTP requests to any Zyla API endpoint. No custom code needed.
Built with the official MCP Python SDK (v1.x).
Table of Contents
- How It Works (Simple Explanation)
- Quick Start (3 Steps)
- Connecting to AI Agents
- What the Agent Sees
- Usage Examples
- Docker Deployment
- Project Structure
- Troubleshooting
- License
How It Works (Simple Explanation)
You (human)          AI Agent             This MCP Server        Zyla API Hub
     |                   |                       |                     |
     | "Get crime data   |                       |                     |
     |  for zip 90210"   |                       |                     |
     |------------------>|                       |                     |
     |                   | call_api(GET, url,    |                     |
     |                   |   headers, params)    |                     |
     |                   |---------------------->|                     |
     |                   |                       |       HTTP GET      |
     |                   |                       |-------------------->|
     |                   |                       |    JSON response    |
     |                   |                       |<--------------------|
     |                   | {status: 200,         |                     |
     |                   |  response: {...}}     |                     |
     |                   |<----------------------|                     |
     | "Crime grade is   |                       |                     |
     |  B+ for 90210..." |                       |                     |
     |<------------------|                       |                     |
In plain words:
- You ask your AI agent a question (in natural language).
- The agent decides it needs to call an API and uses the call_api tool from this server.
- This server makes the HTTP request to the Zyla API Hub and returns the data.
- The agent reads the data and answers you in natural language.
The agent figures out which API to call, what parameters to use, and how to interpret the results, all on its own. You just ask the question.
Quick Start (3 Steps)
1. Install
git clone https://github.com/zyla-labs/zyla-api-hub-mcp.git
cd zyla-api-hub-mcp
pip install -r requirements.txt
2. Run
python mcp_server.py
3. Connect your AI agent
Pick your agent from the list below and follow the one-time setup. After that, just chat normally; the agent will use the Zyla APIs when needed.
Connecting to AI Agents
Claude Desktop
Add to your config file (claude_desktop_config.json):
| OS | Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| Linux | ~/.config/Claude/claude_desktop_config.json |
{
"mcpServers": {
"zyla-api-hub": {
"command": "python",
"args": ["/absolute/path/to/mcp_server.py"],
"env": {}
}
}
}
Restart Claude Desktop. You'll see a hammer icon in the chat, which means the tool is available. Just ask:
"Use the Zyla API to get the crime rates for zip code 90210. My API key is Bearer sk-zyla..."
Claude Code (CLI)
claude mcp add zyla-api-hub -- python /absolute/path/to/mcp_server.py
Then chat normally. Claude Code will invoke call_api when it needs to call an API.
Cursor IDE
Add to .cursor/mcp.json in your project (or global Cursor settings):
{
"mcpServers": {
"zyla-api-hub": {
"command": "python",
"args": ["/absolute/path/to/mcp_server.py"]
}
}
}
In Cursor's Agent mode, the AI will use call_api when you ask it to fetch data from an API.
OpenClaw
OpenClaw is a self-hosted AI agent gateway that supports multiple chat channels (WhatsApp, Telegram, Slack, Discord, iMessage, etc.) and can use external tools via MCP.
Option A: Add via CLI (recommended):
openclaw mcp add --transport stdio zyla-api-hub python /absolute/path/to/mcp_server.py
This registers the MCP server so the OpenClaw agent can discover and use the call_api tool.
Option B: Add via config (~/.openclaw/openclaw.json):
If you prefer manual configuration, add the MCP server in your OpenClaw config:
{
// ... your existing openclaw.json config ...
"mcpServers": {
"zyla-api-hub": {
"command": "python",
"args": ["/absolute/path/to/mcp_server.py"],
"transport": "stdio"
}
}
}
Then restart the gateway:
openclaw gateway restart
Option C: Docker + SSE (network deployment):
If your OpenClaw gateway runs on a remote server or in Docker, use SSE transport:
# Start the MCP server with SSE transport
docker run -p 8000:8000 -e MCP_TRANSPORT=sse ghcr.io/zyla-labs/mcp-server:latest
# Then register it in OpenClaw pointing to the network URL
openclaw mcp add --transport sse zyla-api-hub http://localhost:8000/sse
Using it:
Once connected, chat with your OpenClaw agent through any channel (WhatsApp, Slack, Telegram, etc.) and it will automatically use the Zyla API when needed:
You (via WhatsApp): "What's the weather like in Buenos Aires?"
OpenClaw agent -> calls call_api(
method="GET",
url="https://www.zylalabs.com/api/.../weather",
headers={"Authorization": "Bearer sk-zyla..."},
params={"city": "Buenos Aires"}
)
OpenClaw agent: "It's currently 18 °C and partly cloudy in Buenos Aires."
For more on OpenClaw setup, see the Getting Started guide.
OpenAI Agents SDK
The OpenAI Agents SDK supports MCP servers as tool providers natively:
from agents import Agent
from agents.mcp import MCPServerStdio
async with MCPServerStdio(
command="python",
args=["/absolute/path/to/mcp_server.py"],
) as mcp_server:
agent = Agent(
name="Zyla Assistant",
instructions="You can call any API on the Zyla API Hub using the call_api tool.",
mcp_servers=[mcp_server],
)
# The agent now has access to call_api
LangChain
Use the LangChain MCP Adapter to wrap MCP tools as LangChain tools:
from langchain_mcp_adapters.client import MultiServerMCPClient
async with MultiServerMCPClient({
"zyla-api-hub": {
"command": "python",
"args": ["/absolute/path/to/mcp_server.py"],
"transport": "stdio",
}
}) as client:
tools = client.get_tools()
# Use tools with any LangChain agent
Custom Python Agent
Build your own agent using the official MCP Python SDK:
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
async def main():
# 1. Point to the MCP server
server_params = StdioServerParameters(
command="python",
args=["mcp_server.py"],
)
# 2. Connect
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
# 3. See available tools
tools = await session.list_tools()
print("Tools:", [t.name for t in tools.tools])
# Output: Tools: ['call_api']
# 4. Call an API
result = await session.call_tool("call_api", arguments={
"method": "GET",
"url": "https://www.zylalabs.com/api/824/crime+data+by+zipcode+api/583/get+crime+rates+by+zip",
"headers": {"Authorization": "Bearer YOUR_ZYLA_API_KEY"},
"params": {"zip": "90210"},
})
# 5. Read the response
print("Status:", result.structured_content["status_code"])
print("Data:", result.structured_content["response"])
asyncio.run(main())
SSE Network Agent
For agents connecting over HTTP (web apps, microservices, remote deployments):
# Start the server with SSE transport first
python mcp_server.py sse
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client
async def main():
async with sse_client("http://localhost:8000/sse") as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
result = await session.call_tool("call_api", arguments={
"method": "GET",
"url": "https://www.zylalabs.com/api/XXXX/your+api/YYY/endpoint",
"headers": {"Authorization": "Bearer YOUR_KEY"},
})
print(result.structured_content)
asyncio.run(main())
MCP Inspector (Testing)
The MCP Inspector lets you test the server interactively in a browser:
# Terminal 1: start the server
python mcp_server.py sse
# Terminal 2: start the inspector
npx -y @modelcontextprotocol/inspector
Open the Inspector UI and connect to http://localhost:8000/sse. You can browse the tool schema and invoke call_api manually.
What the Agent Sees
When an AI agent connects, it receives this tool schema automatically via the MCP protocol:
Tool name: call_api
Description (read by the LLM):
Call any API endpoint from the Zyla API Hub. Supports GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS. Always include an Authorization header with your Zyla API key.
Input parameters:
| Parameter | Type | Required | What it does |
|---|---|---|---|
| method | string | Yes | HTTP method (GET, POST, PUT, DELETE, PATCH, etc.) |
| url | string | Yes | Full API endpoint URL |
| headers | dict[str, str] | No | HTTP headers (include Authorization here) |
| params | dict[str, str] | No | Query-string parameters (sent on all methods) |
| data | dict | No | JSON body (for POST, PUT, PATCH only) |
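To make the parameter semantics concrete, here is a minimal stdlib-only sketch of how these inputs map onto an outgoing HTTP request. `build_request` is a hypothetical helper for illustration, not code from mcp_server.py:

```python
# Hedged sketch: mapping call_api's inputs onto an HTTP request.
# build_request is illustrative only, not part of this repo.
from urllib.parse import urlencode

def build_request(method, url, headers=None, params=None, data=None):
    # Query params are appended for every HTTP method, not just GET.
    if params:
        sep = "&" if "?" in url else "?"
        url = f"{url}{sep}{urlencode(params)}"
    # A JSON body is only sent with methods that can carry one.
    body = data if method.upper() in {"POST", "PUT", "PATCH"} else None
    return {"method": method.upper(), "url": url,
            "headers": headers or {}, "json": body}

req = build_request(
    "GET",
    "https://www.zylalabs.com/api/824/crime+data+by+zipcode+api/583/get+crime+rates+by+zip",
    headers={"Authorization": "Bearer YOUR_ZYLA_API_KEY"},
    params={"zip": "90210"},
)
# req["url"] now ends with "?zip=90210"; req["json"] is None for GET
```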
Output (structured):
| Field | Type | What it means |
|---|---|---|
| status_code | int | HTTP status code (200, 404, etc.); 0 = the request never reached the server |
| response | str \| dict \| list | Parsed JSON body, or raw text if not JSON |
| error | str \| null | Error message if something went wrong; null on success |
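A client consuming this structured output might branch on it as follows. This is a minimal sketch; `handle_result` is an illustrative name, not part of this repo:

```python
# Hedged sketch: interpreting call_api's structured output.
# handle_result is an illustrative helper, not code from mcp_server.py.
def handle_result(result: dict):
    if result.get("error") is not None:
        # status_code 0 means the request never reached the server
        kind = "transport" if result.get("status_code") == 0 else "HTTP"
        raise RuntimeError(f"{kind} error: {result['error']}")
    # response may be a dict, list, or raw string depending on the API
    return result["response"]

data = handle_result({"status_code": 200,
                      "response": {"data": "...", "count": 42},
                      "error": None})
# data == {"data": "...", "count": 42}
```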
Usage Examples
Simple GET
{
"method": "GET",
"url": "https://www.zylalabs.com/api/824/crime+data+by+zipcode+api/583/get+crime+rates+by+zip",
"headers": { "Authorization": "Bearer YOUR_ZYLA_API_KEY" },
"params": { "zip": "90210" }
}
POST with JSON body
{
"method": "POST",
"url": "https://www.zylalabs.com/api/XXXX/some+api/YYY/endpoint",
"headers": {
"Authorization": "Bearer YOUR_ZYLA_API_KEY",
"Content-Type": "application/json"
},
"data": { "input_text": "Hello, world!", "language": "en" }
}
POST with query params + body
{
"method": "POST",
"url": "https://www.zylalabs.com/api/XXXX/some+api/YYY/search",
"headers": { "Authorization": "Bearer YOUR_ZYLA_API_KEY" },
"params": { "page": "1", "limit": "10" },
"data": { "query": "machine learning" }
}
Success response
{
"status_code": 200,
"response": { "data": "...", "count": 42 },
"error": null
}
Error response
{
"status_code": 0,
"response": "",
"error": "Request timed out after 30 seconds"
}
Docker Deployment
Build
docker build -t zyla-mcp-server .
Run (stdio, for local agents)
docker run -i zyla-mcp-server
Run (SSE, for network agents)
docker run -p 8000:8000 -e MCP_TRANSPORT=sse zyla-mcp-server
Pre-built image
docker pull ghcr.io/zyla-labs/mcp-server:latest
docker run -p 8000:8000 -e MCP_TRANSPORT=sse ghcr.io/zyla-labs/mcp-server:latest
The CI pipeline auto-publishes to GitHub Container Registry on every push to master.
Project Structure
zyla-api-hub-mcp/
├── mcp_server.py        # MCP server (single file, all logic)
├── pyproject.toml       # Python packaging (PEP 621)
├── requirements.txt     # Pinned dependencies
├── Dockerfile           # Docker image
├── .dockerignore
├── .github/
│   └── workflows/
│       └── publish.yml  # CI/CD → GHCR
└── README.md
Dependencies: mcp[cli]>=1.26.0 and httpx>=0.27.0. That's it.
Key design choices:
- Official mcp SDK (mcp.server.fastmcp.FastMCP), not the third-party fastmcp package
- httpx instead of requests (async-ready, aligned with the MCP ecosystem)
- Pydantic ApiResponse model for structured output (LLMs get a typed JSON schema)
- No mutable default arguments (None instead of {})
- Granular error handling (timeout, request error, unexpected error, each with a clear message)
- Query params sent on all HTTP methods (not just GET)
- Transport selectable via CLI argument (stdio or sse)
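The mutable-default point deserves a quick illustration, since it is a classic Python pitfall. The functions below are toy examples, not code from mcp_server.py:

```python
# Toy illustration of why the server uses None rather than {} as a default:
# a {} default is created once at definition time and shared by every call.
def risky(headers={}):            # one dict object, reused across calls
    headers.setdefault("X-Count", 0)
    headers["X-Count"] += 1
    return headers

def safe(headers=None):           # fresh dict on each call
    headers = {} if headers is None else headers
    headers.setdefault("X-Count", 0)
    headers["X-Count"] += 1
    return headers

risky()
second = risky()                  # state leaks between calls
assert second["X-Count"] == 2
assert safe()["X-Count"] == 1     # no leakage with the None pattern
```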
Troubleshooting
| Problem | Solution |
|---|---|
| ModuleNotFoundError: No module named 'mcp' | Run pip install -r requirements.txt |
| Server starts but agent can't see the tool | Make sure you're using the right transport. Local agents (Claude Desktop, Cursor) use stdio (default). Network agents use sse. |
| Request timeout errors | The default timeout is 30 seconds. If the upstream Zyla API is slow, you'll get a clear error message. |
| Docker SSE "connection refused" | Expose the port: docker run -p 8000:8000 -e MCP_TRANSPORT=sse zyla-mcp-server |
| OpenClaw doesn't see the tool | Restart the gateway after adding the MCP server: openclaw gateway restart |
License
MIT