CAPE MCP Worker
Cloudflare Worker that exposes the full CAPE Sandbox API surface as Model Context Protocol (MCP) tools. It serves MCP metadata and executes tool calls over HTTPS without requiring any Node/Express runtime.
✨ Highlights
- 32 MCP tools mapped directly to CAPE endpoints (task submission, lifecycle, files, reports, platform status).
- Zod validation for every tool input and automatic JSON Schema metadata at `/.well-known/mcp.json`.
- Worker-first HTTP implementation (Request/Response) with zero Node polyfills; see the sketch after this list.
- Binary artifact streaming with size guardrails and base64 packaging for MCP clients.
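To make the Worker-first point concrete, here is a minimal routing sketch. It is illustrative only: the registry lookup and response shapes are assumptions, not the actual `src/worker.ts` wiring.

```ts
// Illustrative sketch, not the actual src/worker.ts.
export interface Env {
  CAPE_BASE_URL: string;
  CAPE_API_TOKEN?: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(request.url);

    if (request.method === "GET" && pathname === "/healthz") {
      return new Response("ok");
    }
    if (request.method === "GET" && pathname === "/.well-known/mcp.json") {
      // Tool descriptors carry JSON Schemas derived from their Zod input schemas.
      return Response.json({ tools: [] });
    }
    if (request.method === "POST" && pathname === "/mcp/tools/call") {
      const body = (await request.json()) as { toolName: string; arguments?: unknown };
      // Look up the tool, validate body.arguments with its Zod schema,
      // then proxy the call to the CAPE API at env.CAPE_BASE_URL.
      return Response.json({ toolName: body.toolName, result: null });
    }
    return new Response("Not found", { status: 404 });
  },
};
```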
Requirements
- Node.js 20+ for TypeScript tooling and Vitest.
- Wrangler 3.0+ for local dev and deploys (`npm install -g wrangler`).
- A reachable CAPE Sandbox instance, plus optional remote-download API keys.
Configuration
The Worker reads all configuration from environment bindings (Wrangler vars/secrets). The same names can be placed in `.dev.vars` for local `wrangler dev`.
| Variable | Required | Description |
|---|---|---|
| `CAPE_BASE_URL` | ✅ | CAPE API base URL (include `/apiv2`). |
| `CAPE_API_TOKEN` | optional | API token for CAPE deployments that require auth. |
| `CAPE_VT_API_KEY` | optional | Default VirusTotal/MalwareBazaar key used by `cape.file.remote-create` when the caller omits `apiKey`. |
| `MAX_BINARY_BYTES` | optional | Max size (bytes) allowed when proxying binary artifacts (default 25 MB). |
| `HTTP_TIMEOUT_MS` | optional | CAPE HTTP timeout in milliseconds (default 60000). |
Tip: Copy `.env.example` to `.dev.vars` and adjust values for local testing. In production, store secrets with `wrangler secret put NAME`.
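For example, a local `.dev.vars` might look like this (all values are placeholders):

```ini
CAPE_BASE_URL=https://cape.example.internal/apiv2
CAPE_API_TOKEN=replace-me
CAPE_VT_API_KEY=replace-me
MAX_BINARY_BYTES=26214400
HTTP_TIMEOUT_MS=60000
```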
Setup & Local Development
```bash
npm install
```
Start a locally emulated Worker (watches and hot-reloads):
```bash
npm run dev
```
This runs `wrangler dev`, exposing the MCP HTTP surface on `http://127.0.0.1:8787`:
| Method | Path | Description |
|---|---|---|
| GET | `/healthz` | Worker health check. |
| GET | `/.well-known/mcp.json` | MCP metadata blob (tools + schemas). |
| POST | `/mcp/tools/list` | Returns a `{ tools }` array describing every tool. |
| POST | `/mcp/tools/call` | Executes a tool via `{ toolName, arguments }`. |
Example invocation:
```bash
curl -X POST http://127.0.0.1:8787/mcp/tools/call \
  -H "Content-Type: application/json" \
  -d '{"toolName":"cape.tasks.status","arguments":{"taskId":1234}}'
```
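Listing the registered tools works the same way (shown here with an empty JSON body; whether a body is required depends on the router):

```bash
curl -X POST http://127.0.0.1:8787/mcp/tools/list \
  -H "Content-Type: application/json" \
  -d '{}'
```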
Deploying to Cloudflare
```bash
npm run deploy   # wraps wrangler publish
```
`wrangler.toml` already points to `src/worker.ts` and sets a default `compatibility_date`. Provide production bindings/secrets via `wrangler.toml`, `wrangler secret put`, or the Cloudflare Dashboard before publishing.
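For example, using the bindings from the configuration table:

```bash
# Store sensitive bindings as Worker secrets, then publish.
wrangler secret put CAPE_API_TOKEN
wrangler secret put CAPE_VT_API_KEY
npm run deploy
```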
Testing
```bash
npm test
```
Vitest validates the tool registry (unique names + JSON Schema emission). Add new tests alongside `tests/mcp.tools.test.ts` as you extend the Worker.
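A new test could mirror the existing registry checks. The import path and export name below are assumptions; adjust them to the real module:

```ts
// tests/example.test.ts — illustrative only; the `tools` export name is assumed.
import { describe, expect, it } from "vitest";
import { tools } from "../src/mcp/tools";

describe("tool registry", () => {
  it("keeps tool names unique", () => {
    const names = tools.map((t: { name: string }) => t.name);
    expect(new Set(names).size).toBe(names.length);
  });
});
```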
Extending the Worker
- New CAPE operation: add a descriptor in `src/mcp/tools.ts` and a companion method in `src/services/mcpService.ts` (reusing `CapeApi`); a descriptor sketch follows this list.
- Expose configuration: extend `ConfigSchema` in `src/config.ts`, then document the binding in this README and `wrangler.toml`.
- Hardening: enhance `src/mcp/router.ts` to emit richer error codes or add rate limiting if needed.
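As a sketch of the first item, a new descriptor might look like the following. The descriptor shape, field names, and the `cape.tasks.delete` tool are assumptions for illustration, not the project's actual types:

```ts
// Hypothetical descriptor for src/mcp/tools.ts — field names are assumptions.
import { z } from "zod";

export const capeTasksDelete = {
  name: "cape.tasks.delete",
  description: "Delete a finished CAPE task by id.",
  inputSchema: z.object({ taskId: z.number().int().positive() }),
  // A companion method in src/services/mcpService.ts would receive the validated
  // arguments and call the corresponding CAPE endpoint through CapeApi.
};
```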
Because the Worker is stateless, multiple deployments can run in parallel without coordination. Keep `MAX_BINARY_BYTES` conservative to avoid excessive memory usage when proxying large dumps.
Happy reversing! 🕵️‍♀️