MCP Server for n8n Workflow Integration
This project provides a Model Context Protocol (MCP) server that exposes n8n workflows as MCP tools over Streamable HTTP. It allows you to integrate n8n automation workflows with AI assistants and other MCP clients.
Note: Use goose_1.12.0_amd64.deb, since goose_1.12.1_amd64.deb is broken.
Features
- Expose n8n webhooks as MCP tools
- Support for multiple workflows with custom input schemas
- Authentication support (Basic Auth and Custom Headers)
- Docker containerization
- Kubernetes deployment with NodePort service
- Configurable via environment variables or configuration files
- Streamable HTTP (JSON-RPC 2.0 over HTTP POST) transport for MCP communication
- Compatible with Codename Goose, Claude Desktop, and other MCP clients
Table of Contents
- Architecture
- Quick Start with Codename Goose
- Configuration
- HTTP Configuration
- Running Locally
- Docker Deployment
- Kubernetes Deployment
- Testing
- Configuring MCP Clients
- Workflow Authentication
- Monitoring
- Troubleshooting
- Development
Architecture
┌─────────────┐          ┌──────────────┐          ┌─────────┐
│ MCP Client  │ ◄─HTTP─► │  MCP Server  │ ◄─HTTP─► │   n8n   │
│ (AI Model)  │          │   (Python)   │          │         │
│   Goose     │          │  Port 8000   │          │Webhooks │
└─────────────┘          └──────────────┘          └─────────┘
       │                         │                      │
       │  POST /mcp              │  POST /webhook/{id}  │
       │  (JSON-RPC 2.0)         │                      │
       └───Streamable HTTP───────┘                      │
                                 │                      │
                                 └──────HTTP POST───────┘
Quick Start with Codename Goose
The fastest way to get started with Codename Goose:
- Start the MCP server:
docker run -d -p 8000:8000 \
-e N8N_BASE_URL=http://your-n8n-host:31678 \
-v $(pwd)/workflows.json:/app/workflows.json:ro \
trieder83/mcp-n8n-server:latest
- Configure Goose (~/.config/goose/config.yaml):
# Edit Goose configuration
goose configure
# Or manually add to ~/.config/goose/config.yaml:
# Add this under 'extensions:' section:
extensions:
  mcp-n8n-server:
    available_tools: []
    bundled: null
    description: n8n workflow integration via MCP
    enabled: true
    env_keys: []
    envs: {}
    headers: {}
    name: mcp-n8n-server
    timeout: 300
    type: streamable_http
    uri: http://localhost:8000/mcp
- Start using workflows in Goose:
goose
# Inside Goose:
# "Run the chat_workflow with message 'Hello n8n!'"
See the Configuring MCP Clients section for detailed setup instructions.
Configuration
Workflow Configuration
Workflows are configured via a JSON file (workflows.json) or environment variable. Each workflow can specify:
- name: Unique identifier for the MCP tool
- webhook_id: n8n webhook ID (from the webhook URL)
- description: Description of the workflow
- input_schema: JSON Schema for input parameters
- required_fields: List of required input fields
- auth_type: Authentication type ("basic", "header", or "none")
- basic_auth: Base64-encoded credentials for Basic Auth
- auth_header_name: Custom header name for authentication
- auth_header_value: Custom header value for authentication
- custom_headers: Additional custom headers (key-value pairs)
Example Workflow Configuration
[
  {
    "name": "chat_workflow",
    "webhook_id": "919df572-8dcd-4cd6-b592-9e90ba0db414",
    "description": "Process chat input through n8n workflow",
    "input_schema": {
      "chatInput": {
        "type": "string",
        "description": "The chat message to process"
      }
    },
    "required_fields": ["chatInput"]
  },
  {
    "name": "secure_workflow",
    "webhook_id": "secure-webhook-id",
    "description": "Workflow with Basic authentication",
    "auth_type": "basic",
    "basic_auth": "dXNlcm5hbWU6cGFzc3dvcmQ=",
    "input_schema": {
      "message": {
        "type": "string",
        "description": "Message to send"
      }
    },
    "required_fields": ["message"]
  }
]
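A loader can sanity-check these entries before the server starts. Below is an illustrative Python sketch, assuming the field names from the configuration reference above; the `validate_workflow` helper is ours, not part of server.py:

```python
import json

REQUIRED_KEYS = {"name", "webhook_id", "description"}
VALID_AUTH_TYPES = {"basic", "header", "none"}

def validate_workflow(entry: dict) -> list[str]:
    """Return a list of problems found in one workflow entry."""
    problems = [f"missing key: {key}" for key in REQUIRED_KEYS - entry.keys()]
    auth_type = entry.get("auth_type", "none")
    if auth_type not in VALID_AUTH_TYPES:
        problems.append(f"unknown auth_type: {auth_type}")
    if auth_type == "basic" and not entry.get("basic_auth"):
        problems.append("auth_type 'basic' requires 'basic_auth'")
    # Every required input field should appear in the input schema
    schema = entry.get("input_schema", {})
    for field in entry.get("required_fields", []):
        if field not in schema:
            problems.append(f"required field not in input_schema: {field}")
    return problems

config = json.loads("""[
  {"name": "chat_workflow",
   "webhook_id": "919df572-8dcd-4cd6-b592-9e90ba0db414",
   "description": "Process chat input through n8n workflow",
   "input_schema": {"chatInput": {"type": "string"}},
   "required_fields": ["chatInput"]}
]""")

for wf in config:
    print(wf["name"], validate_workflow(wf))
```

A check like this catches a misspelled `required_fields` entry or a missing `basic_auth` before the server is deployed.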
Environment Variables
| Variable | Description | Default |
|---|---|---|
| SERVER_HOST | Server bind address | 0.0.0.0 |
| SERVER_PORT | Server port | 8000 |
| N8N_BASE_URL | n8n instance base URL | http://localhost:31678 |
| N8N_WORKFLOWS_CONFIG | Path to workflows JSON file | workflows.json |
| N8N_WORKFLOWS_JSON | Workflows JSON as string (alternative to file) | - |
| N8N_AUTH_TYPE | Global auth type (basic/header/none) | none |
| N8N_BASIC_AUTH | Global Basic Auth (base64) | - |
| N8N_AUTH_HEADER_NAME | Global auth header name | Authorization |
| N8N_AUTH_HEADER_VALUE | Global auth header value | - |
HTTP Configuration
This MCP server uses Streamable HTTP as the transport protocol for MCP communication. It implements the MCP protocol over standard HTTP POST requests using JSON-RPC 2.0.
Transport Type
- Transport: http (Streamable HTTP)
- Protocol: HTTP/HTTPS
- Type: JSON-RPC 2.0 over HTTP POST
- Format: Request-response pattern
HTTP Endpoints
The server exposes the following HTTP endpoints:
| Endpoint | Method | Purpose | Description |
|---|---|---|---|
| /mcp | POST | MCP Protocol | Main endpoint for MCP JSON-RPC 2.0 messages |
| / | GET | Health Check | Returns server health status with workflow count |
| /health | GET | Health Check | Same as / - returns detailed health information |
| /ready | GET | Readiness Check | Returns 200 if workflows loaded, 503 otherwise |
Server Configuration
# Default configuration
SERVER_HOST=0.0.0.0 # Bind to all interfaces
SERVER_PORT=8000 # HTTP port
# The server runs on plain HTTP (not HTTPS) by default
# Access via: http://localhost:8000
HTTP Transport Protocol
The MCP server implements streamable HTTP transport using JSON-RPC 2.0:
Client Server
│ │
│ POST /mcp (initialize) │
├──────────────────────────────>│
│ <── JSON Response ───────────│
│ │
│ POST /mcp (tools/list) │
├──────────────────────────────>│
│ <── JSON Response ───────────│
│ │
│ POST /mcp (tools/call) │
├──────────────────────────────>│
│ <── JSON Response ───────────│
│ │
How it works:
- Initialize: Client sends the initialize method to the /mcp endpoint
- List Tools: Client sends tools/list to get available workflows
- Call Tool: Client sends tools/call to execute a workflow
- Response: Server responds synchronously with a JSON-RPC result
Supported MCP Methods:
- initialize - Initialize the MCP connection
- tools/list - List all available n8n workflows
- tools/call - Execute a specific workflow
Example MCP Client Configuration:
{
  "url": "http://localhost:8000/mcp",
  "transport": "http",
  "description": "n8n MCP server with HTTP transport"
}
Verification:
# Test initialization
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
# List tools
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
# Call a tool
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"chat_workflow","arguments":{"chatInput":"test"}}}'
Port Exposure
Local Development:
- Server runs on http://0.0.0.0:8000
- Accessible via http://localhost:8000
Docker:
- Container port: 8000
- Host mapping: -p 8000:8000
- Access via: http://localhost:8000
Kubernetes:
- Service port: 8000
- NodePort: 30888 (configured in k8s/service.yaml)
- Access via: http://<node-ip>:30888
Adding HTTPS Support
For production deployments with HTTPS:
Option 1: Use a Reverse Proxy (Recommended)
# Nginx example
server {
    listen 443 ssl;
    server_name mcp.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Option 2: Kubernetes Ingress with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n-mcp-server
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mcp.example.com
      secretName: mcp-tls
  rules:
    - host: mcp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n-mcp-server
                port:
                  number: 8000
Health Checks
The server provides dedicated health check endpoints:
# Basic health check
curl http://localhost:8000/health
# Expected response
{
"status": "healthy",
"service": "n8n-mcp-server",
"workflows_loaded": 4,
"n8n_base_url": "http://localhost:31678"
}
# Readiness check (returns 503 if no workflows loaded)
curl http://localhost:8000/ready
# Expected response when ready
{
"status": "ready"
}
Kubernetes Probes:
- Liveness Probe: GET /health - checks that the server is running
- Readiness Probe: GET /ready - checks that workflows are loaded and the server is ready to accept requests
Running Locally
Prerequisites
- Python 3.11+
- n8n instance with webhooks enabled
Setup
- Clone the repository:
git clone <repository-url>
cd mcp-n8n-server
- Install dependencies:
pip install -r requirements.txt
- Configure your workflows in workflows.json
- Set environment variables (optional):
cp .env.example .env
# Edit .env with your configuration
- Run the server:
python server.py
The server will be available at http://localhost:8000
Docker Deployment
Build the Docker Image
docker build --network=host -t trieder83/mcp-n8n-server:latest .
# Or use make
make docker-build
Push to Docker Hub
docker push trieder83/mcp-n8n-server:latest
# Or use make
make docker-push
Run the Container
docker run -d \
--name n8n-mcp-server \
-p 8000:8000 \
-e N8N_BASE_URL=http://your-n8n-instance:31678 \
-v $(pwd)/workflows.json:/app/workflows.json:ro \
trieder83/mcp-n8n-server:latest
# Or use make
make docker-run
Kubernetes Deployment
Prerequisites
- Kubernetes cluster (v1.20+)
- kubectl configured
Deploy to Kubernetes
- Update the ConfigMap in k8s/deployment.yaml with your workflows
- Update the n8n base URL:
# In k8s/deployment.yaml, update:
- name: N8N_BASE_URL
  value: "http://your-n8n-service:31678"
- (Optional) Configure authentication secrets:
# Create Basic Auth secret
echo -n "username:password" | base64
kubectl create secret generic n8n-mcp-secrets \
--from-literal=basic-auth='<base64-encoded-credentials>'
# Or create header auth secret
kubectl create secret generic n8n-mcp-secrets \
--from-literal=auth-token='your-api-key'
- Deploy:
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
- Check the deployment:
kubectl get pods -l app=n8n-mcp-server
kubectl get svc n8n-mcp-server
The service will be exposed on NodePort 30888 (as configured in k8s/service.yaml). Access it via:
http://<node-ip>:30888
Accessing the Service
# Get node IP
kubectl get nodes -o wide
# Access the service
curl http://<node-ip>:30888/
Testing
Test n8n Webhook Directly
curl -X POST \
-H "Content-Type: application/json" \
http://localhost:31678/webhook/919df572-8dcd-4cd6-b592-9e90ba0db414 \
-d '{"chatInput":"test"}'
Test MCP Server
The MCP server speaks JSON-RPC 2.0 over HTTP POST on the /mcp endpoint. Connect using an MCP client, or test with curl:
# List available tools
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
Configuring MCP Clients
Codename Goose Configuration
To use this n8n MCP server with Codename Goose, you need to configure it in Goose's MCP settings.
1. Local Development Setup
Edit your Goose configuration file at ~/.config/goose/config.yaml:
extensions:
  mcp-n8n-server:
    available_tools: []
    bundled: null
    description: n8n workflow integration via MCP
    enabled: true
    env_keys: []
    envs: {}
    headers: {}
    name: mcp-n8n-server
    timeout: 300
    type: streamable_http
    uri: http://localhost:8000/mcp
2. Kubernetes/Remote Setup
If your MCP server is running in Kubernetes with NodePort:
extensions:
  mcp-n8n-server:
    available_tools: []
    bundled: null
    description: n8n workflow integration via Kubernetes
    enabled: true
    env_keys: []
    envs: {}
    headers: {}
    name: mcp-n8n-server
    timeout: 300
    type: streamable_http
    uri: http://<kubernetes-node-ip>:30888/mcp
3. Using Docker Compose (Recommended for local development)
Create a docker-compose.yml file to run both Goose and the MCP server together:
version: '3.8'
services:
  mcp-server:
    image: trieder83/mcp-n8n-server:latest
    container_name: n8n-mcp-server
    ports:
      - "8000:8000"
    environment:
      - N8N_BASE_URL=http://your-n8n-instance:31678
      - N8N_WORKFLOWS_CONFIG=/config/workflows.json
    volumes:
      - ./workflows.json:/config/workflows.json:ro
    restart: unless-stopped
Then add to ~/.config/goose/config.yaml:
extensions:
  mcp-n8n-server:
    type: streamable_http
    uri: http://localhost:8000/mcp  # Goose runs on the host, so use the published port
    enabled: true
    timeout: 300
4. Verify Connection
Once configured, start Goose and verify the connection:
goose
In the Goose interface, you should see the n8n workflows available as tools. You can list them with:
list tools
5. Using n8n Workflows in Goose
Once connected, you can invoke n8n workflows directly from Goose:
use tool chat_workflow with {"chatInput": "Hello from Goose!"}
Or simply ask Goose to use the workflow:
Run the chat_workflow with the message "analyze this data"
Goose will automatically call the appropriate n8n workflow through the MCP server.
Other MCP Clients
This server implements the standard MCP protocol with HTTP transport and should work with any MCP-compatible client:
- Claude Desktop: Add to claude_desktop_config.json
- Continue.dev: Configure in Continue settings
- Custom clients: Connect to http://host:port/mcp
Example for Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
  "mcpServers": {
    "n8n-workflows": {
      "url": "http://localhost:8000/mcp",
      "transport": "http",
      "description": "n8n workflow integration"
    }
  }
}
Workflow Authentication
Basic Authentication
For workflows requiring Basic Auth:
- Generate base64 credentials:
echo -n "username:password" | base64
- Add to workflow config:
{
  "name": "secure_workflow",
  "auth_type": "basic",
  "basic_auth": "dXNlcm5hbWU6cGFzc3dvcmQ=",
  ...
}
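The same credentials can be generated in Python instead of the shell pipeline above; `encode_basic_auth` is an illustrative helper, equivalent to `echo -n "username:password" | base64`:

```python
import base64

def encode_basic_auth(username: str, password: str) -> str:
    """Base64-encode 'user:password' for the basic_auth config field."""
    token = f"{username}:{password}".encode("utf-8")
    return base64.b64encode(token).decode("ascii")

print(encode_basic_auth("username", "password"))
# → dXNlcm5hbWU6cGFzc3dvcmQ=
```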
Header Authentication
For workflows requiring custom header authentication:
{
  "name": "api_workflow",
  "auth_type": "header",
  "auth_header_name": "X-API-Key",
  "auth_header_value": "your-api-key-here",
  ...
}
Global vs Workflow-Specific Authentication
- Set global authentication via environment variables (applies to all workflows)
- Override with workflow-specific settings in workflows.json
- Workflow-specific settings take precedence over global settings
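The precedence rule can be sketched as follows; `resolve_auth_headers` is illustrative, and the actual resolution in server.py may differ in detail:

```python
def resolve_auth_headers(workflow: dict, global_auth: dict) -> dict:
    """Build outgoing auth headers: workflow-specific settings win over global."""
    auth_type = workflow.get("auth_type") or global_auth.get("auth_type", "none")
    if auth_type == "basic":
        token = workflow.get("basic_auth") or global_auth.get("basic_auth", "")
        return {"Authorization": f"Basic {token}"}
    if auth_type == "header":
        name = workflow.get("auth_header_name") or global_auth.get("auth_header_name", "Authorization")
        value = workflow.get("auth_header_value") or global_auth.get("auth_header_value", "")
        return {name: value}
    return {}

global_auth = {"auth_type": "header", "auth_header_name": "X-Global", "auth_header_value": "g"}
wf = {"auth_type": "header", "auth_header_name": "X-API-Key", "auth_header_value": "k"}
print(resolve_auth_headers(wf, global_auth))   # workflow settings win
print(resolve_auth_headers({}, global_auth))   # falls back to global settings
```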
Monitoring
Logs
View logs in Kubernetes:
kubectl logs -f deployment/n8n-mcp-server
View logs in Docker:
docker logs -f n8n-mcp-server
Health Checks
The server includes health check endpoints for Kubernetes:
- Liveness probe: HTTP GET /health on port 8000
- Readiness probe: HTTP GET /ready on port 8000
Troubleshooting
Common Issues
1. Connection refused to n8n
   - Check that N8N_BASE_URL is correct
   - Verify n8n is accessible from the container/pod
   - In Kubernetes, use service DNS names
2. Workflow not found
   - Verify workflow configuration is loaded
   - Check logs for configuration errors
   - Ensure workflows.json is properly mounted
3. Authentication failures
   - Verify credentials are correctly base64-encoded
   - Check that auth headers match n8n requirements
   - Review logs for authentication-related errors
4. Port conflicts
   - Change SERVER_PORT if 8000 is in use
   - Update NodePort in service.yaml if 30888 conflicts
5. Goose not connecting to MCP server
   - Verify the MCP server is running: curl -X POST http://localhost:8000/mcp -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
   - Check your Goose configuration at ~/.config/goose/config.yaml
   - Verify the extension is enabled: enabled: true
   - Check the URI is correct: uri: http://localhost:8000/mcp (or NodePort 30888 for K8s)
   - Verify the type: type: streamable_http
   - Try restarting Goose after configuration changes
   - Check Goose logs: run goose in verbose mode or check console output
6. Goose can't find n8n workflows
   - Check that workflows are loaded: docker logs n8n-mcp-server
   - Use Goose's list tools command to see available tools
   - Verify the MCP server has access to n8n (test with curl)
7. Docker can't reach n8n on localhost
   - Use host.docker.internal instead of localhost in Docker
   - Set N8N_BASE_URL=http://host.docker.internal:31678
   - On Linux, you may need to use --network=host for the container
Development
Project Structure
mcp-n8n-server/
├── server.py # Main MCP server implementation
├── requirements.txt # Python dependencies
├── workflows.json # Workflow configuration examples
├── Dockerfile # Docker image definition
├── docker-compose.yml # Docker Compose configuration
├── .dockerignore # Docker build exclusions
├── .env.example # Environment variables template
├── .gitignore # Git exclusions
├── Makefile # Build and deployment commands
├── build-and-push.sh # Automated build and push script
├── goose-mcp-config.json # Example Goose MCP configuration
├── k8s/ # Kubernetes manifests
│ ├── deployment.yaml # Deployment + ConfigMap + Secret
│ └── service.yaml # Service (NodePort 30888)
└── README.md # This file
Adding New Workflows
- Add the workflow configuration to workflows.json
- Restart the server to load the new configuration
- The workflow will be automatically exposed as an MCP tool
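Step 1 can be scripted. The sketch below appends an entry to a workflows file; the `add_workflow` helper, the workflow values, and the webhook_id are placeholders, and the demo writes to a scratch file so the live workflows.json is untouched:

```python
import json
from pathlib import Path

def add_workflow(config_path: Path, workflow: dict) -> None:
    """Append one workflow entry to the JSON config, refusing duplicate names."""
    workflows = json.loads(config_path.read_text()) if config_path.exists() else []
    if any(w["name"] == workflow["name"] for w in workflows):
        raise ValueError(f"workflow {workflow['name']!r} already exists")
    workflows.append(workflow)
    config_path.write_text(json.dumps(workflows, indent=2))

# Demo against a scratch file, not the live workflows.json
cfg = Path("workflows.demo.json")
cfg.unlink(missing_ok=True)
add_workflow(cfg, {
    "name": "summarize_workflow",
    "webhook_id": "replace-with-your-webhook-id",
    "description": "Summarize text via n8n",
    "input_schema": {"text": {"type": "string", "description": "Text to summarize"}},
    "required_fields": ["text"],
})
print(cfg.read_text())
```

After writing the real file, restart the server (step 2) so the new tool is picked up.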
Extending the Server
The server can be extended to:
- Add resource providers for n8n workflow results
- Implement prompt templates for workflow guidance
- Add caching for frequently-used workflows
- Support webhooks with GET requests
License
MIT License
Contributing
Contributions are welcome! Please open an issue or submit a pull request.
Support
For issues and questions:
- GitHub Issues: /issues
- n8n Documentation: https://docs.n8n.io/
- MCP Protocol: https://modelcontextprotocol.io/
Test
goose run --provider ollama --model qwen3:4b -t "ask n8n workflow what candy colors are best"