One-Stop-Shop-N8N-MCP

Zevas1993/One-Stop-Shop-N8N-MCP

n8n-MCP is a comprehensive Model Context Protocol server designed to provide AI agents with full access to n8n workflow automation, enabling them to discover, create, manage, and verify workflows efficiently.


n8n Co-Pilot MCP Server

Stateless • Validated • Live Sync • LLM-Powered

Transform your n8n workflow development with an intelligent co-pilot that prevents errors before they happen.

🚀 What's New in v3.0

| Feature | Description |
|---|---|
| Live Node Sync | Node catalog syncs directly from YOUR n8n instance - no pre-built database |
| Bulletproof Validation | 6-layer validation blocks broken workflows before they reach n8n |
| Dual LLM Architecture | Embedding model + generation model optimized for your hardware |
| Stateless Design | n8n is the source of truth - no workflow storage in MCP |
| Dual Interface | MCP for AI agents (Claude) + HTTP for humans (Open WebUI) |

⚡ Quick Start

Option 1: One-Command Start

```bash
# Clone and enter directory
git clone https://github.com/Zevas1993/One-Stop-Shop-N8N-MCP.git
cd One-Stop-Shop-N8N-MCP

# Install dependencies
npm install

# Start the server (smart launcher handles everything)
npm run go
```

Windows users: just double-click `Start-MCP-Server.bat`

The smart launcher automatically:

  • ✅ Checks Node.js version
  • ✅ Uses pre-built dist/ if available
  • ✅ Falls back to ts-node if needed
  • ✅ Sets sensible defaults

Configure Claude Desktop

Add to claude_desktop_config.json:

```json
{
  "mcpServers": {
    "n8n-copilot": {
      "command": "node",
      "args": ["C:/path/to/One-Stop-Shop-N8N-MCP/start.js"],
      "env": {
        "N8N_API_URL": "http://localhost:5678",
        "N8N_API_KEY": "your-api-key"
      }
    }
  }
}
```

Option 2: Docker Compose (Full Stack)

```bash
# Clone the repo
git clone https://github.com/Zevas1993/One-Stop-Shop-N8N-MCP.git
cd One-Stop-Shop-N8N-MCP

# Configure
cp .env.example .env
# Edit .env with your N8N_API_KEY

# Start everything (n8n + MCP + Ollama + Open WebUI)
docker compose up -d
```

Access:

  • n8n: http://localhost:5678
  • MCP HTTP API: http://localhost:3001
  • Open WebUI and Ollama: see docker-compose.yml for their exposed ports

Option 3: Docker (MCP Only)

```bash
# Build
docker build -t n8n-mcp:latest .

# Run in MCP mode (for Claude Desktop)
docker run -it --rm \
  -e N8N_API_URL=http://host.docker.internal:5678 \
  -e N8N_API_KEY=your-key \
  n8n-mcp:latest

# Run in HTTP mode (for Open WebUI)
docker run -d -p 3001:3001 \
  -e MCP_MODE=http \
  -e N8N_API_URL=http://your-n8n:5678 \
  -e N8N_API_KEY=your-key \
  n8n-mcp:latest
```

🛡️ Validation Gateway

Every workflow passes through 6 layers of validation before reaching n8n:

```
Workflow Input
     │
     ▼
┌─────────────────────┐
│ 1. Schema (Zod)     │ ──▶ Structure correct?
├─────────────────────┤
│ 2. Node Existence   │ ──▶ Do nodes exist in n8n?
├─────────────────────┤
│ 3. Connections      │ ──▶ Are connections valid?
├─────────────────────┤
│ 4. Credentials      │ ──▶ Required creds configured?
├─────────────────────┤
│ 5. Semantic (LLM)   │ ──▶ Does this make sense?
├─────────────────────┤
│ 6. Dry Run (n8n)    │ ──▶ Test in n8n itself
└─────────────────────┘
     │
     ▼
  n8n API ✅
```

Result: Invalid workflows are rejected with clear error messages and fix suggestions.
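
A layered gateway like this can be expressed as an ordered list of checks that short-circuits on the first failure. The sketch below is illustrative only: the layer names mirror the diagram, but `ValidationLayer`, `validateWorkflow`, and the toy checks are hypothetical, not the actual API of `validation-gateway.ts`:

```typescript
// Hypothetical types standing in for the real gateway's interfaces.
type Workflow = { nodes: { type: string }[]; connections: object };
type LayerResult = { ok: boolean; error?: string };
type ValidationLayer = { name: string; check: (wf: Workflow) => LayerResult };

// Layers run in order; the first failure stops the pipeline, so an invalid
// workflow never reaches the n8n API.
function validateWorkflow(
  wf: Workflow,
  layers: ValidationLayer[]
): { valid: boolean; failedLayer?: string; error?: string } {
  for (const layer of layers) {
    const result = layer.check(wf);
    if (!result.ok) {
      return { valid: false, failedLayer: layer.name, error: result.error };
    }
  }
  return { valid: true };
}

// Two toy layers standing in for the six real ones.
const layers: ValidationLayer[] = [
  { name: "schema", check: (wf) => ({ ok: Array.isArray(wf.nodes), error: "nodes must be an array" }) },
  { name: "nodeExistence", check: (wf) => ({ ok: wf.nodes.every((n) => n.type.length > 0), error: "unknown node type" }) },
];

const verdict = validateWorkflow({ nodes: [{ type: "" }], connections: {} }, layers);
// Fails at the nodeExistence layer with a clear error message.
```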


🤖 Dual LLM Architecture

The system uses two specialized models optimized for different tasks:

| Model Type | Purpose | Examples |
|---|---|---|
| Embedding | Semantic search, similarity | nomic-embed-text, embedding-gemma-300m |
| Generation | Chat, validation, suggestions | llama3.2:1b/3b, gemma:2b, nemotron-nano-4b |

Models are auto-selected based on your hardware:

| RAM | CPU Cores | Embedding Model | Generation Model |
|---|---|---|---|
| <4GB | Any | embedding-gemma-300m | gemma:2b |
| 4-8GB | 2-4 | embedding-gemma-300m | llama3.2:1b |
| 8-16GB | 4+ | nomic-embed-text | llama3.2:3b |
| 16GB+ | 8+ | nomic-embed-text | nemotron-nano-4b |
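
The tiers above reduce to a simple threshold check. This sketch assumes RAM and core count are the only inputs and treats a machine that meets only one of the top-tier thresholds as mid-tier; `selectModels` is illustrative, not the real logic in `hardware-detector.ts`:

```typescript
interface ModelPair {
  embedding: string;
  generation: string;
}

// Hypothetical helper mapping the hardware tiers from the table above.
function selectModels(ramGB: number, cores: number): ModelPair {
  if (ramGB < 4) return { embedding: "embedding-gemma-300m", generation: "gemma:2b" };
  if (ramGB < 8) return { embedding: "embedding-gemma-300m", generation: "llama3.2:1b" };
  // Assumed tie-break: require BOTH 16GB+ and 8+ cores for the top tier.
  if (ramGB < 16 || cores < 8) return { embedding: "nomic-embed-text", generation: "llama3.2:3b" };
  return { embedding: "nomic-embed-text", generation: "nemotron-nano-4b" };
}

console.log(selectModels(6, 4).generation);  // mid-tier hardware
console.log(selectModels(32, 16).generation); // top-tier hardware
```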

🔧 MCP Tools

Workflow Management

| Tool | Description |
|---|---|
| `n8n_create_workflow` | Create a validated workflow |
| `n8n_update_workflow` | Update a workflow with validation |
| `n8n_delete_workflow` | Delete a workflow |
| `n8n_list_workflows` | List all workflows |
| `n8n_activate_workflow` | Activate/deactivate a workflow |

Validation

| Tool | Description |
|---|---|
| `n8n_validate_workflow` | Check a workflow without creating it |

Execution

| Tool | Description |
|---|---|
| `n8n_execute_workflow` | Run a workflow |
| `n8n_get_execution` | Get execution details |
| `n8n_list_executions` | List recent executions |

Node Discovery

| Tool | Description |
|---|---|
| `n8n_search_nodes` | Search available nodes |
| `n8n_get_node_info` | Get node details |
| `n8n_list_trigger_nodes` | List trigger nodes |
| `n8n_list_ai_nodes` | List AI/LangChain nodes |

System

| Tool | Description |
|---|---|
| `n8n_status` | System status |
| `n8n_resync_catalog` | Force a node catalog refresh |
| `n8n_list_credentials` | List available credentials |
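
For illustration, an MCP client invokes these tools through the protocol's standard JSON-RPC `tools/call` request. The request envelope below follows the MCP specification; the workflow payload inside `arguments` is a hypothetical minimal example, not a guaranteed-valid n8n workflow:

```typescript
// Shape of an MCP tools/call request targeting n8n_create_workflow.
// The envelope (jsonrpc/method/params.name/params.arguments) is standard MCP;
// the workflow fields are a hypothetical minimal payload.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "n8n_create_workflow",
    arguments: {
      name: "Webhook to Slack",
      nodes: [{ name: "Webhook", type: "n8n-nodes-base.webhook", parameters: {} }],
      connections: {},
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```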

🌐 Open WebUI Integration

The MCP server exposes tools that Open WebUI can use:

  1. Get the pipeline code:

    curl http://localhost:3001/api/openwebui-pipeline
    
  2. Install in Open WebUI:

    • Go to Admin > Pipelines
    • Create new pipeline
    • Paste the generated code
  3. Start chatting:

    • "List my workflows"
    • "Create a webhook that sends to Slack"
    • "What nodes can I use for email?"

📁 Architecture

```
src/
├── core/                    # NEW: Core architecture
│   ├── index.ts             # Core orchestrator
│   ├── node-catalog.ts      # Live sync from n8n
│   ├── validation-gateway.ts # 6-layer validation
│   ├── n8n-connector.ts     # Stateless passthrough
│   └── llm-brain.ts         # Dual LLM integration
├── interfaces/              # NEW: Dual interface
│   ├── mcp-interface.ts     # For AI agents
│   └── openwebui-interface.ts # For humans
├── ai/                      # Existing LLM support
│   └── hardware-detector.ts # Auto-detects optimal models
├── services/                # Existing services
└── main.ts                  # NEW: Unified entry point
```

🔐 Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| N8N_API_URL | Yes | http://localhost:5678 | n8n instance URL |
| N8N_API_KEY | Yes* | - | n8n API key (*required for most features) |
| OLLAMA_URL | No | http://localhost:11434 | Ollama server URL |
| MCP_MODE | No | stdio | `stdio` for Claude, `http` for Open WebUI |
| PORT | No | 3001 | HTTP server port |
| AUTH_TOKEN | No | - | HTTP API authentication |
| ENABLE_DRY_RUN | No | true | Enable n8n dry-run validation |
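
A minimal `.env` using these variables might look like the fragment below; the values are placeholders, and only the variables from the table above are assumed:

```bash
# Required: where the MCP server reaches your n8n instance
N8N_API_URL=http://localhost:5678
N8N_API_KEY=your-api-key-here

# Optional overrides (defaults shown in the table above)
OLLAMA_URL=http://localhost:11434
MCP_MODE=stdio
PORT=3001
ENABLE_DRY_RUN=true
```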

🐛 Troubleshooting

"Node type not found"

The node doesn't exist in your n8n instance. Use n8n_search_nodes to find available nodes.

"Validation failed at layer: dryRun"

n8n rejected the workflow. Check the error message for details.

"LLM not available"

Ollama isn't running or reachable. Start Ollama or disable semantic validation.

"Connection refused to n8n"

Check that n8n is running and N8N_API_URL is correct.


📜 License

MIT


🙏 Credits