smian0/graphiti-mcp
Graphiti MCP with Ollama Integration
Self-contained Graphiti MCP server using Ollama cloud models for local, private knowledge graph operations.
Quick Start
cd /Users/smian/Services/graphiti-mcp
# Start services
./scripts/start.sh
# Stop services
./scripts/stop.sh
# Check status
./scripts/status.sh
# View logs
./scripts/logs.sh
Configuration
Models Used:
- LLM: GLM-4.6:cloud (via Ollama)
- Embeddings: nomic-embed-text (via Ollama)
- Database: FalkorDB (Redis-based graph database)
Endpoints:
- MCP Server: http://localhost:8000/mcp/
- Health Check: http://localhost:8000/health
- FalkorDB UI: http://localhost:3100
How It Works
Ollama Integration
Graphiti MCP requires an OpenAI-compatible LLM client. The challenge:
- The standard `openai` provider uses the beta `responses.parse()` API, which Ollama doesn't support
- Solution: a custom `openai_generic` provider that uses the standard `chat.completions` API
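The incompatibility comes down to which endpoint each client API hits. The sketch below is illustrative, not the project's code: the base URL is taken from `config.yaml`, and the endpoint paths are the standard OpenAI API paths that the two client methods POST to.

```python
# Why the custom provider is needed: the two OpenAI client APIs hit
# different endpoints, and Ollama's OpenAI-compatible server only
# implements one of them (as of this setup).
BASE = "http://host.docker.internal:11434/v1"

# Standard Chat Completions API -- served by Ollama; this is what the
# openai_generic provider uses:
chat_url = f"{BASE}/chat/completions"

# Beta Responses API (client.responses.parse) -- not served by Ollama,
# which is why the stock openai provider gets "404 page not found":
responses_url = f"{BASE}/responses"

print(chat_url)
```

This also explains the "404 page not found" entry under Troubleshooting: the stock provider POSTs to a path Ollama never registers.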
Automatic Patching
The scripts/patch-and-start.sh script automatically:
- Installs Groq support (for provider patterns)
- Adds the `OpenAIGenericClient` import to the factory
- Registers the `openai_generic` provider case
- Starts the MCP server with the Ollama configuration
This happens on every container start: the patches are ephemeral, but they are reapplied automatically.
Configuration Files
config.yaml
llm:
provider: "openai_generic" # Uses standard chat.completions API
model: "glm-4.6:cloud"
providers:
openai:
api_key: "ollama-dummy-key"
api_url: "http://host.docker.internal:11434/v1"
embedder:
provider: "openai"
model: "nomic-embed-text"
providers:
openai:
api_key: "ollama-dummy-key"
api_url: "http://host.docker.internal:11434/v1"
database:
provider: "falkordb"
server:
group_id: "main"
transport: "http"
docker-compose.yml
- Image: `zepai/knowledge-graph-mcp:latest`
- Volumes: config, data, and patch scripts
- Network: uses `host.docker.internal` to reach Ollama on the host
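A compose file matching these points might look like the following. This is a sketch, not the repository's actual `docker-compose.yml`; the volume paths and the `extra_hosts` entry (needed on Linux, where `host.docker.internal` is not defined by default) are assumptions.

```yaml
services:
  graphiti-mcp:
    image: zepai/knowledge-graph-mcp:latest
    ports:
      - "8000:8000"               # MCP endpoint
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./data:/app/data
      - ./scripts:/app/scripts    # patch scripts, reapplied on start
    extra_hosts:
      - "host.docker.internal:host-gateway"  # reach Ollama on the host
```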
.mcp.json (Claude Code)
{
"mcpServers": {
"graphiti": {
"type": "http",
"url": "http://localhost:8000/mcp/",
"description": "Graphiti temporal knowledge graph"
}
}
}
Architecture
┌─────────────────────────────────────────────────┐
│ Claude Code (MCP Client) │
│ /Users/smian/dotfiles/.mcp.json │
└────────────┬────────────────────────────────────┘
│ HTTP MCP Protocol
▼
┌─────────────────────────────────────────────────┐
│ Graphiti MCP Server (Docker) │
│ • Port 8000: MCP endpoint (/mcp/) │
│ • Patched with openai_generic provider │
│ • Config: /app/config.yaml │
└────┬───────────────────────────┬────────────────┘
│ │
│ OpenAI-compatible API │ Graph storage
▼ ▼
┌─────────────────────┐ ┌─────────────────────────┐
│ Ollama (Host) │ │ FalkorDB (Container) │
│ Port 11434 │ │ Port 6379 │
│ • GLM-4.6:cloud │ │ Redis-based graph DB │
│ • nomic-embed-text │ │ UI: localhost:3100 │
└─────────────────────┘ └─────────────────────────┘
Troubleshooting
Check Server Status
curl http://localhost:8000/health
# Expected: {"status":"healthy","service":"graphiti-mcp"}
Verify Ollama Models
ollama list | grep -E "glm-4.6:cloud|nomic-embed-text"
Check Logs for Ollama Calls
docker-compose logs graphiti-mcp | grep "HTTP Request.*11434"
Common Issues
❌ "OpenAI provider configuration not found"
- Config file not mounted correctly
- Check:
docker exec graphiti-mcp-server cat /app/config.yaml
❌ "404 page not found" from Ollama
- Using the wrong provider (should be `openai_generic`, not `openai`)
- Patches not applied: check the startup logs for "Applying Ollama compatibility patches"
❌ "No valid session ID provided"
- Restart MCP client (Claude Code) after server restarts
Development
Update Configuration
- Edit `config.yaml`
- Run `docker-compose restart`
- Restart the MCP client (Claude Code)
Switch Models
Update config.yaml and restart:
llm:
model: "qwen2.5:14b" # Any Ollama model
embedder:
model: "mxbai-embed-large" # Any Ollama embedding model
Inspect Patches
docker exec graphiti-mcp-server cat /app/mcp/src/services/factories.py | grep -A 20 'case "openai_generic"'
Data Persistence
- Graph Data: `./data/falkordb/` (persists across restarts)
- Episodes: stored in FalkorDB
- Patches: Applied automatically on startup (not persisted in image)
Performance
- GLM-4.6:cloud: ~2-5s per entity extraction
- nomic-embed-text: ~100ms per embedding
- FalkorDB: up to 496x faster than Neo4j on some graph queries (vendor benchmark)
Last Updated
2025-11-13
Status: ✅ Working - Ollama integration complete with automatic patching