mcp-graphiti

Graphiti MCP Server is a fork and extension of the official getzep/graphiti MCP server, designed to build per-project temporal knowledge graphs that AI agents can query over the Model Context Protocol.

Graphiti MCP Server creates and manages temporal knowledge graphs over the Model Context Protocol (MCP). It extends the original getzep/graphiti server by supporting multiple projects against a single database and by adding a dedicated CLI that improves the developer experience.

The server ingests unstructured text, extracts entities and relationships using LLMs, and records changes as time-stamped episodes, which lets AI agents query versioned data efficiently. The CLI automates setup by generating a Docker Compose file that spins up a Neo4j instance, a root MCP server, and one project-scoped MCP server per project. This layout provides project isolation, editor auto-discovery, crash containment, and zero-downtime configuration changes.

The server suits environments where multiple projects must be managed simultaneously without interference, and it integrates with IDEs and agent frameworks that speak MCP.
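
To make the ingest-and-query path concrete, here is a minimal agent-side sketch using the official `mcp` Python SDK. The SSE endpoint URL and the tool names (`add_episode`, `search`) are assumptions for illustration, not documented values; listing the server's tools first shows what it actually exposes.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to one project-scoped MCP server over SSE.
    # The URL is an assumed local endpoint, not a documented default.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server actually offers before calling anything.
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])

            # Ingest unstructured text; the server extracts entities and
            # relationships and records a time-stamped episode.
            await session.call_tool(
                "add_episode",  # hypothetical tool name
                {"name": "standup-notes",
                 "content": "Alice now owns the billing service."},
            )

            # Query the resulting temporal graph.
            result = await session.call_tool(
                "search",  # hypothetical tool name
                {"query": "Who owns the billing service?"},
            )
            print(result.content)


asyncio.run(main())
```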

Features

  • Multi-Project Support: Allows multiple project-specific MCP servers to run against a single Neo4j database, ensuring project isolation and efficient resource usage (see the sketch after this list).
  • Automated Setup: CLI tool generates Docker Compose files and IDE configurations, simplifying the deployment and management of MCP servers.
  • Temporal Knowledge Graphs: Uses LLMs to transform unstructured text into time-stamped graphs, so AI agents can query versioned data.
  • Crash Containment: Isolates failures in one project-scoped server so they cannot affect other projects, keeping the rest of the stack stable.
  • Zero-Downtime Configuration: Allows hot-swapping of entity YAMLs or LLM models without restarting other projects, ensuring continuous operation.
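
As referenced above, the sketch below shows what multi-project isolation looks like from an agent's side: each project-scoped server is a separate MCP endpoint, so one project can crash or be reconfigured without touching the others. The ports and project names are assumptions for illustration.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Hypothetical per-project endpoints; each project-scoped server
# listens on its own port in the generated Compose stack.
PROJECT_ENDPOINTS = {
    "backend": "http://localhost:8001/sse",
    "frontend": "http://localhost:8002/sse",
}


async def list_project_tools(name: str, url: str) -> None:
    # Each connection targets exactly one project's server; an outage
    # at one endpoint leaves the other connections unaffected.
    async with sse_client(url) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(f"{name}: {[t.name for t in tools.tools]}")


async def main() -> None:
    await asyncio.gather(
        *(list_project_tools(n, u) for n, u in PROJECT_ENDPOINTS.items())
    )


asyncio.run(main())
```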