heimdall-mcp-server

Heimdall MCP Server is an AI coding assistant's long-term memory solution, providing persistent, context-rich memory for LLMs.

Heimdall MCP Server - Your AI Coding Assistant's Long-Term Memory

The Problem: Your AI coding assistant has short-lived memory. Every chat session starts from a blank slate.

The Solution: Heimdall gives your LLM a persistent, growing, cognitive memory of your specific codebase, so lessons and insights carry over between sessions.

https://github.com/user-attachments/assets/120b3d32-72d1-4d42-b3ab-285e8a711981

Key Features

  • 🧠 Context-Rich Memory: Heimdall learns from your documentation, session insights, and development history, allowing your LLM to recall specific solutions and architectural patterns across conversations.
  • 📚 Git-Aware Context: It indexes your project's entire git history, understanding not just what changed, but also who changed it, when, and in what context.
  • 🔗 Isolated & Organized: Each project gets its own isolated memory space, ensuring that context from one project doesn't leak into another.
  • ⚡ Efficient Integration: Built on the Model Context Protocol (MCP), it provides a standardized, low-overhead way for LLMs to access this powerful memory.

🚀 Getting Started

Prerequisites: Python 3.10+ and Docker (for Qdrant vector database).

Heimdall provides a unified heimdall CLI that manages everything from project setup to MCP integration.

1. Install Heimdall

pip install heimdall-mcp

This installs the heimdall command-line tool with all necessary dependencies.

2. Initialize Your Project

Navigate to your project directory and set up Heimdall:

cd /path/to/your/project

# Initialize project memory (starts Qdrant, creates collections, sets up config)
heimdall project init

This single command interactively sets up everything, prompting for your preferences:

  • ✅ Starts the Qdrant vector database automatically
  • ✅ Creates project-specific memory collections
  • ✅ Sets up the .heimdall/ configuration directory
  • ✅ Downloads required AI models
  • ✅ Optionally enables file monitoring
  • ✅ Optionally installs git hooks
  • ✅ Optionally configures MCP integration

Note: this creates a .heimdall/ directory in your project for configuration. Do NOT commit it - add .heimdall/ to your .gitignore!
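You can of course add the .gitignore entry by hand; the snippet below is a small, idempotent sketch (not part of Heimdall itself) that appends the entry only if it is missing:

```python
from pathlib import Path

def ignore_heimdall(repo_root: str = ".") -> None:
    """Append '.heimdall/' to .gitignore unless it is already listed."""
    gitignore = Path(repo_root) / ".gitignore"
    text = gitignore.read_text() if gitignore.exists() else ""
    if ".heimdall/" not in text.splitlines():
        if text and not text.endswith("\n"):
            text += "\n"  # keep the new entry on its own line
        gitignore.write_text(text + ".heimdall/\n")

ignore_heimdall()
```

Running it twice leaves .gitignore unchanged, so it is safe to drop into a setup script.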

Load Project Knowledge

Recommended: Use automatic file monitoring and place files in .heimdall/docs/:

# Copy or symlink your documentation to the monitored directory
ln -r -s my-project-docs ./.heimdall/docs/project-docs

# Start automatic monitoring (files are loaded instantly when changed)
heimdall monitor start

Alternative: Manual loading for one-time imports:

# Load documentation and files manually
heimdall load docs/ --recursive
heimdall load README.md

Your project's memory is now active and ready for your LLM.

Real-time Git Integration

You can parse your entire git history with:

# Load git commit history
heimdall git-load .

You can also install git hooks for automatic memory updates on commits:

# Install the post-commit hook (Python-based, cross-platform)
heimdall git-hooks install

Note: If you have existing post-commit hooks, they'll be safely chained and preserved - but proceed carefully.

🧹 Cleanup

To remove Heimdall from a project:

# Navigate to the project you want to clean up
cd /path/to/project

# Cleanup data, remove collections, uninstall git hooks
heimdall project clean

This cleanly removes project-specific data while preserving the shared Qdrant instance for other projects.

βš™οΈ How It Works Under the Hood

Heimdall extracts unstructured knowledge from your documentation and structured data from your git history. This information is vectorized and stored in a Qdrant database. The LLM can then query this database using a simple set of tools to retrieve relevant, context-aware information.

graph TD
    %% Main client outside the server architecture
    AI_Assistant["🤖 AI Assistant (e.g., Claude)"]

    %% Top-level subgraph for the entire server
    subgraph Heimdall MCP Server Architecture

        %% 1. Application Interface Layer
        subgraph Application Interface
            MCP_Server["MCP Server (heimdall-mcp)"]
            CLI["CognitiveCLI (heimdall/cli.py)"]
            style MCP_Server fill:#b2ebf2,stroke:#00acc1,color:#212121
            style CLI fill:#b2ebf2,stroke:#00acc1,color:#212121
        end

        %% 2. Core Logic Engine
        style Cognitive_System fill:#ccff90,stroke:#689f38,color:#212121
        Cognitive_System["🧠 CognitiveSystem (core/cognitive_system.py)<br/>"]

        %% 3. Storage Layer (components side-by-side)
        subgraph Storage Layer
            Qdrant["🗂️ Qdrant Storage<br/><hr/>- Vector Similarity Search<br/>- Multi-dimensional Encoding"]
            SQLite["🗃️ SQLite Persistence<br/><hr/>- Memory Metadata & Connections<br/>- Caching & Retrieval Stats"]
        end

        %% 4. Output Formatting
        style Formatted_Response fill:#fff9c4,stroke:#fbc02d,color:#212121
        Formatted_Response["📦 Formatted MCP Response<br/><i>{ core, peripheral, bridge }</i>"]

        %% Define internal flow
        MCP_Server -- calls --> CLI
        CLI -- calls --> Cognitive_System

        Cognitive_System -- "1\. Vector search for candidates" --> Qdrant
        Cognitive_System -- "2\. Hydrates with metadata" --> SQLite
        Cognitive_System -- "3\. Performs Bridge Discovery" --> Formatted_Response

    end

    %% Define overall request/response flow between client and server
    AI_Assistant -- "recall_memories" --> MCP_Server
    Formatted_Response -- "Returns structured memories" --> AI_Assistant

    %% --- Styling Block ---

    %% 1. Node Styling using Class Definitions
    classDef aiClientStyle fill:#dbeafe,stroke:#3b82f6,color:#1e3a8a
    classDef interfaceNodeStyle fill:#cffafe,stroke:#22d3ee,color:#0e7490
    classDef coreLogicStyle fill:#dcfce7,stroke:#4ade80,color:#166534
    classDef qdrantNodeStyle fill:#ede9fe,stroke:#a78bfa,color:#5b21b6
    classDef sqliteNodeStyle fill:#fee2e2,stroke:#f87171,color:#991b1b
    classDef responseNodeStyle fill:#fef9c3,stroke:#facc15,color:#854d0e

    %% 2. Assigning Classes to Nodes
    class AI_Assistant aiClientStyle
    class MCP_Server,CLI interfaceNodeStyle
    class Cognitive_System coreLogicStyle
    class Qdrant qdrantNodeStyle
    class SQLite sqliteNodeStyle
    class Formatted_Response responseNodeStyle

    %% 3. Link (Arrow) Styling
    %% Note: Styling edge label text is not reliably supported. This styles the arrow lines themselves.
    %% Links are numbered in definition order: 0-1 internal calls, 2-3 data access,
    %% 4 final processing, 5-6 the client request/response pair.
    %% Primary request/response flow (links 5 and 6)
    linkStyle 5,6 stroke:#3b82f6,stroke-width:2px
    %% Internal application calls (links 0 and 1)
    linkStyle 0,1 stroke:#22d3ee,stroke-width:2px,stroke-dasharray: 5 5
    %% Internal data access calls (links 2 and 3)
    linkStyle 2,3 stroke:#9ca3af,stroke-width:2px
    %% Final processing call (link 4)
    linkStyle 4 stroke:#4ade80,stroke-width:2px
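The retrieval flow described above (vector search for candidates, then ranking) can be illustrated with a deliberately simplified, self-contained sketch. A toy bag-of-words vector stands in for the real MiniLM embeddings and an in-memory list stands in for Qdrant; none of this is Heimdall's actual code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyMemoryStore:
    """Minimal stand-in for the vector store: store text, recall by similarity."""

    def __init__(self) -> None:
        self.memories: list[tuple[Counter, str]] = []

    def store(self, text: str) -> None:
        self.memories.append((embed(text), text))

    def recall(self, query: str, top_k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(q, m[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

store = ToyMemoryStore()
store.store("auth bug was fixed by refreshing the JWT before expiry")
store.store("the build uses docker compose for local services")
results = store.recall("how did we fix the auth bug?")
# The JWT memory ranks first for this query.
```

The real system additionally hydrates each hit with SQLite metadata and performs bridge discovery, but the core idea - embed the query, rank stored memories by similarity - is the same.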

LLM Tool Reference

You can instruct your LLM to use the following six tools to interact with its memory:

| Tool | Description |
| --- | --- |
| store_memory | Stores a new piece of information, such as an insight or a solution. |
| recall_memories | Performs a semantic search for relevant memories based on a query. |
| session_lessons | Records a key takeaway from the current session for future use. |
| memory_status | Checks the health and statistics of the memory system. |
| delete_memory | Deletes a specific memory by its unique ID. |
| delete_memories_by_tags | Deletes all memories that have any of the specified tags. |
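Under MCP, each of these tools is invoked as a JSON-RPC 2.0 tools/call request. The sketch below shows what a recall_memories call might look like on the wire; the argument name "query" is an assumption for illustration - the actual schema is advertised by the server via tools/list:

```python
import json

# Hypothetical JSON-RPC 2.0 request an MCP client would send to the server.
# Tool name comes from the table above; the argument shape is an assumption.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recall_memories",
        "arguments": {"query": "How do we handle JWT refresh?"},
    },
}
wire = json.dumps(request)  # serialized message sent over STDIO
```

In practice your editor's MCP client builds these messages for you; the point is that each tool call is just a named request with structured arguments.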

💡 Best Practices

To maximize the effectiveness of Heimdall:

  • Provide Quality Documentation: Think architecture decision records, style guides, and API documentation.
  • Keep documents updated: Heimdall builds memories from the documents in .heimdall/docs - if they are outdated, the memories will be too. We suggest symlinking your actual docs directory into .heimdall/docs so Heimdall automatically refreshes memories as documents change.
  • Maintain Good Git Hygiene: Write clear and descriptive commit messages. A message like feat(api): add user authentication endpoint is far more valuable than a vague one like "fix stuff".
  • Set Up Automation: Use heimdall monitor start and heimdall git-hooks install for hands-free memory updates.
  • Guide Your Assistant: Use a system prompt (like a CLAUDE.md file) to instruct your LLM on how and when to use the available memory tools.
  • Use Strategic Tagging: Establish rules for your LLM to tag memories consistently. Use temporary tags like temp-analysis, task-specific, or cleanup-after-project for memories that should be deleted after completion, enabling easy cleanup with delete_memories_by_tags.
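As a starting point for the system-prompt advice above, a memory section in a CLAUDE.md might read as follows. The wording is illustrative; only the tool names come from the tool reference:

```markdown
## Memory usage

- At the start of a task, call recall_memories with a short description of the problem.
- After solving a non-obvious problem, call store_memory with the solution and tags.
- Before ending a session, call session_lessons to record key takeaways.
- Tag throwaway analysis with temp-analysis so it can be purged later
  via delete_memories_by_tags.
```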

πŸ› οΈ Command Reference

Core Commands

| Command | Description |
| --- | --- |
| heimdall store <text> | Store experience in cognitive memory |
| heimdall recall <query> | Retrieve relevant memories based on a query |
| heimdall load <path> | Load files/directories into memory |
| heimdall git-load [repo] | Load git commit patterns into memory |
| heimdall status | Show system status and memory statistics |
| heimdall remove-file <path> | Remove memories for a deleted file |
| heimdall delete-memory <id> | Delete a specific memory by ID |
| heimdall delete-memories-by-tags --tag <tag> | Delete memories by tags |
| heimdall doctor | Run comprehensive health checks |
| heimdall shell | Start interactive memory shell |

Project Management

| Command | Description |
| --- | --- |
| heimdall project init | Initialize project memory with interactive setup |
| heimdall project list | List all projects in the shared Qdrant instance |
| heimdall project clean | Remove project collections and clean up |

Vector Database (Qdrant)

| Command | Description |
| --- | --- |
| heimdall qdrant start | Start Qdrant vector database service |
| heimdall qdrant stop | Stop Qdrant service |
| heimdall qdrant status | Check Qdrant service status |
| heimdall qdrant logs | View Qdrant service logs |

File Monitoring

| Command | Description |
| --- | --- |
| heimdall monitor start | Start automatic file monitoring service |
| heimdall monitor stop | Stop file monitoring service |
| heimdall monitor restart | Restart monitoring service |
| heimdall monitor status | Check monitoring service status |
| heimdall monitor health | Detailed monitoring health check |

Git Integration

| Command | Description |
| --- | --- |
| heimdall git-hooks install | Install post-commit hook for automatic memory processing |
| heimdall git-hooks uninstall | Remove Heimdall git hooks |
| heimdall git-hooks status | Check git hook installation status |

MCP Integration

| Command | Description |
| --- | --- |
| heimdall mcp install <platform> | Install MCP server for a platform (vscode, cursor, claude-code, visual-studio) |
| heimdall mcp remove <platform> | Remove MCP integration from a platform |
| heimdall mcp status | Show installation status for all platforms |
| heimdall mcp list | List available platforms and installation status |
| heimdall mcp generate <platform> | Generate configuration snippets for manual installation |

Platforms

Heimdall MCP server is compatible with any platform that supports STDIO MCP servers. The following platforms are supported for automatic installation using heimdall mcp commands.

  • vscode - Visual Studio Code
  • cursor - Cursor IDE
  • claude-code - Claude Code
  • visual-studio - Visual Studio

Technology Stack:

  • Python 3.10+
  • Vector Storage: Qdrant
  • Memory information and metadata: SQLite
  • Embeddings: all-MiniLM-L6-v2
  • Sentiment analysis: NRCLex emotion lexicon
  • Semantic analysis: spaCy
  • Integration: Model Context Protocol (MCP)

πŸ—ΊοΈShort Term Roadmap

  • Git post-commit hook for automatic, real-time memory updates ✅ Completed
  • Watcher to auto-detect and load new documents in the .heimdall-mcp directory ✅ Completed
  • Release v0.1.0 publicly ✅ Completed
  • Heimdall pip package available ✅ Completed
  • Simplify installation ✅ Completed
  • Delete memories support (manually or by tags; already supported for md docs) ✅ Completed

🤝 Contributing

We welcome contributions! Please see our contributing guide for details on:

  • Setting up the development environment
  • Our dual licensing model
  • Code style guidelines
  • Pull request process

Important: All contributors must agree to our Contributor License Agreement (CLA) before their contributions can be merged.

Quick Start for Contributors

  1. Fork the repository
  2. Create a feature branch targeting dev (not main)
  3. Make your changes following our style guidelines
  4. Submit a pull request with the provided template
  5. Sign the CLA when prompted by the CLA Assistant

For questions, open an issue or start a discussion!

📄 License

This project is licensed under the Apache 2.0 License for open source use. See our licensing documentation for information about our dual licensing model for commercial applications.