The Local Memory MCP Server is a standalone server that provides persistent local memory functionality for AI assistants, enhancing their ability to maintain context and provide personalized assistance.

Local Memory MCP Server v2.5.1

Privacy-First AI Memory for True Intelligence

A production-ready Model Context Protocol (MCP) server that provides private, local memory for AI assistants. Your conversations, insights, and accumulated knowledge belong to YOU, secured on your own machine rather than in a commercial cloud.

βœ… Production Status

Fully tested and production-ready, with a comprehensive test suite, robust error handling, performance optimizations, and a clean codebase.

🧠 Why Local Memory Matters

Your AI's memory IS your competitive advantage. Every interaction should compound into something uniquely yours. This transforms generic AI responses into personalized intelligence that grows with your specific needs, projects, and expertise.

πŸ” Privacy & Ownership First

  • Your Data, Your Control: Every memory stays on YOUR machine
  • Zero Cloud Dependencies: No corporate surveillance or data mining
  • Compliance Ready: Meet GDPR, HIPAA, and enterprise security requirements

🎯 Intelligence That Grows

  • Cumulative Learning: AI remembers context across weeks, months, and years
  • Specialized Knowledge: Build domain-specific intelligence in your field
  • Pattern Recognition: Discover connections from accumulated knowledge
  • Contextual Understanding: AI that truly "knows" your projects and preferences

πŸ› οΈ Available Tools

Core Memory Management

πŸ’Ύ store_memory

Store new memories with contextual information and automatic AI embedding generation.

  • content (string): The content to store
  • importance (number, optional): Importance score (1-10, default: 5)
  • tags (string[], optional): Tags for categorization
  • session_id (string, optional): Session identifier
  • source (string, optional): Source of the information
  • domain (string, optional): Domain for project-specific organization
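
For reference, a store_memory invocation arrives as a standard MCP tools/call request over the stdio transport. The values below are illustrative placeholders; in practice your MCP client (e.g., Claude Desktop) builds this JSON-RPC envelope when you ask the assistant to remember something.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "Our API endpoint is https://api.example.com/v1",
      "importance": 7,
      "tags": ["api", "infrastructure"],
      "domain": "backend-project"
    }
  }
}
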
πŸ” search_memories

Search memories using full-text search or AI-powered semantic search.

  • query (string): Search query
  • use_ai (boolean, optional): Enable AI semantic search (default: false)
  • limit (number, optional): Maximum results (default: 10)
  • min_importance (number, optional): Minimum importance filter
  • session_id (string, optional): Filter by session
  • domain (string, optional): Filter by domain
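
A hypothetical search_memories request with AI semantic search enabled and an importance filter; only query is required:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_memories",
    "arguments": {
      "query": "authentication",
      "use_ai": true,
      "limit": 5,
      "min_importance": 5
    }
  }
}
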
✏️ update_memory

Update an existing memory by ID.

  • id (string): Memory ID to update
  • content (string, optional): New content
  • importance (number, optional): New importance score
  • tags (string[], optional): New tags
πŸ—‘οΈ delete_memory

Delete a memory by ID.

  • id (string): Memory ID to delete

AI-Powered Intelligence

❓ ask_question

Ask natural language questions about your stored memories with AI-powered answers.

  • question (string): Your question about stored memories
  • session_id (string, optional): Limit context to specific session
  • domain (string, optional): Limit context to specific domain
  • context_limit (number, optional): Maximum memories for context (default: 5)

Returns: Detailed answer with confidence score and source memories
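
An illustrative ask_question request scoped to a single (hypothetical) domain:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "ask_question",
    "arguments": {
      "question": "What did we decide about the database schema?",
      "domain": "backend-project",
      "context_limit": 5
    }
  }
}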

πŸ“Š summarize_memories

Generate AI-powered summaries and extract themes from memories.

  • session_id (string, optional): Summarize specific session
  • domain (string, optional): Summarize specific domain
  • timeframe (string, optional): 'today', 'week', 'month', 'all' (default: 'all')
  • limit (number, optional): Maximum memories to analyze (default: 10)

Returns: Comprehensive summary with key themes and patterns
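
For example, a sketch of a summarize_memories call covering the past week of a hypothetical session:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "summarize_memories",
    "arguments": {
      "session_id": "typescript-learning",
      "timeframe": "week",
      "limit": 10
    }
  }
}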

πŸ” analyze_memories

Discover patterns, insights, and connections in your memory collection.

  • query (string): Analysis focus or question
  • analysis_type (string, optional): 'patterns', 'insights', 'trends', 'connections' (default: 'insights')
  • session_id (string, optional): Analyze specific session
  • domain (string, optional): Analyze specific domain

Returns: Detailed analysis with discovered patterns and actionable insights

Relationship & Graph Features

πŸ•ΈοΈ discover_relationships

AI-powered discovery of connections between memories.

  • memory_id (string, optional): Specific memory to analyze relationships for
  • session_id (string, optional): Filter by session
  • relationship_types (array, optional): Types to discover ('references', 'contradicts', 'expands', 'similar', 'sequential', 'causes', 'enables')
  • min_strength (number, optional): Minimum relationship strength (default: 0.5)
  • limit (number, optional): Maximum relationships to discover (default: 20)

πŸ”— create_relationship

Manually create relationships between two memories.

  • source_memory_id (string): ID of the source memory
  • target_memory_id (string): ID of the target memory
  • relationship_type (string): Type of relationship
  • strength (number, optional): Relationship strength (default: 0.8)
  • context (string, optional): Context or explanation
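
A sketch of a manual create_relationship call; the memory IDs are placeholders for IDs returned by store_memory or search_memories, and the relationship type is one of the types listed under discover_relationships:

{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "create_relationship",
    "arguments": {
      "source_memory_id": "mem-123",
      "target_memory_id": "mem-456",
      "relationship_type": "expands",
      "strength": 0.9,
      "context": "The second note elaborates on the caching approach from the first"
    }
  }
}
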
πŸ—ΊοΈ map_memory_graph

Generate graph visualization of memory relationships.

  • memory_id (string): Central memory for the graph
  • depth (number, optional): Maximum depth to traverse (default: 2)
  • include_types (array, optional): Relationship types to include

Smart Categorization

🏷️ categorize_memory

Automatically categorize memories using AI analysis.

  • memory_id (string): Memory ID to categorize
  • suggested_categories (array, optional): Suggested category names
  • create_new_categories (boolean, optional): Create new categories if needed (default: true)
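
An illustrative categorize_memory request; the memory ID and category names are placeholders:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "categorize_memory",
    "arguments": {
      "memory_id": "mem-123",
      "suggested_categories": ["architecture", "performance"],
      "create_new_categories": true
    }
  }
}
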
πŸ“ create_category

Create hierarchical categories for organizing memories.

  • name (string): Category name
  • description (string): Category description
  • parent_category_id (string, optional): Parent category for hierarchy
  • confidence_threshold (number, optional): Auto-assignment threshold (default: 0.7)

Enhanced Temporal Analysis

πŸ“ˆ analyze_temporal_patterns

Analyze learning patterns and knowledge evolution over time.

  • session_id (string, optional): Filter by session
  • concept (string, optional): Specific concept to analyze
  • timeframe (string): 'week', 'month', 'quarter', 'year'
  • analysis_type (string): 'learning_progression', 'knowledge_gaps', 'concept_evolution'
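
For instance, a hypothetical analyze_temporal_patterns request tracking progress on one concept over the past month:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "analyze_temporal_patterns",
    "arguments": {
      "concept": "machine learning",
      "timeframe": "month",
      "analysis_type": "learning_progression"
    }
  }
}
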
πŸ“š track_learning_progression

Track progression stages for specific concepts or skills.

  • concept (string): Concept or skill to track
  • session_id (string, optional): Filter by session
  • include_suggestions (boolean, optional): Include next step suggestions (default: true)
πŸ” detect_knowledge_gaps

Identify knowledge gaps and suggest learning paths.

  • session_id (string, optional): Filter by session
  • focus_areas (array, optional): Specific areas to focus on

πŸ“… generate_timeline_visualization

Create timeline visualization of learning journey.

  • memory_ids (array, optional): Specific memory IDs to include
  • session_id (string, optional): Filter by session
  • concept (string, optional): Focus on specific concept
  • start_date (string, optional): Timeline start date
  • end_date (string, optional): Timeline end date

Domain Management

πŸ—οΈ create_domain

Create a new domain for organizing memories by project or context.

  • name (string): Domain name (e.g., "frontend-project", "machine-learning")
  • description (string): Domain description
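
An example create_domain request (the name and description are yours to choose):

{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "create_domain",
    "arguments": {
      "name": "frontend-project",
      "description": "Memories about the React frontend: components, state management, and performance decisions"
    }
  }
}
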
πŸ“– list_domains

List all available domains with their descriptions and memory counts.

πŸ“Š get_domain_stats

Get detailed statistics about a specific domain.

  • domain (string): Domain name to analyze

Returns: Memory counts, common tags, average importance, and domain-specific patterns

Session Management

πŸ“‹ list_sessions

List all available sessions with memory counts.

πŸ“Š get_session_stats

Get detailed statistics about stored memories.

  • session_id (string, optional): Specific session to analyze

FAISS Vector Search (NEW in v2.5.0)

πŸš€ vector_search_memories

Perform high-performance semantic similarity search using FAISS indices.

  • query (string): Search query for semantic similarity
  • limit (number, optional): Maximum results (default: 10)
  • session_id (string, optional): Filter by session
  • similarity_threshold (number, optional): Minimum similarity score (default: 0.5)

Returns: Memories ranked by semantic similarity with confidence scores
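
A sketch of a vector_search_memories request using a stricter similarity threshold than the default; the query text is illustrative:

{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "vector_search_memories",
    "arguments": {
      "query": "database optimization",
      "limit": 10,
      "similarity_threshold": 0.7
    }
  }
}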

πŸ”§ build_faiss_index

Create or rebuild FAISS search indices for optimal performance.

  • session_id (string, optional): Build index for specific session
  • force_rebuild (boolean, optional): Force complete rebuild (default: false)

Returns: Index build status and performance metrics

πŸ“Š get_vector_stats

Get comprehensive statistics about vector embeddings and FAISS indices.

  • session_id (string, optional): Stats for specific session

Returns: Vector counts, index performance, embedding dimensions, and search metrics

⚑ High-Performance Vector Search

FAISS Support (Optional)

For maximum performance with large datasets, local-memory-mcp supports FAISS (Facebook AI Similarity Search) for blazing-fast vector similarity search.

Installation Options

Option 1: AI Agent Installation (Recommended)

"Install the latest version of local-memory-mcp: https://github.com/danieleugenewilliams/local-memory-mcp.git"

# Have your AI agent install directly from GitHub repository
# This ensures you get the latest features and FAISS support
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build

# Start the server
node dist/index.js --db-path ~/.local-memory.db --session-id your-session

Option 2: Standard Install (Automatic Fallback)

npm install -g local-memory-mcp
# Automatically uses FAISS if available, falls back gracefully

Option 3: Docker (Zero-Setup with FAISS)

# Zero-setup with full FAISS support
docker run -v ./memory:/data ghcr.io/reckon/local-memory-mcp

Option 4: With FAISS Native Dependencies

# Install with FAISS support (requires build tools)
npm install -g local-memory-mcp faiss-node

Performance Comparison

| Dataset Size | FTS Search | Vector Search | FAISS Search |
|---------------|------------|----------------|---------------|
| 1K memories | 10-50ms | 50-200ms | 5-20ms |
| 10K memories | 50-200ms | 500-2000ms | 10-50ms |
| 100K memories | 200-1000ms | 5000-20000ms | 20-200ms |

Search Strategy (Automatic)

The system automatically selects the best available search method:

  1. FAISS Vector Search - Fastest, if FAISS is available
  2. Custom Vector Search - Good performance, if Ollama is available
  3. Full-Text Search - Basic search, always available

# Check what search methods are available
local-memory-mcp --check-dependencies
βœ“ SQLite: Available
βœ“ Ollama: Available (optional)
βœ“ FAISS: Available - High-performance search enabled

πŸ“¦ Quick Setup

Install

# Recommended: GitHub installation for latest features
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build

# Alternative: NPM (coming soon)
npm install -g local-memory-mcp

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "local-memory": {
      "command": "node",
      "args": ["/path/to/local-memory-mcp/dist/index.js", "--db-path", "~/.local-memory.db"]
    }
  }
}

Alternative with npx (requires npm install):

{
  "mcpServers": {
    "local-memory": {
      "command": "npx",
      "args": ["local-memory-mcp", "--db-path", "~/.local-memory.db"]
    }
  }
}

OpenCode

# GitHub installation
node /path/to/local-memory-mcp/dist/index.js --db-path ~/.opencode-memory.db

# NPM installation (alternative)
npx local-memory-mcp --db-path ~/.opencode-memory.db

Any MCP Tool

# GitHub installation (recommended)
node /path/to/local-memory-mcp/dist/index.js --db-path /path/to/memory.db --session-id your-session

# NPM installation (alternative)
local-memory-mcp --db-path /path/to/memory.db --session-id your-session

πŸ€– AI Features Setup

Install Ollama

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Required models
ollama pull nomic-embed-text    # For semantic search
ollama pull qwen2.5:7b         # For Q&A and analysis

Model Options

| Model | Size | Use Case | Performance |
|-------|------|----------|-------------|
| qwen2.5:7b | ~4.3GB | Recommended | ⭐⭐⭐⭐⭐ |
| qwen2.5:14b | ~8GB | Best quality | ⭐⭐⭐⭐⭐ |
| qwen2.5:3b | ~2GB | Balanced | ⭐⭐⭐⭐ |
| phi3.5:3.8b | ~2.2GB | Efficient | ⭐⭐⭐ |

The server automatically detects Ollama and enables AI features. Without Ollama, it gracefully falls back to traditional text search.

πŸ’‘ Usage Examples

Basic Operations

πŸ—£οΈ "Remember that our API endpoint is https://api.example.com/v1"
πŸ—£οΈ "Search for anything related to authentication"
πŸ—£οΈ "What do you remember about our database schema?"

AI-Powered Features

πŸ—£οΈ "Summarize what I've learned about TypeScript this week"
πŸ—£οΈ "Analyze my coding patterns and suggest improvements"
πŸ—£οΈ "Find relationships between my React and performance memories"

Advanced Analysis

πŸ—£οΈ "Track my learning progression in machine learning"
πŸ—£οΈ "What knowledge gaps do I have in backend development?"
πŸ—£οΈ "Show me a timeline of my project decisions"
πŸ—£οΈ "Create a domain for my React project and organize related memories"
πŸ—£οΈ "Analyze patterns in my frontend-project domain"

FAISS Vector Search (NEW)

πŸ—£οΈ "Use vector search to find similar authentication implementations"
πŸ—£οΈ "Build FAISS index for faster semantic search"
πŸ—£οΈ "Show vector search statistics and performance metrics"
πŸ—£οΈ "Find memories semantically similar to database optimization"

βš™οΈ Configuration

Command Line Options

  • --db-path: Database file path (default: ~/.local-memory.db)
  • --session-id: Session identifier for organizing memories
  • --ollama-url: Ollama server URL (default: http://localhost:11434)
  • --config: Configuration file path
  • --log-level: Logging level (debug, info, warn, error)

Configuration File (~/.local-memory/config.json)

{
  "database": {
    "path": "~/.local-memory/memories.db",
    "backupInterval": 86400000
  },
  "ollama": {
    "enabled": true,
    "baseUrl": "http://localhost:11434",
    "embeddingModel": "nomic-embed-text",
    "chatModel": "qwen2.5:7b"
  },
  "ai": {
    "maxContextMemories": 10,
    "minSimilarityThreshold": 0.3
  }
}

Environment Variables

export MEMORY_DB_PATH="/custom/path/memories.db"
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_CHAT_MODEL="qwen2.5:7b"

πŸ—οΈ Development

npm run dev          # Start development server
npm run build        # Build for production
npm test             # Run tests
npm run lint         # Lint code

πŸ§ͺ Testing

Comprehensive test suite covering:

  • βœ… Memory storage and retrieval
  • βœ… Full-text and semantic search
  • βœ… FAISS vector search integration
  • βœ… Session management
  • βœ… AI integration features
  • βœ… Relationship discovery
  • βœ… Temporal analysis

npm test                    # Run all tests
npm run test:watch         # Watch mode
npm test -- --coverage     # Coverage report

πŸ›οΈ Architecture

src/
β”œβ”€β”€ index.ts              # MCP server and CLI entry point
β”œβ”€β”€ memory-store.ts       # SQLite storage with caching
β”œβ”€β”€ ollama-service.ts     # AI service integration
β”œβ”€β”€ vector-service.ts     # FAISS vector search integration
β”œβ”€β”€ types.ts              # Schemas and TypeScript types
β”œβ”€β”€ logger.ts             # Structured logging
β”œβ”€β”€ config.ts             # Configuration management
β”œβ”€β”€ performance.ts        # Performance monitoring
└── __tests__/            # Comprehensive test suite

Key Features:

  • SQLite + FTS5: Fast full-text search with vector embeddings
  • FAISS Integration: Lightning-fast semantic similarity search
  • AI Integration: Ollama for semantic search and analysis
  • Performance: Caching, batch processing, monitoring
  • Type Safety: Full TypeScript with runtime validation
  • Production Ready: Error handling, logging, configuration

πŸ”Œ MCP Protocol Compatibility

Full Model Context Protocol (MCP) 0.5.0 compliance:

  • βœ… Stdio transport standard
  • βœ… All 21 memory management tools (including FAISS vector search)
  • βœ… Structured responses and error handling
  • βœ… Resource discovery and tool registration

Works with Claude Desktop, OpenCode, and any MCP-compatible tool.
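
As a concrete illustration of the stdio transport, a client discovers the server's registered tools by writing a JSON-RPC tools/list request to the server's stdin (initialization handshake omitted for brevity); the server replies on stdout with each tool's name, description, and input schema.

{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/list",
  "params": {}
}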

πŸš€ Transform Your AI

Real Impact:

  • Development: AI remembers your architecture, patterns, and decisions
  • Research: Builds on previous insights and tracks learning progression
  • Analysis: Contextual responses based on your domain expertise
  • Strategy: Remembers successful approaches and methodologies

The Result: AI that evolves from generic responses to personalized intelligence built on YOUR accumulated knowledge.

🀝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Ensure tests pass (npm test)
  5. Commit changes (git commit -m 'Add amazing feature')
  6. Push and open Pull Request

πŸ“„ License

MIT License - see the license file for details.

πŸ”„ Changelog

v2.5.1 (Current)

  • Fixed FAISS reliability and API initialization issues

v2.5.0

  • πŸš€ FAISS Vector Search Integration - Lightning-fast semantic similarity search
  • ⚑ High-Performance Search - Sub-millisecond search times with FAISS indices
  • πŸ” Enhanced Vector Operations - Build, rebuild, and manage FAISS indices automatically
  • πŸ“Š Vector Statistics - Comprehensive vector embedding and index statistics
  • 🎯 Hybrid Search - Combine text search with vector similarity for best results
  • πŸ› οΈ Production-Ready FAISS - Automatic fallback when FAISS unavailable
  • πŸ“ˆ Scalable Performance - Efficient searching even with large memory collections

v2.4.0

  • 🏷️ Domain Management Tools support

v2.2.0

  • ✨ Complete Ollama AI integration with semantic search
  • πŸ•ΈοΈ Relationship discovery and graph visualization
  • 🏷️ Smart categorization with AI analysis
  • πŸ“ˆ Enhanced temporal analysis and learning progression tracking
  • πŸ§ͺ Comprehensive AI integration test suite

v2.1.0

  • πŸš€ Production-ready release with performance optimizations
  • βœ… Comprehensive test suite and error handling
  • βš™οΈ Configuration management system

v1.0.0

  • ✨ Initial MCP server implementation
  • πŸ” SQLite FTS5 full-text search
  • πŸ“ Session management system

🌟 Why Choose Local Memory MCP?

Because your AI's intelligence should be as unique as you are.

  • πŸ”’ True Privacy: All data stays on your machine
  • ⚑ Lightning Fast: Local SQLite + vector search
  • 🧠 Semantic Understanding: AI-powered memory retrieval
  • πŸ“ˆ Compound Intelligence: Every interaction builds knowledge
  • πŸ”Œ Universal Compatibility: Works with any MCP tool
  • πŸ› οΈ Production Ready: Tested, optimized, and reliable

Own your AI's memory. Control your competitive advantage.


⭐ Star this project β€’ πŸ“– Setup Guide β€’ 🀝 Community