danieleugenewilliams/local-memory-mcp
The Local Memory MCP Server is a standalone server that provides persistent local memory functionality for AI assistants, enhancing their ability to maintain context and provide personalized assistance.
Local Memory MCP Server v2.5.1
Privacy-First AI Memory for True Intelligence
A production-ready Model Context Protocol (MCP) server that provides private, local memory for AI assistants. Your conversations, insights, and accumulated knowledge belong to YOU, secured on your own machine rather than in a corporate cloud.
Production Status
Fully tested and production-ready, with a comprehensive test suite, robust error handling, performance optimization, and a clean codebase.
Why Local Memory Matters
Your AI's memory IS your competitive advantage. Every interaction should compound into something uniquely yours. This transforms generic AI responses into personalized intelligence that grows with your specific needs, projects, and expertise.
Privacy & Ownership First
- Your Data, Your Control: Every memory stays on YOUR machine
- Zero Cloud Dependencies: No corporate surveillance or data mining
- Compliance Ready: Meet GDPR, HIPAA, and enterprise security requirements
Intelligence That Grows
- Cumulative Learning: AI remembers context across weeks, months, and years
- Specialized Knowledge: Build domain-specific intelligence in your field
- Pattern Recognition: Discover connections from accumulated knowledge
- Contextual Understanding: AI that truly "knows" your projects and preferences
Available Tools
Core Memory Management
store_memory
Store new memories with contextual information and automatic AI embedding generation.
- content (string): The content to store
- importance (number, optional): Importance score (1-10, default: 5)
- tags (string[], optional): Tags for categorization
- session_id (string, optional): Session identifier
- source (string, optional): Source of the information
- domain (string, optional): Domain for project-specific organization
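For example, a store_memory call arrives over MCP as a tools/call request with arguments like the following (values are illustrative, and the JSON-RPC envelope fields are omitted for brevity):
{
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "Our API uses JWT access tokens that expire after 15 minutes",
      "importance": 7,
      "tags": ["auth", "api"],
      "session_id": "backend-work",
      "source": "code review",
      "domain": "backend-project"
    }
  }
}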
search_memories
Search memories using full-text search or AI-powered semantic search.
- query (string): Search query
- use_ai (boolean, optional): Enable AI semantic search (default: false)
- limit (number, optional): Maximum results (default: 10)
- min_importance (number, optional): Minimum importance filter
- session_id (string, optional): Filter by session
- domain (string, optional): Filter by domain
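For example, a semantic search for earlier authentication notes might pass arguments like these (illustrative values only):
{
  "query": "JWT token expiration handling",
  "use_ai": true,
  "limit": 5,
  "min_importance": 5,
  "domain": "backend-project"
}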
update_memory
Update an existing memory by ID.
- id (string): Memory ID to update
- content (string, optional): New content
- importance (number, optional): New importance score
- tags (string[], optional): New tags
delete_memory
Delete a memory by ID.
- id (string): Memory ID to delete
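As a sketch, an update_memory call only needs the ID plus the fields being changed; the ID below is a hypothetical placeholder, and delete_memory takes the same id argument on its own:
{
  "id": "a1b2c3d4",
  "importance": 9,
  "tags": ["auth", "api", "security"]
}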
AI-Powered Intelligence
ask_question
Ask natural language questions about your stored memories with AI-powered answers.
- question (string): Your question about stored memories
- session_id (string, optional): Limit context to specific session
- domain (string, optional): Limit context to specific domain
- context_limit (number, optional): Maximum memories for context (default: 5)
Returns: Detailed answer with confidence score and source memories
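An illustrative ask_question call, scoped to a hypothetical domain:
{
  "question": "What did we decide about token refresh handling?",
  "domain": "backend-project",
  "context_limit": 5
}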
summarize_memories
Generate AI-powered summaries and extract themes from memories.
- session_id (string, optional): Summarize specific session
- domain (string, optional): Summarize specific domain
- timeframe (string, optional): 'today', 'week', 'month', 'all' (default: 'all')
- limit (number, optional): Maximum memories to analyze (default: 10)
Returns: Comprehensive summary with key themes and patterns
analyze_memories
Discover patterns, insights, and connections in your memory collection.
- query (string): Analysis focus or question
- analysis_type (string, optional): 'patterns', 'insights', 'trends', 'connections' (default: 'insights')
- session_id (string, optional): Analyze specific session
- domain (string, optional): Analyze specific domain
Returns: Detailed analysis with discovered patterns and actionable insights
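A sketch of an analyze_memories call focused on trends (values are illustrative):
{
  "query": "How has my approach to error handling changed?",
  "analysis_type": "trends",
  "domain": "backend-project"
}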
Relationship & Graph Features
discover_relationships
AI-powered discovery of connections between memories.
- memory_id (string, optional): Specific memory to analyze relationships for
- session_id (string, optional): Filter by session
- relationship_types (array, optional): Types to discover ('references', 'contradicts', 'expands', 'similar', 'sequential', 'causes', 'enables')
- min_strength (number, optional): Minimum relationship strength (default: 0.5)
- limit (number, optional): Maximum relationships to discover (default: 20)
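For example, discovering connections around a single memory might use arguments like these (the memory ID is a hypothetical placeholder):
{
  "memory_id": "a1b2c3d4",
  "relationship_types": ["references", "expands", "contradicts"],
  "min_strength": 0.6,
  "limit": 10
}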
create_relationship
Manually create relationships between two memories.
- source_memory_id (string): ID of the source memory
- target_memory_id (string): ID of the target memory
- relationship_type (string): Type of relationship
- strength (number, optional): Relationship strength (default: 0.8)
- context (string, optional): Context or explanation
map_memory_graph
Generate graph visualization of memory relationships.
- memory_id (string): Central memory for the graph
- depth (number, optional): Maximum depth to traverse (default: 2)
- include_types (array, optional): Relationship types to include
Smart Categorization
categorize_memory
Automatically categorize memories using AI analysis.
- memory_id (string): Memory ID to categorize
- suggested_categories (array, optional): Suggested category names
- create_new_categories (boolean, optional): Create new categories if needed (default: true)
create_category
Create hierarchical categories for organizing memories.
- name (string): Category name
- description (string): Category description
- parent_category_id (string, optional): Parent category for hierarchy
- confidence_threshold (number, optional): Auto-assignment threshold (default: 0.7)
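An illustrative create_category call; the parent category ID and the other values are hypothetical:
{
  "name": "Authentication",
  "description": "Memories about auth flows, tokens, and session handling",
  "parent_category_id": "cat-security",
  "confidence_threshold": 0.75
}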
Enhanced Temporal Analysis
analyze_temporal_patterns
Analyze learning patterns and knowledge evolution over time.
- session_id (string, optional): Filter by session
- concept (string, optional): Specific concept to analyze
- timeframe (string): 'week', 'month', 'quarter', 'year'
- analysis_type (string): 'learning_progression', 'knowledge_gaps', 'concept_evolution'
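A sketch of an analyze_temporal_patterns call tracking one concept over a quarter (illustrative values):
{
  "concept": "React state management",
  "timeframe": "quarter",
  "analysis_type": "learning_progression"
}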
track_learning_progression
Track progression stages for specific concepts or skills.
- concept (string): Concept or skill to track
- session_id (string, optional): Filter by session
- include_suggestions (boolean, optional): Include next step suggestions (default: true)
detect_knowledge_gaps
Identify knowledge gaps and suggest learning paths.
- session_id (string, optional): Filter by session
- focus_areas (array, optional): Specific areas to focus on
generate_timeline_visualization
Create timeline visualization of learning journey.
- memory_ids (array, optional): Specific memory IDs to include
- session_id (string, optional): Filter by session
- concept (string, optional): Focus on specific concept
- start_date (string, optional): Timeline start date
- end_date (string, optional): Timeline end date
Domain Management
create_domain
Create a new domain for organizing memories by project or context.
- name (string): Domain name (e.g., "frontend-project", "machine-learning")
- description (string): Domain description
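For example, a create_domain call for a hypothetical frontend project:
{
  "name": "frontend-project",
  "description": "Memories related to the React frontend rewrite"
}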
list_domains
List all available domains with their descriptions and memory counts.
get_domain_stats
Get detailed statistics about a specific domain.
- domain (string): Domain name to analyze
Returns: Memory counts, common tags, average importance, and domain-specific patterns
Session Management
list_sessions
List all available sessions with memory counts.
get_session_stats
Get detailed statistics about stored memories.
- session_id (string, optional): Specific session to analyze
FAISS Vector Search (NEW in v2.5.0)
vector_search_memories
Perform high-performance semantic similarity search using FAISS indices.
- query (string): Search query for semantic similarity
- limit (number, optional): Maximum results (default: 10)
- session_id (string, optional): Filter by session
- similarity_threshold (number, optional): Minimum similarity score (default: 0.5)
Returns: Memories ranked by semantic similarity with confidence scores
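An illustrative vector_search_memories call (values are placeholders):
{
  "query": "database connection pooling strategies",
  "limit": 5,
  "similarity_threshold": 0.6
}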
build_faiss_index
Create or rebuild FAISS search indices for optimal performance.
- session_id (string, optional): Build index for specific session
- force_rebuild (boolean, optional): Force complete rebuild (default: false)
Returns: Index build status and performance metrics
get_vector_stats
Get comprehensive statistics about vector embeddings and FAISS indices.
- session_id (string, optional): Stats for specific session
Returns: Vector counts, index performance, embedding dimensions, and search metrics
High-Performance Vector Search
FAISS Support (Optional)
For maximum performance with large datasets, local-memory-mcp supports FAISS (Facebook AI Similarity Search) for blazing-fast vector similarity search.
Installation Options
Option 1: AI Agent Installation (Recommended)
"Install the latest version of local-memory-mcp: https://github.com/danieleugenewilliams/local-memory-mcp.git"
# Have your AI agent install directly from GitHub repository
# This ensures you get the latest features and FAISS support
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build
# Start the server
node dist/index.js --db-path ~/.local-memory.db --session-id your-session
Option 2: Standard Install (Automatic Fallback)
npm install -g local-memory-mcp
# Automatically uses FAISS if available, falls back gracefully
Option 3: Docker (Zero-Setup with FAISS)
# Zero-setup with full FAISS support
docker run -v ./memory:/data ghcr.io/reckon/local-memory-mcp
Option 4: With FAISS Native Dependencies
# Install with FAISS support (requires build tools)
npm install -g local-memory-mcp faiss-node
Performance Comparison
| Dataset Size | FTS Search | Vector Search | FAISS Search |
|---|---|---|---|
| 1K memories | 10-50ms | 50-200ms | 5-20ms |
| 10K memories | 50-200ms | 500-2000ms | 10-50ms |
| 100K memories | 200-1000ms | 5000-20000ms | 20-200ms |
Search Strategy (Automatic)
The system automatically selects the best available search method:
- FAISS Vector Search - Fastest, if FAISS is available
- Custom Vector Search - Good performance, if Ollama is available
- Full-Text Search - Basic search, always available
# Check what search methods are available
local-memory-mcp --check-dependencies
✅ SQLite: Available
✅ Ollama: Available (optional)
✅ FAISS: Available - High-performance search enabled
Quick Setup
Install
# Recommended: GitHub installation for latest features
git clone https://github.com/danieleugenewilliams/local-memory-mcp.git
cd local-memory-mcp
npm install && npm run build
# Alternative: NPM (coming soon)
npm install -g local-memory-mcp
Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"local-memory": {
"command": "node",
"args": ["/path/to/local-memory-mcp/dist/index.js", "--db-path", "~/.local-memory.db"]
}
}
}
Alternative with npx (requires npm install):
{
"mcpServers": {
"local-memory": {
"command": "npx",
"args": ["local-memory-mcp", "--db-path", "~/.local-memory.db"]
}
}
}
OpenCode
# GitHub installation
node /path/to/local-memory-mcp/dist/index.js --db-path ~/.opencode-memory.db
# NPM installation (alternative)
npx local-memory-mcp --db-path ~/.opencode-memory.db
Any MCP Tool
# GitHub installation (recommended)
node /path/to/local-memory-mcp/dist/index.js --db-path /path/to/memory.db --session-id your-session
# NPM installation (alternative)
local-memory-mcp --db-path /path/to/memory.db --session-id your-session
AI Features Setup
Install Ollama
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Required models
ollama pull nomic-embed-text # For semantic search
ollama pull qwen2.5:7b # For Q&A and analysis
Model Options
| Model | Size | Use Case | Performance |
|---|---|---|---|
| qwen2.5:7b | ~4.3GB | Recommended | ★★★★★ |
| qwen2.5:14b | ~8GB | Best quality | ★★★★★ |
| qwen2.5:3b | ~2GB | Balanced | ★★★★ |
| phi3.5:3.8b | ~2.2GB | Efficient | ★★★ |
The server automatically detects Ollama and enables AI features. Without Ollama, it gracefully falls back to traditional text search.
Usage Examples
Basic Operations
- "Remember that our API endpoint is https://api.example.com/v1"
- "Search for anything related to authentication"
- "What do you remember about our database schema?"
AI-Powered Features
- "Summarize what I've learned about TypeScript this week"
- "Analyze my coding patterns and suggest improvements"
- "Find relationships between my React and performance memories"
Advanced Analysis
- "Track my learning progression in machine learning"
- "What knowledge gaps do I have in backend development?"
- "Show me a timeline of my project decisions"
- "Create a domain for my React project and organize related memories"
- "Analyze patterns in my frontend-project domain"
FAISS Vector Search (NEW)
- "Use vector search to find similar authentication implementations"
- "Build FAISS index for faster semantic search"
- "Show vector search statistics and performance metrics"
- "Find memories semantically similar to database optimization"
Configuration
Command Line Options
- --db-path: Database file path (default: ~/.local-memory.db)
- --session-id: Session identifier for organizing memories
- --ollama-url: Ollama server URL (default: http://localhost:11434)
- --config: Configuration file path
- --log-level: Logging level (debug, info, warn, error)
Configuration File (~/.local-memory/config.json)
{
"database": {
"path": "~/.local-memory/memories.db",
"backupInterval": 86400000
},
"ollama": {
"enabled": true,
"baseUrl": "http://localhost:11434",
"embeddingModel": "nomic-embed-text",
"chatModel": "qwen2.5:7b"
},
"ai": {
"maxContextMemories": 10,
"minSimilarityThreshold": 0.3
}
}
Environment Variables
export MEMORY_DB_PATH="/custom/path/memories.db"
export OLLAMA_BASE_URL="http://localhost:11434"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_CHAT_MODEL="qwen2.5:7b"
Development
npm run dev # Start development server
npm run build # Build for production
npm test # Run tests
npm run lint # Lint code
Testing
Comprehensive test suite covering:
- Memory storage and retrieval
- Full-text and semantic search
- FAISS vector search integration
- Session management
- AI integration features
- Relationship discovery
- Temporal analysis
npm test # Run all tests
npm run test:watch # Watch mode
npm test -- --coverage # Coverage report
Architecture
src/
├── index.ts           # MCP server and CLI entry point
├── memory-store.ts    # SQLite storage with caching
├── ollama-service.ts  # AI service integration
├── vector-service.ts  # FAISS vector search integration
├── types.ts           # Schemas and TypeScript types
├── logger.ts          # Structured logging
├── config.ts          # Configuration management
├── performance.ts     # Performance monitoring
└── __tests__/         # Comprehensive test suite
Key Features:
- SQLite + FTS5: Fast full-text search with vector embeddings
- FAISS Integration: Lightning-fast semantic similarity search
- AI Integration: Ollama for semantic search and analysis
- Performance: Caching, batch processing, monitoring
- Type Safety: Full TypeScript with runtime validation
- Production Ready: Error handling, logging, configuration
MCP Protocol Compatibility
Full Model Context Protocol (MCP) 0.5.0 compliance:
- Stdio transport standard
- All 21 memory management tools (including FAISS vector search)
- Structured responses and error handling
- Resource discovery and tool registration
Works with Claude Desktop, OpenCode, and any MCP-compatible tool.
Transform Your AI
Real Impact:
- Development: AI remembers your architecture, patterns, and decisions
- Research: Builds on previous insights and tracks learning progression
- Analysis: Contextual responses based on your domain expertise
- Strategy: Remembers successful approaches and methodologies
The Result: AI that evolves from generic responses to personalized intelligence built on YOUR accumulated knowledge.
Contributing
- Fork the repository
- Create feature branch (git checkout -b feature/amazing-feature)
- Add tests for new functionality
- Ensure tests pass (npm test)
- Commit changes (git commit -m 'Add amazing feature')
- Push and open Pull Request
License
MIT License - see the LICENSE file for details.
Changelog
v2.5.1 (Current)
- Fixed FAISS reliability and API initialization issues
v2.5.0
- FAISS Vector Search Integration - Lightning-fast semantic similarity search
- High-Performance Search - Sub-millisecond search times with FAISS indices
- Enhanced Vector Operations - Build, rebuild, and manage FAISS indices automatically
- Vector Statistics - Comprehensive vector embedding and index statistics
- Hybrid Search - Combine text search with vector similarity for best results
- Production-Ready FAISS - Automatic fallback when FAISS unavailable
- Scalable Performance - Efficient searching even with large memory collections
v2.4.0
- Domain Management Tools support
v2.2.0
- Complete Ollama AI integration with semantic search
- Relationship discovery and graph visualization
- Smart categorization with AI analysis
- Enhanced temporal analysis and learning progression tracking
- Comprehensive AI integration test suite
v2.1.0
- Production-ready release with performance optimizations
- Comprehensive test suite and error handling
- Configuration management system
v1.0.0
- Initial MCP server implementation
- SQLite FTS5 full-text search
- Session management system
Support
- Documentation: Check setup guides and examples above
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Why Choose Local Memory MCP?
Because your AI's intelligence should be as unique as you are.
- True Privacy: All data stays on your machine
- Lightning Fast: Local SQLite + vector search
- Semantic Understanding: AI-powered memory retrieval
- Compound Intelligence: Every interaction builds knowledge
- Universal Compatibility: Works with any MCP tool
- Production Ready: Tested, optimized, and reliable
Own your AI's memory. Control your competitive advantage.
⭐ Star this project • Setup Guide • Community