mjnong/custom-mem0
Custom Mem0 MCP Server
A production-ready custom Mem0 implementation with Model Context Protocol (MCP) support, allowing AI agents and applications to maintain persistent memories.
Table of Contents
- Custom Mem0 MCP Server
- Table of Contents
- Quick Navigation
- What This Project Does
- Architecture
- Quick Start
- Production Deployment
- Backup & Recovery
- Available Commands
- Configuration
- MCP Integration
- Testing & Development
- Production Deployment (Additional Info)
- Security
- API Documentation
- Contributing
- License
- Troubleshooting
- Quick Links
Quick Navigation
Get Started Quickly
# Development setup
git clone <your-repo>
cd custom-mem0
make dev-setup
make up-dev
# VS Code MCP Integration
# Add to settings.json:
"mcp": {
  "servers": {
    "memory-mcp": {
      "url": "http://localhost:8888/memory/mcp/sse"
    }
  }
}
Access Points:
- API: http://localhost:8888
- Health: http://localhost:8888/health
- Neo4j: http://localhost:8474
Most Common Commands
make up-dev # Start development
make health # Check status
make logs # View logs
make backup # Backup data
make mcp-inspect # Debug MCP
make test # Run tests
What This Project Does
This project provides a custom memory service that:
- Persistent Memory Management: Store, retrieve, update, and delete memories for users and AI agents
- MCP Integration: Exposes memory operations as MCP tools and resources for seamless integration with AI agents
- Multiple Backend Support: Choose between Neo4j (graph-based) or Qdrant (vector-based) for memory storage
- Production Ready: Containerized with Docker, health checks, proper logging, and graceful shutdown
- Development Friendly: Hot reload, comprehensive testing, and debugging tools
Core Features
- Memory Operations: Add, search, update, delete memories
- Graph Relationships: Neo4j backend for complex memory relationships
- Vector Search: Qdrant backend for semantic similarity search
- MCP Protocol: Standardized interface for AI agent integration
- Containerized: Docker setup for development and production
- Health Monitoring: Built-in health checks and status endpoints
- Security: Non-root containers, proper error handling
- Observability: Structured logging and monitoring
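To make the tool semantics above concrete, here is a minimal in-memory stand-in for the five memory operations. This is an illustrative sketch only: the real service persists memories in Neo4j plus pgvector/Qdrant and ranks search results by embedding similarity, whereas this toy version uses naive keyword overlap.

```python
# Toy stand-in for the five MCP memory tools. Real ranking uses vector
# similarity; here search scores by shared keywords for demonstration.
from dataclasses import dataclass, field
from itertools import count

@dataclass
class MemoryStore:
    _ids: count = field(default_factory=count)
    _memories: dict = field(default_factory=dict)

    def add_memory(self, data: str, user_id: str, agent_id: str) -> int:
        memory_id = next(self._ids)
        self._memories[memory_id] = {"data": data, "user_id": user_id,
                                     "agent_id": agent_id}
        return memory_id

    def search_memories(self, query: str, user_id: str) -> list[str]:
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(m["data"].lower().split())), m["data"])
            for m in self._memories.values() if m["user_id"] == user_id
        ]
        return [data for score, data in sorted(scored, reverse=True) if score > 0]

    def update_memory(self, memory_id: int, data: str) -> None:
        self._memories[memory_id]["data"] = data

    def delete_memory(self, memory_id: int) -> None:
        del self._memories[memory_id]

    def delete_all_memories(self, user_id: str) -> None:
        self._memories = {k: m for k, m in self._memories.items()
                          if m["user_id"] != user_id}
```

The class names and signatures here are hypothetical; the actual tool contracts are defined by the MCP server (see MCP Integration below for the real tool list).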
Architecture
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   MCP Client    │     │   FastAPI App   │     │ Memory Backend  │
│   (AI Agent)    │────►│  (MCP Server)   │────►│ (Neo4j/Qdrant)  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                │
                                ▼
                        ┌─────────────────┐
                        │  Vector Store   │
                        │   (pgvector)    │
                        └─────────────────┘
Quick Start
Prerequisites
- Docker & Docker Compose: For containerized deployment
- uv: Fast Python package manager (install guide)
- Python 3.13+: Required version specified in pyproject.toml
- Node.js: For MCP inspector tool (optional)
Development Setup
1. Clone and Setup
   git clone <your-repo>
   cd custom-mem0
   make dev-setup
2. Configure Environment
   cp .env.example .env
   # Edit .env with your configuration
3. Start Development Environment
   make up-dev
4. Access the Service
   - API: http://localhost:8888
   - Health Check: http://localhost:8888/health
   - Neo4j Browser: http://localhost:8474 (user: neo4j, password: mem0graph)
   - PostgreSQL: localhost:8432 (user: postgres, password: postgres)
Production Deployment
Automated Production Deployment
- Full Production Setup
  make deploy-prod
  This command:
  - Creates pre-deployment backups
  - Builds production images
  - Deploys services with health checks
  - Validates deployment
  - Sets up monitoring cron jobs
- Manual Production Setup
  make prod-setup
  make up
  make health
- Monitor Health
  make health
  make status
Environment Considerations
- Use strong passwords for databases
- Set proper OpenAI API keys
- Configure appropriate resource limits
- Set up monitoring and alerting
- Regular backups with make backup
Health Monitoring
- Health endpoint: /health
- Container health checks included
- Graceful shutdown handling
- Structured logging for observability
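A deploy script typically waits on the /health endpoint before declaring the rollout done. A minimal readiness poll can be sketched as follows; the helper name, defaults, and the transport-agnostic probe callable are assumptions, not part of the project (in practice the probe would issue an HTTP GET against http://localhost:8888/health).

```python
# Illustrative sketch: poll a health probe until it reports healthy or a
# deadline passes. `probe` is any zero-argument callable returning bool,
# so the helper stays transport-agnostic.
import time
from typing import Callable

def wait_for_healthy(probe: Callable[[], bool], timeout: float = 30.0,
                     interval: float = 0.01) -> bool:
    """Return True as soon as probe() succeeds, False once timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if probe():
                return True
        except Exception:
            pass  # treat connection errors as "not ready yet"
        time.sleep(interval)
    return False
```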
Backup & Recovery
Production Backup Strategy
The system includes comprehensive backup functionality for production environments:
Backup Types
- Application-Aware Backups
  - PostgreSQL: Uses pg_dump for consistent database snapshots
  - Neo4j: Database dumps using Neo4j admin tools
  - History: File-level backup of the SQLite history database
- Automated Backup Process
  make backup-automated # Full backup with validation and cleanup
  make backup # Manual backup
  make backup-validate # Verify backup integrity
  make backup-monitor # Check backup health
Backup Commands
# Create backups
make backup # All databases
make backup-postgres # PostgreSQL only
make backup-neo4j # Neo4j only
make backup-history # History database only
# Manage backups
make backup-list # List all backups
make backup-validate # Check backup integrity
make backup-cleanup # Remove old backups (30+ days)
make backup-monitor # Health monitoring
# Restore from backups
make restore-postgres BACKUP_FILE=postgres_20241225_120000.sql.gz
make restore-neo4j BACKUP_FILE=neo4j_20241225_120000.tar.gz
Backup Monitoring
The system includes automated backup monitoring:
- Health Checks: Validates backup age, size, and integrity
- Alerting: Email and webhook notifications for backup issues
- Disk Space: Monitors available storage for backups
- Automated Cleanup: Removes backups older than 30 days
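The age and size checks above can be sketched in a few lines. This is an illustrative stand-in, not the project's actual monitoring code (which is driven by make backup-monitor); the function name and thresholds are assumptions.

```python
# Illustrative sketch of the freshness/size checks a backup monitor might
# run against a backup directory. Thresholds are assumed defaults: a daily
# backup with a 2-hour grace period, and a minimum plausible dump size.
import time
from pathlib import Path

def newest_backup_ok(backup_dir: str, max_age_hours: float = 26.0,
                     min_bytes: int = 1024) -> bool:
    """True if the most recent file in backup_dir is fresh and non-trivial."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        return False  # no backups at all is an alert condition
    newest = max(files, key=lambda p: p.stat().st_mtime)
    age_hours = (time.time() - newest.stat().st_mtime) / 3600
    return age_hours <= max_age_hours and newest.stat().st_size >= min_bytes
```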
Production Backup Schedule
Set up automated backups with cron:
# Daily backup at 2 AM
0 2 * * * cd /path/to/custom-mem0 && make backup-automated >> logs/backup.log 2>&1
# Backup monitoring every 6 hours
0 */6 * * * cd /path/to/custom-mem0 && make backup-monitor >> logs/monitor.log 2>&1
Cloud Backup Integration
Upload backups to cloud storage:
make backup-to-cloud # Requires AWS CLI configuration
Configure AWS CLI:
aws configure
# Enter your AWS credentials and region
Backup Storage Structure
backups/
├── postgres/
│   ├── postgres_20241225_120000.sql.gz
│   └── postgres_20241225_140000.sql.gz
├── neo4j/
│   ├── neo4j_20241225_120000.tar.gz
│   └── neo4j_20241225_140000.tar.gz
└── history/
    ├── history_20241225_120000.tar.gz
    └── history_20241225_140000.tar.gz
Disaster Recovery
- Full System Recovery
  # Stop services
  make down
  # List available backups
  make backup-list
  # Restore databases
  make restore-postgres BACKUP_FILE=postgres_YYYYMMDD_HHMMSS.sql.gz
  make restore-neo4j BACKUP_FILE=neo4j_YYYYMMDD_HHMMSS.tar.gz
  # Start services
  make up
  make health
- Point-in-Time Recovery
  - Backups are timestamped for specific recovery points
  - Choose the backup closest to your desired recovery time
  - PostgreSQL dumps include complete schema and data
Backup Best Practices
- Regular Testing: Regularly test backup restoration procedures
- Multiple Locations: Store backups in multiple locations (local + cloud)
- Monitoring: Use backup monitoring to catch issues early
- Documentation: Keep recovery procedures documented and accessible
- Security: Encrypt backups containing sensitive data
Available Commands
Run make help to see all available commands:
make help # Show all commands
make up # Start production environment (default backend)
make up-pgvector # Start with PostgreSQL/pgvector backend
make up-qdrant # Start with Qdrant backend
make up-dev # Start development with hot reload
make down # Stop all services
make logs # View container logs
make health # Check service health
make test # Run tests
make mcp-inspect # Debug MCP protocol
make backup # Backup data volumes
Configuration
Environment Variables
Key configuration options in .env:
# Backend Selection
BACKEND="pgvector" # or "qdrant"
# OpenAI Configuration
OPENAI_API_KEY="your-api-key"
OPENAI_MODEL="gpt-4o-mini"
OPENAI_EMBEDDING_MODEL="text-embedding-3-small"
# Neo4j Configuration
NEO4J_IP="neo4j:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="mem0graph"
# PostgreSQL (Vector Store)
POSTGRES_HOST="postgres"
POSTGRES_PORT=5432
POSTGRES_USER="postgres"
POSTGRES_PASSWORD="password"
# FastAPI Configuration
FASTAPI_HOST="localhost"
FASTAPI_PORT=8000
MEMORY_LOG_LEVEL="info"
Backend Options
PostgreSQL/pgvector Backend (Default)
- Best for: Traditional SQL with vector search, ACID transactions
- Features: Familiar SQL interface, rich ecosystem, structured data
- Vector Store: PostgreSQL with pgvector extension
- Graph Store: Neo4j (shared)
Qdrant Backend
- Best for: Purpose-built vector search, high performance
- Features: Advanced filtering, clustering, optimized for vectors
- Vector Store: Qdrant native vectors
- Graph Store: Neo4j (shared)
Multi-Backend Setup
Choose your vector store backend with simple commands:
# Start with PostgreSQL/pgvector (default)
make up-pgvector # Production
make up-dev-pgvector # Development
# Start with Qdrant
make up-qdrant # Production
make up-dev-qdrant # Development
Quick Setup:
# Use pre-configured environments
cp .env.pgvector .env # For PostgreSQL backend
cp .env.qdrant .env # For Qdrant backend
make up # Start with selected backend
Switching Backends:
make down # Stop current services
cp .env.qdrant .env # Switch configuration
make up # Start with new backend
Both backends share the same Neo4j graph store and provide identical MCP tools and APIs.
MCP Integration
Available Tools
- add_memory: Store new memories
- search_memories: Find memories by similarity
- update_memory: Modify existing memories
- delete_memory: Remove specific memories
- delete_all_memories: Clear all memories for a user/agent
Available Resources
- memories://{user_id}/{agent_id}/{limit}: Retrieve all memories
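Filling in the memories:// resource template is plain string substitution. A tiny sketch, with a hypothetical helper name (the server defines the actual template semantics):

```python
# Illustrative sketch: build a concrete resource URI from the
# memories://{user_id}/{agent_id}/{limit} template shown above.
def memories_uri(user_id: str, agent_id: str, limit: int = 100) -> str:
    if limit <= 0:
        raise ValueError("limit must be positive")
    return f"memories://{user_id}/{agent_id}/{limit}"
```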
VS Code Integration
To use this MCP server with VS Code Copilot, add the following configuration to your VS Code settings.json:
"mcp": {
  "servers": {
    "memory-mcp": {
      "url": "http://localhost:8888/memory/mcp/sse"
    }
  }
}
Once configured, you can:
- Reference tools: Use # to access memory tools directly in VS Code
- Custom instructions: Write natural language instructions to efficiently interact with the memory system
- Seamless integration: The memory tools will be available alongside other Copilot features
Make sure your MCP server is running (make up-dev or make up) before using it in VS Code.
Example Usage
# Add a memory
await memory_client.add_memory(
data="User prefers dark mode interface",
user_id="user123",
agent_id="assistant"
)
# Search memories
results = await memory_client.search_memories(
query="interface preferences",
user_id="user123"
)
Testing & Development
Running Tests
make test # Run all tests
make lint # Check code style
make format # Format code
make check # Run all checks
Debugging
make logs SERVICE=mem0 # View specific service logs
make shell # Access container shell
make db-shell # Access PostgreSQL
make neo4j-shell # Access Neo4j
make mcp-inspect # Debug MCP protocol
Development Features
- Hot Reload: Code changes automatically restart the server
- Volume Mounting: Live code editing without rebuilds
- Debug Logging: Detailed logs for development
- MCP Inspector: Visual debugging of MCP protocol
Production Deployment (Additional Info)
Docker Production
make prod-setup
make up
make health
Security
- Non-root containers: All services run as non-root users
- Environment isolation: Proper Docker networking
- Secret management: Environment-based configuration
- Input validation: Pydantic models for API validation
- Error handling: Graceful error responses
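The input-validation point above is handled by Pydantic models in the project; as a dependency-free illustration of what such a model enforces, here is a stdlib-only stand-in. Field names mirror the add_memory example in the MCP Integration section; the class name and constraints are assumptions.

```python
# Dependency-free stand-in for a Pydantic request model: reject requests
# whose fields are missing, empty, or of the wrong type before they reach
# the memory backend.
from dataclasses import dataclass

@dataclass(frozen=True)
class AddMemoryRequest:
    data: str
    user_id: str
    agent_id: str

    def __post_init__(self):
        for field_name in ("data", "user_id", "agent_id"):
            value = getattr(self, field_name)
            if not isinstance(value, str) or not value.strip():
                raise ValueError(f"{field_name} must be a non-empty string")
```

In the real service, Pydantic additionally generates the OpenAPI schema that backs the /docs endpoints listed below.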
API Documentation
When running, visit:
- Swagger UI: http://localhost:8888/docs
- ReDoc: http://localhost:8888/redoc
- OpenAPI JSON: http://localhost:8888/openapi.json
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: make check
- Submit a pull request
License
This project is released under the .
Troubleshooting
Common Issues
Service won't start
make logs # Check logs
make health # Check health status
Database connection issues
make status # Check container status
make db-shell # Test database access
Memory operations failing
make mcp-inspect # Debug MCP protocol
curl http://localhost:8888/health # Check API health
Getting Help
- Check logs with make logs
- Use MCP inspector with make mcp-inspect
- Review health status with make health
- Access container shell with make shell