🧠 SAM - Smart Access Memory

Intelligent AI Memory Management with ML Auto-Triggers

MCP Memory Server v2.0 - Auto-Trigger Edition gives any AI assistant automatic, persistent memory: SAM decides on its own when to save and when to retrieve information, so important context is never lost.

📋 Table of Contents

  1. 🎯 What is SAM?
  2. 🏗️ Architecture Overview
  3. 🚀 Installation
  4. 🚀 Server Modes & Operation
  5. ⚙️ How SAM Works
  6. 🤖 Auto-Trigger System
  7. 🔧 Configuration Example
  8. 📊 Model Information
  9. 🔧 Technical Documentation
  10. 📝 License

🎯 What is SAM?

SAM (Smart Access Memory) is an intelligent memory system for AI platforms that automatically knows when to save and retrieve information. Using a machine learning model trained specifically for this task (99.56% accuracy), SAM analyzes conversations in real time and manages memory intelligently, without user intervention.

✨ Key Benefits:

  • 🧠 Automatic Memory Management: No manual commands - SAM decides when to save and when to search
  • 🎯 Context-Aware: Understands conversation flow and retrieves relevant information
  • ⚡ Universal: Works with major AI platforms (Cursor, Claude, Windsurf)
  • 🚀 One-Command Install: Simple prompt-based installation for any platform
  • Coming next: Lovable and Replit support!

๐Ÿ—๏ธ Architecture Overview

```mermaid
graph TB
    subgraph "AI Platforms"
        A[Cursor IDE] --> MCP[MCP Protocol]
        B[Claude Desktop] --> MCP
        C[GPT/OpenAI] --> MCP
        D[Windsurf IDE] --> MCP
        E[Lovable] --> MCP
        F[Replit] --> MCP
    end

    subgraph "MCP Memory Server"
        MCP --> G[Auto-Trigger System]
        G --> H[ML Model 99.56%]
        G --> I[Deterministic Rules]
        G --> J[Hybrid Engine]

        J --> K[Memory Service]
        K --> L[Semantic Search]
        K --> M[Embedding Service]
        K --> N[Database Service]
    end

    subgraph "Storage"
        N --> O[MongoDB Atlas]
        M --> P[Vector Embeddings]
        L --> Q[Similarity Search]
    end

    style H fill:#ff9999
    style J fill:#99ff99
    style L fill:#9999ff
```

🚀 Installation

💬 Prompt-Based Installation (Recommended)

Simply tell your AI assistant:

"Install this: https://github.com/PiGrieco/mcp-memory-server on [PLATFORM]"

📊 Installation Process Flow

```mermaid
graph TD
    A["🚀 User starts installation"] --> B["📦 Choose installation method"]

    B --> C1["🔧 Manual Script<br/>./scripts/main.sh install all"]
    B --> C2["🐍 Python Installer<br/>./scripts/install/install.py"]
    B --> C3["🎯 Platform Specific<br/>./scripts/main.sh platform cursor"]

    C1 --> D["🔍 Check System Requirements"]
    C2 --> D
    C3 --> D

    D --> E1["✅ Python 3.8+ available"]
    D --> E2["✅ MongoDB installed"]
    D --> E3["✅ Git available"]
    D --> E4["❌ Missing dependencies"]

    E4 --> F["📥 Auto-install dependencies<br/>homebrew, python packages"]
    E1 --> G
    E2 --> G
    E3 --> G
    F --> G["🏗️ Create virtual environment"]

    G --> H["📦 Install Python packages<br/>requirements.txt"]
    H --> I["🗄️ Setup MongoDB connection"]
    I --> J["🤖 Download ML models<br/>sentence-transformers"]

    J --> K["📝 Generate configuration files"]
    K --> L1["⚙️ MCP Server config<br/>main.py ready"]
    K --> L2["🌐 HTTP Proxy config<br/>proxy_server.py ready"]
    K --> L3["🐕 Watchdog config<br/>watchdog_service.py ready"]

    L1 --> M["🎯 Platform Integration"]
    L2 --> M
    L3 --> M

    M --> N1["🖱️ Cursor IDE<br/>Update settings.json"]
    M --> N2["🤖 Claude Desktop<br/>Update config.json"]
    M --> N3["💻 Other platforms<br/>Manual configuration"]

    N1 --> O["✅ Installation Complete"]
    N2 --> O
    N3 --> O

    O --> P["🚀 Ready to start servers"]

    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style D fill:#fff3e0
    style O fill:#e8f5e8
    style P fill:#e8f5e8
```

What Happens During Installation:

When you give the prompt, your AI assistant will:

  1. 📥 Download the repository to ~/mcp-memory-server
  2. 🐍 Set up a Python virtual environment with all dependencies
  3. 🤖 Download the ML auto-trigger model from HuggingFace (~63MB)
  4. ⚙️ Configure your specific platform with dynamic paths (no hardcoded usernames)
  5. 🧪 Test all components, including ML model functionality
  6. ✅ Be ready to use in 2-3 minutes
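
To double-check the result yourself, a quick smoke test along these lines can help. This is a sketch assuming the default layout described above; the path, embedding model name, and MongoDB URI come from this README, not from a dedicated test script:

```python
"""Post-install smoke test (sketch; assumes the default install layout)."""
from pathlib import Path

import pymongo
from sentence_transformers import SentenceTransformer

# Step 1 above: the repository should live in ~/mcp-memory-server
repo = Path.home() / "mcp-memory-server"
assert (repo / "main.py").exists(), "repository not found"

# Steps 2-3: the embedding model from the default config should load from cache
model = SentenceTransformer("all-MiniLM-L6-v2")
print("embedding dim:", model.get_sentence_embedding_dimension())

# MongoDB should answer a ping on the default URI
client = pymongo.MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000)
print(client.admin.command("ping"))
```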

🎯 Platform-Specific Commands

If the prompt method doesn't work, use direct commands:

  • 🎯 Cursor IDE: curl -sSL https://raw.githubusercontent.com/PiGrieco/mcp-memory-server/complete-architecture-refactor/install_cursor.sh | bash
  • 🔮 Claude Desktop: curl -sSL https://raw.githubusercontent.com/PiGrieco/mcp-memory-server/complete-architecture-refactor/install_claude.sh | bash
  • 🌪️ Windsurf IDE: curl -sSL https://raw.githubusercontent.com/PiGrieco/mcp-memory-server/complete-architecture-refactor/install_windsurf.sh | bash

🚀 Server Modes & Operation

📊 Server Operation Flow

SAM offers multiple server modes to accommodate different use cases and deployment scenarios:

```mermaid
graph TD
    A["🎯 User chooses server mode"] --> B["📋 Available modes"]

    B --> C1["🧠 MCP Only<br/>./scripts/main.sh server mcp"]
    B --> C2["🌐 HTTP Only<br/>./scripts/main.sh server http"]
    B --> C3["🔄 Proxy Only<br/>./scripts/main.sh server proxy"]
    B --> C4["🚀 Universal<br/>./scripts/main.sh server both"]
    B --> C5["🐕 Watchdog<br/>./scripts/main.sh server watchdog"]

    C1 --> D1["🔧 MCP Server startup<br/>main.py"]
    C2 --> D2["🌐 HTTP Server startup<br/>servers/http_server.py"]
    C3 --> D3["🔄 Proxy Server startup<br/>servers/proxy_server.py"]
    C4 --> D4["🚀 Both MCP + Proxy<br/>Universal mode"]
    C5 --> D5["🐕 Watchdog Service<br/>Auto-restart capability"]

    D1 --> E1["📡 stdio MCP protocol"]
    D2 --> E2["🌐 HTTP REST API<br/>localhost:8000"]
    D3 --> E3["🔄 HTTP Proxy<br/>localhost:8080"]
    D4 --> E4["📡 stdio + 🌐 HTTP<br/>Full features"]
    D5 --> E5["👂 Keyword monitoring<br/>Auto-restart triggers"]

    E1 --> F["🔗 IDE Integration"]
    E2 --> G["🌐 Web/API clients"]
    E3 --> H["🤖 AI Assistant integration"]
    E4 --> I["🎯 Maximum compatibility"]
    E5 --> J["🔄 Always available"]

    F --> K["💾 Memory operations"]
    G --> K
    H --> K
    I --> K
    J --> K

    K --> L1["🔍 Deterministic triggers<br/>Keywords: ricorda, save, etc."]
    K --> L2["🤖 ML triggers<br/>Semantic analysis"]
    K --> L3["🔀 Hybrid triggers<br/>Combined approach"]

    L1 --> M["⚡ Auto-execute actions"]
    L2 --> M
    L3 --> M

    M --> N1["💾 save_memory<br/>Store important info"]
    M --> N2["🔍 search_memories<br/>Find relevant context"]
    M --> N3["📊 analyze_message<br/>Context enhancement"]

    N1 --> O["🗄️ MongoDB storage"]
    N2 --> O
    N3 --> O

    O --> P["✅ Memory system active"]

    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style K fill:#fff3e0
    style M fill:#e8f5e8
    style P fill:#e8f5e8
```

🎯 Server Mode Comparison

| Mode | Protocol | Port | Use Case | Auto-Restart | Best For |
|------|----------|------|----------|--------------|----------|
| 🧠 MCP Only | stdio | - | IDE Integration | ❌ | Cursor, Claude, Windsurf |
| 🌐 HTTP Only | REST API | 8000 | Development/Testing | ❌ | API clients, web apps |
| 🔄 Proxy Only | HTTP Proxy | 8080 | AI Interception | ❌ | Enhanced AI features |
| 🚀 Universal | stdio + HTTP | 8080 | Production | ❌ | Maximum compatibility |
| 🐕 Watchdog | stdio + HTTP | 8080 | Always-On | ✅ | Keyword auto-restart |

๐Ÿ• Watchdog Service (Auto-Restart)

The watchdog service ensures SAM is always available when you need it. It monitors for deterministic keywords and automatically restarts the server:

```mermaid
graph TD
    A["🐕 Watchdog Service Active"] --> B["👂 Monitoring input sources"]

    B --> C1["⌨️ stdin monitoring<br/>Terminal input"]
    B --> C2["📁 File monitoring<br/>logs/restart_triggers.txt"]
    B --> C3["🔀 Hybrid monitoring<br/>Both sources"]

    C1 --> D["🔍 Keyword detection"]
    C2 --> D
    C3 --> D

    D --> E1["🇮🇹 Italian keywords<br/>ricorda, importante, nota"]
    D --> E2["🇺🇸 English keywords<br/>remember, save, important"]
    D --> E3["⚡ Urgent commands<br/>emergency restart, force restart"]
    D --> E4["🎯 Direct commands<br/>mcp start, server start"]

    E1 --> F["📊 Trigger analysis"]
    E2 --> F
    E3 --> F
    E4 --> F

    F --> G{"⚠️ Rate limiting check"}

    G -->|"✅ Within limits"| H["🛑 Stop current server<br/>SIGTERM graceful shutdown"]
    G -->|"❌ Rate limited"| I["⏳ Cooldown period<br/>Log and ignore"]

    H --> J["⏱️ Restart delay<br/>2.0s normal, 0.5s urgent"]

    J --> K["🚀 Start new server<br/>python main.py"]

    K --> L{"✅ Server started?"}

    L -->|"Success"| M["📝 Log success<br/>✅ Server restart completed"]
    L -->|"Failed"| N["📝 Log error<br/>❌ Server restart failed"]

    M --> O["🔄 Continue monitoring"]
    N --> O
    I --> O

    O --> B

    P["🚨 Server process dies"] --> Q["📊 Status monitoring<br/>Check every 5s"]
    Q --> R{"🔍 Process alive?"}
    R -->|"No"| S["📝 Log status change<br/>❌ Server is not running"]
    R -->|"Yes"| T["📝 Log status change<br/>✅ Server is running"]
    S --> O
    T --> O

    style A fill:#e1f5fe
    style D fill:#f3e5f5
    style F fill:#fff3e0
    style H fill:#ffebee
    style K fill:#e8f5e8
    style M fill:#e8f5e8
```

🔑 Watchdog Keywords:

  • Italian: ricorda, importante, nota, salva, memorizza, riavvia
  • English: remember, save, important, store, restart, wake up
  • Commands: mcp start, server start, restart server
  • Urgent: emergency restart, force restart (0.5s restart vs 2.0s)

⚙️ Rate Limiting:

  • Max 10 restarts per hour
  • 30-second cooldown between restarts
  • Comprehensive logging to logs/watchdog.log
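
The rate limiting amounts to a sliding window plus a cooldown. A minimal sketch of that idea follows; the limits come from the list above, but the names are illustrative, not the watchdog's real API:

```python
"""Sliding-window rate limiter (sketch of the watchdog's restart limits)."""
import time
from collections import deque

MAX_RESTARTS_PER_HOUR = 10   # from the list above
COOLDOWN_SECONDS = 30
WINDOW_SECONDS = 3600

_restart_times = deque()     # timestamps of recent restarts

def may_restart(now=None):
    """Return True if a restart is allowed right now."""
    now = time.time() if now is None else now
    # drop restarts that fell out of the one-hour sliding window
    while _restart_times and now - _restart_times[0] > WINDOW_SECONDS:
        _restart_times.popleft()
    if _restart_times and now - _restart_times[-1] < COOLDOWN_SECONDS:
        return False         # still inside the 30-second cooldown
    if len(_restart_times) >= MAX_RESTARTS_PER_HOUR:
        return False         # hourly budget exhausted
    _restart_times.append(now)
    return True
```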

🚀 Quick Start Commands

```bash
# Start in different modes
./scripts/main.sh server mcp      # MCP only (IDE integration)
./scripts/main.sh server http     # HTTP only (development)
./scripts/main.sh server proxy    # Proxy only (AI interception)
./scripts/main.sh server both     # Universal (recommended)
./scripts/main.sh server watchdog # Auto-restart on keywords

# Installation commands
./scripts/main.sh install all     # Complete installation
./scripts/main.sh platform cursor # Configure Cursor IDE
./scripts/main.sh platform claude # Configure Claude Desktop
```

โš™๏ธ How SAM Works

๐Ÿง  Technical Overview

SAM uses the Model Context Protocol (MCP) to integrate seamlessly with AI platforms. When you chat with your AI, SAM:

  1. Analyzes every message in real-time using ML model
  2. Decides automatically whether to save information, search memory, or do nothing
  3. Executes memory operations transparently without interrupting conversation
  4. Provides relevant context to enhance AI responses
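
In MCP terms, the server exposes memory operations as tools that the AI platform can call. A minimal sketch of that shape, using the official mcp Python SDK's FastMCP helper; the tool names match the diagram above, but the bodies are placeholders, not SAM's actual implementation:

```python
"""Toy MCP server exposing memory-style tools (placeholder bodies)."""
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-memory-sam")

@mcp.tool()
def save_memory(content: str, importance: float = 0.7) -> str:
    """Store an important piece of information."""
    return f"saved: {content[:40]}"

@mcp.tool()
def search_memories(query: str) -> list[str]:
    """Find memories relevant to the query."""
    return []  # the real server does semantic search over MongoDB

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio, as in "MCP Only" mode
```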

🎯 User Benefits

  • Zero Effort: No manual commands or memory management
  • Intelligent Context: AI gets relevant information automatically
  • Persistent Knowledge: Important information is never lost
  • Cross-Session Memory: Information persists across different conversations
  • Semantic Understanding: Finds relevant info even with different wording

💼 Use Cases

  • 📝 Project Notes: Automatically saves and recalls project decisions, requirements, and insights
  • 🔧 Technical Solutions: Remembers code solutions, debugging steps, and best practices
  • 📚 Learning: Saves explanations, concepts, and connects related information
  • 💡 Ideas: Captures creative insights and connects them to relevant context
  • 🤝 Conversations: Maintains context of important discussions and decisions

🤖 Auto-Trigger System

🧪 How the ML Model Works

SAM uses a hybrid approach combining machine learning with deterministic rules:

🎯 ML Model Details

  • Model: Custom-trained transformer based on the BERT architecture
  • Accuracy: 99.56% on the validation set
  • Size: ~63MB (automatically downloaded during installation)
  • Languages: English and Italian
  • Inference Time: <30ms after initial load
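
Assuming the published checkpoint is a standard HuggingFace text-classification model (the repo name comes from the configuration example below; the label set is described under Training Dataset), it can be exercised directly like this:

```python
"""Querying the auto-trigger classifier directly (sketch)."""
from transformers import pipeline

clf = pipeline("text-classification",
               model="PiGrieco/mcp-memory-auto-trigger-model")

print(clf("Remember that we deploy to production on Fridays only"))
# expected shape: [{"label": "SAVE_MEMORY", "score": ...}]
```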
📊 Training Dataset

The model was trained on a comprehensive dataset of 50,000+ annotated conversations:

  • Sources: Real AI conversations, technical discussions, project communications
  • Labels: SAVE_MEMORY, SEARCH_MEMORY, NO_ACTION
  • Balance: 33% save, 33% search, 34% no action
  • Languages: 70% English, 30% Italian
  • Validation: 80/20 train/test split with stratified sampling
🎯 Training Results

| Metric | Score |
|--------|-------|
| Overall Accuracy | 99.56% |
| Precision (SAVE) | 99.2% |
| Precision (SEARCH) | 99.8% |
| Precision (NO_ACTION) | 99.7% |
| Recall (SAVE) | 99.4% |
| Recall (SEARCH) | 99.9% |
| Recall (NO_ACTION) | 99.3% |
🔧 Hybrid System

  1. Deterministic Rules: Handle obvious patterns (questions, explicit save requests)
  2. ML Model: Analyzes complex conversational context
  3. Confidence Thresholds: Only acts when confidence > 95%
  4. Fallback Logic: Uses rules when ML is uncertain
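
Put together, the decision flow looks roughly like this. This is a conceptual sketch: the rule patterns and helper names are illustrative, not the server's actual code:

```python
"""Conceptual sketch of the hybrid rules + ML trigger decision."""
import re

CONFIDENCE_CUTOFF = 0.95  # point 3: only act when confidence > 95%

def deterministic_rule(message):
    """Obvious patterns are handled without the ML model (point 1)."""
    if re.search(r"\b(remember|save|ricorda|salva)\b", message, re.I):
        return "SAVE_MEMORY"
    if message.strip().endswith("?"):
        return "SEARCH_MEMORY"
    return None

def ml_classify(message):
    """Stand-in for the transformer (point 2): returns (label, confidence)."""
    return "NO_ACTION", 0.50

def decide(message):
    rule_hit = deterministic_rule(message)
    if rule_hit:
        return rule_hit
    label, confidence = ml_classify(message)
    if confidence > CONFIDENCE_CUTOFF:
        return label
    return "NO_ACTION"  # point 4: fall back safely when ML is uncertain

print(decide("Ricorda: we deploy on Fridays"))  # -> SAVE_MEMORY
```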

✨ What the System Detects

Auto-Save Triggers:

  • Important decisions and conclusions
  • Technical solutions and workarounds
  • Project requirements and specifications
  • Learning insights and explanations
  • Error solutions and debugging steps

Auto-Search Triggers:

  • Questions about past topics
  • Requests for similar information
  • References to previous discussions
  • Need for context or examples
  • Problem-solving requests

No Action:

  • General conversation and greetings
  • Simple acknowledgments
  • Clarifying questions
  • Off-topic discussions

🔧 Configuration Example

Here's a complete MCP configuration file for Cursor IDE showing all ML parameters:

📄 ~/.cursor/mcp_settings.json

```json
{
  "mcpServers": {
    "mcp-memory-sam": {
      "command": "/path/to/mcp-memory-server/venv/bin/python",
      "args": ["/path/to/mcp-memory-server/main.py"],
      "env": {
        "ML_MODEL_TYPE": "huggingface",
        "HUGGINGFACE_MODEL_NAME": "PiGrieco/mcp-memory-auto-trigger-model",
        "AUTO_TRIGGER_ENABLED": "true",
        "PRELOAD_ML_MODEL": "true",
        "CURSOR_MODE": "true",
        "LOG_LEVEL": "INFO",
        "ENVIRONMENT": "development",
        "SERVER_MODE": "universal",
        "ML_CONFIDENCE_THRESHOLD": "0.7",
        "TRIGGER_THRESHOLD": "0.15",
        "SIMILARITY_THRESHOLD": "0.3",
        "MEMORY_THRESHOLD": "0.7",
        "SEMANTIC_THRESHOLD": "0.8",
        "ML_TRIGGER_MODE": "hybrid",
        "ML_TRAINING_ENABLED": "true",
        "ML_RETRAIN_INTERVAL": "50",
        "FEATURE_EXTRACTION_TIMEOUT": "5.0",
        "MAX_CONVERSATION_HISTORY": "10",
        "USER_BEHAVIOR_TRACKING": "true",
        "BEHAVIOR_HISTORY_LIMIT": "1000",
        "EMBEDDING_PROVIDER": "sentence_transformers",
        "EMBEDDING_MODEL": "all-MiniLM-L6-v2",
        "MONGODB_URI": "mongodb://localhost:27017",
        "MONGODB_DATABASE": "mcp_memory_dev"
      }
    }
  }
}
```

📚 Parameter Explanation

🏗️ Core Configuration

  • ML_MODEL_TYPE: Type of ML model (huggingface for transformer models)
  • HUGGINGFACE_MODEL_NAME: Specific SAM model with 99.56% accuracy
  • AUTO_TRIGGER_ENABLED: Enables automatic memory operations without user commands
  • PRELOAD_ML_MODEL: Loads the ML model at startup for faster response times
  • CURSOR_MODE: Platform-specific optimizations for Cursor IDE
  • SERVER_MODE: Architecture mode (universal for the modern unified server)

🎯 ML Thresholds (Critical for 99.56% Accuracy)

  • ML_CONFIDENCE_THRESHOLD: "0.7": Main ML model confidence (70% threshold)
  • TRIGGER_THRESHOLD: "0.15": General trigger activation sensitivity (15%)
  • SIMILARITY_THRESHOLD: "0.3": Semantic search matching threshold (30%)
  • MEMORY_THRESHOLD: "0.7": Memory importance filtering (70%)
  • SEMANTIC_THRESHOLD: "0.8": Context similarity matching (80%)
  • ML_TRIGGER_MODE: "hybrid": Combines the ML model + deterministic rules

📚 Continuous Learning

  • ML_TRAINING_ENABLED: "true": Enables model improvement over time
  • ML_RETRAIN_INTERVAL: "50": Retrain the model after 50 new samples
  • FEATURE_EXTRACTION_TIMEOUT: "5.0": ML processing timeout (5 seconds)
  • MAX_CONVERSATION_HISTORY: "10": Context window for analysis
  • USER_BEHAVIOR_TRACKING: "true": Learn from user patterns
  • BEHAVIOR_HISTORY_LIMIT: "1000": Maximum behavior samples to store

🔍 Embedding Configuration

  • EMBEDDING_PROVIDER: "sentence_transformers": Vector embedding engine
  • EMBEDDING_MODEL: "all-MiniLM-L6-v2": Lightweight, fast embedding model
  • MONGODB_URI: Database connection for persistent memory storage
  • MONGODB_DATABASE: Database name for memory collections

🛠️ System Settings

  • LOG_LEVEL: "INFO": Logging verbosity level
  • ENVIRONMENT: "development": Current environment mode
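
Since these settings arrive as environment-variable strings, the server has to parse them into typed values. Roughly like this; a sketch of the general pattern, not SAM's actual config loader:

```python
"""Parsing env-string settings into typed values (generic pattern)."""
import os

def env_float(name, default):
    return float(os.environ.get(name, default))

def env_bool(name, default):
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

ML_CONFIDENCE_THRESHOLD = env_float("ML_CONFIDENCE_THRESHOLD", 0.7)
TRIGGER_THRESHOLD = env_float("TRIGGER_THRESHOLD", 0.15)
AUTO_TRIGGER_ENABLED = env_bool("AUTO_TRIGGER_ENABLED", True)

print(ML_CONFIDENCE_THRESHOLD, TRIGGER_THRESHOLD, AUTO_TRIGGER_ENABLED)
```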

💡 Note: These parameters are automatically configured during installation. Advanced users can fine-tune thresholds for their specific use cases.


📊 Model Information

The auto-trigger model is published on HuggingFace as PiGrieco/mcp-memory-auto-trigger-model; see ML Model Details and Training Results above for its architecture, size, and accuracy figures.

🔧 Technical Documentation

📁 Project Structure

```
mcp-memory-server/
├── main.py                          # Main MCP server entry point
├── src/                             # Core source code
│   ├── config/                      # Configuration management
│   ├── core/                        # Core server implementations
│   │   ├── server.py                # Main MCP server
│   │   ├── auto_trigger_system.py   # Auto-trigger logic
│   │   ├── ml_trigger_system.py     # ML-based triggers
│   │   └── hybrid_trigger_system.py # Hybrid ML+deterministic
│   ├── services/                    # Business logic services
│   │   ├── memory_service.py        # Memory management
│   │   ├── database_service.py      # MongoDB operations
│   │   ├── embedding_service.py     # Vector embeddings
│   │   └── watchdog_service.py      # Auto-restart service
│   └── models/                      # Data models
├── servers/                         # Alternative server implementations
│   ├── http_server.py               # HTTP REST API server
│   └── proxy_server.py              # HTTP Proxy with auto-intercept
├── scripts/                         # Installation and management scripts
│   ├── main.sh                      # Unified script manager
│   ├── install/                     # Installation scripts
│   └── servers/                     # Server startup scripts
├── config/                          # Configuration templates
├── tests/                           # Test suite
└── docs/                            # Documentation
```

🚀 Development Commands

```bash
# Development workflow
./scripts/main.sh server http        # Start HTTP server for testing
./scripts/main.sh server test        # Run test suite
python -m pytest tests/              # Run specific tests

# Environment management
./scripts/main.sh utils env list     # List available environments
./scripts/main.sh utils env switch development  # Switch environment

# Installation variants
./scripts/main.sh install core       # Core dependencies only
./scripts/main.sh install ml         # ML dependencies
./scripts/main.sh install dev        # Development dependencies
```

๐Ÿ” Troubleshooting

Common Issues & Solutions
IssueSymptomsSolution
MongoDB ConnectionConnection refused 27017brew services start mongodb-community
ML Model DownloadModel not foundCheck internet connection, restart installation
Python Path IssuesModuleNotFoundError: srcVerify virtual environment activation
Port Already in UseAddress already in use: 8080Kill existing process or use different port
Permission DeniedInstallation failsRun with proper permissions, check directory access
Debug Mode
# Enable debug logging
export LOG_LEVEL=DEBUG
./scripts/main.sh server both

# Check logs
tail -f logs/mcp_server.log
tail -f logs/watchdog.log
Health Checks
# Test MongoDB connection
python3 -c "import pymongo; print(pymongo.MongoClient().admin.command('ping'))"

# Test ML model
python3 -c "from src.core.ml_trigger_system import MLTriggerSystem; print('ML model OK')"

# Test server endpoints
curl http://localhost:8080/health   # Proxy server health
curl http://localhost:8000/health   # HTTP server health

🧪 Testing

```bash
# Run all tests
pytest tests/ -v

# Run specific test categories
pytest tests/unit/ -v              # Unit tests
pytest tests/integration/ -v       # Integration tests

# Test with coverage
pytest tests/ --cov=src --cov-report=html
```

🔧 Advanced Configuration

Environment Variables

```bash
# Core settings
export MCP_ENVIRONMENT=production
export LOG_LEVEL=INFO
export MONGODB_URI=mongodb://localhost:27017

# ML model settings
export ML_MODEL_TYPE=huggingface
export HUGGINGFACE_MODEL_NAME=PiGrieco/mcp-memory-auto-trigger-model
export ML_CONFIDENCE_THRESHOLD=0.7

# Trigger thresholds
export TRIGGER_THRESHOLD=0.15
export SIMILARITY_THRESHOLD=0.3
export MEMORY_THRESHOLD=0.7
```

Custom Configurations

```bash
# Create custom environment
cp config/environments/development.yaml config/environments/custom.yaml
# Edit custom.yaml with your settings
./scripts/main.sh utils env switch custom
```

📈 Performance Tuning

ML Model Optimization

```
# Preload model for faster inference
"PRELOAD_ML_MODEL": "true"

# Adjust confidence thresholds for accuracy vs speed
"ML_CONFIDENCE_THRESHOLD": "0.7"     # Higher = more accurate, slower
"TRIGGER_THRESHOLD": "0.15"          # Lower = more sensitive

# Timeout settings
"FEATURE_EXTRACTION_TIMEOUT": "5.0"  # ML processing timeout
```

Database Optimization

```javascript
// MongoDB indexes for faster queries
db.memories.createIndex({"embedding": "2dsphere"})
db.memories.createIndex({"timestamp": -1})
db.memories.createIndex({"importance": -1})
```
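
The scalar indexes can equally be created from Python with pymongo; a sketch, with the database name taken from the configuration example above. (2dsphere is MongoDB's geospatial index type, so whether it applies to the embedding field depends on how embeddings are stored.)

```python
"""Creating the query indexes from Python (pymongo counterpart of the shell commands above)."""
import pymongo

db = pymongo.MongoClient("mongodb://localhost:27017")["mcp_memory_dev"]

# descending indexes speed up "most recent" and "most important" queries
db.memories.create_index([("timestamp", pymongo.DESCENDING)])
db.memories.create_index([("importance", pymongo.DESCENDING)])
```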

🔒 Security Considerations

  • Database: MongoDB should be secured with authentication in production
  • Network: Restrict access to ports 8000/8080 in production environments
  • Logs: Sensitive information is automatically filtered from logs
  • Model: The ML model is loaded locally; no external API calls are made for inference

🚀 Production Deployment

Docker Deployment

```bash
# Build and run with Docker Compose
docker-compose up -d

# Scale services
docker-compose scale mcp-server=2 proxy-server=2
```

System Service (Linux/macOS)

```bash
# Create systemd service (Linux)
sudo cp deployment/mcp-memory-server.service /etc/systemd/system/
sudo systemctl enable mcp-memory-server
sudo systemctl start mcp-memory-server

# Create launchd service (macOS)
cp deployment/com.mcp.memory-server.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/com.mcp.memory-server.plist
```

๐Ÿ“ License

This project is licensed under the MIT License - see the file for details.


โญ If you find SAM useful, please star this repository! โญ

GitHub stars

Built with โค๏ธ by PiGrieco