
🧠 Second Brain MCP


A personal memory management system that integrates with AI assistants, CLIs, and web apps. Store, search, and recall your thoughts, preferences, relationships, and experiences using semantic search powered by local ML embeddings.

⚡ Quick Start

Just want to get started? → See (5 minutes to working system)

Want step-by-step guide? → See (complete walkthrough)

✨ Features

  • 🤖 MCP Server Integration - Connect with Claude Desktop and other Model Context Protocol clients
  • 💻 CLI Tool - Quick command-line interface for memory management
  • 🌐 REST API - HTTP endpoints for web/mobile app integration
  • 🔍 Semantic Search - Find memories by meaning, not just keywords (using local embeddings)
  • 🧠 AI-Powered Analysis - Extract memories from conversations using Claude
  • 📊 5 Memory Types - Organize by preference, relationship, goal, experience, or fact
  • 🎤 Voice Input - Record and transcribe voice memos (optional)
  • 💾 Dual Storage - SQLite for metadata + vector embeddings for semantic search
  • 🔒 Privacy-First - All data stored locally, no cloud storage
  • 🔐 End-to-End Encryption - Encrypt sensitive memories with AES-256-GCM
  • ✏️ Full CRUD Operations - Create, read, update, and delete memories
  • 🏷️ Tag Management - Browse and filter by tags
  • 🔄 Duplicate Detection - Check for similar memories before adding
  • 📦 Import/Export - Backup and migrate your data easily

📋 Prerequisites

  • Node.js 20+ and Yarn (or npm) - Required for Vitest 3.x
  • Anthropic API key (for conversation analysis)
  • OpenAI API key (optional, only for voice transcription)

🚀 Installation

# Clone the repository
git clone <your-repo-url>
cd second-brain-mcp

# Install dependencies
yarn install

# Create environment file
cp .env.example .env

# Edit .env with your API keys
nano .env

# Build the project
yarn build

⚙️ Configuration

Create a .env file in the project root:

# Required: For AI-powered conversation analysis
ANTHROPIC_API_KEY=your_anthropic_key_here

# Optional: Only needed for voice memo transcription
OPENAI_API_KEY=your_openai_key_here

# Optional: Custom data directory (defaults to ./data)
DATA_DIR=./data

# Optional: API server port (defaults to 3000)
PORT=3000

📖 Usage

1️⃣ MCP Server (For AI Assistants)

Connect this server to Claude Desktop or other MCP clients:

# Start the MCP server
yarn dev

Add to Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "second-brain": {
      "command": "node",
      "args": ["/absolute/path/to/second-brain-mcp/dist/server/index.js"],
      "env": {
        "DATA_DIR": "/absolute/path/to/second-brain-mcp/data",
        "ANTHROPIC_API_KEY": "your-api-key-here"
      }
    }
  }
}

📖 Detailed Setup: See for complete configuration guide.

Available MCP Tools:

  • search_memory - Search your memories semantically
  • get_context_summary - Get overview of preferences/goals/relationships
  • remember_this - Save a single memory
  • update_memory - Update an existing memory by ID
  • delete_memory - Delete a memory by ID (permanent)
  • extract_conversation_memories - Analyze conversation and extract multiple memories
  • bulk_save_memories - Save multiple memories at once

2️⃣ CLI Tool

# Add a memory
yarn cli add "I love hiking on weekends" --type preference --tags hobbies,outdoor --importance 4

# Search memories
yarn cli search "outdoor activities" --limit 5

# List recent memories
yarn cli list --limit 10

# List by type
yarn cli list --type preference

Memory Types:

  • preference - Likes, dislikes, habits, personality traits, work style
  • relationship - People (friends, family, colleagues), their roles
  • goal - Things to achieve, learn, or improve
  • experience - Events, trips, achievements, past experiences
  • fact - Basic info (location, job, education, age, etc.)

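The five types above, together with the CLI flags (`--tags`, `--importance`), suggest a record shape like the following. This is an illustrative sketch only; the field names and the `Memory` interface are assumptions, not the project's actual schema in `src/utils/types.ts`:

```typescript
// Hypothetical memory record, inferred from the CLI flags shown above.
// Field names are illustrative, not the project's actual schema.
type MemoryType = "preference" | "relationship" | "goal" | "experience" | "fact";

interface Memory {
  id: string;
  content: string;
  type: MemoryType;
  tags: string[];
  importance: 1 | 2 | 3 | 4 | 5; // matches the --importance 1..5 CLI flag
  createdAt: string;             // ISO timestamp
}

const example: Memory = {
  id: "mem_001",
  content: "I love hiking on weekends",
  type: "preference",
  tags: ["hobbies", "outdoor"],
  importance: 4,
  createdAt: new Date().toISOString(),
};
```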
Examples:

# Preferences
yarn cli add "I prefer dark mode in all apps" --type preference --tags ui,coding

# Relationships
yarn cli add "My colleague Sarah is a React expert" --type relationship --tags work,coworkers

# Goals
yarn cli add "I want to learn Rust this year" --type goal --tags learning,programming

# Experiences
yarn cli add "Hiked Mount Rainier in July 2024" --type experience --tags travel,hiking

# Facts
yarn cli add "I work as a software engineer in Seattle" --type fact --tags career,location

# Encrypted memory (requires setup first)
yarn cli add "My bank password is..." --type fact --tags sensitive --encrypt

✏️ Editing & Managing Memories

# Edit a memory
yarn cli edit MEMORY_ID --content "Updated content" --tags new,tags

# Delete a memory (with confirmation)
yarn cli delete MEMORY_ID

# Delete without confirmation
yarn cli delete MEMORY_ID --yes

# Check for duplicate content before adding
yarn cli check-duplicates "I love hiking"

🏷️ Tag Management

# List all tags with counts
yarn cli tags

# View all memories with a specific tag
yarn cli tag hiking

# Filter by type and tags
yarn cli list --type preference  # List preferences

📦 Import/Export

# Export all memories to JSON
yarn cli export                          # Auto-named file
yarn cli export my-backup.json          # Custom filename

# Import memories from JSON
yarn cli import my-backup.json

# Import with duplicate detection
yarn cli import my-backup.json --skip-duplicates

🎤 Voice Memos

Record voice memos that are automatically transcribed and categorized:

# Record a voice memo
yarn cli voice

# This will:
# 1. Record audio from your microphone (press Ctrl+C to stop)
# 2. Transcribe using OpenAI Whisper
# 3. Analyze and suggest categorization using GPT-4
# 4. Let you review/edit before saving

Setup Requirements:

| Platform | Tool | Installation |
|----------|---------|------------------------------------|
| macOS | sox | `brew install sox` |
| Linux | arecord | `sudo apt-get install alsa-utils` |
| Windows | ffmpeg | Download FFmpeg |

Configuration:

  • Requires OPENAI_API_KEY in .env file
  • Uses Whisper API for transcription (~$0.006 per minute)
  • Uses GPT-4 for categorization

Example Session:

$ yarn cli voice

🎤 Recording... (Press Ctrl+C to stop)

[speak your memory]
^C
✅ Recording stopped

🔄 Transcribing audio...
📝 Transcribed: "I went hiking with Sarah at Mount Rainier yesterday..."

🧠 Analyzing memory...
💡 Suggested categorization:
   Content: I hiked Mount Rainier with Sarah
   Type: experience
   Tags: outdoor, hiking, friends
   Importance: 3/5

? Save memory with these settings? Yes

✅ Voice memo saved!

🔐 Encryption

Protect sensitive memories with end-to-end encryption:

# Setup encryption (one-time)
yarn cli setup-encryption

# This generates and saves a secure encryption key
# ⚠️ SAVE THE KEY - you'll need it to decrypt later!

# Check encryption status
yarn cli encryption-status

# Add encrypted memory
yarn cli add "SSN: 123-45-6789" --type fact --tags sensitive --encrypt

# Encrypted memories are marked with 🔒 in lists
yarn cli list

Important Notes:

  • Encrypted memories cannot be searched semantically (content is encrypted)
  • You can still filter by type and tags
  • Encryption key is stored in ./data/encrypted/.key (or ENCRYPTION_KEY env var)
  • Keep your key safe - lost keys = lost memories
  • Uses AES-256-GCM encryption (industry standard)
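The AES-256-GCM scheme named above can be sketched with Node's built-in `crypto` module. This is a minimal illustration of the cipher mode, not the project's actual code in `src/storage/encryption.ts`; the payload layout (IV, auth tag, and ciphertext joined with `.`) is an assumption:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Sketch of AES-256-GCM as described above; payload layout is illustrative.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV and auth tag alongside the ciphertext so decryption is self-contained
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

function decrypt(payload: string, key: Buffer): string {
  const [iv, tag, ciphertext] = payload.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates the data before releasing plaintext
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM's auth tag is why a wrong key fails loudly instead of returning garbage, and why a lost key means unrecoverable memories.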

3️⃣ REST API

Start the API server:

yarn api

The API runs on http://localhost:3000 (or your configured PORT).

API Authentication (Optional):

The API supports optional authentication. Set API_KEY in .env to enable:

# Generate a secure API key
openssl rand -hex 32

# Add to .env
API_KEY=your-generated-key-here

# Use in requests
curl -H "X-API-Key: your-key" http://localhost:3000/api/memories
# Or as query parameter
curl "http://localhost:3000/api/memories?api_key=your-key"

Endpoints:

# Health check (no auth required)
GET /health

# Get single memory
GET /api/memories/:id

# Search memories
GET /api/memories/search?q=hiking&limit=5&type=preference

# List all memories (with pagination)
GET /api/memories?page=1&limit=20&type=goal
# Returns: { count, total, page, totalPages, hasMore, memories }

# Add a single memory
POST /api/memories
{
  "content": "I love TypeScript",
  "type": "preference",
  "tags": ["programming", "languages"],
  "importance": 4,
  "encrypt": false  // optional: set to true for encrypted memory
}

# Update a memory
PUT /api/memories/:id
{
  "content": "Updated content",
  "type": "preference",
  "tags": ["new", "tags"],
  "importance": 5
}

# Delete a memory
DELETE /api/memories/:id

# Get all tags
GET /api/tags

# Get memories by tag
GET /api/tags/:tag

# Check for duplicates
POST /api/memories/check-duplicates
{
  "content": "Content to check",
  "threshold": 0.85  // optional, default 0.85
}

# Export all memories
GET /api/export

# Import memories
POST /api/import
{
  "memories": [...],
  "skipDuplicates": true  // optional
}

# Extract memories from conversation
POST /api/memories/extract
{
  "conversation": "I went hiking last weekend with Sarah..."
}

# Bulk save memories
POST /api/memories/bulk
{
  "memories": [
    { "content": "...", "type": "preference", "tags": [...], "importance": 3 },
    { "content": "...", "type": "goal", "tags": [...], "importance": 4 }
  ]
}

# Get context summary
GET /api/context/all
GET /api/context/preferences
GET /api/context/goals
GET /api/context/relationships
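From another app, the endpoints above can be called with plain `fetch`. The sketch below assumes the default port and the search endpoint's query parameters as documented; the helper names are hypothetical:

```typescript
// Hypothetical minimal client for the REST API described above.
const BASE = "http://localhost:3000";

function buildSearchUrl(q: string, limit = 5, type?: string): string {
  const params = new URLSearchParams({ q, limit: String(limit) });
  if (type) params.set("type", type);
  return `${BASE}/api/memories/search?${params.toString()}`;
}

async function searchMemories(q: string, apiKey?: string) {
  const res = await fetch(buildSearchUrl(q), {
    headers: apiKey ? { "X-API-Key": apiKey } : {}, // only if auth is enabled
  });
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}
```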

🏗️ Architecture

second-brain-mcp/
├── src/
│   ├── server/          # MCP server implementation
│   │   ├── index.ts     # Server setup
│   │   └── tools.ts     # MCP tool handlers
│   ├── api/             # REST API
│   │   └── server.ts    # Express server
│   ├── ingest/          # Input methods
│   │   ├── cli-commands.ts  # Memory manager
│   │   └── voice.ts     # Voice recording (WIP)
│   ├── storage/         # Data persistence
│   │   ├── metadata-store.ts   # SQLite database
│   │   ├── vector-store.ts     # Vector embeddings
│   │   └── encryption.ts       # Encryption (TODO)
│   └── utils/           # Shared utilities
│       ├── types.ts     # TypeScript types
│       ├── embeddings.ts        # Embedding generation
│       ├── local-embeddings.ts  # Local ML model
│       └── conversation-analyzer.ts  # AI analysis
├── data/                # Local data storage (gitignored)
│   ├── metadata.db      # SQLite database
│   └── vectors.json     # Vector embeddings
├── cli.ts               # CLI entry point
└── package.json

🔧 Development

# Build TypeScript
yarn build

# Watch mode for MCP server
yarn dev

# Run CLI
yarn cli <command>

# Run API server
yarn api

🧪 Testing

The project includes a comprehensive test suite with 60+ tests:

# Run all tests
yarn test

# Run tests in watch mode
yarn test:watch

# Generate coverage report
yarn test:coverage

# Run tests with UI
yarn test:ui

Test Coverage:

  • ✅ Encryption (AES-256-GCM, key management, decryption)
  • ✅ Metadata Store (SQLite operations, CRUD, filtering)
  • ✅ Memory Manager (add, search, list, encryption integration)
  • ✅ Type definitions and interfaces
  • ✅ Database migrations
  • ✅ Error handling

Testing Stack:

  • Vitest - Fast unit test framework
  • @vitest/ui - Interactive test UI
  • @vitest/coverage-v8 - Code coverage reports

All tests use isolated temporary databases to avoid conflicts.

Continuous Integration:

  • Tests run automatically on every push and PR via GitHub Actions
  • Tested on Node.js 20.x and 22.x
  • TypeScript compilation validated before merge

🤝 How It Works

  1. Storage: Memories are stored in SQLite (structured data) and a JSON vector store (embeddings)
  2. Embeddings: Uses Xenova/all-MiniLM-L6-v2 model locally (no API costs!)
  3. Search: Computes cosine similarity between query embedding and stored embeddings
  4. AI Analysis: Claude extracts structured memories from natural conversation
  5. MCP Integration: Exposes tools that AI assistants can call directly
  6. Caching: LRU cache for embeddings (10x faster repeated queries)
  7. Validation: Input validation on all operations
  8. Retry Logic: Auto-retry for API failures with exponential backoff
  9. Pagination: API supports paginated responses for large datasets
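The cosine-similarity step (point 3 above) can be sketched in a few lines. Memories whose embedding scores highest against the query embedding rank first; this is the standard formula, not the project's exact implementation:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```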

🎯 Use Cases

  • Personal Knowledge Base - Remember preferences, facts, and experiences
  • AI Assistant Context - Give Claude/GPT long-term memory about you
  • Daily Journaling - Quick CLI to log thoughts and events
  • Relationship Management - Track details about people you know
  • Goal Tracking - Store and search your aspirations
  • Web App Backend - Use the API for custom UIs

📝 Notes

  • First embedding generation downloads a ~50MB ML model (one-time)
  • All data stored locally in ./data/ directory
  • Semantic search works best with 100+ memories
  • Voice feature requires platform-specific audio tools (sox/arecord/ffmpeg)
  • Optimized for up to 10,000 memories
  • Input validation prevents invalid data
  • Automatic retry on API failures
  • Embedding cache speeds up repeated queries

📚 Documentation

  • ⭐ Essential guide - setup, MCP, ChatGPT, CLI
  • Version history
  • Security policy
  • For developers

🚧 Roadmap

✅ Completed (v1.0)

  • Encryption for sensitive memories (AES-256-GCM)
  • Cross-platform voice recording with transcription
  • Comprehensive test suite with 83+ tests
  • CI/CD pipeline with automated testing (GitHub Actions)
  • Memory editing and deletion (update/delete operations)
  • Deduplication detection (similarity-based checking)
  • Tag browsing and management (list tags, filter by tag)
  • Import/export functionality (JSON backup/restore)
  • Input validation for all operations
  • Retry logic for API failures
  • API pagination support
  • Embedding cache for performance
  • MCP configuration and setup guide

🔮 Future (v2.0+)

  • Memory relationships/linking (connect related memories)
  • Web UI dashboard with React/Next.js
  • Searchable encrypted memories (homomorphic encryption)
  • Local Whisper transcription (no API costs)
  • Advanced search filters (date ranges, importance levels)
  • Memory templates and quick-add shortcuts
  • Mobile app (React Native)
  • PostgreSQL + pgvector for 50K+ memories
  • Real-time sync across devices
  • Collaboration features

📄 License

MIT

🙋 Support

For issues or questions, please open a GitHub issue or contact the maintainer.


Built with ❤️ using TypeScript, MCP, and local ML