neverinfamous/memory-journal-mcp
The Memory Journal MCP Server is a Model Context Protocol server designed for developers to manage project-related journaling, capturing technical details, GitHub issues, and personal insights.
Memory Journal MCP Server
Last Updated January 8, 2026 - v3.0.0
Project context management for AI-assisted development - Bridge the gap between fragmented AI threads with persistent knowledge graphs and intelligent context recall
🎯 Solve the AI Context Problem: When working with AI across multiple threads and sessions, context is lost. Memory Journal maintains a persistent, searchable record of your project work, decisions, and progress - making every AI conversation informed by your complete project history.
GitHub • Wiki • Changelog • Release Article
🚀 Quick Deploy:
- npm Package - npm install -g memory-journal-mcp
- Docker Hub - Alpine-based with full semantic search
✨ What's New in v3.0.0 (December 28, 2025)
🚀 Complete TypeScript Rewrite
Memory Journal v3.0 is a ground-up rewrite in TypeScript, delivering:
- Pure JS Stack - No native compilation required (sql.js + vectra + @xenova/transformers)
- Cross-Platform Portability - Works on Windows, macOS, and Linux without binary dependencies
- Strict Type Safety - Zero TypeScript errors, 100% strict mode compliance
- Faster Startup - Lazy ML loading with instant cold starts
- MCP 2025-11-25 Compliance - Full spec compliance with behavioral annotations
🗄️ New: Backup & Restore Tools
Never lose your journal data again:
| Tool | Description |
|---|---|
| backup_journal | Create timestamped database backups |
| list_backups | List all available backup files |
| restore_backup | Restore from any backup (with auto-backup before restore) |
// Create a backup before major changes
backup_journal({ name: "before_migration" })
// → { success: true, filename: "before_migration.db", sizeBytes: 524288 }
// List available backups
list_backups()
// → { backups: [...], total: 3, backupsDirectory: "~/.memory-journal/backups" }
// Restore from backup (requires confirmation)
restore_backup({ filename: "before_migration.db", confirm: true })
// → { success: true, previousEntryCount: 50, newEntryCount: 42 }
📊 New: Server Health Resource
Get comprehensive server diagnostics via memory://health:
{
"database": {
"path": "~/.memory-journal/memory_journal.db",
"sizeBytes": 524288,
"entryCount": 150,
"deletedEntryCount": 5,
"relationshipCount": 42,
"tagCount": 28
},
"backups": {
"directory": "~/.memory-journal/backups",
"count": 3,
"lastBackup": { "filename": "...", "createdAt": "...", "sizeBytes": 524288 }
},
"vectorIndex": {
"available": true,
"indexedEntries": 150,
"modelName": "all-MiniLM-L6-v2"
},
"toolFilter": {
"active": false,
"enabledCount": 27,
"totalCount": 27
},
"timestamp": "2025-12-28T05:47:00Z"
}
📈 Current Capabilities
- 27 MCP tools - Complete development workflow + backup/restore
- 14 workflow prompts - Standups, retrospectives, PR workflows, CI/CD failure analysis
- 14 MCP resources - Including the new memory://health diagnostics
- GitHub Integration - Projects, Issues, Pull Requests, Actions with auto-linking
- 8 tool groups - core, search, analytics, relationships, export, admin, github, backup
- Knowledge graphs - 5 relationship types, Mermaid visualization
- Semantic search - AI-powered conceptual search via @xenova/transformers
🎯 Why Memory Journal?
The Fragmented AI Context Problem
When managing large projects with AI assistance, you face a critical challenge:
- Thread Amnesia - Each new AI conversation starts from zero, unaware of previous work
- Lost Context - Decisions, implementations, and learnings scattered across disconnected threads
- Repeated Work - AI suggests solutions you've already tried or abandoned
- Context Overload - Manually copying project history into every new conversation
The Solution: Persistent Project Memory
Memory Journal acts as your project's long-term memory, bridging the gap between fragmented AI threads:
For Developers:
- 📝 Automatic Context Capture - Git commits, branches, GitHub issues, PRs, and project state captured with every entry
- 🔗 Knowledge Graph - Link related work (specs → implementations → tests → PRs) to build a connected history
- 🔍 Intelligent Search - Find past decisions, solutions, and context across your entire project timeline
- 📊 Project Analytics - Track progress from issues through PRs, generate reports for standups/retrospectives
For AI-Assisted Work:
- 💡 AI can query your complete project history in any conversation
- 🧠 Semantic search finds conceptually related work, even without exact keywords
- 📖 Context bundles provide AI with comprehensive project state instantly
- 🔗 Relationship visualization shows how different pieces of work connect
🚀 Quick Start
Option 1: npm (Recommended)
Step 1: Install the package
npm install -g memory-journal-mcp
Step 2: Add to ~/.cursor/mcp.json
{
"mcpServers": {
"memory-journal-mcp": {
"command": "memory-journal-mcp"
}
}
}
Step 3: Restart Cursor
Restart Cursor or your MCP client, then start journaling!
Option 2: npx (No Installation)
{
"mcpServers": {
"memory-journal-mcp": {
"command": "npx",
"args": ["-y", "memory-journal-mcp"]
}
}
}
Option 3: From Source
git clone https://github.com/neverinfamous/memory-journal-mcp.git
cd memory-journal-mcp
npm install
npm run build
{
"mcpServers": {
"memory-journal-mcp": {
"command": "node",
"args": ["dist/cli.js"]
}
}
}
GitHub Integration Configuration
The GitHub tools (get_github_issues, get_github_prs, etc.) can auto-detect the repository from your git context. However, MCP clients may run the server from a different directory than your project.
To enable GitHub auto-detection, add GITHUB_REPO_PATH to your config:
{
"mcpServers": {
"memory-journal-mcp": {
"command": "memory-journal-mcp",
"env": {
"GITHUB_TOKEN": "ghp_your_token_here",
"GITHUB_REPO_PATH": "/path/to/your/git/repo"
}
}
}
}
| Environment Variable | Description |
|---|---|
| GITHUB_TOKEN | GitHub personal access token for API access |
| GITHUB_REPO_PATH | Path to the git repository for auto-detecting owner/repo |
Without GITHUB_REPO_PATH: You'll need to explicitly provide owner and repo parameters when calling GitHub tools.
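For example, a call might pass the repository explicitly. The owner and repo parameters come from the tool descriptions above; check the tool's input schema for the exact field names:

// Without GITHUB_REPO_PATH, provide the repository explicitly
get_github_issues({ owner: "neverinfamous", repo: "memory-journal-mcp" })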
Cursor Known Issues
Listing MCP Resources: If the agent has trouble listing resources, instruct it to call list_mcp_resources() without specifying a server parameter. Using server="memory-journal-mcp" may return nothing (Cursor bug).
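For example, instructing the agent to call:

// Works: list resources without a server filter
list_mcp_resources()

// May return nothing due to the Cursor bug noted above
list_mcp_resources({ server: "memory-journal-mcp" })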
📋 Core Capabilities
🛠️ 27 MCP Tools (8 Groups)
| Group | Tools | Description |
|---|---|---|
| core | 6 | Entry CRUD, tags, test |
| search | 4 | Text search, date range, semantic, vector stats |
| analytics | 2 | Statistics, cross-project insights |
| relationships | 2 | Link entries, visualize graphs |
| export | 1 | JSON/Markdown export |
| admin | 4 | Update, delete, rebuild/add to vector index |
| github | 5 | Issues, PRs, context integration |
| backup | 3 | NEW: Backup, list, restore |
🎯 14 Workflow Prompts
- find-related - Discover connected entries via semantic similarity
- prepare-standup - Daily standup summaries
- prepare-retro - Sprint retrospectives
- weekly-digest - Day-by-day weekly summaries
- analyze-period - Deep period analysis with insights
- goal-tracker - Milestone and achievement tracking
- get-context-bundle - Project context with Git/GitHub
- pr-summary - Pull request journal activity summary
- code-review-prep - Comprehensive PR review preparation
- pr-retrospective - Completed PR analysis with learnings
- actions-failure-digest - CI/CD failure analysis
📡 14 Resources
- memory://recent - 10 most recent entries
- memory://significant - Significant milestones and breakthroughs
- memory://graph/recent - Live Mermaid diagram of recent relationships
- memory://team/recent - Recent team-shared entries
- memory://health - NEW: Server health & diagnostics
- memory://projects/{number}/timeline - Project activity timeline
- memory://issues/{issue_number}/entries - Entries linked to an issue
- memory://prs/{pr_number}/entries - Entries linked to a PR
- memory://prs/{pr_number}/timeline - Combined PR + journal timeline
- memory://graph/actions - CI/CD narrative graph
- memory://actions/recent - Recent workflow runs
- memory://tags - All tags with usage counts
- memory://statistics - Journal statistics
🔧 Configuration
GitHub Integration (Optional)
export GITHUB_TOKEN="your_token" # For Projects/Issues/PRs
export GITHUB_ORG_TOKEN="your_org_token" # Optional: org projects
export DEFAULT_ORG="your-org-name" # Optional: default org
Scopes: repo, project, read:org (org only)
Tool Filtering (Optional)
Control which tools are exposed using MEMORY_JOURNAL_MCP_TOOL_FILTER:
export MEMORY_JOURNAL_MCP_TOOL_FILTER="-analytics,-github"
Filter Syntax:
- -group - Disable all tools in a group
- -tool - Disable a specific tool
- +tool - Re-enable a tool after a group disable
- Meta-groups: starter, essential, full, readonly
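For example, to disable the entire github group while keeping one tool enabled (this specific combination is an assumption based on the syntax above):

export MEMORY_JOURNAL_MCP_TOOL_FILTER="-github,+get_github_issues"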
Example Configurations:
{
"mcpServers": {
"memory-journal-mcp": {
"command": "memory-journal-mcp",
"env": {
"MEMORY_JOURNAL_MCP_TOOL_FILTER": "starter",
"GITHUB_TOKEN": "your_token"
}
}
}
}
| Configuration | Filter String | Tools |
|---|---|---|
| Starter | starter | ~10 |
| Essential | essential | ~6 |
| Full (default) | full | 27 |
| Read-only | readonly | ~20 |
Complete tool filtering guide →
📖 Usage Examples
Create an Entry with GitHub Context
create_entry({
content: "Completed Phase 1 of GitHub Projects integration!",
entry_type: "technical_achievement",
tags: ["github-projects", "milestone"],
project_number: 1,
significance_type: "technical_breakthrough"
})
Create and Manage Backups
// Before major refactoring
backup_journal({ name: "pre_refactor" })
// Check available backups
list_backups()
// Restore if needed (creates auto-backup first)
restore_backup({ filename: "pre_refactor.db", confirm: true })
Check Server Health
// Read the memory://health resource from your MCP client
// Returns: database stats, backup info, vector index status, tool filter config
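If you connect programmatically, one way to fetch it is with the MCP TypeScript SDK. This is a minimal sketch, not the project's documented workflow; most users simply ask their MCP client for the resource:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the server over stdio and read memory://health
const transport = new StdioClientTransport({ command: 'memory-journal-mcp' });
const client = new Client({ name: 'health-check', version: '1.0.0' });
await client.connect(transport);

const health = await client.readResource({ uri: 'memory://health' });
const first = health.contents[0];
if (first && 'text' in first) {
  console.log(first.text); // JSON health document, as shown earlier
}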
Search and Analyze
// Full-text search
search_entries({ query: "performance optimization", limit: 5 })
// Semantic search for concepts
semantic_search({ query: "startup time improvements", limit: 3 })
// Get analytics
get_statistics({ group_by: "week" })
Generate Visual Maps
// Visualize entry relationships
visualize_relationships({
entry_id: 55,
depth: 2
})
🏗️ Architecture
┌─────────────────────────────────────────────────────────────┐
│ MCP Server Layer (TypeScript) │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────┐ │
│ │ Tools (27) │ │ Resources (14) │ │ Prompts (14)│ │
│ │ with Annotations│ │ with Annotations│ │ │ │
│ └─────────────────┘ └─────────────────┘ └─────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Pure JS Stack (No Native Dependencies) │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────┐ │
│ │ sql.js │ │ vectra │ │ transformers│ │
│ │ (SQLite) │ │ (Vector Index) │ │ (Embeddings)│ │
│ └─────────────────┘ └─────────────────┘ └─────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ SQLite Database with Hybrid Search │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ entries + tags + relationships + embeddings + backups ││
│ └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘
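The storage layer keeps the whole database in memory via sql.js and syncs it back to disk. A rough sketch of that pattern follows; it illustrates the idea only and is not the actual implementation (the real database lives at ~/.memory-journal/memory_journal.db):

import initSqlJs from 'sql.js';
import { promises as fs } from 'node:fs';

// The database lives in memory; "disk sync" means exporting its bytes to a file
const SQL = await initSqlJs();
const dbPath = 'memory_journal.db';

const existing = await fs.readFile(dbPath).catch(() => null);
const db = existing ? new SQL.Database(existing) : new SQL.Database();

db.run('CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, content TEXT)');
db.run('INSERT INTO entries (content) VALUES (?)', ['First journal entry']);

// Persist the in-memory database back to disk
await fs.writeFile(dbPath, Buffer.from(db.export()));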
🔧 Technical Highlights
Performance & Portability
- TypeScript + Pure JS Stack - No native compilation, works everywhere
- sql.js - SQLite in pure JavaScript with disk sync
- vectra - Vector similarity search without native dependencies
- @xenova/transformers - ML embeddings in JavaScript
- Lazy loading - ML models load on first use, not startup
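A minimal sketch of the lazy-loading pattern with @xenova/transformers (assumed internals; the actual code and model identifier may differ from the "all-MiniLM-L6-v2" name reported by memory://health):

import { pipeline, type FeatureExtractionPipeline } from '@xenova/transformers';

// Create the embedding pipeline on first use rather than at server startup
let embedder: FeatureExtractionPipeline | null = null;

async function getEmbedder(): Promise<FeatureExtractionPipeline> {
  if (embedder === null) {
    embedder = (await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2')) as FeatureExtractionPipeline;
  }
  return embedder;
}

async function embed(text: string): Promise<number[]> {
  const model = await getEmbedder();
  const output = await model(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}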
Security
- Local-first - All data stored locally, no external API calls (except optional GitHub)
- Input validation - Zod schemas, content size limits, SQL injection prevention
- Path traversal protection - Backup filenames validated
- MCP 2025-11-25 annotations - Behavioral hints (readOnlyHint, destructiveHint, etc.)
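To make the validation and annotation items concrete, here is an illustrative sketch (not the server's actual code) of a backup-filename schema that blocks path traversal, plus the behavioral hints a destructive tool like restore_backup might declare:

import { z } from 'zod';

// Backup filenames must be plain .db names; '..' and path separators are rejected
const backupFilename = z
  .string()
  .max(255)
  .regex(/^[A-Za-z0-9._-]+\.db$/, 'expected a plain .db filename')
  .refine((name) => !name.includes('..'), 'path traversal is not allowed');

backupFilename.parse('before_migration.db'); // ok
// backupFilename.parse('../../etc/passwd'); // throws ZodError

// MCP behavioral hints for a destructive, non-idempotent tool
const restoreBackupAnnotations = {
  readOnlyHint: false,
  destructiveHint: true,
  idempotentHint: false,
};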
Data & Privacy
- Single SQLite file - You own your data
- Portable - Move your .db file anywhere
- Soft delete - Entries can be recovered
- Auto-backup on restore - Never lose data accidentally
📚 Documentation & Resources
- GitHub Wiki - Complete documentation
- Docker Hub - Container images
- npm Package - Node.js distribution
- Issues - Bug reports & feature requests
📄 License
MIT License - See the LICENSE file for details.
🤝 Contributing
Built by developers, for developers. PRs welcome! See the contributing guidelines in the repository.
Migrating from v2.x? Your existing database is fully compatible. The TypeScript version uses the same schema and data format.