stevenjjobson/mcp-chromadb-memory
The MCP ChromaDB Memory Server is an AI-driven server that utilizes the Model Context Protocol to manage intelligent memory storage and retrieval using ChromaDB.
MCP ChromaDB Memory Server - Cognitive State Management Platform
A comprehensive Cognitive State Management Platform that transforms how developers preserve context, manage knowledge, and maintain continuity across projects, sessions, and teams.
Features • Platform Vision • Installation • Usage • API • Architecture • Contributing
Overview
The MCP ChromaDB Memory Server has evolved from a simple memory storage tool into a comprehensive Cognitive State Management Platform with revolutionary Dual Vault Architecture.
Key Innovation: Dual Vault Architecture
Maintain two separate but connected knowledge bases:
- Core Vault: Your personal "second brain" that grows with every project
- Project Vault: Clean, isolated context for each specific project
Never lose valuable insights again - learn once, apply everywhere!
Platform Vision
This project implements a complete cognitive platform that:
- Preserves Context: Never lose your mental state when switching tasks or devices
- Learns from Usage: Automatically extracts patterns and insights from development sessions
- Scales Intelligently: Hierarchical memory system optimized for performance
- Integrates Deeply: Works seamlessly with your existing development workflow
See the project documentation for the detailed vision.
Platform Capabilities
Headline Features
- Dual Vault Architecture - Separate core knowledge from project contexts while maintaining connections
- 60x Performance Boost - PostgreSQL hybrid storage eliminates ChromaDB throttling
- Intelligent Categorization - Automatically routes memories to the appropriate vault
- Cross-Vault Search - Query both vaults with weighted, relevant results
- Memory Promotion - Elevate project insights to permanent core knowledge
Current Features
- Autonomous Storage - AI-assessed importance determines what gets stored
- Intelligent Retrieval - Multi-factor scoring combines semantic similarity, recency, importance, and access frequency
- Context-Aware - Supports different memory contexts (general, user preferences, critical tasks, notes)
- Smart Scoring - Retrieval uses weighted scoring: Semantic (40%), Recency (30%), Importance (20%), Frequency (10%)
- Exact Search - Fast string matching with keyword indexing for precise lookups
- Hybrid Search - Combines exact and semantic search with configurable weights
- Token Optimization - Smart compression (50-90% reduction) while preserving important content
- Access Pattern Analysis - Track hot/warm/cold memories with tier recommendations
- Obsidian Integration - Read, write, and search notes in your Obsidian vault with semantic search
- Session Logging - Automatically log Claude Code conversations to Obsidian with summaries and code highlights
- Template System - Import and manage documentation templates from webhooks with Handlebars support
- Hierarchical Vault Structure - Universal Developer Documentation System with automated folder generation and hooks
- Health Monitoring - Real-time system health checks with visual dashboard and startup verification
- Vault Index - Comprehensive vault statistics and navigation system with automatic updates
- Hierarchical Memory System - Three-tier architecture (Working, Session, Long-term) with automatic migration
- Vault Management - Multi-project support with instant context switching
- State Capture - Save and restore complete working context across devices
- Code Intelligence - Automatic codebase indexing with symbol tracking and relationships
- Code-Aware Search - Stream-based symbol search, find implementations and patterns instantly
- Code Pattern Recognition - Detect and learn from coding patterns, suggest improvements
- Streaming Responses - Fast, incremental results optimized for Claude Code and large codebases
- Hybrid Storage - PostgreSQL + ChromaDB for optimal performance (644 symbols/second, 60x faster than ChromaDB alone)
- Dual-Write Migration - Safe migration with writes to both databases, configurable read ratios
- No Throttling - Bulk operations complete in <1s vs 60s+ with ChromaDB alone
Platform Implementations
- CoachNTT - Specialized conversational AI implementation with voice synthesis and VSCode integration
- AI-powered text-to-speech with ElevenLabs
- Rich VSCode extension with audio controls
- Conversation-aware memory scoring
- See the CoachNTT documentation for details
Platform Enhancements (Coming Soon)
- Advanced Pattern Recognition - Deep learning from development patterns across projects
- Memory Consolidation - Intelligent deduplication and memory merging
- Git Integration - Link memories to commits, branches, and pull requests
Requirements
- Node.js 20+
- Docker & Docker Compose
- PostgreSQL 16+ with pgvector extension (included in docker-compose)
- OpenAI API key (for embeddings)
- 4GB RAM minimum (increased for PostgreSQL)
- Windows/macOS/Linux
Quick References
Get Started with Dual Vaults
- Set up dual vaults in 5 minutes
- Migrate from single vault
Core Documentation
- Learn how to effectively use the memory system
- PostgreSQL + ChromaDB hybrid architecture
- Code-aware features and symbol indexing
- Set up isolated development environment
- Current progress and roadmap
- Conversational AI with voice synthesis
New User Guides
- Complete guide for Claude Code & Desktop with WSL/Windows paths
- Essential commands and tips on one page
- Advanced techniques and productivity patterns
Quick Start
Using Docker Compose (Recommended)
1. Clone the repository
   git clone https://github.com/stevenjjobson/mcp-chromadb-memory.git
   cd mcp-chromadb-memory
2. Set up environment
   cp .env.example .env  # Edit .env and add your OpenAI API key
3. Start services
   # For Claude Desktop - start ChromaDB and PostgreSQL (both required)
   docker-compose up -d coachntt-chromadb coachntt-postgres
   # Or use the convenience script (Windows)
   .\start-chromadb.ps1
   Note: Claude Desktop creates its own MCP container automatically. Both ChromaDB and PostgreSQL are now required for the hybrid storage architecture.
4. Verify installation
   docker-compose logs -f coachntt-chromadb
   Note: The MCP server container will exit immediately when run standalone. This is normal behavior - MCP servers communicate via stdio and need a client to connect. Use the Claude Desktop configuration below to properly connect to the server.
Dual Vault Architecture
Transform Your Development Workflow
The Dual Vault Architecture revolutionizes how you manage knowledge across projects:
┌──────────────────────┐         ┌──────────────────────┐
│      Core Vault      │         │    Project Vault     │
│     (Your Brain)     │◄───────►│    (Current Work)    │
├──────────────────────┤         ├──────────────────────┤
│ • Best Practices     │         │ • Project Decisions  │
│ • Code Patterns      │         │ • Local Config       │
│ • Personal Prefs     │         │ • Client Context     │
│ • Learned Wisdom     │         │ • Session Logs       │
└──────────────────────┘         └──────────────────────┘
           ▲                                ▲
           └───────────────┬────────────────┘
                           │
                     Smart Search
                    Categorization
                   Memory Promotion
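As a rough illustration of the routing idea, the sketch below shows how a memory might be categorized into the core or project vault based on its context. The type names and heuristics are illustrative assumptions, not the actual implementation.

```typescript
// Hypothetical sketch of dual-vault routing; names and heuristics are assumptions.
type VaultTarget = 'core' | 'project';

interface MemoryInput {
  content: string;
  context?: string; // e.g. 'user_preference', 'task_critical', 'general'
}

// Route reusable, project-agnostic knowledge to the core vault and
// project-specific context to the project vault.
function categorizeMemory(memory: MemoryInput): VaultTarget {
  const coreContexts = ['user_preference', 'code_pattern', 'best_practice'];
  if (memory.context && coreContexts.includes(memory.context)) {
    return 'core';
  }
  // Default: keep project decisions, configs, and session logs local.
  return 'project';
}

console.log(categorizeMemory({ content: 'Prefer dark mode', context: 'user_preference' })); // 'core'
console.log(categorizeMemory({ content: 'Client wants Postgres 16', context: 'task_critical' })); // 'project'
```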
Quick Setup (2 minutes)
1. Enable dual vaults in .env.PRODUCTION:
   VAULT_MODE=dual
   CORE_VAULT_PATH=C:/Users/YourName/Obsidian/YourVault
   PROJECT_VAULT_PATH=./vault
2. Update Claude Desktop config to mount both vaults
3. Start using - memories automatically go to the right vault!
See the project documentation for the complete setup.
Local Development Setup
1. Install dependencies
   npm install
2. Start required services
   # Both ChromaDB and PostgreSQL are required
   docker-compose up -d coachntt-chromadb coachntt-postgres
3. Build and run
   npm run build
   npm run dev
Quick Start with WSL
Using the Startup Script (Recommended)
For WSL users, we provide a comprehensive startup script that ensures all services are running correctly:
# Make the script executable (first time only)
chmod +x start-mcp-platform.sh
# Run the startup script
./start-mcp-platform.sh
The script will:
- Verify Docker is running
- Check ChromaDB status and start if needed
- Validate environment configuration
- Build TypeScript if needed
- Run health checks
- Display a visual dashboard
- Optionally launch Claude Desktop when ready
See the project documentation for detailed information.
Configuration
Environment Variables
Create a .env file in the project root:
# ChromaDB Configuration
CHROMA_HOST=coachntt-chromadb # Use 'localhost' for local development
CHROMA_PORT=8000
# OpenAI Configuration (required for embeddings)
# API key is stored securely in Docker secrets - see Security section below
# Obsidian Integration (optional)
OBSIDIAN_VAULT_PATH=/path/to/your/vault
# Memory Configuration
MEMORY_IMPORTANCE_THRESHOLD=0.7 # Minimum importance score to store (0-1)
MEMORY_COLLECTION_NAME=coachntt_memories
MAX_MEMORY_RESULTS=10
# Server Configuration
MCP_SERVER_NAME=coachntt-cognitive-server
MCP_SERVER_VERSION=1.0.0
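For orientation, here is a minimal sketch of how these variables might be read into a typed configuration object. The interface and defaults shown are assumptions for illustration, not the contents of the project's config.ts.

```typescript
// Illustrative config loader for the environment variables listed above (assumed shape).
interface MemoryConfig {
  chromaHost: string;
  chromaPort: number;
  importanceThreshold: number;
  collectionName: string;
  maxResults: number;
}

function loadConfig(env: NodeJS.ProcessEnv = process.env): MemoryConfig {
  return {
    chromaHost: env.CHROMA_HOST ?? 'localhost',
    chromaPort: Number(env.CHROMA_PORT ?? 8000),
    importanceThreshold: Number(env.MEMORY_IMPORTANCE_THRESHOLD ?? 0.7),
    collectionName: env.MEMORY_COLLECTION_NAME ?? 'coachntt_memories',
    maxResults: Number(env.MAX_MEMORY_RESULTS ?? 10),
  };
}

console.log(loadConfig());
```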
Claude Desktop Integration
1. Locate the configuration file:
   - Windows: %APPDATA%\Claude\claude_desktop_config.json
   - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
   - Linux: ~/.config/Claude/claude_desktop_config.json
2. Add the MCP server configuration:
{
"mcpServers": {
"memory": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--network", "mcp-chromadb-memory_coachntt-platform-network",
"-v", "C:/Users/Steve/Dockers/mcp-chromadb-memory/vault:/vault:rw",
"-e", "DOCKER_CONTAINER=true",
"-e", "CHROMA_HOST=coachntt-chromadb",
"-e", "CHROMA_PORT=8000",
"-e", "POSTGRES_HOST=coachntt-postgres",
"-e", "POSTGRES_PORT=5432",
"-e", "POSTGRES_USER=coachntt_user",
"-e", "POSTGRES_PASSWORD=coachntt_pass",
"-e", "POSTGRES_DATABASE=coachntt_cognitive_db",
"-e", "USE_HYBRID_STORAGE=true",
"-e", "OBSIDIAN_VAULT_PATH=/vault",
"-e", "AUTO_START_SESSION_LOGGING=true",
"-e", "SESSION_LOGGING_PROJECT_NAME=CoachNTT Cognitive Platform",
"mcp-chromadb-memory-mcp-memory"
]
}
}
}
3. Restart Claude Desktop to load the new configuration
For local development without Docker:
{
"mcpServers": {
"memory-local": {
"command": "node",
"args": ["C:\\path\\to\\mcp-chromadb-memory\\dist\\index.js"],
"env": {
"OPENAI_API_KEY": "your-api-key-here",
"CHROMA_HOST": "localhost",
"CHROMA_PORT": "8000",
"POSTGRES_HOST": "localhost",
"POSTGRES_PORT": "5432",
"POSTGRES_DATABASE": "coachntt_cognitive_db",
"POSTGRES_USER": "coachntt_user",
"POSTGRES_PASSWORD": "coachntt_pass",
"USE_HYBRID_STORAGE": "true"
}
}
}
}
API Reference
Tools
store_memory
Stores information based on AI-assessed importance.
{
content: string; // The information to store
context?: string; // Context category (general, user_preference, task_critical, obsidian_note)
metadata?: object; // Additional metadata
}
Response:
{
"stored": true,
"id": "mem_1234567890_abc",
"importance": 0.85
}
recall_memories
Retrieves relevant memories with context-aware filtering.
{
query: string; // Search query
context?: string; // Optional context filter
limit?: number; // Max results (default: 5)
}
Response:
[
{
"content": "User prefers dark mode interfaces",
"context": "user_preference",
"importance": "0.80",
"timestamp": "2024-01-15T10:30:00Z",
"scores": {
"total": "0.825",
"semantic": "0.920",
"recency": "0.750",
"importance": "0.800",
"frequency": "0.600"
}
}
]
health_check
Verifies server status and ChromaDB connection.
Response:
{
"status": "ok",
"chromadb_connected": true,
"server_version": "1.0.0",
"platform": "linux",
"docker": true
}
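As a rough example, the tools above can be exercised from any MCP client. The sketch below uses the MCP TypeScript SDK to connect to a locally built server over stdio and call store_memory and recall_memories; the server command, path, and environment values are assumptions for a local build.

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the locally built server over stdio (path and env are assumptions).
const transport = new StdioClientTransport({
  command: 'node',
  args: ['dist/index.js'],
  env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? '' },
});
const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// Store a memory, then recall it.
await client.callTool({
  name: 'store_memory',
  arguments: { content: 'User prefers dark mode interfaces', context: 'user_preference' },
});
const memories = await client.callTool({
  name: 'recall_memories',
  arguments: { query: 'interface preferences', limit: 3 },
});
console.log(memories);

await client.close();
```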
Session Logging Tools
start_session_logging
Begins logging a Claude Code session to Obsidian.
{
project?: string; // Project name (default: "General")
}
save_session_log
Saves the current session to Obsidian with auto-generated summary.
{
summary?: string; // Optional manual summary
}
log_session_event
Manually logs specific events during the session.
{
type: string; // Event type: user, assistant, tool, decision, achievement
content: string; // Event content
metadata?: object; // Additional metadata
}
Automatic Session Logging: Set AUTO_START_SESSION_LOGGING=true in your environment to automatically start logging when Claude Code connects. The session will be auto-saved on exit if SESSION_LOGGING_SAVE_ON_EXIT=true (default).
See the project documentation for detailed usage.
Template Management Tools
import_template
Import a documentation template from an external webhook source.
{
source: string; // URL of the template to import
category?: string; // Template category (session, decision, pattern, etc.)
variables?: object; // Variables to apply immediately
saveAs?: string; // Filename to save generated document
}
list_templates
List all available templates in the system.
{
category?: string; // Filter by category
source?: string; // Filter by source URL
}
apply_template
Apply a template with variables to generate a document.
{
templateId: string; // ID of the template
variables: object; // Variables to apply
outputPath: string; // Where to save the document
}
configure_template_webhook
Configure a webhook source for importing templates.
{
name: string; // Name for this webhook
url: string; // Webhook URL
authType?: string; // Authentication type (none, bearer, api-key, oauth)
authCredentials?: string; // Auth credentials
syncInterval?: number; // Auto-sync interval in minutes
}
sync_templates
Synchronize templates from all configured webhook sources.
// No parameters required
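The template tools are built around Handlebars (per the feature list above). As a small, self-contained illustration of what applying a template amounts to, the snippet below compiles a session template and fills in variables; the template text and variable names are made up for the example and are not the project's built-in templates.

```typescript
import Handlebars from 'handlebars';

// Minimal illustration of template application: compile a Handlebars
// template and render it with variables (template content is hypothetical).
const template = Handlebars.compile(
  '# Session: {{project}}\nDate: {{date}}\n\n{{summary}}\n'
);

const document = template({
  project: 'CoachNTT Cognitive Platform',
  date: new Date().toISOString().slice(0, 10),
  summary: 'Implemented hybrid storage migration.',
});

console.log(document);
```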
See the project documentation for the detailed architecture.
Vault Structure Management Tools
import_vault_structure
Import a complete vault structure definition with templates and hooks.
{
source: string; // URL or path to structure definition
applyImmediately?: boolean; // Apply structure after import
targetPath?: string; // Target path (defaults to vault)
}
generate_vault_structure
Generate folder hierarchy from a loaded structure template.
{
structureId?: string; // Structure name/ID
targetPath: string; // Where to generate
options?: {
skipExisting?: boolean; // Skip existing folders
dryRun?: boolean; // Preview without changes
applyTemplates?: boolean; // Apply folder templates
}
}
apply_folder_hooks
Apply hooks to existing folders for automated actions.
{
folderPath: string; // Folder to apply hooks to
hookIds?: string[]; // Specific hooks (or all)
}
See the project documentation for complete details.
Hierarchical Memory System
The platform now features a sophisticated three-tier memory architecture that automatically manages memory lifecycle:
Memory Tiers
1. Working Memory (48 hours)
   - Stores immediate context and active tasks
   - Fastest retrieval speed
   - Automatically migrates older memories to Session tier
2. Session Memory (14 days)
   - Contains recent development sessions
   - Balanced performance and retention
   - Migrates important memories to Long-term tier
3. Long-term Memory (Permanent)
   - Preserves critical knowledge and patterns
   - Optimized for important information
   - Never expires
Tier Management Tools
- get_tier_stats - View memory distribution across tiers
- analyze_access_patterns - Get recommendations for tier optimization
- get_memories_for_migration - Preview pending tier migrations
Configuration
Configure tier behavior in your .env:
# Tier Configuration
TIER_ENABLED=true
TIER_WORKING_RETENTION=48 # Hours
TIER_SESSION_RETENTION=336 # Hours (14 days)
TIER_LONGTERM_RETENTION=8760 # Hours (1 year)
TIER_MIGRATION_INTERVAL=3600000 # Milliseconds (1 hour)
The migration service runs automatically in the background, moving memories between tiers based on age and access patterns.
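A minimal sketch of the age-based placement implied by these retention windows is shown below; the real migration service also weighs importance and access patterns, and the function name is illustrative.

```typescript
// Illustrative age-based tier placement using the retention windows above
// (the actual service also considers access patterns and importance).
type Tier = 'working' | 'session' | 'longterm';

function tierForAge(ageHours: number): Tier {
  if (ageHours <= 48) return 'working';   // TIER_WORKING_RETENTION
  if (ageHours <= 336) return 'session';  // TIER_SESSION_RETENTION (14 days)
  return 'longterm';
}

console.log(tierForAge(12));   // 'working'
console.log(tierForAge(200));  // 'session'
console.log(tierForAge(900));  // 'longterm'
```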
Code Intelligence System
The platform includes advanced code intelligence capabilities optimized for Claude Code and development workflows:
Code Intelligence Tools
- index_codebase - Fast symbol extraction and storage with streaming support
- find_symbol - Stream-based symbol search across your codebase
- get_symbol_context - Rich context retrieval including imports, usage, and relationships
- analyze_code_patterns - Detect patterns, anti-patterns, and improvement opportunities
Code Memory Features
- Automatic Symbol Indexing
  - Functions, classes, methods, and variables
  - Import relationships and dependencies
  - File structure and organization
  - Automatic updates on file changes
- Streaming Architecture
  - Results stream as they're found (<50ms first result)
  - Prioritized results (exact matches first)
  - Background semantic search
  - Optimized for large codebases
- Code-Aware Search
  - Natural language to code queries
  - Symbol type filtering
  - Relationship navigation
  - Pattern-based search
Configuration
# Code Intelligence Configuration
CODE_INDEXING_ENABLED=true
CODE_INDEXING_PATTERNS="**/*.{js,ts,py,java,go,rs,cpp}"
CODE_INDEXING_EXCLUDE="**/node_modules/**,**/dist/**,**/.git/**"
CODE_PATTERN_DETECTION=true
CODE_STREAMING_ENABLED=true
CODE_CACHE_SIZE=1000
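To make the streaming behaviour concrete, here is a hedged sketch of how a find_symbol-style search could yield exact index hits first and semantic matches afterwards. The types and data sources are assumptions, not the server's actual code.

```typescript
// Hypothetical streaming symbol search: exact index hits first, semantic matches after.
interface CodeSymbol {
  name: string;
  kind: 'function' | 'class' | 'method' | 'variable';
  file: string;
  line: number;
}

async function* findSymbolStream(
  query: string,
  exactIndex: Map<string, CodeSymbol[]>,                 // in-memory keyword index (assumed)
  semanticSearch: (q: string) => Promise<CodeSymbol[]>   // vector-store lookup (assumed)
): AsyncGenerator<CodeSymbol> {
  // 1. Exact matches come straight from the index: first result arrives quickly.
  for (const hit of exactIndex.get(query) ?? []) yield hit;
  // 2. Semantic matches stream in afterwards from the vector store.
  for (const hit of await semanticSearch(query)) yield hit;
}
```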
See the project documentation for detailed usage and examples.
Smart Tool Optimization
Hook Scripts for Efficient Tool Usage
The project includes intelligent hook scripts that intercept inefficient tool usage and suggest optimized alternatives, resulting in 94% token reduction and 96% faster searches.
How It Works
When you use Grep or Glob for code searches, the hooks detect patterns and suggest better tools:
# Before: Inefficient grep search
Grep pattern="class UserManager" include="*.ts"
# Hook intercepts and suggests:
Use: find_symbol query='UserManager' type=['class']
# Result: 98% faster, 98% fewer tokens
Performance Improvements
| Scenario | Traditional Approach | Optimized Approach | Improvement |
|---|---|---|---|
| Finding a class | Grep: 2,500ms, 15K tokens | find_symbol: 50ms, 200 tokens | 98% faster |
| Code search | Grep: 3,000ms, 45K tokens | find_symbol: 100ms, 2.4K tokens | 94% fewer tokens |
| File listing | Glob: 1,000ms, 8K tokens | explore_folder: 200ms, 1K tokens | 87% reduction |
Benefits
- Token Efficiency: 10-100x reduction in token usage
- Speed: O(1) indexed lookups vs O(n) file scanning
- Accuracy: 100% relevant results vs 30% with grep
- Cost Savings: ~$2,820/month saved on API costs (based on 100 searches/day)
The hooks work automatically in the background, requiring no configuration or user intervention.
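For a sense of what such an interception looks like, below is a hypothetical sketch of a pre-tool-use hook that reads the tool call from stdin and suggests find_symbol for symbol-shaped Grep patterns. The payload field names and the exit-code convention are assumptions about the hook interface, not the project's actual scripts.

```typescript
#!/usr/bin/env node
// Hypothetical pre-tool-use hook sketch; field names and exit-code behaviour are assumptions.
import { readFileSync } from 'node:fs';

const input = JSON.parse(readFileSync(0, 'utf8')); // hook payload assumed to arrive on stdin
const isGrep = input.tool_name === 'Grep';
const pattern: string = input.tool_input?.pattern ?? '';

// Looks like a code-symbol search? Suggest the indexed tool instead.
if (isGrep && /\b(class|function|interface)\s+\w+/.test(pattern)) {
  console.error(`Consider find_symbol instead of Grep for "${pattern}" (faster, fewer tokens).`);
  process.exit(2); // assumed convention: non-zero exit blocks the call and surfaces the message
}
process.exit(0);
```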
See the project documentation for complete implementation details.
Architecture
Hybrid Storage Architecture (v2.1+)
┌─────────────────┐      ┌──────────────────┐
│   MCP Client    │─────►│    MCP Server    │
└─────────────────┘      └────────┬─────────┘
                                  │
                     ┌────────────┴────────────┐
                     │                         │
            ┌────────▼────────┐       ┌────────▼─────────┐
            │   PostgreSQL    │       │     ChromaDB     │
            │   + pgvector    │       │   (Embeddings)   │
            │   (Structure)   │       └──────────────────┘
            └─────────────────┘
Performance Improvements (Tested & Verified)
| Operation | ChromaDB Only | Hybrid (PostgreSQL + ChromaDB) | Improvement |
|---|---|---|---|
| Bulk Insert (10k symbols) | 60+ seconds (throttled) | <1 second | 60x faster |
| Exact Search | 200ms | <10ms | 20x faster |
| Code Symbol Indexing | Connection errors | 1700+ symbols/second | Reliable |
| Complex Queries | Limited | Full SQL support | New capability |
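The dual-write idea behind these numbers can be sketched as follows: a structured row goes to PostgreSQL while the embedding goes to ChromaDB. The table name, collection name, connection settings, and embedding source are assumptions (and ChromaDB client options vary by version); this is not the project's hybrid-memory-manager.ts.

```typescript
import pg from 'pg';
import { ChromaClient } from 'chromadb';

// Illustrative dual-write: structure to PostgreSQL, embedding to ChromaDB.
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const chroma = new ChromaClient({ path: 'http://localhost:8000' }); // client options are assumptions

async function storeMemoryHybrid(
  id: string,
  content: string,
  importance: number,
  embedding: number[]
): Promise<void> {
  // 1. PostgreSQL: fast exact/filter queries, no throttling on bulk writes.
  await pool.query(
    'INSERT INTO memories (id, content, importance) VALUES ($1, $2, $3)',
    [id, content, importance]
  );
  // 2. ChromaDB: vector embedding for semantic recall.
  const collection = await chroma.getOrCreateCollection({ name: 'coachntt_memories' });
  await collection.add({ ids: [id], documents: [content], embeddings: [embedding] });
}
```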
Project Structure
mcp-chromadb-memory/
├── src/
│   ├── index.ts                     # MCP server entry point
│   ├── config.ts                    # Configuration management
│   ├── memory-manager.ts            # Legacy ChromaDB manager
│   ├── memory-manager-enhanced.ts   # Enhanced with tier support
│   ├── hybrid-memory-manager.ts     # NEW: PostgreSQL + ChromaDB hybrid
│   ├── db/
│   │   ├── postgres-client.ts       # PostgreSQL connection management
│   │   ├── memory-repository.ts     # Memory data access layer
│   │   └── symbol-repository.ts     # Code symbol operations
│   ├── obsidian-manager.ts          # Obsidian vault integration
│   ├── session-logger.ts            # Session capture and logging
│   ├── template-manager.ts          # Template system with webhook support
│   ├── vault-manager.ts             # Vault management wrapper
│   └── services/
│       ├── code-indexer.ts          # Symbol extraction & bulk indexing
│       ├── code-pattern-detector.ts # Pattern recognition
│       ├── streaming-manager.ts     # Streaming response handler
│       ├── migration-service.ts     # Tier migration service
│       └── hybrid-search-service.ts # NEW: Unified search layer
├── vault/
│   ├── vault/                       # Project-specific Obsidian vault
│   │   └── Templates/               # Documentation templates
│   └── *.md                         # Platform documentation
├── init.sql                         # PostgreSQL schema initialization
├── dist/                            # Compiled JavaScript
├── docker-compose.yml               # Container orchestration
└── Dockerfile                       # Container definition
Platform Architecture (v2.0)
The platform uses a hybrid storage approach where PostgreSQL handles structured data and metadata for high-performance queries, while ChromaDB manages vector embeddings for semantic search capabilities.
graph TD
subgraph "Application Layer"
A1[Claude Desktop]
A2[CLI Tools]
A3[Web Dashboard]
end
subgraph "Integration Layer"
B1[MCP Protocol]
B2[REST API]
B3[WebSocket Events]
end
subgraph "Intelligence Layer"
C1[Pattern Recognition]
C2[Session Processor]
C3[State Manager]
C4[Vault Manager]
C5[Code Intelligence]
end
subgraph "Memory Layer"
D1[Working Memory]
D2[Session Memory]
D3[Long-term Memory]
end
subgraph "Storage Layer"
E1[PostgreSQL + pgvector]
E2[ChromaDB]
E3[Obsidian Vault]
E4[Git Repository]
end
A1 --> B1
A2 --> B2
A3 --> B3
B1 --> C1
B2 --> C2
B3 --> C3
C1 --> D1
C2 --> D2
C3 --> D3
D1 --> E1
D1 --> E2
D2 --> E1
D2 --> E2
D3 --> E1
D3 --> E2
C2 --> E3
C4 --> E3
C3 --> E4
See the project documentation for transformation details.
Memory Scoring Algorithm
The retrieval system uses a sophisticated multi-factor scoring approach:
- Semantic Similarity (40%): Cosine similarity between query and memory embeddings
- Recency Score (30%): Exponential decay based on time since last access
- Importance Score (20%): AI-assessed importance during storage
- Frequency Score (10%): Logarithmic scaling of access count
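A compact sketch of this weighted score, using the documented weights, might look like the following; the recency decay constant and the frequency normalization cap are assumptions for illustration.

```typescript
// Weighted retrieval score as described above (weights from the docs;
// decay constant and frequency cap are assumed values).
function memoryScore(opts: {
  semanticSimilarity: number; // cosine similarity in [0, 1]
  hoursSinceAccess: number;
  importance: number;         // AI-assessed at storage time, [0, 1]
  accessCount: number;
}): number {
  const recency = Math.exp(-opts.hoursSinceAccess / 168);                 // ~1-week decay (assumed)
  const frequency = Math.log(1 + opts.accessCount) / Math.log(1 + 100);   // log scale, assumed cap of 100
  return (
    0.4 * opts.semanticSimilarity +
    0.3 * recency +
    0.2 * opts.importance +
    0.1 * Math.min(frequency, 1)
  );
}

console.log(memoryScore({ semanticSimilarity: 0.92, hoursSinceAccess: 24, importance: 0.8, accessCount: 5 }));
```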
Development
Dual-Instance Development
For safe testing of new features, use the isolated development environment:
# Start development environment
./scripts/env-manager.sh start-dev
# Check status
./scripts/env-manager.sh status
# Run in development mode
./scripts/test-hierarchical.sh
This creates a completely separate ChromaDB instance on port 8001 with its own data and configuration.
Available Scripts
npm run build # Compile TypeScript
npm run dev # Run with hot reload
npm run test # Run test suite
npm run inspect # Test with MCP Inspector
npm run docker:build # Build Docker image
npm run docker:run # Run in Docker
Testing with MCP Inspector
npm run inspect
Then in the inspector:
1. Call health_check to verify connection
2. Use store_memory to save test memories
3. Use recall_memories to test retrieval
Troubleshooting
Quick Diagnostics
Run the configuration validator to check for common issues:
./scripts/validate-config.sh
MCP Server Not Loading
Claude Desktop Issues
1. Verify the configuration file location:
   - Windows: %APPDATA%\Claude\claude_desktop_config.json
   - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
   - Linux: ~/.config/Claude/claude_desktop_config.json
2. Common configuration errors:
   - JSON syntax errors (use python -m json.tool < config.json to validate)
   - Wrong container names (should be coachntt-chromadb and coachntt-postgres)
   - Hardcoded API keys (security risk but required for Claude Desktop)
   - Windows path issues (use forward slashes: C:/Users/...)
3. Quick setup script:
   ./scripts/setup-claude-desktop.sh
Claude Code CLI Issues
1. Environment variables not set:
   export OPENAI_API_KEY="your-api-key-here"  # Add to ~/.bashrc for persistence
2. Configuration issues:
   - .mcp.json must be in project root
   - Check for empty environment variables in config
   - Ensure .env.PRODUCTION has correct settings
3. Test the server directly:
   OPENAI_API_KEY="your-key" node dist/index.js
Database Connection Errors
1. Check service names match:
   # docker-compose.yml should have:
   coachntt-postgres:
     container_name: coachntt-postgres
   coachntt-chromadb:
     container_name: coachntt-chromadb
2. Verify credentials match:
   - PostgreSQL: coachntt_user / coachntt_pass / coachntt_cognitive_db
   - These must be consistent across all config files
3. Check if services are healthy:
   docker ps  # Look for (healthy) or (unhealthy) status
Path and Environment Issues
1. Vault path not found:
   - Check .env.PRODUCTION has OBSIDIAN_VAULT_PATH=./vault
   - Ensure the vault directory exists
   - For dual vault: ensure both paths exist and are accessible
2. Environment file loading:
   - The system loads .env.PRODUCTION by default
   - Settings in .env are ignored unless ENVIRONMENT_NAME is set
   - Missing settings don't fall back to other files
Common Issues and Solutions
| Issue | Cause | Solution |
|---|---|---|
| "Container exits immediately" | Normal MCP behavior | MCP servers run on-demand |
| "Cannot connect to ChromaDB" | Container unhealthy | docker-compose restart chromadb |
| "Missing OpenAI API key" | Not in environment | Set OPENAI_API_KEY env var |
| "Vault not found at ./vault" | Path misconfiguration | Ensure vault directory exists and is accessible |
| "Bad escaped character in JSON" | Windows path issue | Use forward slashes in paths |
| "postgres role does not exist" | Wrong credentials | Check docker-compose.yml matches config |
Configuration Reference
See the comprehensive configuration reference for:
- Detailed Claude Desktop vs Claude Code CLI differences
- Platform-specific setup instructions
- Environment variable reference
- Security best practices
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Platform v2.0 Development
We're actively developing the next major version that transforms this into a Cognitive State Management Platform. To contribute:
# Platform development branch
git checkout feature/platform-transformation
git pull origin feature/platform-transformation
Contributing Process
1. Fork the repository
2. Switch to the platform branch (git checkout feature/platform-transformation)
3. Create your feature branch (git checkout -b feature/AmazingFeature)
4. Commit your changes (git commit -m 'Add some AmazingFeature')
5. Push to the branch (git push origin feature/AmazingFeature)
6. Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built on the Model Context Protocol by Anthropic
- Powered by ChromaDB for vector storage
- Uses OpenAI Embeddings for semantic search
Support
- Report bugs
- Request features
- Documentation
Additional Documentation
The project uses a dual documentation structure:
Technical Documentation (docs/):
- API references
- Getting started guides
- Architecture notes
- Roadmap and status
Knowledge Base (vault/):
- Obsidian vault for AI context
- Session logs and development history
- Architecture decisions
- Knowledge and setup guides
- Templates and planning documents
Key locations:
- Setup Guides: vault/Knowledge/Setup/
- Architecture: vault/Architecture/
- Session Logs: vault/Sessions/
- Templates: vault/Templates/