The Shared Context MCP Server is a centralized memory store that enables multiple AI agents to collaborate on complex tasks through shared conversational context.
Shared Context Server
Content Navigation
Symbol | Meaning | Time Investment |
---|---|---|
🚀 | Quick start | 2-5 minutes |
⚙️ | Configuration | 10-15 minutes |
🧠 | Deep dive | 30+ minutes |
💡 | Why this works | Context only |
⚠️ | Important note | Read carefully |
🎯 Quick Understanding (30 seconds)
A shared workspace for AI agents to collaborate on complex tasks.
The Problem: AI agents work independently, duplicate research, and can't build on each other's discoveries.
The Solution: Shared sessions where agents see previous findings and build incrementally instead of starting over.
# Agent 1: Security analysis
session.add_message("security_agent", "Found SQL injection in user login")
# Agent 2: Performance review (sees security findings)
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
# Agent 3: Documentation (has full context)
session.add_message("docs_agent", "Documented secure, optimized login implementation")
Each agent builds on previous work instead of starting over.
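The calls above are schematic. A minimal, self-contained sketch of the shared-session idea (class and method names here are illustrative, not the server's actual API):

```python
from dataclasses import dataclass, field

# Hypothetical in-memory sketch - the real server persists messages and
# exposes them over MCP; class and method names here are illustrative.
@dataclass
class SharedSession:
    purpose: str
    messages: list = field(default_factory=list)

    def add_message(self, agent: str, content: str) -> None:
        self.messages.append({"agent": agent, "content": content})

    def context_for(self, agent: str) -> list:
        # Every agent reads the full shared history, so later agents
        # build on earlier findings instead of starting over.
        return [m for m in self.messages if m["agent"] != agent]

session = SharedSession(purpose="login hardening")
session.add_message("security_agent", "Found SQL injection in user login")
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
print(len(session.context_for("docs_agent")))  # 2 - the docs agent inherits both findings
```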
💡 Uses the Model Context Protocol (MCP): the standard for AI agent communication (works with Claude Code, Gemini, VS Code, Cursor, and frameworks like CrewAI).
🎪 Multi-Expert Demo (30 seconds to start)
See AI agents collaborate better than any individual agent could.
# One command creates complete demo environment
scs setup demo
scs # Start server, then try the magic prompt below in Claude Code
The Magic Prompt (copy to Claude Code):
I want to optimize this repository using our expert committee approach. Please start by having our Performance Architect analyze the codebase for bottlenecks.
What Happens: Three AI experts collaborate autonomously:
- Performance Architect → finds bottlenecks with evidence
- Implementation Expert → builds concrete solutions
- Validation Expert → creates testing strategy
Each expert builds on the previous expert's findings through persistent shared sessions. No manual coordination required.
💡 Try asking Claude: "Show me how the experts coordinated and what would this look like with a single agent instead?"
🚀 Try It Now (2 minutes)
⚠️ Important: Choose Your Deployment Method
Docker (Recommended for Multi-Client Collaboration):
- ✅ Shared context across all MCP clients (Claude Code + Cursor + Windsurf)
- ✅ Persistent service - single server instance on port 23456
- ✅ True multi-agent collaboration - agents share sessions and memory
- 🎯 Use when: You want multiple tools to collaborate on the same tasks
uvx (Quick Trial & Testing Only):
- ⚠️ Isolated per-client - each MCP client gets its own separate instance
- ⚠️ No shared context - Claude Code and Cursor can't see each other's work
- ✅ Quick testing - perfect for trying features without Docker setup
- 🎯 Use when: Quick feature testing or learning the MCP tools in isolation
# 🐳 Docker: Multi-client shared collaboration (RECOMMENDED)
# ⚠️ Requires environment variables - see Step 1 below
# 📦 uvx: Isolated single-client testing only
# ⚠️ Requires API key - see Step 1 below
uvx shared-context-server --help
💡 TL;DR: Use Docker for real multi-agent work, uvx for quick testing only.
Prerequisites Check (30 seconds)
Choose your path:
- ✅ Docker (recommended): docker --version works
- ✅ uvx Trial: uvx --version works (testing only)
Environment Configuration Templates
Choose your .env template (for local development):
# 🚀 Quick Start (recommended) - Essential variables only
cp .env.minimal .env
# 🔧 Full Development - All development features
cp .env.example .env
# 🐳 Docker Deployment - Container-optimized paths
cp .env.docker .env
💡 Most users want .env.minimal - it contains only the 12 essential variables you actually need.
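For scripted setups, you can sanity-check the environment before starting the server. A small sketch, assuming the three key variables named in the setup steps (API_KEY, JWT_SECRET_KEY, JWT_ENCRYPTION_KEY) are the required minimum:

```python
import os

# Illustrative startup check. Treating these three variables as the required
# minimum is an assumption based on the setup steps in this README.
REQUIRED = ("API_KEY", "JWT_SECRET_KEY", "JWT_ENCRYPTION_KEY")

def missing_env(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

gaps = missing_env()
if gaps:
    print("Missing:", ", ".join(gaps), "- run `scs setup` first")
```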
Step 1: Generate Keys & Start Server
🚀 One-Command Demo Setup (Recommended)
# Experience multi-expert AI collaboration in 30 seconds
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
scs setup demo
# ⏳ Creates complete demo environment with expert agents ready to collaborate
🐳 Production Setup Alternative
# For production deployment with Docker
scs setup docker
# ⏳ Generates keys, shows Docker commands, creates .env file
Option A: Docker Compose (Recommended)
# After running the key generator above, choose your deployment:
# 🚀 Production (pre-built image from GHCR):
make docker
# OR: docker compose up -d
# 🔧 Development (with hot reload):
make dev-docker
# OR: docker compose -f docker-compose.dev.yml up -d
# 🏗️ Production (build locally):
make docker-local
# OR: docker compose -f docker-compose.yml -f docker-compose.local.yml up -d
Alternative: Raw Docker Commands
# If you prefer docker run over docker compose:
docker run -d --name shared-context-server -p 23456:23456 \
-e API_KEY="your-generated-api-key" \
-e JWT_SECRET_KEY="your-generated-jwt-secret" \
-e JWT_ENCRYPTION_KEY="your-generated-jwt-encryption-key" \
ghcr.io/leoric-crown/shared-context-server:latest
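The three secrets in the docker run command come from scs setup. If you want to generate placeholder values by hand, a sketch of one way to do it (the exact formats the server expects are an assumption - prefer the setup command, which prints real ones):

```python
import base64
import secrets

# Illustrative stand-in for the keys `scs setup` generates. The exact formats
# the server expects are an assumption - prefer `scs setup`, which prints them.
def generate_keys():
    return {
        "API_KEY": secrets.token_urlsafe(32),
        "JWT_SECRET_KEY": secrets.token_urlsafe(32),
        # 32 random bytes, base64-encoded - a common shape for encryption keys
        "JWT_ENCRYPTION_KEY": base64.urlsafe_b64encode(secrets.token_bytes(32)).decode(),
    }

for name, value in generate_keys().items():
    print(f'-e {name}="{value}" \\')
```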
Option B: uvx Trial (Isolated Testing Only)
# Generate keys first
scs setup uvx
# ⏳ Shows the exact uvx command with generated keys
# Example output command to run:
API_KEY="generated-key" JWT_SECRET_KEY="generated-secret" \
uvx shared-context-server --transport http
# ⚠️ IMPORTANT: Each MCP client gets isolated instances
# No shared context between Claude Code, Cursor, Windsurf
Option C: Local Development
# Full development setup
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
uv sync
scs setup
# ⏳ Creates .env file and shows make dev command
make dev # Starts with hot reload
Step 2: Connect Your MCP Client
The key generation script shows the exact commands with your API key. Replace YOUR_API_KEY_HERE with your generated key:
# Claude Code (simple HTTP transport)
claude mcp add --transport http scs http://localhost:23456/mcp/ \
--header "X-API-Key: YOUR_API_KEY_HERE"
# Gemini CLI
gemini mcp add scs http://localhost:23456/mcp -t http -H "X-API-Key: YOUR_API_KEY_HERE"
# Test connection
claude mcp list # Should show: ✅ Connected
Ports note: Internal ports are fixed (HTTP 23456, WebSocket 34567). Change host ports via compose env only (e.g., HTTP_PORT=8080 WEBSOCKET_PORT=9090 docker compose up -d). Use EXTERNAL_WEBSOCKET_PORT to reflect the host WS port in links/UI.
VS Code Configuration
Add to your existing .vscode/mcp.json (create if it doesn't exist):
{
"servers": {
"shared-context-server": {
"type": "http",
"url": "http://localhost:23456/mcp",
"headers": { "X-API-Key": "YOUR_API_KEY_HERE" }
}
}
}
Cursor Configuration
Add to your existing .cursor/mcp.json (create if it doesn't exist):
{
"mcpServers": {
"shared-context-server": {
"command": "mcp-proxy",
"args": [
"--transport=streamablehttp",
"http://localhost:23456/mcp/",
"--headers",
"X-API-Key",
"YOUR_API_KEY_HERE"
]
}
}
}
Claude Desktop Configuration
Add to your existing claude_desktop_config.json:
On macOS, you may have to provide an explicit path to mcp-proxy.
Not tested on Windows.
{
"scs": {
"command": "/Users/YOUR_USER/.local/bin/mcp-proxy",
"args": [
"--transport=streamablehttp",
"http://localhost:23456/mcp/",
"--headers",
"X-API-Key",
"YOUR_API_KEY_HERE"
]
}
}
Step 3: Verify & Monitor
📝 Note: If you used make docker-prod, press Ctrl+C to exit the log viewer first, then run these commands in the same terminal.
# Test your setup (30 seconds)
# Method 1: Quick health check
curl http://localhost:23456/health
# Method 2: Create actual test session (see it in web UI!)
# If you have Claude Code with shared-context-server MCP tools:
# Run this in Claude: Create a session with purpose "README test setup"
# Expected: {"success": true, "session_id": "session_...", ...}
# Method 3: Test MCP tools discovery
npx @modelcontextprotocol/inspector --cli --method tools/list \
-e API_KEY=$API_KEY \
-e JWT_SECRET_KEY=$JWT_SECRET_KEY \
-e JWT_ENCRYPTION_KEY=$JWT_ENCRYPTION_KEY \
uv run python -m shared_context_server.scripts.cli
# Expected: {"tools": [...]} (proves MCP tools are available)
# Method 4: For Docker deployment, test via HTTP endpoint
npx @modelcontextprotocol/inspector --cli --method tools/list \
http://localhost:23456/mcp
# View the dashboard
open http://localhost:23456/ui/ # Real-time session monitoring
✅ Success indicators:
- Health endpoint returns {"status": "healthy", ...}
- Dashboard loads at http://localhost:23456/ui/ and shows active sessions
- MCP Inspector responds (even a validation error proves the MCP protocol is working)
- MCP client shows ✅ Connected status
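To script the health check instead of eyeballing curl output, a small parser for the response shape shown above (only the status field is taken from this README; other fields may vary):

```python
import json

def is_healthy(body: str) -> bool:
    """True if a /health response body reports a healthy server."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

# Against a running server (Step 1), fetch the body first, e.g.:
#   from urllib.request import urlopen
#   body = urlopen("http://localhost:23456/health").read().decode()
print(is_healthy('{"status": "healthy"}'))  # True
```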
📊 Web Dashboard (MVP)
Real-time monitoring interface for agent collaboration:
- Live session overview with active agent counts
- Real-time message streaming without page refreshes
- Session isolation visualization to track multi-agent workflows
- Performance monitoring for collaboration efficiency
💡 Perfect for: Monitoring agent handoffs, debugging collaboration flows, and demonstrating multi-agent coordination to stakeholders.
📦 PyPI Installation (Alternative Method)
The shared-context-server is also available on PyPI for quick testing:
# 📦 Install and try (creates isolated instances per client)
uvx shared-context-server --help
uvx shared-context-server --version
# ⚠️ For multi-client collaboration, use Docker instead
💡 When to use PyPI/uvx: Quick feature testing, learning MCP tools, single-client workflows only.
🧭 Choose Your Path
Are you...
├── 👨‍💻 Building a side project?
│   → [Simple Integration](#-simple-integration) (5 minutes)
│
├── 🏢 Planning enterprise deployment?
│   → [Enterprise Setup](#-enterprise-considerations) (15+ minutes)
│
├── 🔍 Researching multi-agent systems?
│   → [Technical Deep Dive](#-technical-architecture) (30+ minutes)
│
└── 🤔 Just evaluating the concept?
    → [Framework Integration Examples](#-framework-examples) (5 minutes)
🔌 Simple Integration
Works with existing tools you already use:
Direct MCP Integration (Tested)
# Generate Claude Code configuration automatically
scs client-config claude -s user -c
# Or generate configuration for other MCP clients
scs client-config cursor -c # Cursor IDE
scs client-config all -c # All supported clients
# Direct MCP usage (use a proper MCP client in production)
# Example shows the concept - use mcp-proxy or MCP client libraries
import asyncio
from mcp_client import MCPClient  # Conceptual - use an actual MCP client library

async def create_session():
    client = MCPClient("http://localhost:23456/mcp/")
    return await client.call_tool("create_session", {"purpose": "agent collaboration"})
⚠️ Framework Integration Status: Direct MCP protocol tested. CrewAI, AutoGen, and LangChain integrations are conceptual - we welcome community contributions to develop and test these patterns.
➡️ Next:
⚙️ Framework Examples
Multi-Expert Code Optimization (Featured Demo)
- Performance Architect analyzes codebase → identifies bottlenecks with evidence
- Implementation Expert reads findings → develops concrete solutions
- Validation Expert synthesizes both → creates comprehensive testing strategy
💡 Why this works: Experts ask clarifying questions and build on each other's insights through persistent sessions.
Conversational vs Monologue Patterns
❌ Traditional: "Here are my findings" (isolated analysis)
✅ Advanced: "Based on your bottleneck analysis, I have questions about X constraint..." (collaborative)
Research & Implementation Pipeline
- Research Agent gathers requirements → shares insights
- Architecture Agent questions research gaps → designs using complete context
- Developer Agent implements with iterative feedback loop
Demo these patterns: Run scs setup demo to experience expert committees vs individual analysis.
More examples:
- What works: ✅ MCP clients (Claude Code, Gemini, VS Code, Cursor)
- What's conceptual: 📝 Framework patterns (CrewAI, AutoGen, LangChain) - community contributions welcome
🧭 What This Is / What This Isn't
✅ What this MCP server provides
- Real-time collaboration substrate for multi-agent workflows
- Session isolation with clean boundaries between different tasks
- MCP protocol compliance that works with any MCP-compatible agent framework
- Infrastructure layer that enhances existing orchestration tools
💡 Why MCP protocol? Universal compatibility - works with Claude Code, CrewAI, AutoGen, LangChain, and custom frameworks without vendor lock-in.
❌ What this MCP server isn't
- Not a vector database - Use Pinecone, Milvus, or Chroma for long-term storage
- Not an orchestration platform - Use CrewAI, AutoGen, or LangChain for task management
- Not for permanent memory - Sessions are for active collaboration, not archival
💡 Why this approach? We enhance your existing tools rather than replacing them - no need to rewrite your agent workflows.
🏢 Enterprise Considerations
⚙️ Production Setup & Scaling
Development → Production Path
Development (SQLite)
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Limited to ~5 concurrent agents
Production (PostgreSQL)
- ✅ High concurrency (20+ agents)
- ✅ Enterprise backup/recovery
- ❌ Requires database management
Enterprise Features Roadmap
- SSO Integration: SAML/OIDC support planned
- Audit Logging: Enhanced compliance logging
- High Availability: Multi-node deployment
- Advanced RBAC: Attribute-based permissions
Migration: Start with SQLite, migrate when you hit concurrency limits.
🔧 Security & Compliance
Current Security Features
- JWT Authentication: Role-based access control
- Input Sanitization: XSS and injection prevention
- Secure Token Management: Prevents JWT exposure vulnerabilities
- Message Visibility: Public/private/agent-only filtering
Enterprise Security Roadmap
- SSO Integration: SAML, OIDC, Active Directory
- Audit Trails: SOX, HIPAA-compliant logging
- Data Governance: Retention policies, geographic residency
- Advanced Encryption: At-rest and in-transit encryption
🧠 Technical Architecture
🚀 Deployment Architecture: Docker vs uvx
Docker Deployment (Multi-Client Shared Context)
┌─────────────────┐    ┌────────────────────────┐
│ Claude Code     │───▶│                        │
├─────────────────┤    │  Shared HTTP Server    │
│ Cursor          │───▶│  (port 23456)          │
├─────────────────┤    │                        │
│ Windsurf        │───▶│  • Single database     │
└─────────────────┘    │  • Shared sessions     │
                       │  • Cross-tool memory   │
                       └────────────────────────┘
✅ Enables: True multi-agent collaboration, session sharing, persistent context
uvx Deployment (Isolated Per-Client)
┌─────────────────┐    ┌──────────────────┐
│ Claude Code     │───▶│ Isolated Server  │
└─────────────────┘    │ + Database #1    │
                       └──────────────────┘
┌─────────────────┐    ┌──────────────────┐
│ Cursor          │───▶│ Isolated Server  │
└─────────────────┘    │ + Database #2    │
                       └──────────────────┘
┌─────────────────┐    ┌──────────────────┐
│ Windsurf        │───▶│ Isolated Server  │
└─────────────────┘    │ + Database #3    │
                       └──────────────────┘
⚠️ Limitation: No cross-tool collaboration, separate contexts, testing only
💡 Key Insight: Docker provides the "shared" in shared-context-server, while uvx creates isolated silos.
Core Design Principles
Session-Based Isolation
What: Each collaborative task gets its own workspace
Why: Prevents cross-contamination while enabling rich collaboration within teams
Message Visibility Controls
What: Four-tier system (public/private/agent-only/admin-only)
Why: Granular information sharing - agents can have private working memory and shared discoveries
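The four-tier model can be sketched as a filter. Field names and the exact rules below are illustrative assumptions, not the server's actual schema:

```python
# Illustrative filter for the four-tier visibility model. Field names and the
# exact rules are assumptions, not the server's actual schema.
def visible_to(message: dict, agent_id: str, is_admin: bool = False) -> bool:
    visibility = message["visibility"]
    if visibility == "public":
        return True
    if visibility == "private":           # private working memory
        return message["sender"] == agent_id
    if visibility == "agent_only":        # sender plus explicit recipients
        return agent_id == message["sender"] or agent_id in message.get("recipients", ())
    if visibility == "admin_only":
        return is_admin
    return False

messages = [
    {"sender": "security_agent", "visibility": "public", "text": "Found SQL injection"},
    {"sender": "perf_agent", "visibility": "private", "text": "scratch notes"},
]
shared = [m for m in messages if visible_to(m, "docs_agent")]
print(len(shared))  # 1 - only the public finding reaches the docs agent
```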
MCP Protocol Integration
What: Model Context Protocol compliance with automated orchestration prompts
Why: Works with any MCP-compatible framework with built-in multi-agent collaboration patterns
Advanced Orchestration Features
What: MCP prompts with parallel agent launches, token refresh patterns, and collaborative documentation
Why: Enables true conversational collaboration vs sequential monologues
Performance Characteristics
Designed for Real-Time Collaboration
- <30ms message operations for smooth agent handoffs
- 2-3ms fuzzy search across session history
- 20+ concurrent agents per session
- Session continuity during agent switches
💡 Why these targets? Sub-30ms ensures imperceptible delays during agent handoffs, maintaining workflow momentum.
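For intuition, fuzzy search over session history can be approximated with Python's standard library (this is not the server's actual search implementation):

```python
from difflib import get_close_matches

# Stdlib approximation of fuzzy search over session history. Illustrative only;
# this README does not show the server's actual search implementation.
history = [
    "Found SQL injection in user login",
    "Optimized query while fixing SQL injection",
    "Documented secure, optimized login implementation",
]
# Typo-tolerant lookup: the query need not match any message exactly.
matches = get_close_matches("optimized login", history, n=1, cutoff=0.3)
print(matches)
```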
Scalability Considerations
- SQLite: Development and small teams (<5 concurrent agents)
- PostgreSQL: Production deployments (20+ concurrent agents)
- Connection pooling: Built-in performance optimization
- Multi-level caching: >70% cache hit ratio for common operations
Database & Storage
Architecture Decision: Database Choice
SQLite for Development
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Single writer limitation
PostgreSQL for Production
- ✅ Multi-writer concurrency
- ✅ Enterprise backup/recovery
- ✅ Advanced indexing and performance
- ❌ Requires database administration
Database Backend
- Unified: SQLAlchemy Core (supports SQLite, PostgreSQL, MySQL)
- Development: SQLite with aiosqlite driver (fastest, simplest)
- Production: PostgreSQL/MySQL with async drivers (scalable, robust)
Migration Path: SQLAlchemy backend provides smooth transition to PostgreSQL when scaling needs arise.
💡 Why this hybrid approach? Optimizes for developer experience during development while supporting enterprise scale in production.
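The migration path amounts to swapping a connection URL while keeping the same schema and queries. A stdlib sketch of the development side (the schema and URL shapes are illustrative, not the server's actual ones):

```python
import sqlite3

# Illustrative dev-mode storage. The real server goes through SQLAlchemy Core,
# so the same schema and queries can run against PostgreSQL by changing only
# the database URL (URL shapes here are assumptions, e.g.
# sqlite+aiosqlite:///./dev.db -> postgresql+asyncpg://host/db).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, sender TEXT, content TEXT)")
conn.execute(
    "INSERT INTO messages VALUES (?, ?, ?)",
    ("session_1", "security_agent", "Found SQL injection in user login"),
)
count, = conn.execute(
    "SELECT COUNT(*) FROM messages WHERE session_id = ?", ("session_1",)
).fetchone()
print(count)  # 1
```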
📚 Documentation & Next Steps
🟢 Getting Started Paths
- CrewAI, AutoGen, LangChain examples
- Commands and common tasks
- Local development environment
🟡 Production Deployment
- Container deployment guide
- All 15+ MCP tools with examples
- Common issues and solutions
🔴 Advanced Topics
- Build your own MCP integration
- Docker and scaling strategies
All documentation:
🛠️ Development Commands
make help # Show all available commands
make dev # Start development server with hot reload
make test # Run tests with coverage
make quality # Run all quality checks
make docker # Production Docker (GHCR image) → shows logs
make dev-docker # Development Docker (local build + hot reload) → shows logs
# ⚠️ Both commands show live logs - press Ctrl+C to exit and continue setup
SCS Setup Commands
scs setup # Basic setup: generate keys, create .env, show deployment options
scs setup demo # 🎪 Create complete demo environment with expert agents
scs setup docker # Generate keys + show Docker commands only
scs setup uvx # Generate keys + show uvx commands only
scs setup export json # Create .env file + export keys as JSON to stdout
💡 For first-time users: scs setup demo creates everything needed for the multi-expert collaboration experience.
⚙️ Direct commands without make
# Development
uv sync && uv run python -m shared_context_server.scripts.dev
# Testing
uv run pytest --cov=src
# Quality checks
uv run ruff check && uv run mypy src/
License
MIT License - Open source software for the AI community.
Built with modern Python tooling and MCP standards. Contributions welcome!