Expert Registry MCP Server
Last Updated: 2025-06-30
A high-performance MCP server for expert discovery, registration, and context injection built with FastMCP v2, featuring vector and graph database integration for enhanced semantic search and relationship modeling.
Features
- High Performance: Multi-layer caching with vector indices for sub-millisecond queries
- File-Based Updates: Hot reload on registry/context file changes
- Semantic Search: Vector database integration for meaning-based expert discovery
- Relationship Modeling: Graph database for expert networks and team formation
- Context Injection: AI-powered prompt enhancement with expert knowledge
- Analytics: Performance tracking with collaborative filtering
- Hybrid Discovery: Combined vector similarity and graph connectivity scoring
- Python-First: Built with FastMCP v2 for clean, Pythonic code
Installation
Docker (Recommended for Production)
The easiest way to run the Expert Registry MCP server is using Docker:
# Build and deploy locally
./scripts/build.sh
./scripts/deploy.sh
# Or use pre-built image from GitHub Container Registry
docker pull ghcr.io/agentience/expert-registry-mcp:latest
Features:
- Single container service for multiple MCP clients
- Expert contexts and registry mapped to host for easy editing
- Hot reload support when files change on host
- SSE transport for client connections
- Includes Neo4j database setup
- Production-ready with health checks
See the deployment guide for complete instructions.
Local Development
Using uv (recommended):
# Create virtual environment and install
uv venv
uv pip install -e .
# Or install directly
uv pip install expert-registry-mcp
Using pip:
pip install expert-registry-mcp
Database Setup
Vector Database (ChromaDB - Embedded)
# ChromaDB is embedded, no separate installation needed
# It will create a vector-db directory automatically
Graph Database (Neo4j)
# Option 1: Docker (recommended)
docker run -d --name neo4j \
-p 7474:7474 -p 7687:7687 \
-e NEO4J_AUTH=neo4j/password \
neo4j:latest
# Option 2: Local installation
# Download from https://neo4j.com/download/
Quick Start
- Set up your expert system directory structure:
expert-system/
├── registry/
│   └── expert-registry.json
├── expert-contexts/
│   ├── aws-amplify-gen2.md
│   ├── aws-cloudscape.md
│   └── ...
└── performance/
    └── metrics.json
- Configure environment:
export EXPERT_SYSTEM_PATH=/path/to/expert-system
export NEO4J_URI=bolt://localhost:7687
export NEO4J_PASSWORD=password
- Run the server:
# Using FastMCP CLI
fastmcp run expert-registry-mcp
# Or using Python
python -m expert_registry_mcp.server
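If you are starting from scratch, the directory layout from step 1 can be created with a short script. This is a convenience sketch, not part of the server; the paths and filenames are taken from the example layout above.

```python
# Sketch: bootstrap the expert-system layout shown in step 1.
import json
from pathlib import Path

def bootstrap_expert_system(root: str) -> Path:
    """Create the registry/context/performance directories with empty stubs."""
    base = Path(root)
    (base / "registry").mkdir(parents=True, exist_ok=True)
    (base / "expert-contexts").mkdir(exist_ok=True)
    (base / "performance").mkdir(exist_ok=True)

    # Empty-but-valid registry so the server has something to load.
    registry = base / "registry" / "expert-registry.json"
    if not registry.exists():
        registry.write_text(json.dumps({"version": "1.0.0", "experts": []}, indent=2))

    metrics = base / "performance" / "metrics.json"
    if not metrics.exists():
        metrics.write_text("{}")
    return base

# Example: bootstrap_expert_system("/path/to/expert-system")
```

Point EXPERT_SYSTEM_PATH at the resulting directory and drop your expert context markdown files into expert-contexts/.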
Claude Desktop Configuration
Add to your Claude Desktop configuration:
{
"mcpServers": {
"expert-registry": {
"command": "uv",
"args": ["run", "expert-registry-mcp"],
"env": {
"EXPERT_SYSTEM_PATH": "/path/to/expert-system",
"NEO4J_URI": "bolt://localhost:7687",
"NEO4J_PASSWORD": "password"
}
}
}
}
Usage Examples
Basic Expert Discovery
# Detect technologies in your project
technologies = await expert_detect_technologies(
scan_paths=["./src", "./package.json"]
)
# Select the best expert with hybrid search
result = await expert_smart_discover(
context={
"description": "Refactor authentication system using AWS Amplify",
"technologies": technologies.technologies,
"constraints": ["maintain backward compatibility"],
"preferred_strategy": "single"
}
)
Context Injection
# Load expert context
context = await expert_load_context(
expert_id=result.expert.id
)
# Inject into prompt
enhanced_prompt = await expert_inject_context(
prompt="Refactor the authentication system",
expert_id=result.expert.id,
injection_points=["constraints", "patterns", "quality-criteria"]
)
Performance Tracking
# Track usage
await expert_track_usage(
expert_id=result.expert.id,
task_id="auth-refactor-001",
outcome={
"success": True,
"adherence_score": 9.5,
"task_type": "refactoring"
}
)
# Get analytics
analytics = await expert_get_analytics(
expert_id=result.expert.id
)
Available Tools
Registry Management
- expert_registry_list - List experts with filtering
- expert_registry_get - Get expert details
- expert_registry_search - Search experts by query
Expert Selection
- expert_detect_technologies - Detect project technologies
- expert_select_optimal - Select best expert for task
- expert_assess_capability - Assess expert capability
- expert_smart_discover - AI-powered hybrid search (vector + graph)
Semantic Search
- expert_semantic_search - Search using natural language
- expert_find_similar - Find similar experts
Graph Operations
- expert_explore_network - Explore expert relationships
- expert_find_combinations - Find complementary expert teams
Context Operations
- expert_load_context - Load expert knowledge
- expert_inject_context - Enhance prompts with expertise
Analytics
- expert_track_usage - Record expert performance
- expert_get_analytics - Get performance metrics
Expert Registry Format
{
"version": "1.0.0",
"last_updated": "2025-06-30T00:00:00Z",
"experts": [
{
"id": "aws-amplify-gen2",
"name": "AWS Amplify Gen 2 Expert",
"version": "1.0.0",
"description": "Expert in AWS Amplify Gen 2 development",
"domains": ["backend", "cloud", "serverless"],
"specializations": [
{
"technology": "AWS Amplify Gen 2",
"frameworks": ["AWS CDK", "TypeScript"],
"expertise_level": "expert"
}
],
"workflow_compatibility": {
"feature": 0.95,
"bug-fix": 0.85,
"refactoring": 0.80,
"investigation": 0.70,
"article": 0.60
},
"constraints": [
"Use TypeScript-first approach",
"Follow AWS Well-Architected Framework"
],
"patterns": [
"Infrastructure as Code",
"Serverless-first architecture"
],
"quality_standards": [
"100% type safety",
"Comprehensive error handling"
]
}
]
}
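A registry file in this format can be sanity-checked before the server loads it. The sketch below is illustrative only; the field names are taken from the example above, not from the server's actual schema code.

```python
import json

# Fields every expert entry in the example format carries (assumption from the
# sample registry above, not an authoritative schema).
REQUIRED_FIELDS = {"id", "name", "version", "description", "domains", "specializations"}

def validate_registry(raw: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means it looks sane."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for i, expert in enumerate(data.get("experts", [])):
        missing = REQUIRED_FIELDS - expert.keys()
        if missing:
            problems.append(f"expert #{i} missing fields: {sorted(missing)}")
        # Workflow compatibility scores are fractions in [0, 1].
        for workflow, score in expert.get("workflow_compatibility", {}).items():
            if not 0.0 <= score <= 1.0:
                problems.append(f"expert #{i}: {workflow} score {score} outside [0, 1]")
    return problems
```

Running this in CI against registry/expert-registry.json catches malformed JSON before the server hot-reloads it.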
Expert Context Format
Expert context files are markdown documents in the expert-contexts/ directory:
# AWS Amplify Gen 2 Expert Context
## Constraints
- Use TypeScript for all backend code
- Follow AWS Well-Architected Framework principles
- Implement proper error handling and logging
## Patterns
- Infrastructure as Code using CDK
- Serverless-first architecture
- Event-driven communication
## Quality Standards
- 100% TypeScript type coverage
- Comprehensive error handling
- Unit test coverage > 80%
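Because context files follow this heading convention, their sections can be extracted with a few lines of code. A minimal sketch, assuming `## `-level section headings and `- ` bullets as in the example above (not the server's actual parser):

```python
def parse_context_sections(markdown: str) -> dict[str, list[str]]:
    """Split an expert context file into {section name: bullet items}."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in markdown.splitlines():
        if line.startswith("## "):          # section heading, e.g. "## Constraints"
            current = line[3:].strip()
            sections[current] = []
        elif line.startswith("- ") and current is not None:
            sections[current].append(line[2:].strip())
    return sections
```

This is the kind of structure the injection points in expert_inject_context (constraints, patterns, quality-criteria) map onto.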
Development
Setup Development Environment
# Clone repository
git clone https://github.com/agentience/expert-registry-mcp
cd expert-registry-mcp
# Create virtual environment with uv
uv venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install in development mode
uv pip install -e ".[dev]"
Run Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=expert_registry_mcp
# Run specific test file
pytest tests/test_registry.py
Code Quality
# Format code
black src tests
# Lint code
ruff check src tests
# Type checking
mypy src
Architecture
Multi-Layer Caching
- Registry Cache: 24-hour TTL for expert definitions
- Vector Cache: Embeddings cached until expert updates
- Graph Cache: Relationship queries cached for 10 minutes
- Selection Cache: 5-minute TTL for technology detection
- Context Cache: LRU cache for expert contexts (50 entries)
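The TTL-based layers above behave like a dict whose entries expire after a fixed age. A simplified sketch of the idea, not the server's actual cache implementation:

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire ttl_seconds after being set."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value

# e.g. a 5-minute layer for technology detection:
# detection_cache = TTLCache(ttl_seconds=300)
```

The LRU context-cache layer would instead evict by recency of use (functools.lru_cache-style) rather than by age.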
Database Integration
- ChromaDB: Embedded vector database for semantic search
- Multiple collections for different embedding types
- Automatic embedding generation with sentence-transformers
- Neo4j: Graph database for relationship modeling
- Expert-Technology-Task relationships
- Team synergy calculations
- Evolution tracking
Performance Features
- Vector Indices: Annoy indices for ultra-fast similarity search
- Precomputed Combinations: Common expert pairs cached
- Batch Operations: Efficient bulk processing
- Smart Invalidation: Targeted cache updates
File Watching
- Uses watchdog for cross-platform file monitoring
- Automatic registry reload and database sync
- No server restart required for updates
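The server relies on watchdog's OS-level events for this. The reload-on-change idea itself can be illustrated with a dependency-free polling sketch (watchdog replaces the polling with filesystem notifications; this is not the server's code):

```python
import json
from pathlib import Path

class RegistryReloader:
    """Reload the registry JSON whenever its modification time changes."""
    def __init__(self, registry_path: str):
        self.path = Path(registry_path)
        self._last_mtime = None
        self.registry = None

    def poll(self) -> bool:
        """Check the file once; returns True if a reload happened."""
        mtime = self.path.stat().st_mtime_ns
        if mtime == self._last_mtime:
            return False  # unchanged since last poll
        self.registry = json.loads(self.path.read_text())
        self._last_mtime = mtime
        return True
```

With watchdog, the same reload logic would run inside an event handler's on_modified callback instead of a polling loop.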
Troubleshooting
Common Issues
- Expert not found
  - Verify expert ID in registry
  - Check file paths are correct
  - Ensure registry JSON is valid
- Context file missing
  - Check expert-contexts directory
  - Verify filename matches expert ID
  - Ensure .md extension
- Cache not updating
  - File watcher may need restart
  - Check file permissions
  - Verify EXPERT_SYSTEM_PATH
Debug Mode
Enable debug logging:
export FASTMCP_DEBUG=1
expert-registry-mcp
Advanced Features
Semantic Search
The system uses ChromaDB to enable natural language queries:
# Find experts by meaning, not just keywords
results = await expert_semantic_search(
query="implement secure authentication with cloud integration",
search_mode="hybrid"
)
Relationship Exploration
Neo4j powers sophisticated relationship queries:
# Explore expert networks
network = await expert_explore_network(
start_expert_id="aws-amplify-gen2",
depth=2,
relationship_types=["SPECIALIZES_IN", "COMPATIBLE_WITH"]
)
Team Formation
AI-powered team composition:
# Find complementary expert teams
teams = await expert_find_combinations(
requirements=["AWS Amplify", "React", "DynamoDB"],
team_size=3
)
Contributing
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Run tests and linting
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
MIT License - see LICENSE file for details