SaptaDey/NexusMind
NexusMind is an advanced AI reasoning framework that uses graph structures for scientific research.
NexusMind

Intelligent Scientific Reasoning through Graph-of-Thoughts

Next-Generation AI Reasoning Framework for Scientific Research

Leveraging graph structures to transform how AI systems approach scientific reasoning
Overview
NexusMind leverages graph structures to perform sophisticated scientific reasoning. It implements the Model Context Protocol (MCP) to integrate with AI applications like Claude Desktop, providing an Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework designed for complex research tasks.
Key highlights:
- Process complex scientific queries using graph-based reasoning
- Dynamic confidence scoring with multi-dimensional evaluations
- Built with modern Python and FastAPI for high performance
- Dockerized for easy deployment
- Modular design for extensibility and customization
- Integration with Claude Desktop via MCP protocol
Key Features
8-Stage Reasoning Pipeline
graph TD
A[Stage 1: Initialization] --> B[Stage 2: Decomposition]
B --> C[Stage 3: Hypothesis/Planning]
C --> D[Stage 4: Evidence Integration]
D --> E[Stage 5: Pruning/Merging]
E --> F[Stage 6: Subgraph Extraction]
F --> G[Stage 7: Composition]
G --> H[Stage 8: Reflection]
A1[Create root node<br/>Set initial confidence<br/>Define graph structure] --> A
B1[Break into dimensions<br/>Identify components<br/>Create dimensional nodes] --> B
C1[Generate hypotheses<br/>Create reasoning strategy<br/>Set falsification criteria] --> C
D1[Gather evidence<br/>Link to hypotheses<br/>Update confidence scores] --> D
E1[Remove low-value elements<br/>Consolidate similar nodes<br/>Optimize structure] --> E
F1[Identify relevant portions<br/>Focus on high-value paths<br/>Create targeted subgraphs] --> F
G1[Synthesize findings<br/>Create coherent insights<br/>Generate comprehensive answer] --> G
H1[Evaluate reasoning quality<br/>Identify improvements<br/>Final confidence assessment] --> H
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#ffebee
style F fill:#f1f8e9
style G fill:#e3f2fd
style H fill:#fce4ec
The core reasoning process follows a sophisticated 8-stage pipeline:
1. Initialization
   - Creates root node from query with multi-dimensional confidence vector
   - Establishes initial graph structure with proper metadata
   - Sets baseline confidence across empirical, theoretical, methodological, and consensus dimensions
2. Decomposition
   - Breaks query into key dimensions: Scope, Objectives, Constraints, Data Needs, Use Cases
   - Identifies potential biases and knowledge gaps from the outset
   - Creates dimensional nodes with initial confidence assessments
3. Hypothesis/Planning
   - Generates 3-5 hypotheses per dimension with explicit falsification criteria
   - Creates detailed execution plans for each hypothesis
   - Tags with disciplinary provenance and impact estimates
4. Evidence Integration
   - Iteratively selects hypotheses based on confidence-to-cost ratio and impact
   - Gathers and links evidence using typed edges (causal, temporal, correlative)
   - Updates confidence vectors using Bayesian methods with statistical power assessment (see the sketch after this list)
5. Pruning/Merging
   - Removes nodes with low confidence and impact scores
   - Consolidates semantically similar nodes
   - Optimizes graph structure while preserving critical relationships
6. Subgraph Extraction
   - Identifies high-value subgraphs based on multiple criteria
   - Focuses on nodes with high confidence and impact scores
   - Extracts patterns relevant to the original query
7. Composition
   - Synthesizes findings into coherent narrative
   - Annotates claims with node IDs and edge types
   - Provides comprehensive answers with proper citations
8. Reflection
   - Performs comprehensive quality audit
   - Evaluates coverage, bias detection, and methodological rigor
   - Provides final confidence assessment and improvement recommendations
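The confidence update in Stage 4 can be pictured with a small, self-contained sketch. The names below (`ConfidenceVector`, `update_with_evidence`) are illustrative only and do not mirror the actual API in confidence_service.py; the real pipeline uses Bayesian updates weighted by statistical power, which this toy blend only approximates.

```python
# Illustrative sketch of a Stage 4 style update; names are hypothetical,
# not NexusMind's actual confidence_service API.
from dataclasses import dataclass

@dataclass
class ConfidenceVector:
    empirical: float
    theoretical: float
    methodological: float
    consensus: float

def update_with_evidence(prior: ConfidenceVector,
                         evidence_strength: float,
                         statistical_power: float) -> ConfidenceVector:
    """Blend a prior confidence vector with new evidence.

    Each affected dimension is pulled toward `evidence_strength`, weighted
    by the statistical power (0..1) of the study that produced the evidence.
    """
    def blend(prior_value: float) -> float:
        return round((1 - statistical_power) * prior_value
                     + statistical_power * evidence_strength, 3)

    return ConfidenceVector(
        empirical=blend(prior.empirical),
        theoretical=prior.theoretical,      # a single study leaves theory untouched
        methodological=blend(prior.methodological),
        consensus=prior.consensus,
    )

prior = ConfidenceVector(0.5, 0.6, 0.5, 0.4)
updated = update_with_evidence(prior, evidence_strength=0.8, statistical_power=0.7)
print(updated)  # empirical and methodological move toward 0.8
```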
Advanced Technical Capabilities
| Multi-Dimensional Confidence | Graph-Based Knowledge | MCP Integration | FastAPI Backend |
| Docker Deployment | Modular Design | Configuration Management | Type Safety |
| Interdisciplinary Bridge Nodes | Hyperedge Support | Statistical Power Analysis | Impact Estimation |
Core Features:
- Graph Knowledge Representation: Uses networkx to model complex relationships with hyperedges and multi-layer networks
- Dynamic Confidence Vectors: Four-dimensional confidence assessment (empirical support, theoretical basis, methodological rigor, consensus alignment)
- Interdisciplinary Bridge Nodes: Automatically connects insights across different research domains
- Advanced Edge Types: Supports causal, temporal, correlative, and custom relationship types
- Statistical Rigor: Integrated power analysis and effect size estimation
- Impact-Driven Prioritization: Focuses on high-impact research directions
- MCP Server: Seamless Claude Desktop integration via the Model Context Protocol
- High-Performance API: Modern FastAPI implementation with async support
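As a rough illustration of the graph representation, the sketch below models a hypothesis node, an evidence node, and a typed edge with networkx (the library named above). The attribute names ("edge_type", "confidence") are assumptions made for this example, not the project's graph_elements schema.

```python
# Minimal illustration of typed edges and multi-dimensional confidence
# stored as networkx node/edge attributes; attribute names are illustrative.
import networkx as nx

g = nx.DiGraph()
g.add_node("hypothesis:H1",
           label="Reduced microbiome diversity accelerates tumor progression",
           confidence={"empirical": 0.5, "theoretical": 0.7,
                       "methodological": 0.6, "consensus": 0.4})
g.add_node("evidence:E1", label="Cohort study, n=1200")

# Typed edge: the evidence supports the hypothesis via a correlative link
g.add_edge("evidence:E1", "hypothesis:H1", edge_type="correlative", weight=0.8)

for u, v, data in g.edges(data=True):
    print(u, "->", v, data["edge_type"])
```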
Technology Stack

Python 3.13+ | FastAPI | NetworkX | Docker | Pytest | Pydantic | Poetry | Uvicorn
Project Structure
NexusMind/
├── config/                        # Configuration files
│   ├── settings.yaml              # Application settings
│   ├── claude_mcp_config.json     # Claude MCP integration config
│   └── logging.yaml               # Logging configuration
│
├── src/asr_got_reimagined/        # Main source code
│   ├── api/                       # API layer
│   │   ├── routes/                # API route definitions
│   │   │   ├── mcp.py             # MCP protocol endpoints
│   │   │   ├── health.py          # Health check endpoints
│   │   │   └── graph.py           # Graph query endpoints
│   │   ├── schemas.py             # API request/response schemas
│   │   └── middleware.py          # API middleware
│   │
│   ├── domain/                    # Core business logic
│   │   ├── models/                # Domain models
│   │   │   ├── common.py          # Common types and enums
│   │   │   ├── graph_elements.py  # Node, Edge, Hyperedge models
│   │   │   ├── graph_state.py     # Graph state management
│   │   │   ├── confidence.py      # Confidence vector models
│   │   │   └── metadata.py        # Metadata schemas
│   │   │
│   │   ├── services/              # Business services
│   │   │   ├── got_processor.py       # Main GoT processing service
│   │   │   ├── evidence_service.py    # Evidence gathering and assessment
│   │   │   ├── confidence_service.py  # Confidence calculation service
│   │   │   ├── graph_service.py       # Graph manipulation service
│   │   │   └── mcp_service.py         # MCP protocol service
│   │   │
│   │   ├── stages/                # 8-stage pipeline implementation
│   │   │   ├── base_stage.py              # Abstract base stage
│   │   │   ├── stage_1_initialization.py  # Stage 1: Graph initialization
│   │   │   ├── stage_2_decomposition.py   # Stage 2: Query decomposition
│   │   │   ├── stage_3_hypothesis.py      # Stage 3: Hypothesis generation
│   │   │   ├── stage_4_evidence.py        # Stage 4: Evidence integration
│   │   │   ├── stage_5_pruning.py         # Stage 5: Pruning and merging
│   │   │   ├── stage_6_extraction.py      # Stage 6: Subgraph extraction
│   │   │   ├── stage_7_composition.py     # Stage 7: Answer composition
│   │   │   └── stage_8_reflection.py      # Stage 8: Quality reflection
│   │   │
│   │   └── utils/                 # Utility functions
│   │       ├── graph_utils.py         # Graph manipulation utilities
│   │       ├── confidence_utils.py    # Confidence calculation utilities
│   │       ├── statistical_utils.py   # Statistical analysis utilities
│   │       ├── bias_detection.py      # Bias detection algorithms
│   │       └── temporal_analysis.py   # Temporal pattern analysis
│   │
│   ├── infrastructure/            # Infrastructure layer
│   │   ├── database/              # Database integration
│   │   ├── cache/                 # Caching layer
│   │   └── external/              # External service integrations
│   │
│   ├── main.py                    # Application entry point
│   └── app_setup.py               # Application setup and configuration
│
├── tests/                         # Test suite
│   ├── unit/                      # Unit tests
│   │   ├── stages/                # Stage-specific tests
│   │   ├── services/              # Service tests
│   │   └── models/                # Model tests
│   ├── integration/               # Integration tests
│   └── fixtures/                  # Test fixtures and data
│
├── scripts/                       # Utility scripts
│   ├── setup_dev.py               # Development setup
│   ├── add_type_hints.py          # Type hint utilities
│   └── deployment/                # Deployment scripts
│
├── docs/                          # Documentation
│   ├── api/                       # API documentation
│   ├── architecture/              # Architecture diagrams
│   └── examples/                  # Usage examples
│
├── static/                        # Static assets
│   └── nexusmind-logo.png         # Application logo
│
├── Docker files & config
│   ├── Dockerfile                 # Docker container definition
│   ├── docker-compose.yml         # Multi-container setup
│   └── .dockerignore              # Docker ignore patterns
│
├── Configuration files
│   ├── pyproject.toml             # Python project configuration
│   ├── poetry.lock                # Dependency lock file
│   ├── mypy.ini                   # Type checking configuration
│   ├── pyrightconfig.json         # Python type checker config
│   ├── .pre-commit-config.yaml    # Pre-commit hooks
│   └── .gitignore                 # Git ignore patterns
│
└── Documentation
    ├── README.md                  # This file
    ├── CHANGELOG.md               # Version history
    ├── LICENSE                    # Apache 2.0 license
    └── CONTRIBUTING.md            # Contribution guidelines
Getting Started
Prerequisites
- Python 3.13+ (Docker image uses Python 3.13.3-slim-bookworm)
- Poetry: For dependency management
- Docker and Docker Compose: For containerized deployment
Installation and Setup (Local Development)
1. Clone the repository:

   git clone https://github.com/SaptaDey/NexusMind.git
   cd NexusMind

2. Install dependencies using Poetry:

   poetry install

   This creates a virtual environment and installs all necessary packages specified in pyproject.toml.

3. Activate the virtual environment:

   poetry shell

4. Configure the application:

   # Copy example configuration
   cp config/settings.example.yaml config/settings.yaml
   # Edit configuration as needed
   vim config/settings.yaml

5. Set up environment variables (optional):

   # Create .env file for sensitive configuration
   echo "LOG_LEVEL=DEBUG" > .env
   echo "API_HOST=0.0.0.0" >> .env
   echo "API_PORT=8000" >> .env

6. Run the development server:

   python src/asr_got_reimagined/main.py

   Alternatively, for more control:

   uvicorn asr_got_reimagined.main:app --reload --host 0.0.0.0 --port 8000

The API will be available at http://localhost:8000.
Docker Deployment
graph TB
subgraph "Development Environment"
A[Developer] --> B[Docker Compose]
end
subgraph "Container Orchestration"
B --> C[NexusMind Container]
B --> D[Monitoring Container]
B --> E[Database Container]
end
subgraph "NexusMind Application"
C --> F[FastAPI Server]
F --> G[ASR-GoT Engine]
F --> H[MCP Protocol]
end
subgraph "External Integrations"
H --> I[Claude Desktop]
H --> J[Other AI Clients]
end
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style F fill:#fff3e0
style G fill:#ffebee
style H fill:#f1f8e9
1. Quick Start with Docker Compose:

   # Build and run all services
   docker-compose up --build
   # For detached mode (background)
   docker-compose up --build -d
   # View logs
   docker-compose logs -f nexusmind

2. Individual Docker Container:

   # Build the image
   docker build -t nexusmind:latest .
   # Run the container
   docker run -p 8000:8000 -v $(pwd)/config:/app/config nexusmind:latest

3. Production Deployment:

   # Use production compose file
   docker-compose -f docker-compose.prod.yml up --build -d

4. Access the Services:
   - API Documentation: http://localhost:8000/docs
   - Health Check: http://localhost:8000/health
   - MCP Endpoint: http://localhost:8000/mcp
API Endpoints
Core Endpoints
- MCP Protocol: POST /mcp

  {
    "method": "process_query",
    "params": {
      "query": "Analyze the relationship between microbiome diversity and cancer progression",
      "confidence_threshold": 0.7,
      "max_stages": 8
    }
  }

- Health Check: GET /health

  {
    "status": "healthy",
    "version": "0.1.0",
    "timestamp": "2024-05-23T10:30:00Z"
  }
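For illustration, a minimal Python client for these two endpoints might look like the sketch below. It assumes a locally running server on port 8000 and uses the requests library; response fields beyond those documented above are not guaranteed.

```python
# Example client for the core endpoints, assuming a local server on port 8000.
# Requires `pip install requests`.
import requests

BASE_URL = "http://localhost:8000"

# Health check
health = requests.get(f"{BASE_URL}/health", timeout=10)
print(health.json())  # e.g. {"status": "healthy", ...}

# Submit a query through the MCP endpoint
payload = {
    "method": "process_query",
    "params": {
        "query": "Analyze the relationship between microbiome diversity "
                 "and cancer progression",
        "confidence_threshold": 0.7,
        "max_stages": 8,
    },
}
response = requests.post(f"{BASE_URL}/mcp", json=payload, timeout=300)
response.raise_for_status()
print(response.json())
```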
Advanced Endpoints
- Graph Query: POST /api/v1/graph/query

  {
    "query": "Research question or hypothesis",
    "parameters": {
      "disciplines": ["immunology", "oncology"],
      "confidence_threshold": 0.6,
      "include_temporal_analysis": true,
      "enable_bias_detection": true
    }
  }

- Graph State: GET /api/v1/graph/{session_id}
  - Retrieve the current state of a reasoning graph
  - Includes confidence scores, node relationships, and metadata

- Analytics: GET /api/v1/analytics/{session_id}
  - Get comprehensive metrics about the reasoning process
  - Includes performance stats, confidence trends, and quality measures

- Subgraph Extraction: POST /api/v1/graph/{session_id}/extract

  {
    "criteria": {
      "min_confidence": 0.7,
      "node_types": ["hypothesis", "evidence"],
      "include_causal_chains": true
    }
  }
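A hypothetical end-to-end session could chain these endpoints as sketched below. Note that reading a `session_id` field from the query response is an assumption made for this example, not documented behavior.

```python
# Sketch of a session against the advanced endpoints; the "session_id"
# response field is assumed, not documented above.
import requests

BASE_URL = "http://localhost:8000/api/v1"

# 1. Start a reasoning session
query = {
    "query": "Research question or hypothesis",
    "parameters": {
        "disciplines": ["immunology", "oncology"],
        "confidence_threshold": 0.6,
        "include_temporal_analysis": True,
        "enable_bias_detection": True,
    },
}
session = requests.post(f"{BASE_URL}/graph/query", json=query, timeout=300).json()
session_id = session["session_id"]  # assumed response field

# 2. Inspect the current graph state and analytics
state = requests.get(f"{BASE_URL}/graph/{session_id}", timeout=30).json()
metrics = requests.get(f"{BASE_URL}/analytics/{session_id}", timeout=30).json()

# 3. Extract the high-confidence subgraph
criteria = {
    "criteria": {
        "min_confidence": 0.7,
        "node_types": ["hypothesis", "evidence"],
        "include_causal_chains": True,
    }
}
subgraph = requests.post(f"{BASE_URL}/graph/{session_id}/extract",
                         json=criteria, timeout=60).json()
print(state, metrics, subgraph)
```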
Testing & Quality Assurance

| Testing | Type Checking | Linting | Coverage |
|---------|---------------|---------|----------|
| poetry run pytest<br/>poetry run pytest -v | poetry run mypy src/<br/>pyright src/ | poetry run ruff check .<br/>poetry run ruff format . | poetry run pytest --cov=src<br/>coverage html |
Development Commands
# Run full test suite with coverage
poetry run pytest --cov=src --cov-report=html --cov-report=term
# Run specific test categories
poetry run pytest tests/unit/stages/ # Stage-specific tests
poetry run pytest tests/integration/ # Integration tests
poetry run pytest -k "test_confidence" # Tests matching pattern
# Type checking and linting
poetry run mypy src/ --strict # Strict type checking
poetry run ruff check . --fix # Auto-fix linting issues
poetry run ruff format . # Format code
# Pre-commit hooks (recommended)
poetry run pre-commit install # Install hooks
poetry run pre-commit run --all-files # Run all hooks
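As a sketch of the unit-test style used under tests/unit/, the example below exercises a hypothetical confidence helper with pytest's parametrize; `normalize_confidence` is invented for illustration and is not an actual function in confidence_utils.py.

```python
# Illustrative unit test; the function under test is hypothetical.
import pytest

def normalize_confidence(values: list[float]) -> list[float]:
    """Clamp confidence scores into [0, 1]."""
    return [min(1.0, max(0.0, v)) for v in values]

@pytest.mark.parametrize(
    "raw, expected",
    [
        ([0.2, 0.9], [0.2, 0.9]),
        ([-0.1, 1.4], [0.0, 1.0]),
    ],
)
def test_normalize_confidence(raw, expected):
    assert normalize_confidence(raw) == expected
```

Such a test would be picked up by `poetry run pytest -k "normalize_confidence"`.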
Quality Metrics
- Type Safety:
  - Fully typed codebase with strict mypy configuration
  - Configured with mypy.ini and pyrightconfig.json
  - Fix logger type issues with: python scripts/add_type_hints.py
- Code Quality:
  - 95%+ test coverage target
  - Automated formatting with Ruff
  - Pre-commit hooks for consistent code quality
  - Comprehensive integration tests for the 8-stage pipeline
Configuration

Application Settings (config/settings.yaml)

# Core application settings
app:
  name: "NexusMind"
  version: "0.1.0"
  debug: false
  log_level: "INFO"

# API configuration
api:
  host: "0.0.0.0"
  port: 8000
  cors_origins: ["*"]

# ASR-GoT Framework settings
asr_got:
  max_stages: 8
  default_confidence_threshold: 0.6
  enable_bias_detection: true
  enable_temporal_analysis: true
  max_hypotheses_per_dimension: 5

# Graph settings
graph:
  max_nodes: 10000
  enable_hyperedges: true
  enable_multi_layer: true
  temporal_decay_factor: 0.1
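For illustration, these settings could be loaded and validated along the following lines; the `AsrGotSettings` model is a sketch written for this README, not the project's actual settings code.

```python
# Minimal sketch: load config/settings.yaml with PyYAML and validate the
# ASR-GoT section with a Pydantic model (illustrative, not the real models).
from pathlib import Path

import yaml
from pydantic import BaseModel

class AsrGotSettings(BaseModel):
    max_stages: int = 8
    default_confidence_threshold: float = 0.6
    enable_bias_detection: bool = True
    enable_temporal_analysis: bool = True
    max_hypotheses_per_dimension: int = 5

raw = yaml.safe_load(Path("config/settings.yaml").read_text())
asr_got = AsrGotSettings(**raw["asr_got"])
print(asr_got.max_stages, asr_got.default_confidence_threshold)
```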
MCP Configuration (config/claude_mcp_config.json)

{
  "name": "nexusmind",
  "description": "Advanced Scientific Reasoning with Graph-of-Thoughts",
  "version": "0.1.0",
  "endpoints": {
    "mcp": "http://localhost:8000/mcp"
  },
  "capabilities": [
    "scientific_reasoning",
    "graph_analysis",
    "confidence_assessment",
    "bias_detection"
  ]
}
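The values in this file can be read back programmatically, for example to confirm which endpoint and capabilities are advertised to an MCP client; the snippet below is purely illustrative.

```python
# Read the MCP config shown above and print the advertised endpoint
# and capabilities.
import json
from pathlib import Path

cfg = json.loads(Path("config/claude_mcp_config.json").read_text())
print(cfg["endpoints"]["mcp"])         # http://localhost:8000/mcp
print(", ".join(cfg["capabilities"]))
```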
Contributing

We welcome contributions! Please see our contribution guidelines in CONTRIBUTING.md for details.
Development Setup
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Install development dependencies: poetry install --with dev
- Make your changes and add tests
- Run the test suite: poetry run pytest
- Submit a pull request
Code Style
- Follow PEP 8 style guidelines
- Use type hints for all functions and methods
- Write comprehensive docstrings
- Maintain test coverage above 95%
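For instance, a helper that follows these guidelines might look like the sketch below; the function itself is hypothetical and exists only to show the expected style.

```python
# Example of the expected style: PEP 8 layout, full type hints, and a
# concise docstring. The helper is illustrative, not part of the codebase.
def impact_score(confidence: float, novelty: float, weight: float = 0.5) -> float:
    """Combine confidence and novelty into a single impact score in [0, 1].

    Args:
        confidence: Aggregated confidence for a node, in [0, 1].
        novelty: Estimated novelty of the finding, in [0, 1].
        weight: Relative weight given to confidence versus novelty.
    """
    return weight * confidence + (1 - weight) * novelty
```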
Documentation

- API documentation (docs/api/): Comprehensive API reference
- Architecture (docs/architecture/): System design and components
- Usage examples (docs/examples/): Practical usage scenarios
- Contributing guide (CONTRIBUTING.md): Contributing and development setup
License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Acknowledgments
- NetworkX community for graph analysis capabilities
- FastAPI team for the excellent web framework
- Pydantic for robust data validation
- The scientific research community for inspiration and feedback
Built with ❤️ for the scientific research community
NexusMind - Advancing scientific reasoning through intelligent graph structures