Dark Matter MCP (Model Context Protocol)

A sophisticated full-stack application for managing MCP (Model Context Protocol) server instances with advanced OTP authentication, real-time communication, server lifecycle management, and integrated Ollama AI chat capabilities.

🚀 Features

  • Secure Authentication: Email-based OTP authentication system with JWT tokens
  • Server Management: Full lifecycle management for MCP server instances
  • Ollama AI Integration: Production-ready local LLM integration with streaming support
  • Dual Chat Modes: Company Chat (shared knowledge with RAG) and MCP Chat (per-server conversations)
  • Vector Database: pgvector integration for semantic search and RAG capabilities
  • Real-time Communication: WebSocket integration for live updates
  • Modern UI: React with TypeScript, Tailwind CSS, and streaming chat interfaces
  • Production Ready: Docker containerization and cloud deployment configuration

🛠 Tech Stack

Frontend

  • React 18 with TypeScript
  • Vite for fast development and building
  • Tailwind CSS for styling
  • Framer Motion for animations
  • Axios for HTTP requests
  • WebSocket for real-time communication

Backend

  • FastAPI with async support
  • SQLModel for database ORM with pgvector
  • PostgreSQL with vector similarity search
  • Redis for caching and sessions
  • Ollama for local LLM inference
  • JWT authentication
  • SMTP Email system with OTP
  • WebSocket support

📋 Prerequisites

  • Node.js 18+ and npm/pnpm
  • Python 3.12+
  • PostgreSQL 15+ with pgvector extension
  • Redis server
  • Ollama (for local LLM inference)
  • Docker and Docker Compose (recommended)
  • Email account with SMTP access (Gmail recommended)
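
A quick way to confirm the required tooling is present (exact versions will vary):

node --version          # 18 or newer
python --version        # 3.12 or newer
psql --version          # PostgreSQL client 15+
redis-cli ping          # replies PONG if Redis is running
ollama --version        # Ollama CLI
docker compose version  # Docker Compose plugin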

🚀 Quick Start

1. Clone and Configure

git clone <repository-url>
cd Dark-Matter-MCP

# Copy environment template and configure
cp .env.example .env

2. Configure Environment (Required)

Edit .env file with your settings:

# Database Configuration
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/darkmatter_mcp

# Redis Configuration
REDIS_URL=redis://localhost:6379/0

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434

# Gmail SMTP Configuration
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-email@gmail.com
SMTP_PASS=your-gmail-app-password
SMTP_USER_SEND_FROM=your-email@gmail.com

# JWT Configuration (generate secure keys)
JWT_SECRET_KEY=your-super-secret-jwt-key
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=15
JWT_REFRESH_TOKEN_EXPIRE_DAYS=7
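
The JWT secret should be a long random value. Two common ways to generate one (either works):

# Generate a 64-character hex secret for JWT_SECRET_KEY
openssl rand -hex 32
python -c "import secrets; print(secrets.token_hex(32))"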

For Gmail Setup:

  1. Enable 2-factor authentication on your Google account
  2. Go to Google Account Settings → Security → App passwords
  3. Generate a new app password for "Mail"
  4. Use this app password (not your regular password) in SMTP_PASS

3. Install Dependencies

# Frontend dependencies
npm install

# Backend dependencies
cd backend
pip install -r requirements.txt
cd ..

4. Initialize Database

# Run database migrations (PostgreSQL + pgvector)
cd backend
alembic upgrade head
cd ..
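
If the migration fails because the vector type is unknown, the pgvector extension is probably not enabled yet. Assuming the local database from the example DATABASE_URL, it can be enabled with:

psql postgresql://postgres:postgres@localhost:5432/darkmatter_mcp -c "CREATE EXTENSION IF NOT EXISTS vector;"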

5. Setup Ollama (Required for AI Features)

# Install Ollama
curl https://ollama.ai/install.sh | sh

# Start Ollama server
ollama serve

# Install required models (in another terminal)
ollama pull llama3.2:3b        # For general chat
ollama pull nomic-embed-text   # For embeddings/RAG
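
To confirm the models are available and the Ollama HTTP API is reachable (/api/tags lists installed models):

ollama list
curl http://localhost:11434/api/tags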

6. Run the Application

Option A: Docker Mode (Recommended)
# Start all services (PostgreSQL, Redis, Ollama, Backend, Frontend)
docker-compose up -d

# View logs
docker-compose logs -f

# Load sample company documents (optional)
docker-compose exec backend python scripts/load_company_docs.py

# Stop services
docker-compose down

Option B: Development Mode
# Terminal 1: Start PostgreSQL + Redis + Ollama (Docker)
docker-compose up postgres redis ollama -d

# Terminal 2: Start Backend
cd backend
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 3: Start Frontend
npm run dev

# Terminal 4: Load sample company documents (optional)
cd backend
python scripts/load_company_docs.py
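
Once all services are up, a quick smoke test (ports assume the defaults used above; the Vite dev server prints its own URL, typically http://localhost:5173):

# Backend + Ollama connectivity
curl http://localhost:8000/api/v1/health/ollama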

๏ฟฝ Security Features

  • Email OTP Authentication: 6-digit codes with TTL and rate limiting
  • JWT Tokens: Access (15min) and refresh (7 days) tokens
  • Rate Limiting: Configurable limits on all endpoints
  • CORS Protection: Configurable allowed origins
  • Input Validation: Pydantic models with strict validation
  • Password Security: Secure hashing for sensitive data

๐Ÿ— Project Structure

├── components/              # React components
│   ├── icons/              # SVG icon components
│   ├── AddClientModal.tsx  # MCP client management
│   ├── CompanyChatWidget.tsx # Company chat with RAG
│   ├── McpChatWidget.tsx   # Per-server MCP chat
│   ├── LoginPage.tsx       # Authentication UI
│   ├── MainPage.tsx        # Main dashboard
│   ├── OtpInput.tsx        # OTP entry component
│   └── ...
├── backend/                # FastAPI backend
│   ├── app/
│   │   ├── main.py         # FastAPI application
│   │   ├── core/           # Configuration and security
│   │   ├── routes/         # API endpoints
│   │   │   ├── company_chat.py # Company chat API
│   │   │   └── mcp_chat.py     # MCP chat API
│   │   ├── db/
│   │   │   └── models.py   # Database models with pgvector
│   │   ├── services/       # Business logic
│   │   │   ├── companyChat.py  # Company chat service
│   │   │   └── mcpChat.py      # MCP chat service
│   │   ├── llm/            # LLM integration
│   │   │   └── ollamaClient.py # Production Ollama client
│   │   ├── clients/        # External service clients
│   │   │   └── mcpMemoryClient.py # MCP memory integration
│   │   └── schemas/        # Pydantic schemas
│   ├── alembic/            # Database migrations
│   ├── scripts/            # Utility scripts
│   │   └── load_company_docs.py # Document loader
│   ├── tests/              # Test files
│   └── requirements.txt    # Python dependencies
├── deploy/                 # Docker and deployment
│   ├── Dockerfile.backend
│   ├── Dockerfile.frontend
│   └── cloudbuild.yaml
├── .env.example           # Environment template
├── docker-compose.yml     # Docker services
└── package.json           # Node.js configuration

🔧 Configuration

Environment Variables

The application uses a comprehensive .env file for configuration. Key settings include:

  • API_BASE_URL: Backend API endpoint
  • DATABASE_URL: Database connection string
  • REDIS_URL: Redis connection string
  • SMTP_*: Email configuration
  • JWT_*: Authentication settings
  • RATE_LIMIT_*: API rate limiting
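
An illustrative fragment of such a .env (the rate-limit key below is a placeholder name; .env.example is the authoritative reference):

API_BASE_URL=http://localhost:8000
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/darkmatter_mcp
REDIS_URL=redis://localhost:6379/0
# Hypothetical key name -- confirm against .env.example
RATE_LIMIT_REQUESTS_PER_MINUTE=60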

Email Providers

While Gmail is recommended, you can use any SMTP provider:

# Outlook/Hotmail
SMTP_HOST=smtp-mail.outlook.com
SMTP_PORT=587

# Yahoo
SMTP_HOST=smtp.mail.yahoo.com
SMTP_PORT=587

# Custom SMTP
SMTP_HOST=your-smtp-server.com
SMTP_PORT=587

📖 API Documentation

Once the backend is running, interactive API documentation is available at the backend URL (FastAPI serves Swagger UI at /docs and ReDoc at /redoc by default, e.g. http://localhost:8000/docs in development).

🤖 AI Chat Features

Company Chat API
# Start a company chat conversation
POST /api/v1/chat/company
{
  "message": "What are our company policies?",
  "thread_id": "optional-thread-id",
  "model": "llama3.2:3b"
}

# Streaming response with RAG context
Response: Server-Sent Events (SSE) stream
data: {"type": "source", "content": {"title": "Policy Document", "content": "..."}}
data: {"type": "chunk", "content": "Based on company policies..."}
data: {"type": "done"}
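
To exercise the company chat endpoint from the command line, something like the following should work (a valid JWT access token is presumably required; the bearer-header form is an assumption):

curl -N -X POST http://localhost:8000/api/v1/chat/company \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <access-token>" \
  -d '{"message": "What are our company policies?", "model": "llama3.2:3b"}'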

MCP Chat API
# Chat with specific MCP server
POST /api/v1/chat/mcp/{server_id}
{
  "message": "Show me the system status",
  "thread_id": "optional-thread-id",
  "model": "llama3.2:3b"
}

# Clear MCP server conversation history
DELETE /api/v1/chat/mcp/{server_id}/clear?thread_id=optional-thread-id

Health Monitoring
# Check Ollama service health
GET /api/v1/health/ollama

# Check all services health
GET /api/v1/health/all

🔧 Frontend Integration

The React components provide streaming chat interfaces:

// Company Chat Widget
<CompanyChatWidget 
  onClose={() => setShowCompanyChat(false)}
  className="fixed bottom-4 right-4 w-96 h-96"
/>

// MCP Chat Widget
<McpChatWidget 
  server={selectedServer}
  onClose={() => setShowMcpChat(false)}
  className="fixed bottom-4 right-4 w-96 h-96"
/>

🧪 Testing

Unit Tests

# Frontend tests
npm test

# Backend tests
cd backend
pytest -v

# Backend tests with coverage
pytest --cov=app --cov-report=html

Integration Testing

Test the complete Ollama integration:

# Install test dependencies
pip install httpx

# Run integration test (with backend running)
python test_ollama_integration.py

# Test specific endpoints
python test_ollama_integration.py http://localhost:8000

Manual Testing

  1. Company Chat: Ask about company information to test RAG functionality
  2. MCP Chat: Test per-server conversations with different MCP servers
  3. Streaming: Verify real-time response streaming works correctly
  4. Health Checks: Monitor service health at /api/v1/health/all (see the example below)
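
For step 4, the health endpoints can also be hit directly (assuming the backend on localhost:8000):

curl http://localhost:8000/api/v1/health/all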

Load Testing

# Install load testing tools
pip install locust

# Create test scenarios (example)
locust -f backend/tests/load_test.py --host=http://localhost:8000
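
backend/tests/load_test.py is referenced above as an example; a minimal sketch of what such a Locust file could look like (it only exercises the health endpoint documented in this README and is not the project's actual load test):

# backend/tests/load_test.py -- minimal sketch
from locust import HttpUser, task, between

class DarkMatterUser(HttpUser):
    wait_time = between(1, 3)  # pause 1-3 seconds between tasks

    @task
    def check_health(self):
        # Aggregate health endpoint documented above
        self.client.get("/api/v1/health/all")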

🚀 Deployment

Google Cloud Run

The application includes complete Google Cloud deployment configuration:

# Build and deploy using Cloud Build
gcloud builds submit --config deploy/cloudbuild.yaml

# Or deploy manually
docker build -f deploy/Dockerfile.backend -t gcr.io/PROJECT_ID/backend .
docker build -f deploy/Dockerfile.frontend -t gcr.io/PROJECT_ID/frontend .
gcloud run deploy backend --image gcr.io/PROJECT_ID/backend
gcloud run deploy frontend --image gcr.io/PROJECT_ID/frontend

Environment Variables for Production

Update your production environment with:

ENV=production
DEBUG=false
DATABASE_URL=postgresql://user:pass@host:port/db
REDIS_URL=redis://production-redis:6379/0
ALLOWED_ORIGINS=https://yourdomain.com
SECRET_KEY=production-secret-key

๏ฟฝ Troubleshooting

Common Issues

  1. Email not sending: Check SMTP credentials and app password setup
  2. CORS errors: Verify ALLOWED_ORIGINS includes your frontend URL
  3. WebSocket connection failed: Ensure backend is accessible and CORS is configured
  4. Database errors: Check DATABASE_URL and ensure PostgreSQL is running
  5. Redis connection failed: Verify Redis server is running and accessible
  6. Ollama connection failed:
    • Ensure Ollama is running on the correct port (11434)
    • Check OLLAMA_BASE_URL environment variable
    • Verify required models are installed: ollama list
  7. pgvector extension missing:
    • Enable pgvector: run CREATE EXTENSION vector; in PostgreSQL (the extension must be installed on the server first)
    • Run migrations: alembic upgrade head
  8. Chat responses empty:
    • Check if company documents are loaded
    • Verify embedding model is available: ollama pull nomic-embed-text
  9. Streaming issues:
    • Check browser network tab for SSE connection errors
    • Verify FastAPI is handling streaming endpoints correctly

Debug Mode

Enable debug logging:

DEBUG=true
LOG_LEVEL=DEBUG

View logs:

# Docker logs
docker-compose logs -f backend

# Development logs
tail -f backend/app.log

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Ensure all tests pass
  6. Submit a pull request

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

🔗 Resources