data_narrator_mcp

sandsiv/data_narrator_mcp

Insight Digger MCP is an enterprise-grade Model Context Protocol (MCP) system designed for data analysis with integration capabilities for Claude Desktop.

Tools
  1. list_sources

    Lists available data sources for analysis.

  2. get_source_structure

    Retrieves the structure of a specified data source.

  3. prepare_analysis_configuration

    Prepares the configuration for data analysis.

  4. execute_analysis_from_config

    Executes analysis based on the prepared configuration.

Disclaimer: this project and its documentation were created with extensive use of AI and have received only basic code and content review. They are in a beta state as part of a proof-of-concept (POC) project, and the documentation may contain errors.

Insight Digger MCP

Enterprise-grade Model Context Protocol (MCP) system for data analysis with Claude Desktop integration.

Architecture Overview

This project provides a sophisticated 3-layer MCP architecture designed for enterprise environments:

  1. MCP Bridge ↔ MCP Client Flask API (Custom HTTP REST endpoints)
  2. MCP Client Flask API ↔ MCP Server subprocess (Standard MCP protocol)
  3. MCP Server ↔ Backend Data API (HTTP calls to enterprise backend)

Key Enterprise Features

  • 🔐 Dynamic JWT Authentication: 14-day JWT tokens with session management
  • 🧠 Intelligent Caching: Parameter caching and auto-injection for efficient workflows
  • 📋 Workflow Guidance: LLM-optimized tool orchestration with conversation management
  • 👥 Multi-User Support: Centralized service with session isolation
  • 🏢 Enterprise Integration: Compatible with existing authentication and monitoring systems

Setup Options

Option 1: Claude Desktop Integration (Recommended)

For end users who want to use Claude Desktop with Insight Digger:

1. Install the NPX Bridge
npx @sandsiv/data-narrator-mcp
2. Configure Claude Desktop

Add to your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "data-narrator": {
      "command": "npx",
      "args": ["-y", "@sandsiv/data-narrator-mcp@1.0.0"],
      "env": {
        "MCP_CLIENT_URL": "https://your-mcp-service.com"
      }
    }
  }
}

Note: The MCP_CLIENT_URL environment variable is optional. By default, the system will use Sandsiv's hosted MCP service. Only provide this variable if you're deploying your own version of the data-narrator-mcp service.

3. Usage in Claude Desktop
  1. Authenticate first: Use the setup_authentication tool with your API URL and JWT token
  2. Start analysis: Begin with list_sources to see available data
  3. Follow the workflow: The system guides you through multi-step analysis processes

Option 2: Direct API Integration (For developers)

For custom integrations or testing:

1. Start the MCP Services
# Install dependencies
./scripts/setup/install_dependencies.sh

# Activate virtual environment
source venv/bin/activate

# Start the Flask API service  
npm run dev:flask
# OR
python src/python/scripts/start_flask_api.py
2. Use the REST API
# Initialize session
curl -X POST http://localhost:5000/init \
  -H "Content-Type: application/json" \
  -d '{"session_id": "test-session", "apiUrl": "https://your-api.com", "jwtToken": "your-jwt"}'

# Get available tools
curl -X POST http://localhost:5000/tools \
  -H "Content-Type: application/json" \
  -d '{"session_id": "test-session"}'

# Call a tool
curl -X POST http://localhost:5000/call-tool \
  -H "Content-Type: application/json" \
  -d '{"session_id": "test-session", "tool": "list_sources", "params": {}}'
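
For custom integrations, the same three calls can be scripted. Below is a minimal Python sketch using the requests library; the endpoints and payload fields mirror the curl examples above, while the base URL, API URL, and JWT token are placeholders you must replace with your own values.

# Minimal sketch of the REST flow above (assumes the Flask API on localhost:5000)
import uuid

import requests

BASE_URL = "http://localhost:5000"   # MCP Client Flask API
session_id = str(uuid.uuid4())       # one session per client instance

# 1. Initialize the session with your backend credentials
requests.post(f"{BASE_URL}/init", json={
    "session_id": session_id,
    "apiUrl": "https://your-api.com",
    "jwtToken": "your-jwt",
}).raise_for_status()

# 2. Discover the tools exposed for this session
tools = requests.post(f"{BASE_URL}/tools", json={"session_id": session_id}).json()

# 3. Call a tool, e.g. list the available data sources
sources = requests.post(f"{BASE_URL}/call-tool", json={
    "session_id": session_id,
    "tool": "list_sources",
    "params": {},
}).json()
print(tools, sources)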

Development Setup

Prerequisites

  • Python 3.8+
  • Node.js 18+ (for NPX bridge)
  • Access to Insight Digger backend API

Local Development

# Clone the repository
git clone <repository-url>
cd insight_digger_mcp

# Install all dependencies
./scripts/setup/install_dependencies.sh --dev

# Activate virtual environment
source venv/bin/activate

# Run tests
npm test
# OR separately:
npm run test:python
npm run test:nodejs

Testing the NPX Bridge Locally

# Start your MCP client service
npm run dev:flask

# In another terminal, test the bridge
npm run dev:bridge
# Use the MCP Inspector or Claude Desktop to test

Authentication Flow

JWT Token Management

  • Lifetime: 14 days
  • Refresh: Through the main platform web UI (outside MCP scope)
  • Validation: Bridge handles expired tokens by requesting re-authentication

Session Management

  • Single Session: One active session per bridge instance
  • Session ID: UUID generated for each bridge startup
  • Isolation: Multiple Claude Desktop instances use separate sessions

Tools & Workflow

Available Analysis Tools

The system provides LLM-optimized tools for:

  • 📊 Data Source Discovery: list_sources, get_source_structure
  • ⚙️ Analysis Configuration: prepare_analysis_configuration
  • 🚀 Execution: execute_analysis_from_config
  • 📈 Results: Interactive dashboards and summaries

Intelligent Caching

  • Parameter Injection: Previously fetched data is automatically included in subsequent calls (see the sketch after this list)
  • Workflow Memory: System remembers source selections, configurations, and analysis state
  • Efficiency: LLM doesn't need to repeat large data structures between steps
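
To make the caching idea concrete, here is a simplified, purely illustrative sketch; the real cache lives inside the MCP Client service, and the class and key names below (ParameterCache, sourceStructure) are hypothetical.

# Hypothetical illustration of parameter caching and auto-injection
class ParameterCache:
    """Remembers results of earlier tool calls within one session."""

    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value

    def inject(self, params):
        # Fill in cached values the caller did not supply explicitly, so the
        # LLM never has to repeat large data structures between steps.
        return {**self._store, **params}

cache = ParameterCache()
cache.remember("sourceStructure", {"columns": ["region", "revenue"]})

# A later tool call only needs the new information; cached data is injected.
call_params = cache.inject({"question": "Revenue by region?"})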

Error Handling

  • Authentication Errors: Clear guidance for JWT/URL validation failures
  • Tool Errors: Contextual error messages from backend systems
  • Session Errors: Automatic cleanup and re-authentication prompts (a client-side sketch follows this list)
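
For developers calling the Flask API directly, the sketch below shows one way a client might react to an authentication failure. It assumes the /init and /call-tool endpoints documented above and that an expired or invalid JWT is reported with HTTP 401 (as listed under Security Features); prompt_for_new_jwt is a placeholder, not part of this project. Claude Desktop users get equivalent behaviour from the bridge automatically.

# Sketch of client-side re-authentication against the MCP Client Flask API
import requests

def prompt_for_new_jwt():
    # Placeholder: a real client would ask the user for a fresh token,
    # obtained through the main platform web UI.
    raise NotImplementedError("Obtain a fresh JWT from the platform web UI")

def call_tool(base_url, session_id, tool, params, api_url):
    payload = {"session_id": session_id, "tool": tool, "params": params}
    resp = requests.post(f"{base_url}/call-tool", json=payload)
    if resp.status_code == 401:
        # Token expired or invalid: obtain a fresh JWT and re-initialize.
        requests.post(f"{base_url}/init", json={
            "session_id": session_id,
            "apiUrl": api_url,
            "jwtToken": prompt_for_new_jwt(),
        }).raise_for_status()
        resp = requests.post(f"{base_url}/call-tool", json=payload)
    resp.raise_for_status()
    return resp.json()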

Configuration

Environment Variables

  • MCP_CLIENT_URL: URL of the MCP Client Flask API service
  • INSIGHT_DIGGER_API_URL: Backend API URL (configured in MCP server layer)

Service Configuration

The MCP Server (mcp_server.py) connects to your backend API using configuration provided during the /init call.
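
As an illustration only (the internals of mcp_server.py may differ), per-session settings received at /init could be carried to backend calls roughly like this; the Bearer authorization header is an assumption:

# Illustrative sketch: scoping backend requests to one session's credentials
from dataclasses import dataclass

import requests

@dataclass
class SessionConfig:
    api_url: str     # the apiUrl passed at /init (or INSIGHT_DIGGER_API_URL)
    jwt_token: str   # the 14-day JWT supplied by the user

def backend_get(config: SessionConfig, path: str):
    # Assumed header scheme; every call uses only this session's credentials.
    return requests.get(
        f"{config.api_url.rstrip('/')}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {config.jwt_token}"},
        timeout=30,
    )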

Documentation

Detailed guides are available in the docs/ folder:

  • Bridge architecture
  • Integration patterns
  • Client development guide
  • Server development guide

Production Deployment

Service Deployment

# Install as systemd service (Linux)
sudo cp data-narrator-mcp.service /etc/systemd/system/
sudo systemctl enable data-narrator-mcp
sudo systemctl start data-narrator-mcp

NPX Package Publishing

# Build and publish the bridge package
npm version patch
npm publish --access public

Monitoring

  • Service logs: journalctl -u data-narrator-mcp -f
  • Bridge logs: Console output in Claude Desktop
  • Session tracking: All sessions logged with UUIDs

Security & Production Readiness

✅ Status: Ready for external publication
🔐 Security: Comprehensive credential validation implemented
📊 Performance: Optimized with session reuse and direct validation

Security Features

  • Immediate credential validation at the /init endpoint (illustrated by the sketch after this list)
  • Session reuse optimization - no redundant validation calls
  • Proper HTTP status codes (401 for auth failures, 500 for server errors)
  • Input validation for API URLs and JWT tokens
  • Resource efficiency - MCP servers created only for valid credentials
  • 5-second timeout for validation requests
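
The sketch below shows how these checks could fit together in a Flask handler. It is not the production code: the validation request against the backend API and the Bearer header are assumptions, but the endpoint name, the 5-second timeout, and the 401/500 status codes follow the list above.

# Sketch of immediate credential validation at /init (illustrative only)
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

@app.route("/init", methods=["POST"])
def init_session():
    data = request.get_json(force=True)
    api_url, jwt_token = data.get("apiUrl"), data.get("jwtToken")
    if not api_url or not jwt_token:
        return jsonify({"error": "apiUrl and jwtToken are required"}), 401
    try:
        # Validate credentials immediately, with a 5-second timeout, before
        # any MCP server subprocess is created for this session.
        check = requests.get(
            api_url,
            headers={"Authorization": f"Bearer {jwt_token}"},  # assumed scheme
            timeout=5,
        )
        if check.status_code in (401, 403):
            return jsonify({"error": "Invalid credentials"}), 401
    except requests.RequestException as exc:
        return jsonify({"error": str(exc)}), 500
    # ... create the MCP server subprocess for the validated session here ...
    return jsonify({"status": "ok"})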

Security Considerations

  • JWT Tokens: Never logged or stored permanently
  • Session Isolation: Proper cleanup prevents cross-session data leakage
  • HTTPS Required: All production communications must use HTTPS
  • Enterprise Auth: Integrates with existing authentication systems
  • Immediate Auth Feedback: Invalid credentials rejected in <5 seconds
  • Resource Protection: No MCP instances created for invalid credentials

See the security documentation in the docs/ folder for details.

Support

For issues or questions:

  1. Check the documentation in the docs/ folder
  2. Review the service logs for error details
  3. Verify JWT token validity and API connectivity
  4. Ensure MCP Client service is running and accessible

License

MIT License - See LICENSE file for details.