🚀 CryptoSentiment Intelligence MCP Server

Advanced AI-driven cryptocurrency sentiment analysis with quad-protocol support and multi-provider AI integration

✨ Features

🔄 Quad-Protocol Support

  • STDIO MCP - Claude Desktop integration
  • HTTP REST API - Standard REST endpoints
  • HTTP MCP - JSON-RPC 2.0 for n8n-nodes-mcp
  • WebSocket MCP - Real-time streaming analysis

🤖 Multi-Provider AI Integration (NEW)

  • Primary: Ollama Cloud with gpt-oss:120b model for cost-effective analysis
  • Fallback: Google Gemini with gemini-2.0-flash for reliability
  • Smart Failover: Automatic provider switching with exponential backoff (see the sketch after this list)
  • Enhanced Error Handling: Comprehensive retry logic and circuit breaker patterns
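
A minimal sketch of the failover pattern described in this list; the callPrimary/callFallback helpers and retry limits here are illustrative, not the server's actual internals:

type ProviderCall = (prompt: string) => Promise<string>;

// Retry the primary provider (Ollama Cloud) with exponential backoff,
// then switch to the fallback provider (Gemini) once retries are exhausted.
async function analyzeWithFailover(
  prompt: string,
  callPrimary: ProviderCall,
  callFallback: ProviderCall,
  maxRetries = 3
): Promise<string> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await callPrimary(prompt);
    } catch {
      const delayMs = 500 * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return callFallback(prompt);
}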

🧠 Advanced AI Analysis Frameworks

  1. Sentiment Fusion - Multi-source sentiment aggregation with ML adaptation
  2. Behavioral Network Analysis - Whale and influencer behavior patterns
  3. Multi-Modal Processing - Text, image, and video content analysis
  4. Predictive Impact Modeling - ML-based market impact forecasting
  5. Quantum Correlation - Cross-sector event cascade analysis

📊 Comprehensive Market Intelligence

  • Real-time cryptocurrency price integration
  • Multi-source news aggregation (RSS feeds, Reddit, social media)
  • Historical pattern matching and correlation analysis
  • Risk assessment and actionable recommendations
  • Geopolitical ripple effect analysis

🛡️ Production-Ready Features

  • Redis-backed caching with in-memory fallback (sketch after this list)
  • Rate limiting and API key authentication
  • Comprehensive logging and health monitoring
  • Graceful shutdown and connection management
  • Docker containerization with multi-stage builds
  • Enterprise-grade TypeScript with strict linting (0 errors)
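
The Redis-to-memory fallback mentioned in this list follows roughly the shape below; the interface and class names are illustrative, not the server's actual cache implementation:

// Illustrative only: a cache that prefers Redis but degrades to process memory.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

class FallbackCache {
  private memory = new Map<string, { value: string; expiresAt: number }>();

  constructor(private redis?: KeyValueStore) {}

  async get(key: string): Promise<string | null> {
    if (this.redis) {
      try {
        return await this.redis.get(key);
      } catch {
        // Redis unavailable: fall through to memory.
      }
    }
    const entry = this.memory.get(key);
    return entry && entry.expiresAt > Date.now() ? entry.value : null;
  }

  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    if (this.redis) {
      try {
        await this.redis.set(key, value, ttlSeconds);
        return;
      } catch {
        // Fall through to memory on Redis failure.
      }
    }
    this.memory.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}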

🎯 Quick Start

1. Environment Setup

# Clone the repository
git clone https://github.com/kaayaan-ai/crypto-sentiment-intelligence-mcp-server
cd crypto-sentiment-intelligence-mcp-server

# Setup environment interactively
npm run setup-env

# Or copy template manually
cp .env.example .env && nano .env

2. Required Environment Variables

# AI Provider Configuration - Ollama Cloud Primary
OLLAMA_API_KEY=your-ollama-cloud-api-key
OLLAMA_BASE_URL=https://ollama.com
OLLAMA_MODEL=gpt-oss:120b

# AI Provider Configuration - Gemini Fallback
GEMINI_API_KEY=your-google-gemini-api-key
GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta
GEMINI_MODEL=gemini-2.0-flash

# Database Connections
MONGODB_URL=mongodb://localhost:27017/crypto_sentiment
REDIS_URL=redis://localhost:6379

# Optional: Price API
COINGECKO_API_KEY=your-coingecko-api-key

3. Quick Start Options

Option A: Docker (Recommended)
# Start with Docker Compose
docker-compose up -d

# Check logs
docker-compose logs -f crypto-sentiment-mcp

# Validate environment
npm run validate-env

Option B: Native Installation
# Install dependencies
npm install

# Build TypeScript
npm run build

# Validate environment and configuration
npm run validate-env

# Start STDIO MCP (Claude Desktop)
npm run dev

# Or start HTTP server (REST + MCP + WebSocket)
npm run http-server

🔧 Usage Examples

Claude Desktop Integration

Add to your Claude Desktop MCP configuration:

{
  "mcpServers": {
    "crypto-sentiment": {
      "command": "npx",
      "args": ["-s", "user", "crypto-sentiment-intelligence-mcp-server"]
    }
  }
}

n8n Integration

Use the MCP Client node with these settings:

{
  "connection": {
    "type": "HTTP MCP",
    "url": "http://localhost:4004/mcp"
  },
  "tools": ["analyze_crypto_sentiment"]
}

REST API Usage

# Analyze latest crypto sentiment
curl -X POST http://localhost:4004/tools/analyze_crypto_sentiment \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Bitcoin ETF news",
    "analysis_depth": "standard",
    "include_prices": true
  }'

WebSocket Real-time Analysis

const ws = new WebSocket('ws://localhost:4004/mcp/ws');

ws.onopen = () => {
  // Initialize the MCP session first
  ws.send(JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'initialize',
    params: {
      protocolVersion: '2024-11-05',
      capabilities: {},
      clientInfo: { name: 'MyApp', version: '1.0.0' }
    }
  }));
};

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);

  // Once initialization succeeds, request analysis with real-time updates
  if (message.id === 1 && message.result) {
    ws.send(JSON.stringify({
      jsonrpc: '2.0',
      id: 2,
      method: 'tools/call',
      params: {
        name: 'analyze_crypto_sentiment',
        arguments: {
          query: 'latest crypto sentiment',
          stream_updates: true
        }
      }
    }));
    return;
  }

  // Handle streaming updates and the final result
  console.log(message);
};

📋 Analysis Tool Parameters

analyze_crypto_sentiment

Parameters:

  • query (required) - Search query or "latest" for general analysis
  • analysis_depth - "quick" | "standard" | "deep" (default: "standard")
  • max_news_items - 5-50 (default: 15)
  • time_range - "1h" | "6h" | "12h" | "24h" (default: "6h")
  • include_prices - Include current price data (default: true)
  • focus_coins - Array of specific coins to prioritize (optional)
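
Example request using every parameter (a fetch sketch against the REST endpoint; the response's data wrapper follows the client examples further below):

const response = await fetch('http://localhost:4004/tools/analyze_crypto_sentiment', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: 'Bitcoin ETF news',
    analysis_depth: 'deep',      // "quick" | "standard" | "deep"
    max_news_items: 25,          // 5-50
    time_range: '24h',           // "1h" | "6h" | "12h" | "24h"
    include_prices: true,
    focus_coins: ['BTC', 'ETH']  // optional: coins to prioritize
  })
});

const result = await response.json();
console.log(result.data.overall_sentiment, result.data.confidence_score);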

Example Response:

{
  "overall_sentiment": "BULLISH",
  "confidence_score": 0.87,
  "processing_time_ms": 4200,
  "analysis_timestamp": "2025-09-23T17:15:00Z",
  "ai_model_used": "gpt-oss:120b",
  "market_signals": [
    {
      "headline": "BlackRock increases Bitcoin ETF allocation by 40%",
      "sentiment": "STRONGLY_POSITIVE",
      "affected_coins": ["BTC", "ETH"],
      "impact_prediction": {
        "timeframe": "SHORT_TERM",
        "magnitude": "HIGH",
        "direction": "BULLISH"
      },
      "price_context": {
        "BTC": { "current": 45250.33, "change_24h": 2.4 }
      },
      "ai_analysis": "Institutional adoption signal suggests sustained buying pressure",
      "recommendation": "STRONG_BUY_SIGNAL",
      "confidence_score": 0.92
    }
  ],
  "behavioral_insights": {
    "whale_activity": "INCREASED_ACCUMULATION",
    "social_sentiment": "FEAR_TO_GREED_TRANSITION",
    "influencer_alignment": 0.73,
    "volume_patterns": "ACCUMULATION",
    "retail_sentiment": "BULLISH"
  },
  "risk_assessment": {
    "level": "MEDIUM",
    "factors": ["regulatory_uncertainty", "macro_correlation"],
    "mitigation": "Consider dollar-cost averaging and stop-loss levels",
    "probability": 0.65,
    "impact_severity": "MODERATE"
  },
  "actionable_recommendations": [
    "Consider BTC accumulation on next dip below $44,000",
    "Monitor Ethereum for breakout above $2,850 resistance",
    "Set stop-loss at 8% below entry for risk management"
  ],
  "data_sources_count": 15
}

🐳 Docker Deployment

Full Stack Deployment

# Deploy everything (MCP Server + Redis + MongoDB + Monitoring)
docker-compose up -d

# Scale HTTP servers
docker-compose up -d --scale crypto-sentiment-http=3

# Enable monitoring (Prometheus + Grafana)
docker-compose --profile monitoring up -d

Kaayaan Stack Integration

# Add to your existing docker-compose.yml
services:
  crypto-sentiment:
    image: kaayaan/crypto-sentiment-mcp:latest
    networks:
      - kaayaan_default
    environment:
      # AI Provider Configuration - Ollama Cloud Primary
      - OLLAMA_API_KEY=${OLLAMA_API_KEY}
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
      - OLLAMA_MODEL=${OLLAMA_MODEL}
      # AI Provider Configuration - Gemini Fallback
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - GEMINI_BASE_URL=${GEMINI_BASE_URL}
      - GEMINI_MODEL=${GEMINI_MODEL}
      # Database Configuration
      - MONGODB_URL=${MONGODB_URL}
      - REDIS_URL=${REDIS_URL}
      - COINGECKO_API_KEY=${COINGECKO_API_KEY}

🛠️ Development

Local Development Setup

# Install dependencies
npm install

# Start development with auto-reload
npm run watch

# Run in development mode
npm run dev

# Test the server with comprehensive suite
npm test

# Run linting and type checking
npm run test:quick

# Format code
npm run format

Environment Validation

# Validate environment configuration
npm run validate-env

# Verbose validation with detailed output
npm run validate-env:verbose

# Interactive environment setup
npm run setup-env

Testing Individual Components

# Test STDIO MCP protocol
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}' | node build/index.js

# Test tool execution
echo -e '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}\n{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"analyze_crypto_sentiment","arguments":{"query":"Bitcoin news"}}}' | node build/index.js

MCP Inspector

# Launch MCP Inspector for interactive testing
npm run inspector

📊 API Endpoints

Health & Status

  • GET /health - Health check with service status
  • GET /status - Server status and capabilities
  • GET /metrics - Performance metrics

Tools

  • GET /tools - List available tools with schemas
  • GET /tools/{name}/schema - Get specific tool schema
  • POST /tools/analyze_crypto_sentiment - Execute sentiment analysis

MCP Protocol

  • POST /mcp - JSON-RPC 2.0 MCP endpoint
  • POST /mcp/batch - Batch request support
  • WebSocket /mcp/ws - WebSocket MCP protocol

Documentation

  • GET /docs - API documentation with examples
  • GET / - Server information and available endpoints

🔐 Security

Authentication

# Set API key requirement
export REQUIRED_API_KEY="your-secure-api-key"

# Include in requests
curl -H "x-api-key: your-secure-api-key" http://localhost:4004/tools

Rate Limiting

  • Default: 100 requests per minute per IP
  • Configurable via RATE_LIMIT_MAX_REQUESTS
  • Returns 429 with a Retry-After header (handled in the client example below)
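
A client-side sketch for honoring the 429 response described above; it assumes the Retry-After value is in seconds, which may differ from this server's exact behavior:

async function postWithRateLimitRetry(url: string, apiKey: string, body: unknown): Promise<Response> {
  const send = () =>
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
      body: JSON.stringify(body)
    });

  const response = await send();
  if (response.status !== 429) return response;

  // Wait for the server-suggested interval (seconds), then retry once.
  const retryAfterSeconds = Number(response.headers.get('retry-after') ?? '60');
  await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  return send();
}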

Security Headers

  • Helmet.js for security headers
  • CORS configuration
  • Input validation with Zod schemas (see the sketch after this list)
  • SQL/NoSQL injection protection
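
An illustrative Zod schema reconstructed from the documented analyze_crypto_sentiment parameters (not the server's actual source):

import { z } from 'zod';

// Mirrors the documented tool parameters and their defaults.
const analyzeCryptoSentimentInput = z.object({
  query: z.string().min(1),
  analysis_depth: z.enum(['quick', 'standard', 'deep']).default('standard'),
  max_news_items: z.number().int().min(5).max(50).default(15),
  time_range: z.enum(['1h', '6h', '12h', '24h']).default('6h'),
  include_prices: z.boolean().default(true),
  focus_coins: z.array(z.string()).optional()
});

// Rejects malformed input before it reaches the AI pipeline.
const parsed = analyzeCryptoSentimentInput.parse({ query: 'Bitcoin ETF news' });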

📈 Performance

Optimizations

  • Redis caching with memory fallback
  • Connection pooling for databases
  • Async processing throughout
  • Smart batching for external APIs
  • Response compression for WebSocket
  • Multi-provider AI with intelligent failover

Benchmarks

  • Response time: < 6 seconds for standard analysis (improved with Ollama Cloud)
  • Concurrent requests: 1000+ connections supported
  • Memory usage: ~512MB typical, 1GB limit
  • CPU usage: Optimized for multi-core processing
  • AI Provider latency: ~1.2-1.7 seconds (Ollama Cloud)

🔧 Configuration

Environment Variables

Variable                  Description             Default
NODE_ENV                  Environment mode        development
PORT                      HTTP server port        4004
LOG_LEVEL                 Logging level           info
ENABLE_STDIO              Enable STDIO MCP        true
ENABLE_HTTP_REST          Enable REST API         true
ENABLE_HTTP_MCP           Enable HTTP MCP         true
ENABLE_WEBSOCKET          Enable WebSocket        true
OLLAMA_API_KEY            Ollama Cloud API key    Required
OLLAMA_BASE_URL           Ollama Cloud base URL   https://ollama.com
OLLAMA_MODEL              Ollama model name       gpt-oss:120b
GEMINI_API_KEY            Google Gemini API key   Required for fallback
GEMINI_BASE_URL           Gemini API base URL     https://generativelanguage.googleapis.com/v1beta
GEMINI_MODEL              Gemini model name       gemini-2.0-flash
MONGODB_URL               MongoDB connection      Required
REDIS_URL                 Redis connection        Required
COINGECKO_API_KEY         Price API key           Optional
DEFAULT_ANALYSIS_DEPTH    Default depth           standard
RATE_LIMIT_MAX_REQUESTS   Rate limit              100

Advanced Configuration

# AI Configuration
AI_TIMEOUT_MS=30000
MAX_TOKENS=4000

# News Sources
RSS_FEEDS=https://cointelegraph.com/rss,https://coindesk.com/rss
REDDIT_FEEDS=https://reddit.com/r/CryptoCurrency/.rss

# Cache Configuration
PRICE_CACHE_TTL=300
NEWS_CACHE_TTL=600
ANALYSIS_CACHE_TTL=900

# Performance
MAX_CONCURRENT_ANALYSIS=5
CONNECTION_POOL_SIZE=10
REQUEST_TIMEOUT_MS=30000

# Database Configuration
MONGO_INITDB_ROOT_PASSWORD=crypto_sentiment_2025_secure_password!
REDIS_PASSWORD=crypto_sentiment_redis_2025_secure!

📝 Troubleshooting

Common Issues

Server won't start:

# Check build
npm run build

# Verify environment
npm run validate-env

# Test connections
npm run validate-env:verbose

MCP Inspector errors:

# Ensure build is current
npm run build

# Check executable permissions
ls -la build/index.js

# Test with minimal input
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}' | node build/index.js

Analysis failures:

  • Verify both OLLAMA_API_KEY and GEMINI_API_KEY are set
  • Check internet connectivity for news feeds
  • Ensure sufficient API quotas on both providers
  • Review logs: docker-compose logs crypto-sentiment-mcp
  • Test individual provider availability

AI Provider Issues:

# Test primary provider (Ollama Cloud)
curl -X POST "https://ollama.com/api/chat" \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-oss:120b","messages":[{"role":"user","content":"test"}]}'

# Test fallback provider (Gemini)
curl -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"test"}]}]}'

Debug Mode

# Enable debug logging
export LOG_LEVEL=debug

# Run with detailed output
npm run dev

🤝 Integration Examples

Python Client

import requests

def analyze_crypto_sentiment(query):
    response = requests.post(
        'http://localhost:4004/tools/analyze_crypto_sentiment',
        json={
            'query': query,
            'analysis_depth': 'standard',
            'include_prices': True
        }
    )
    return response.json()

result = analyze_crypto_sentiment("Bitcoin ETF approval")
print(f"Sentiment: {result['data']['overall_sentiment']}")
print(f"AI Model: {result['data']['ai_model_used']}")
print(f"Confidence: {result['data']['confidence_score']}")

Node.js Client

import axios from 'axios';

async function analyzeCrypto(query) {
  const response = await axios.post(
    'http://localhost:4004/tools/analyze_crypto_sentiment',
    {
      query,
      analysis_depth: 'deep',
      max_news_items: 20
    }
  );

  return response.data;
}

const result = await analyzeCrypto('Ethereum upgrade');
console.log(`Confidence: ${result.data.confidence_score}`);
console.log(`AI Provider: ${result.data.ai_model_used}`);
console.log(`Processing Time: ${result.data.processing_time_ms}ms`);

🎯 What's New in v1.1.0

🤖 Multi-Provider AI Integration

  • Ollama Cloud Primary: Cost-effective gpt-oss:120b model integration
  • Google Gemini Fallback: Reliable gemini-2.0-flash backup
  • Smart Failover: Automatic provider switching with exponential backoff
  • Enhanced Reliability: Circuit breaker patterns and comprehensive error handling

🔧 Quality Improvements

  • Zero ESLint Errors: Enterprise-grade TypeScript with strict linting
  • Improved Type Safety: Enhanced interfaces and proper error handling
  • Better Environment Management: Interactive setup and validation scripts
  • Enhanced Testing: Comprehensive test suite with proper cleanup

🚀 Performance Enhancements

  • Faster Response Times: Optimized AI provider integration (~1.2-1.7s latency)
  • Better Error Recovery: Robust retry mechanisms with smart backoff
  • Improved Caching: Enhanced Redis integration with memory fallback
  • Production Readiness: All quality gates validated and passing

📄 License

MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Ollama Cloud - Primary AI provider infrastructure
  • Google Gemini - Reliable AI fallback provider
  • CoinGecko - Cryptocurrency price data
  • Model Context Protocol - Standardized AI tool integration
  • Kaayaan AI Infrastructure - Production deployment platform

📞 Support


Built with ❤️ by Kaayaan AI Infrastructure

Powering the future of cryptocurrency intelligence through advanced multi-provider AI analysis