Context Optimizer MCP

A Model Context Protocol (MCP) server for intelligent conversation context management with token optimization, designed to enhance Claude's ability to retain and utilize conversation context across interactions.

🎯 Purpose

This MCP server provides advanced context management capabilities that help AI assistants maintain better conversation continuity, reduce token usage through intelligent optimization, and prevent context loss during long conversations.

✨ Features

Core Context Management

  • Context Storage & Retrieval: Store and retrieve important conversation context with categorization
  • Conversation Summarization: Generate intelligent summaries to preserve key information
  • Topic Flow Tracking: Monitor conversation evolution and topic transitions
  • Actionable Item Extraction: Identify and track tasks, decisions, and follow-ups

Token Optimization

  • Smart Compression: Multiple compression strategies to reduce token usage
  • Budget Management: Configurable token budgets with automatic optimization
  • Usage Analytics: Detailed token usage statistics and recommendations
  • Priority-Based Filtering: Preserve high-importance context while optimizing low-priority content

Advanced Features

  • Context Gap Detection: Identify missing context that could improve assistance quality
  • Anti-Hallucination Validation: Built-in validation to ensure context accuracy
  • Persistent Storage: Optional permanent storage for critical context
  • Flexible Configuration: Customizable settings for different use cases

🚀 Installation

Prerequisites

  • Node.js ≥ 18.0.0
  • Claude Desktop or compatible MCP client

Setup

  1. Clone the repository:
git clone https://github.com/MrNitro360/Context-Optimizer-MCP.git
cd Context-Optimizer-MCP
  2. Install dependencies:
npm install
  3. Configure Claude Desktop to use this MCP server by adding the following to your claude_desktop_config.json:
{
  "mcpServers": {
    "context-optimizer": {
      "command": "node",
      "args": ["path/to/Context-Optimizer-MCP/src/index.js"]
    }
  }
}
  4. Start the server:
npm start

🛠️ Available Tools

Context Management Tools

store_conversation_context

Store important conversation context for future reference.

  • Parameters: contextType, content, importance (1-10), tags, expiresAfter
  • Context Types: decision, preference, fact, goal, constraint, insight

retrieve_relevant_context

Retrieve stored context relevant to the current conversation.

  • Parameters: query, contextTypes, maxResults, minImportance, tokenBudget

summarize_conversation

Create intelligent summaries of conversation content.

  • Parameters: conversationText, focusAreas, includeCodeContext, tokenBudget
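
For example, a call focused on architecture decisions might look like this sketch (parameter values are illustrative; only the parameter names come from the tool definition above):

// Summarize a long discussion, keeping code-related details
await summarize_conversation({
  conversationText: "<full transcript of the conversation so far>",
  focusAreas: ["architecture decisions", "open questions"],
  includeCodeContext: true,
  tokenBudget: 1500
});
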
track_conversation_flow

Track conversation evolution and topic transitions.

  • Parameters: currentMessage, messageType, topicShift
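
A hypothetical call using the parameters above (the messageType value is an assumption, since the accepted values are not listed here):

// Record that the user has switched to a new topic
await track_conversation_flow({
  currentMessage: "Let's switch to planning the deployment pipeline",
  messageType: "user",
  topicShift: true
});
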
extract_actionable_items

Extract tasks, decisions, and follow-ups from conversation.

  • Parameters: conversationText, itemTypes
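
An illustrative call (the itemTypes values shown are assumptions, not documented options):

// Pull tasks, decisions, and follow-ups out of a transcript
await extract_actionable_items({
  conversationText: "<transcript to scan for follow-ups>",
  itemTypes: ["task", "decision", "follow_up"]
});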

Optimization Tools

optimize_context_for_continuation

Optimize and compress context for efficient conversation continuation.

  • Parameters: fullContext, targetLength, preserveTypes, tokenBudget
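
A sketch of a typical call (values are illustrative; preserveTypes reuses the context types listed under store_conversation_context):

// Compress accumulated context before continuing a long session
await optimize_context_for_continuation({
  fullContext: "<accumulated context from the session>",
  targetLength: 2000,
  preserveTypes: ["decision", "constraint"],
  tokenBudget: 4000
});
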
detect_context_gaps

Identify missing context that could improve assistance quality.

  • Parameters: currentRequest, availableContext, domain
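
For example (the domain value is an arbitrary illustration):

// Check whether anything important is missing before answering
await detect_context_gaps({
  currentRequest: "Refactor the auth module to support SSO",
  availableContext: "<summary of the context stored so far>",
  domain: "software-engineering"
});
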
get_token_usage_stats

Get detailed token usage statistics and optimization recommendations.

  • Parameters: includeRecommendations, analyzeRedundancy
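
A minimal call sketch:

// Report current token usage and request optimization suggestions
await get_token_usage_stats({
  includeRecommendations: true,
  analyzeRedundancy: true
});
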
optimize_tokens

Apply token optimization using various compression strategies.

  • Parameters: strategy, targetReduction, preserveImportant
  • Strategies: priority_filtering, content_compression, context_merging, aggressive_compression

set_token_budget

Configure token budgets and optimization settings.

  • Parameters: maxContextTokenBudget, compressionThreshold, tokenOptimizationEnabled, maxTokensPerContext
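
The sketch below tightens the defaults listed in the Configuration section that follows; the values are illustrative:

// Lower the budget and compress earlier, e.g. for very long sessions
await set_token_budget({
  maxContextTokenBudget: 6000,
  compressionThreshold: 0.75,
  tokenOptimizationEnabled: true,
  maxTokensPerContext: 800
});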

🔧 Configuration

Default Settings

  • Max Context Token Budget: 8,000 tokens
  • Compression Threshold: 0.8 (80%)
  • Max Tokens Per Context: 1,000 tokens
  • Token Optimization: Enabled

Environment Variables

Set these environment variables to customize behavior:

  • CONTEXT_STORAGE_PATH: Directory for persistent context storage
  • MAX_TOKEN_BUDGET: Override default token budget
  • DEBUG_MODE: Enable detailed logging
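
One way to set these is through an env block on the server entry in claude_desktop_config.json (assuming your MCP client supports per-server environment variables); the values below are examples only:

{
  "mcpServers": {
    "context-optimizer": {
      "command": "node",
      "args": ["path/to/Context-Optimizer-MCP/src/index.js"],
      "env": {
        "CONTEXT_STORAGE_PATH": "/path/to/context-store",
        "MAX_TOKEN_BUDGET": "8000",
        "DEBUG_MODE": "true"
      }
    }
  }
}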

📖 Usage Examples

Basic Context Storage

// Store a user preference
await store_conversation_context({
  contextType: "preference",
  content: "User prefers concise explanations with code examples",
  importance: 8,
  tags: ["communication", "coding"],
  expiresAfter: "permanent"
});

Context Retrieval

// Retrieve relevant context for current question
await retrieve_relevant_context({
  query: "How should I explain this technical concept?",
  minImportance: 6,
  maxResults: 5
});

Token Optimization

// Optimize stored context to reduce token usage
await optimize_tokens({
  strategy: "priority_filtering",
  targetReduction: 25,
  preserveImportant: true
});

๐Ÿ—๏ธ Architecture

Core Components

  • ContextOptimizerServer: Main MCP server implementation
  • ContextOptimizer: Advanced context processing and optimization
  • TokenOptimizer: Token usage analysis and compression
  • FileManager: Persistent storage management
  • DateUtils: Time-based context management

Data Flow

  1. Context ingestion through MCP tools
  2. Processing and categorization
  3. Storage in memory/disk with metadata
  4. Retrieval with relevance scoring
  5. Token optimization when needed

🧪 Development

Scripts

  • npm start: Start the production server
  • npm run dev: Start with nodemon for development
  • npm run lint: Check code style
  • npm run format: Format code with Prettier

Testing

Run the server directly from the command line to verify that it starts:

node src/index.js

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📊 Performance

  • Memory Efficient: Intelligent context pruning and compression
  • Fast Retrieval: Optimized search algorithms for context lookup
  • Scalable: Handles thousands of context items efficiently
  • Token Optimized: Reduces token usage by up to 50% through compression

Built with ❤️ for the Claude AI ecosystem by mrnitro360