🧠 Innovaas KMS MCP Server
Enhanced Model Context Protocol (MCP) server for the Innovaas RAG Knowledge Management System. This server exposes powerful multi-modal search, RAG-powered chat with intelligent token management, and comprehensive document access to external systems via the standardized MCP protocol.
⚡ Latest v1.0.0 Features
🎯 Intelligent Token Management
- Automatic Optimization: Prevents API token limit errors (65K+ → 30K tokens)
- Provider-Aware: Different limits for OpenAI (30K) vs Claude (200K)
- Smart Document Selection: Prioritizes by relevance and includes summaries of excluded content (a selection sketch follows this list)
- Zero Configuration: Works automatically with the kms_chat tool
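The selection itself happens inside the server, but the idea behind provider-aware budgeting and relevance-ranked selection can be sketched in TypeScript. Everything below is illustrative: the budgets mirror the provider limits listed above, while ScoredDocument and estimateTokens are assumptions made for this example, not the server's actual code.
// Illustrative sketch of provider-aware document selection (not the server's actual implementation).
interface ScoredDocument {
  id: string;
  content: string;
  relevance: number; // higher = more relevant to the query
}

const TOKEN_BUDGETS = {
  openai: 30_000,   // conservative budget for OpenAI requests
  claude: 200_000,  // Claude's much larger context window
} as const;

// Rough estimate: ~4 characters per token for English text.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function selectDocuments(docs: ScoredDocument[], provider: keyof typeof TOKEN_BUDGETS) {
  const budget = TOKEN_BUDGETS[provider];
  const included: ScoredDocument[] = [];
  const excluded: ScoredDocument[] = [];
  let used = 0;

  // Highest-relevance documents are considered first.
  for (const doc of [...docs].sort((a, b) => b.relevance - a.relevance)) {
    const cost = estimateTokens(doc.content);
    if (used + cost <= budget) {
      included.push(doc);
      used += cost;
    } else {
      excluded.push(doc); // excluded documents are summarized rather than silently dropped
    }
  }
  return { included, excluded, estimatedTotalTokens: used };
}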
🔍 Advanced Search Capabilities
- Full Document Content: Complete text (4,000+ characters) instead of 200-char previews
- Multi-Modal Search: Text, audio transcriptions, video frames, and technical content
- Intelligent Routing: Enhanced RAG with query analysis and optimal strategy selection (see the sketch after this list)
- Technical Content Detection: Find code, diagrams, and UI elements in video content
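Routing is handled inside the KMS, but the idea of choosing a strategy from query analysis can be illustrated with a toy heuristic. Only the tool names below come from this README; the rules themselves are an assumption and far simpler than the real query analysis.
type SearchStrategy = "kms_search" | "kms_multimodal_search" | "kms_intelligent_search";

// Toy routing rules, for illustration only:
// visually oriented queries go to multi-modal search, short keyword
// queries to basic semantic search, everything else to intelligent search.
function chooseStrategy(query: string): SearchStrategy {
  const visualHints = /diagram|screenshot|video frame|code sample/i;
  if (visualHints.test(query)) return "kms_multimodal_search";
  if (query.trim().split(/\s+/).length <= 3) return "kms_search";
  return "kms_intelligent_search";
}

// chooseStrategy("user authentication flow diagrams")  -> "kms_multimodal_search"
// chooseStrategy("manufacturing execution systems")    -> "kms_search"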
💬 Enhanced RAG-Powered Chat
- Comprehensive Responses: Based on complete source material with full content access
- Source Citations: Precise document and timestamp references
- Provider Choice: OpenAI GPT-4o-mini or Claude for different use cases
- Context Filtering: Focus conversations by tags and document types
🚀 Quick Start
1. Installation
# Clone the repository
git clone https://github.com/innovaas/kms-mcp-server.git
cd kms-mcp-server
# Install dependencies
npm install
# Build the server
npm run build
2. Configuration
# Required: KMS API endpoint
export KMS_BASE_URL="https://your-kms-domain.com/kms"
# Required: Authentication key
export BACKGROUND_PROCESS_API_KEY="your-secure-api-key"
# OR use MCP-specific key
export MCP_API_KEY="your-mcp-api-key"
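Before step 3, it can help to see how these variables are consumed. The check below is only a sketch of a fail-fast startup validation, assuming MCP_API_KEY is preferred when both keys are set; it is not the server's actual startup code.
// Illustrative startup check for the variables configured in step 2.
const baseUrl = process.env.KMS_BASE_URL;
const apiKey = process.env.MCP_API_KEY ?? process.env.BACKGROUND_PROCESS_API_KEY;

if (!baseUrl) {
  throw new Error("KMS_BASE_URL must point at your KMS endpoint, e.g. https://your-kms-domain.com/kms");
}
if (!apiKey) {
  throw new Error("Set BACKGROUND_PROCESS_API_KEY or MCP_API_KEY so requests to the KMS can be authenticated");
}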
3. Run the Server
# Development mode
npm run dev
# Production mode
npm start
# With environment variables inline
KMS_BASE_URL="https://your-domain.com/kms" BACKGROUND_PROCESS_API_KEY="your-key" npm start
🛠️ Integration Examples
Claude Desktop Configuration
Add to your Claude Desktop config file (claude_desktop_config.json, typically in ~/Library/Application Support/Claude/ on macOS or %APPDATA%\Claude\ on Windows):
{
  "mcpServers": {
    "innovaas-kms": {
      "command": "node",
      "args": ["/path/to/kms-mcp-server/dist/index.js"],
      "env": {
        "KMS_BASE_URL": "https://your-domain.com/kms",
        "BACKGROUND_PROCESS_API_KEY": "your-secure-api-key"
      }
    }
  }
}
Cline/VSCode Integration
Configure in your MCP settings:
{
  "name": "innovaas-kms",
  "serverPath": "/path/to/kms-mcp-server/dist/index.js",
  "environment": {
    "KMS_BASE_URL": "https://your-domain.com/kms",
    "MCP_API_KEY": "your-secure-api-key"
  }
}
Programmatic Integration
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/kms-mcp-server/dist/index.js"],
  env: {
    KMS_BASE_URL: "https://your-domain.com/kms",
    MCP_API_KEY: "your-api-key"
  }
});

const client = new Client(
  { name: "kms-client", version: "1.0.0" },
  { capabilities: {} }
);

await client.connect(transport);

// Use intelligent search with full content
const result = await client.callTool({
  name: "kms_intelligent_search",
  arguments: {
    query: "What are the best practices for implementing a Unified Namespace?",
    maxResults: 10
  }
});
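The same client can call the primary kms_chat tool documented under "Available Tools" below. The arguments follow the kms_chat schema from this README; extracting the answer text from the result is sketched here on the assumption that the tool returns standard MCP text content parts.
// Reuses the client created above; argument names follow the kms_chat schema below.
const chatResult = await client.callTool({
  name: "kms_chat",
  arguments: {
    message: "How do I implement OEE monitoring in a manufacturing environment?",
    provider: "openai",
    useMultiModal: true,
    maxResults: 15
  }
});

// MCP tool results carry an array of content parts; the answer lives in "text" parts.
const parts = (chatResult as { content?: Array<{ type: string; text?: string }> }).content ?? [];
for (const part of parts) {
  if (part.type === "text") console.log(part.text);
}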
🎯 Available Tools
kms_chat 🚀 Primary Tool
Comprehensive knowledge queries with intelligent token management
{
  "message": "How do I implement OEE monitoring in a manufacturing environment?",
  "provider": "openai",
  "useMultiModal": true,
  "tags": ["OEE", "manufacturing"],
  "maxResults": 15
}
✅ Key Benefits:
- Token Optimization: Automatically prevents API limit errors
- Full Content Access: Complete document text (4,000+ characters)
- Provider-Aware: Adjusts context size for OpenAI vs Claude
- Multi-Modal Context: Combines text, video, and web sources
kms_intelligent_search
Advanced RAG search with query analysis
{
  "query": "unified namespace MQTT implementation patterns",
  "maxResults": 15,
  "filters": {
    "type": "video",
    "tags": ["UNS", "MQTT"]
  },
  "includeAnalysis": true
}
kms_multimodal_search
Search across all content types
{
  "query": "user authentication flow diagrams",
  "searchMode": "multimodal",
  "maxResults": 10,
  "filters": {
    "hasVisualContent": true,
    "documentTypes": ["video", "whitepaper"]
  }
}
kms_search
Basic semantic search
{
  "query": "manufacturing execution systems",
  "limit": 10,
  "threshold": 0.7
}
kms_get_document
Retrieve specific document
{
  "documentId": "uuid-of-document"
}
kms_get_stats
System analytics
{
  "includeProcessingDetails": true
}
kms_list_documents
Browse documents
{
  "limit": 25,
  "type": "video",
  "tags": ["training", "technical"],
  "mediaType": "video"
}
🎉 What's Fixed in v1.0.0
❌ Before: Token Limit Errors
Error: Request too large for gpt-4o: Limit 30000, Requested 70239
✅ After: Intelligent Optimization
{
  "tokenOptimization": {
    "enabled": true,
    "documentsIncluded": 8,
    "documentsExcluded": 7,
    "optimization": "Included 8/15 documents, using ~27,518 tokens",
    "estimatedTotalTokens": 27518
  }
}
🔧 Improvements Made
- Automatic Token Management: No more API limit errors
- Smart Document Selection: Prioritizes most relevant content
- Full Content Access: 4,000+ character responses vs 200-char previews
- Provider Optimization: Different strategies for OpenAI vs Claude
- Transparent Operation: Shows what was included/excluded and why
📊 System Capabilities
Current KMS Status ✅
- 127+ documents processed with 100% success rate
- 1,000+ video frames extracted and analyzed
- Multi-modal search across text, audio, and video
- Technical content detection for code, diagrams, UI elements
- Real-time processing pipeline with error recovery
Content Coverage
- Technical Documentation: API docs, system architecture, code examples
- Training Videos: 105+ processed videos with transcription and frame analysis
- Manufacturing Content: MES, OEE, UNS, MQTT, IoT, SCADA terminology
- Web Resources: Crawled documentation and technical resources
AI Capabilities
- AssemblyAI: High-quality transcription with technical term boosting
- OpenAI Embeddings: 1536-dimensional vectors for semantic search
- Claude Vision: Technical content analysis for diagrams and code
- Multi-Provider Chat: OpenAI GPT-4o-mini and Claude support
🛡️ Authentication & Security
API Key Authentication
# Set authentication key
export BACKGROUND_PROCESS_API_KEY="secure-random-string"
# Or use MCP-specific key
export MCP_API_KEY="mcp-specific-secure-key"
Network Configuration
- Protocol: HTTPS (secure connection)
- Transport: STDIO (standard for MCP)
- Authentication: Bearer token with API key
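As a sketch of what "Bearer token with API key" means for the server's outbound calls, the helper below attaches the key as an Authorization header. It is illustrative only: the endpoint path, POST shape, and key precedence are assumptions, not the server's actual client code.
// Illustrative helper: authenticate an outbound KMS API request with a Bearer token.
const KMS_BASE_URL = process.env.KMS_BASE_URL!;
const API_KEY = process.env.MCP_API_KEY ?? process.env.BACKGROUND_PROCESS_API_KEY!;

async function kmsRequest<T>(path: string, body: unknown): Promise<T> {
  const response = await fetch(`${KMS_BASE_URL}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`, // API key passed as a Bearer token
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`KMS API request failed: ${response.status} ${response.statusText}`);
  }
  return (await response.json()) as T;
}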
📋 Development
Project Structure
kms-mcp-server/
├── src/
│ └── index.ts # Main MCP server implementation
├── dist/ # Built files (generated by npm run build)
├── examples/ # Configuration examples
├── package.json # Dependencies and scripts
├── tsconfig.json # TypeScript configuration
└── README.md # This file
Scripts
npm run build # Compile TypeScript to JavaScript
npm run dev # Development mode with hot reload
npm start # Run compiled server
npm run clean # Clean build directory
npm test # Run tests
Requirements
- Node.js: 18.0.0 or higher
- TypeScript: 5.0.0 or higher
- KMS Server: Running Innovaas KMS instance
🐛 Troubleshooting
Common Issues
Connection Failed
Error: KMS API request failed: 500 Internal Server Error
- ✅ Ensure KMS server is running
- ✅ Check KMS_BASE_URL environment variable
- ✅ Verify network connectivity

Authentication Errors
Error: 401 Unauthorized
- ✅ Verify API key is set correctly
- ✅ Check Bearer token format
- ✅ Ensure KMS server has matching API key

Token Limit Errors (should be fixed)
Error: Request too large for gpt-4o: Limit 30000, Requested 65879
- ✅ Update to v1.0.0 with token optimization
- ✅ Use the kms_chat tool (automatically optimized)
- ✅ Check tokenOptimization in responses
Debug Mode
# Enable verbose logging
DEBUG=1 npm run dev
# Check KMS server status
curl -H "Authorization: Bearer your-api-key" https://your-domain.com/kms/api/dashboard-stats
🤝 Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes
- Run tests: npm test
- Build: npm run build
- Commit changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Create a Pull Request
Development Guidelines
- Follow existing code patterns for consistency
- Add comprehensive error handling
- Update tool schemas when modifying parameters
- Test with multiple MCP clients before committing
- Document new features in README
📄 License
MIT License - see the LICENSE file for details.
🔗 Links
- GitHub Repository: https://github.com/innovaas/kms-mcp-server
- Issues: https://github.com/innovaas/kms-mcp-server/issues
- Innovaas Website: https://innovaas.co
- Model Context Protocol: https://modelcontextprotocol.io
🚀 Ready to integrate your knowledge management into any MCP-compatible system, with intelligent token optimization built in!