# Innovaas KMS MCP Server
Enhanced Model Context Protocol (MCP) server for the Innovaas RAG Knowledge Management System. This server exposes powerful multi-modal search, RAG-powered chat with intelligent token management, and comprehensive document access to external systems via the standardized MCP protocol.
## Latest v1.0.0 Features

### Intelligent Token Management
- Automatic Optimization: Prevents API token limit errors (65K+ → 30K tokens)
- Provider-Aware: Different limits for OpenAI (30K) vs Claude (200K)
- Smart Document Selection: Prioritizes by relevance, includes summaries of excluded content
- Zero Configuration: Works automatically with the `kms_chat` tool
### Advanced Search Capabilities
- Full Document Content: Complete text (4,000+ characters) instead of 200-char previews
- Multi-Modal Search: Text, audio transcriptions, video frames, and technical content
- Intelligent Routing: Enhanced RAG with query analysis and optimal strategy selection
- Technical Content Detection: Find code, diagrams, and UI elements in video content
### Enhanced RAG-Powered Chat
- Comprehensive Responses: Based on complete source material with full content access
- Source Citations: Precise document and timestamp references
- Provider Choice: OpenAI GPT-4o-mini or Claude for different use cases
- Context Filtering: Focus conversations by tags and document types
## Quick Start

### 1. Installation
```bash
# Clone the repository
git clone https://github.com/innovaas/kms-mcp-server.git
cd kms-mcp-server

# Install dependencies
npm install

# Build the server
npm run build
```
### 2. Configuration

```bash
# Required: KMS API endpoint
export KMS_BASE_URL="https://your-kms-domain.com/kms"

# Required: Authentication key
export BACKGROUND_PROCESS_API_KEY="your-secure-api-key"

# OR use MCP-specific key
export MCP_API_KEY="your-mcp-api-key"
```
### 3. Run the Server

```bash
# Development mode
npm run dev

# Production mode
npm start

# With environment variables inline
KMS_BASE_URL="https://your-domain.com/kms" BACKGROUND_PROCESS_API_KEY="your-key" npm start
```
## Integration Examples

### Claude Desktop Configuration

Add to your Claude Desktop config file (`~/.claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "innovaas-kms": {
      "command": "node",
      "args": ["/path/to/kms-mcp-server/dist/index.js"],
      "env": {
        "KMS_BASE_URL": "https://your-domain.com/kms",
        "BACKGROUND_PROCESS_API_KEY": "your-secure-api-key"
      }
    }
  }
}
```
### Cline/VSCode Integration
Configure in your MCP settings:
```json
{
  "name": "innovaas-kms",
  "serverPath": "/path/to/kms-mcp-server/dist/index.js",
  "environment": {
    "KMS_BASE_URL": "https://your-domain.com/kms",
    "MCP_API_KEY": "your-secure-api-key"
  }
}
```
### Programmatic Integration
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/kms-mcp-server/dist/index.js"],
  env: {
    KMS_BASE_URL: "https://your-domain.com/kms",
    MCP_API_KEY: "your-api-key"
  }
});

const client = new Client(
  { name: "kms-client", version: "1.0.0" },
  { capabilities: {} }
);

await client.connect(transport);

// Use intelligent search with full content
const result = await client.callTool({
  name: "kms_intelligent_search",
  arguments: {
    query: "What are the best practices for implementing a Unified Namespace?",
    maxResults: 10
  }
});
```
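The tool result follows the standard MCP content format; a minimal sketch for reading it, assuming the server returns its answer as `text` content parts (adjust if your build returns structured JSON instead):

```typescript
// Print the text parts of the tool result (sketch; content shape assumed to be
// standard MCP text parts).
for (const part of result.content as Array<{ type: string; text?: string }>) {
  if (part.type === "text" && part.text) {
    console.log(part.text);
  }
}
```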
## Available Tools

### `kms_chat` (Primary Tool)

Comprehensive knowledge queries with intelligent token management:
```json
{
  "message": "How do I implement OEE monitoring in a manufacturing environment?",
  "provider": "openai",
  "useMultiModal": true,
  "tags": ["OEE", "manufacturing"],
  "maxResults": 15
}
```
Key Benefits:
- Token Optimization: Automatically prevents API limit errors
- Full Content Access: Complete document text (4,000+ characters)
- Provider-Aware: Adjusts context size for OpenAI vs Claude
- Multi-Modal Context: Combines text, video, and web sources
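For a programmatic call, the same arguments can be passed through the MCP client from the Programmatic Integration example above; a hedged sketch (it assumes that client is already connected and that the `tokenOptimization` metadata shown later in this README is embedded in the returned text):

```typescript
// Sketch: call kms_chat through the connected MCP client and print the answer.
// The exact placement of the tokenOptimization metadata in the response is an
// assumption based on the v1.0.0 example shown below.
const chatResult = await client.callTool({
  name: "kms_chat",
  arguments: {
    message: "How do I implement OEE monitoring in a manufacturing environment?",
    provider: "openai",
    useMultiModal: true,
    tags: ["OEE", "manufacturing"],
    maxResults: 15
  }
});

const answer = (chatResult.content as Array<{ type: string; text?: string }>)
  .filter((part) => part.type === "text" && part.text)
  .map((part) => part.text)
  .join("\n");
console.log(answer);
```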
### `kms_intelligent_search`

Advanced RAG search with query analysis:
```json
{
  "query": "unified namespace MQTT implementation patterns",
  "maxResults": 15,
  "filters": {
    "type": "video",
    "tags": ["UNS", "MQTT"]
  },
  "includeAnalysis": true
}
```
### `kms_multimodal_search`

Search across all content types:
```json
{
  "query": "user authentication flow diagrams",
  "searchMode": "multimodal",
  "maxResults": 10,
  "filters": {
    "hasVisualContent": true,
    "documentTypes": ["video", "whitepaper"]
  }
}
```
### `kms_search`

Basic semantic search:
```json
{
  "query": "manufacturing execution systems",
  "limit": 10,
  "threshold": 0.7
}
```
### `kms_get_document`

Retrieve a specific document:
```json
{
  "documentId": "uuid-of-document"
}
```
### `kms_get_stats`

System analytics:
```json
{
  "includeProcessingDetails": true
}
```
### `kms_list_documents`

Browse documents:
```json
{
  "limit": 25,
  "type": "video",
  "tags": ["training", "technical"],
  "mediaType": "video"
}
```
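To confirm exactly which tools a given build exposes, the connected MCP client can enumerate them; a small sketch using the standard `listTools` call from the MCP SDK, with the client from the Programmatic Integration example:

```typescript
// List the tools exposed by the KMS MCP server (sketch; assumes `client` is
// already connected).
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description ?? ""}`);
}
```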
## What's Fixed in v1.0.0

### Before: Token Limit Errors

```
Error: Request too large for gpt-4o: Limit 30000, Requested 70239
```
### After: Intelligent Optimization

```json
{
  "tokenOptimization": {
    "enabled": true,
    "documentsIncluded": 8,
    "documentsExcluded": 7,
    "optimization": "Included 8/15 documents, using ~27,518 tokens",
    "estimatedTotalTokens": 27518
  }
}
```
### Improvements Made
- Automatic Token Management: No more API limit errors
- Smart Document Selection: Prioritizes most relevant content
- Full Content Access: 4,000+ character responses vs 200-char previews
- Provider Optimization: Different strategies for OpenAI vs Claude
- Transparent Operation: Shows what was included/excluded and why
## System Capabilities

### Current KMS Status
- 127+ documents processed with 100% success rate
- 1,000+ video frames extracted and analyzed
- Multi-modal search across text, audio, and video
- Technical content detection for code, diagrams, UI elements
- Real-time processing pipeline with error recovery
### Content Coverage
- Technical Documentation: API docs, system architecture, code examples
- Training Videos: 105+ processed videos with transcription and frame analysis
- Manufacturing Content: MES, OEE, UNS, MQTT, IoT, SCADA terminology
- Web Resources: Crawled documentation and technical resources
### AI Capabilities
- AssemblyAI: High-quality transcription with technical term boosting
- OpenAI Embeddings: 1536-dimensional vectors for semantic search
- Claude Vision: Technical content analysis for diagrams and code
- Multi-Provider Chat: OpenAI GPT-4o-mini and Claude support
## Authentication & Security

### API Key Authentication

```bash
# Set authentication key
export BACKGROUND_PROCESS_API_KEY="secure-random-string"

# Or use MCP-specific key
export MCP_API_KEY="mcp-specific-secure-key"
```
### Network Configuration
- Protocol: HTTPS (secure connection)
- Transport: STDIO (standard for MCP)
- Authentication: Bearer token with API key
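To sanity-check the key outside of MCP, the same Bearer scheme can be exercised directly against the KMS API; a hedged TypeScript sketch mirroring the curl command in the Troubleshooting section (the `/api/dashboard-stats` path is taken from that example and may differ in your deployment):

```typescript
// Sketch: verify the API key against the KMS dashboard-stats endpoint
// (endpoint path borrowed from the Troubleshooting curl example).
const baseUrl = process.env.KMS_BASE_URL;
const apiKey = process.env.MCP_API_KEY ?? process.env.BACKGROUND_PROCESS_API_KEY;

const response = await fetch(`${baseUrl}/api/dashboard-stats`, {
  headers: { Authorization: `Bearer ${apiKey}` }
});
console.log(response.ok ? "Authentication OK" : `Failed: ${response.status}`);
```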
## Development

### Project Structure

```
kms-mcp-server/
├── src/
│   └── index.ts          # Main MCP server implementation
├── dist/                 # Built files (generated by npm run build)
├── examples/             # Configuration examples
├── package.json          # Dependencies and scripts
├── tsconfig.json         # TypeScript configuration
└── README.md             # This file
```
### Scripts

```bash
npm run build   # Compile TypeScript to JavaScript
npm run dev     # Development mode with hot reload
npm start       # Run compiled server
npm run clean   # Clean build directory
npm test        # Run tests
```
### Requirements
- Node.js: 18.0.0 or higher
- TypeScript: 5.0.0 or higher
- KMS Server: Running Innovaas KMS instance
## Troubleshooting

### Common Issues

**Connection Failed**

```
Error: KMS API request failed: 500 Internal Server Error
```

- Ensure the KMS server is running
- Check the `KMS_BASE_URL` environment variable
- Verify network connectivity

**Authentication Errors**

```
Error: 401 Unauthorized
```

- Verify the API key is set correctly
- Check the Bearer token format
- Ensure the KMS server has a matching API key

**Token Limit Errors** (should be fixed in v1.0.0)

```
Error: Request too large for gpt-4o: Limit 30000, Requested 65879
```

- Update to v1.0.0 with token optimization
- Use the `kms_chat` tool (automatically optimized)
- Check `tokenOptimization` in responses
### Debug Mode

```bash
# Enable verbose logging
DEBUG=1 npm run dev

# Check KMS server status
curl -H "Authorization: Bearer your-api-key" https://your-domain.com/kms/api/dashboard-stats
```
## Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes
- Run tests: `npm test`
- Build: `npm run build`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Create a Pull Request
### Development Guidelines
- Follow existing code patterns for consistency
- Add comprehensive error handling
- Update tool schemas when modifying parameters
- Test with multiple MCP clients before committing
- Document new features in README
## License

MIT License - see the LICENSE file for details.
## Links
- GitHub Repository: https://github.com/innovaas/kms-mcp-server
- Issues: https://github.com/innovaas/kms-mcp-server/issues
- Innovaas Website: https://innovaas.co
- Model Context Protocol: https://modelcontextprotocol.io
Ready to integrate your knowledge management with any MCP-compatible system, now with intelligent token optimization!