🚀 Launch Mode MCP Server
Real-Time Code Validation for AI Assistants
Save 97.5% of your AI assistant's context by validating code changes externally. Get instant feedback without consuming precious tokens.
✅ NOW MCP COMPATIBLE - Install directly in Claude Desktop or Claude Code!
📊 Real test shows 8,000 → 200 tokens per file!
🎯 What It Does
When your AI assistant writes code, it typically uses significant context to:
- Check syntax
- Run tests
- Validate API contracts
- Ensure compatibility
Launch Mode MCP handles all of this externally, giving you:
- ✅ Sub-second validation
- ✅ Real test results
- ✅ Breaking change detection
- ✅ Service dependency awareness
⚡ Quick Start (30 seconds)
For Claude Code (CLI)
# Install the MCP server
claude mcp add launch-mode /path/to/venv/bin/python /path/to/launch_mode_mcp.py
# Or with the provided path:
claude mcp add launch-mode /Users/ned/repos/github/nedops/launch-mode-mcp/venv/bin/python /Users/ned/repos/github/nedops/launch-mode-mcp/launch_mode_mcp.py
For Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "launch-mode": {
      "command": "/path/to/venv/bin/python",
      "args": ["/path/to/launch_mode_mcp.py"]
    }
  }
}
For REST API Server (Legacy)
git clone https://github.com/nedops/launch-mode-mcp
cd launch-mode-mcp
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python mcp_server.py # Runs on http://localhost:8000
Test It Works
curl http://localhost:8000/health
🤖 Available Tools
Once installed, these MCP tools are available:
- validate_changes - Validate code syntax and run tests
  - Supports Python, JavaScript, TypeScript
  - Runs pytest and jest automatically
  - Detects common issues and anti-patterns
- get_service_info - Get service configuration
  - View dependencies and API endpoints
  - Check service health status
- run_smoke_test - Execute smoke tests
  - Health checks
  - API validation
  - Integration tests
- list_services - List all configured services
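For illustration, here is a minimal client-side sketch of calling validate_changes over stdio using the official MCP Python SDK. The SDK usage is standard, but the argument names passed to the tool (file_path, language) are assumptions about its input schema, not taken from the server.
# Minimal sketch: calling validate_changes via the official MCP Python SDK (pip install mcp).
# The tool argument names below are assumptions, not the server's actual contract.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(
        command="/path/to/venv/bin/python",
        args=["/path/to/launch_mode_mcp.py"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect the four tools listed above
            result = await session.call_tool(
                "validate_changes",
                arguments={"file_path": "src/example.py", "language": "python"},
            )
            print(result.content)

asyncio.run(main())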
📊 Example Usage
Validate Code Changes
# Your AI writes this code
def calculate_total(items):
    return sum(item.price for item in items)
# Launch Mode MCP validates it instantly:
# ✅ Syntax valid
# ✅ 5 tests passed
# ⚠️ Warning: Missing type hints
# 💡 Suggestion: Add error handling for empty list
Check Breaking Changes
# Changing an API endpoint
@app.route('/api/users/<id>', methods=['DELETE']) # was GET
# Launch Mode MCP warns:
# ❌ Breaking change detected!
# - Method changed from GET to DELETE
# - 3 services depend on this endpoint
# - Suggested: Version the API instead
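As a rough sketch of how this kind of check can work (not the server's actual implementation), breaking changes can be found by diffing the declared endpoints, using the same "METHOD /path" strings as in services.yaml below:
# Illustrative sketch: breaking-change detection by diffing declared endpoints.
# Endpoint strings use the "METHOD /path" format from services.yaml; not the server's actual code.
def find_breaking_changes(old_endpoints, new_endpoints):
    old = {path: method for method, path in (e.split(" ", 1) for e in old_endpoints)}
    new = {path: method for method, path in (e.split(" ", 1) for e in new_endpoints)}
    changes = []
    for path, method in old.items():
        if path not in new:
            changes.append(f"Endpoint removed: {method} {path}")
        elif new[path] != method:
            changes.append(f"Method changed on {path}: {method} -> {new[path]}")
    return changes

print(find_breaking_changes(["GET /api/users/<id>"], ["DELETE /api/users/<id>"]))
# ['Method changed on /api/users/<id>: GET -> DELETE']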
🛠️ Configuration
Edit services.yaml to add your services:
services:
  - name: your-service
    repository: https://github.com/you/repo
    health_endpoint: /health
    dependencies: [database, cache]
    api_endpoints:
      - GET /api/resource
      - POST /api/resource
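As a rough sketch of how such a config can be consumed (PyYAML here, not the server's actual loader), the field names match the example above:
# Illustrative sketch: reading services.yaml with PyYAML (pip install pyyaml).
# Field names mirror the example above; this is not the server's actual loading code.
import yaml

with open("services.yaml") as fh:
    config = yaml.safe_load(fh)
for service in config["services"]:
    print(service["name"], "depends on", service.get("dependencies", []))
    for endpoint in service.get("api_endpoints", []):
        print("  exposes", endpoint)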
📈 Current Status & Roadmap
✅ Skateboard (Current - v1.0)
- ✅ MCP protocol implementation
- ✅ Python/JavaScript validation
- ✅ Real pytest/jest execution
- ✅ Claude Desktop/Code integration
- ✅ Common issue detection
- ✅ API breaking change warnings
🚲 Bicycle (Next - v2.0)
- 🔄 Cross-service validation
- 🔄 Real-time test watching
- 🔄 More language support (Go, Rust)
- 🔄 Caching for faster responses
- 🔄 Integration test coordination
🚗 Car (Future - v3.0)
- 📅 Full context persistence
- 📅 Multi-agent orchestration
- 📅 Autonomous test generation
- 📅 Semantic code understanding
🎯 Why This Matters
Proven Results (September 1, 2025 Test)
Traditional AI coding workflow:
  Reading file:   5,000 tokens
  Validation:     3,000 tokens
  Total per file: 8,000 tokens (4% of context)
  20 files:       160,000 tokens (80% of context!) ⚠️
With Launch Mode MCP:
  Reading file:   0 tokens (reads from disk)
  Validation:     200 tokens (just results)
  Total per file: 200 tokens (0.1% of context)
  20 files:       4,000 tokens (2% of context!) ✨
📊 97.5% Context Savings
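The 97.5% figure follows directly from the per-file numbers above:
# Per-file token cost from the test above.
traditional = 5_000 + 3_000   # read + validate = 8,000 tokens
with_mcp = 0 + 200            # read from disk + results only = 200 tokens
print(f"{(traditional - with_mcp) / traditional:.1%}")  # 97.5%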
🏗️ Architecture
Two Server Implementations
- launch_mode_mcp.py - Standard MCP Server
  - Uses official MCP Python SDK
  - stdio transport for AI assistants
  - Direct integration with Claude
- mcp_server.py - REST/WebSocket Server
  - HTTP endpoints for web integration
  - WebSocket for real-time validation
  - Standalone operation on port 8000
Validation Features
- Syntax Checking: Python (compile), JavaScript (bracket matching)
- Test Execution: pytest with JSON reporting, jest support
- Issue Detection: TODOs, console.logs, hardcoded values, wildcard imports
- API Validation: Breaking change detection, contract checking
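A minimal sketch of the checks listed above, using Python's built-in compile() for the syntax check and simple patterns for issue detection (illustrative only, not the server's implementation):
# Illustrative sketch: Python syntax check via compile() plus simple issue patterns.
# Not the server's actual code; test execution (pytest/jest) is omitted.
import re

def validate_python(source: str) -> dict:
    errors, warnings = [], []
    try:
        compile(source, "<submitted>", "exec")  # syntax check only, code is not executed
    except SyntaxError as exc:
        errors.append(f"SyntaxError: {exc}")
    if "TODO" in source:
        warnings.append("Contains TODO comments")
    if re.search(r"^from\s+\S+\s+import\s+\*", source, re.MULTILINE):
        warnings.append("Wildcard import detected")
    return {"passed": not errors, "errors": errors, "warnings": warnings}

print(validate_python("from os import *\ndef f():\n    return 1  # TODO: handle errors\n"))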
📊 Metrics
After validation, you get:
- ✅ Pass/fail status
- 📝 List of errors
- ⚠️ Warnings
- 💡 Suggestions
- ⏱️ Execution time
- 🧪 Number of tests run
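Put together, a result might look roughly like the dictionary below; the field names are hypothetical and the actual payload shape may differ. The values echo the validate_changes example earlier.
# Hypothetical shape of a validation result covering the fields listed above.
example_result = {
    "passed": True,
    "errors": [],
    "warnings": ["Missing type hints"],
    "suggestions": ["Add error handling for empty list"],
    "execution_time_s": 0.4,
    "tests_run": 5,
}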
🤝 Contributing
This is the "skateboard" version - simple but functional. Help us evolve:
- Use it and report issues
- Extend it with your validators
- Share it with your team
- Contribute improvements
📚 Documentation
- Setup guide - Connect your AI assistant
- API reference - Detailed API documentation
- Developer guide - Extend and customize
🐛 Troubleshooting
Server won't start?
# Check port availability
lsof -i :8000
# Check logs
docker-compose logs
Validation timing out?
# Increase timeout
export TEST_TIMEOUT=60
AI not connecting?
# Verify server is running
curl http://localhost:8000/health
📄 License
MIT - Use freely in your projects
🙏 Acknowledgments
Part of the Launch Mode platform - simplifying AI-assisted development by solving the context problem.
Built with frustration from wasted context, for developers who want their AI assistants to actually help code, not just validate it.