Ultralytics MCP Server - AI-Powered Computer Vision Platform
Unified Development Platform for YOLO Models with N8N Integration
A comprehensive Model Context Protocol (MCP) server that seamlessly integrates Ultralytics YOLO models with N8N workflows, providing a complete AI-powered computer vision solution in a single command.
Features
Core Capabilities
- 7 AI-Powered Tools for comprehensive YOLO operations
- Real-time Object Detection with live inference
- Model Training & Fine-tuning with custom datasets
- Performance Analytics via TensorBoard integration
- N8N Workflow Integration for automation
User Interfaces
- Streamlit Dashboard - Interactive web interface for model management
- Jupyter Lab - Notebook environment for development
- TensorBoard - Real-time training metrics and visualization
- N8N Integration - Workflow automation and AI task orchestration
Technical Stack
- CUDA 12.4.1 - GPU acceleration for training and inference
- PyTorch - Deep learning framework with CUDA support
- Ultralytics YOLO - State-of-the-art object detection models
- Docker - Containerized deployment
- Node.js MCP Server - Model Context Protocol implementation
Quick Start
Prerequisites
- Docker Desktop with GPU support
- NVIDIA drivers compatible with CUDA 12.4.1
- Windows PowerShell or Linux/macOS terminal
One-Command Deployment
docker-compose up -d
That's it! The entire platform will be available at:
- Streamlit UI: http://localhost:8501
- TensorBoard: http://localhost:6006
- Jupyter Lab: http://localhost:8888
- MCP Server: http://localhost:8092
Available Services
| Service | Port | Description | Status |
|---|---|---|---|
| Streamlit Dashboard | 8501 | Interactive YOLO model interface | Ready |
| MCP Server | 8092 | N8N integration endpoint | Ready |
| TensorBoard | 6006 | Training metrics visualization | Ready |
| Jupyter Lab | 8888 | Development environment | Ready |
MCP Tools Available
Our MCP server provides 7 specialized tools for AI workflows (an example call follows the list):
- detect_objects - Real-time object detection in images
- train_model - Custom YOLO model training
- evaluate_model - Model performance assessment
- predict_batch - Batch processing for multiple images
- export_model - Model format conversion (ONNX, TensorRT, etc.)
- benchmark_model - Performance benchmarking
- analyze_dataset - Dataset statistics and validation
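For illustration, a standard MCP tools/call request to the detect_objects tool could look like the following; the argument names (image_path, model) and the image path are assumptions, since the actual input schema is defined in src/server.js:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "detect_objects",
    "arguments": {
      "image_path": "/ultralytics/samples/bus.jpg",
      "model": "yolov8n.pt"
    }
  }
}
```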
N8N Integration
Connect to N8N using our MCP server:
- Server Endpoint: http://localhost:8092
- Transport: Server-Sent Events (SSE)
- Health Check: http://localhost:8092/health
Example N8N Workflow
{
  "mcp_connection": {
    "transport": "sse",
    "endpoint": "http://localhost:8092/sse"
  }
}
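Before wiring this into N8N, the transport can be sanity-checked from a terminal (a quick sketch, assuming the /sse and /health paths shown above):

```bash
# Stream server-sent events from the MCP endpoint (Ctrl+C to stop)
curl -N -H "Accept: text/event-stream" http://localhost:8092/sse

# Confirm the server reports healthy
curl http://localhost:8092/health
```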
Project Structure
ultralytics_mcp_server/
├── docker-compose.yml          # Orchestration configuration
├── Dockerfile.ultralytics      # CUDA-enabled Ultralytics container
├── Dockerfile.mcp-connector    # Node.js MCP server container
├── src/
│   └── server.js               # MCP server implementation
├── main_dashboard.py           # Streamlit main interface
├── pages/                      # Streamlit multi-page app
│   ├── train.py                # Model training interface
│   └── inference.py            # Inference interface
├── startup.sh                  # Container initialization script
├── .dockerignore               # Build optimization
└── README.md                   # This documentation
Configuration
Environment Variables
- CUDA_VISIBLE_DEVICES - GPU device selection
- STREAMLIT_PORT - Streamlit service port (default: 8501)
- MCP_PORT - MCP server port (default: 8092)
- TENSORBOARD_PORT - TensorBoard port (default: 6006)
Custom Configuration
Edit docker-compose.yml to customize (a sketch follows the list):
- Port mappings
- Volume mounts
- Environment variables
- Resource limits
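As a sketch of what such overrides might look like (service names follow the container names used elsewhere in this README; the GPU reservation block is an assumption about how the compose file requests the GPU):

```yaml
# Illustrative docker-compose.yml fragment -- adjust to your setup
services:
  ultralytics-container:
    environment:
      - CUDA_VISIBLE_DEVICES=0        # select the GPU to use
      - STREAMLIT_PORT=8501
      - TENSORBOARD_PORT=6006
    ports:
      - "8501:8501"                   # Streamlit
      - "6006:6006"                   # TensorBoard
      - "8888:8888"                   # Jupyter Lab
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  mcp-connector-container:
    environment:
      - MCP_PORT=8092
    ports:
      - "8092:8092"                   # MCP server
```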
Usage Examples
Object Detection via Streamlit
- Navigate to http://localhost:8501
- Upload an image or video
- Select YOLO model (YOLOv8, YOLOv11)
- Run inference and view results (see the programmatic sketch below)
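The same flow can be scripted with the Ultralytics Python API, for example from Jupyter Lab (a minimal sketch; the weights file and image path are placeholders):

```python
# Minimal programmatic equivalent of the Streamlit detection flow
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # pretrained detection weights (placeholder)
results = model("path/to/image.jpg")   # run inference on a single image
results[0].show()                      # display the annotated detections
```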
Training Custom Models
- Access Jupyter Lab at http://localhost:8888
- Prepare your dataset in YOLO format (see the training sketch below)
- Use the training interface in Streamlit
- Monitor progress in TensorBoard
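For reference, the equivalent training call from Jupyter Lab is short (a sketch; the dataset YAML path is a placeholder and assumes your data is already in YOLO format):

```python
# Minimal training sketch for a custom dataset in YOLO format
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # start from pretrained weights
model.train(
    data="datasets/my_dataset/data.yaml",    # hypothetical dataset config
    epochs=50,
    imgsz=640,
    project="runs",                          # output dir (assumed to match TensorBoard's --logdir)
)
metrics = model.val()                        # evaluate on the validation split
```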
N8N Automation
- Create N8N workflow
- Add MCP connector node
- Configure endpoint: http://localhost:8092
- Use available tools for automation
Monitoring & Debugging
Container Status
docker ps
docker-compose logs ultralytics-container
docker-compose logs mcp-connector-container
Health Checks
# MCP Server
curl http://localhost:8092/health
# Streamlit
curl http://localhost:8501/_stcore/health
# TensorBoard
curl http://localhost:6006
Restart & Maintenance
Restart Services
docker-compose restart
Update & Rebuild
docker-compose down
docker-compose up --build -d
Clean Reset
docker-compose down
docker system prune -f
docker-compose up --build -d
Performance Optimization
- GPU Memory: Automatically managed by CUDA runtime
- Batch Processing: Optimized for multiple image inference
- Model Caching: Pre-loaded models for faster response
- Multi-threading: Concurrent request handling
Troubleshooting
Common Issues
Container Restart Loop
# Check logs
docker-compose logs ultralytics-container
# Restart with rebuild
docker-compose down
docker-compose up --build -d
Streamlit Not Loading
# Verify container status
docker ps
# Check if files are copied correctly
docker exec ultralytics-container ls -la /ultralytics/
GPU Not Detected
# Check NVIDIA drivers
nvidia-smi
# Verify CUDA in container
docker exec ultralytics-container nvidia-smi
Development
Local Development Setup
- Clone the repository
- Install dependencies: npm install (for the MCP server)
- Set up a Python environment for Streamlit
- Run services individually for debugging (see the sketch below)
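A rough sketch of running the pieces directly (ports follow the defaults above; the Python dependency list and TensorBoard log directory are assumptions):

```bash
# MCP server (Node.js)
npm install
node src/server.js

# Streamlit dashboard (Python)
pip install streamlit ultralytics
streamlit run main_dashboard.py --server.port 8501

# Training metrics (assumed log directory)
tensorboard --logdir runs --port 6006
```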
Adding New MCP Tools
- Edit src/server.js
- Add the tool definition to the tools array
- Implement the handler in handleToolCall
- Test with N8N integration (a sketch follows this list)
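As a rough sketch of what a new entry might look like (the tool name, schema fields, and return shape here are illustrative, not the server's actual structure):

```javascript
// Illustrative only -- a hypothetical entry for the tools array
const segmentImageTool = {
  name: "segment_image",
  description: "Run YOLO segmentation on an image",
  inputSchema: {
    type: "object",
    properties: { image_path: { type: "string" } },
    required: ["image_path"],
  },
};

// ...and a matching branch inside handleToolCall
async function handleToolCall(name, args) {
  switch (name) {
    case "segment_image":
      // forward the request to the Ultralytics container here
      return { content: [{ type: "text", text: `Segmented ${args.image_path}` }] };
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```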
Contributing
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the AGPL-3.0 License - see the Ultralytics License for details.
Acknowledgments
- Ultralytics - For the amazing YOLO implementation
- N8N - For the workflow automation platform
- Streamlit - For the beautiful web interface framework
- NVIDIA - For CUDA support and GPU acceleration
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Contact: Create an issue for support
Made with ❤️ for the AI Community
Ready to revolutionize your computer vision workflows? Start with docker-compose up -d!