MetehanYasar11/ultralytics_mcp_server
If you are the rightful owner of ultralytics_mcp_server and would like to certify it and/or have it hosted online, please leave a comment on the right or send an email to henry@mcphub.com.
The Ultralytics MCP Server is a Model Context Protocol compliant server that provides RESTful API access to Ultralytics YOLO operations for various computer vision tasks.
UltralyticsMCPTool
A TypeScript MCP client library for integrating Ultralytics operations into workflows.
Ultralytics MCP Server π
A powerful Model Context Protocol (MCP) compliant server that provides RESTful API access to Ultralytics YOLO operations for computer vision tasks including training, validation, prediction, export, tracking, and benchmarking.
π― What is this?
The Ultralytics MCP Server transforms Ultralytics' command-line YOLO operations into a production-ready REST API service. Whether you're building computer vision applications, training custom models, or integrating YOLO into your workflow automation tools like n8n, this server provides a seamless bridge between Ultralytics' powerful capabilities and modern application architectures.
β¨ Key Features
- π RESTful API: HTTP endpoints for all YOLO operations with comprehensive request/response validation
- π‘ Real-time Updates: Server-Sent Events (SSE) for monitoring long-running operations like training
- π€ MCP Compliance: Full Model Context Protocol support with handshake endpoint and tool discovery for workflow automation
- π³ Production Ready: Docker containerization with multi-stage builds and security scanning
- π§ͺ Battle Tested: Comprehensive test suite with CI/CD pipeline and 90%+ code coverage
- π Observability: Built-in metrics parsing, health checks, and monitoring endpoints
- π Enterprise Security: API key authentication, input validation, and vulnerability scanning
- β‘ CPU & GPU Support: Automatic device detection with graceful fallbacks
- π Self-Documenting: Auto-generated OpenAPI/Swagger documentation
ποΈ Architecture Overview
graph TB
A[Client Applications] --> B[FastAPI REST API]
B --> C[Pydantic Validation]
C --> D[Ultralytics CLI Engine]
D --> E[YOLO Models]
B --> F[SSE Events]
D --> G[Metrics Parser]
D --> H[File System Artifacts]
I[Docker Container] --> B
J[MCP Client Tools] --> B
K[n8n Workflows] --> B
Core Components:
app/main.py
: FastAPI application with route definitions and middlewareapp/schemas.py
: Pydantic models for comprehensive request/response validationapp/ultra.py
: Ultralytics CLI integration with metrics parsing and device managementtools/UltralyticsMCPTool
: TypeScript MCP client library for workflow automation
π Quick Start
π Prerequisites
- Python 3.11+ (required for compatibility)
- Conda/Miniconda (recommended for environment management)
- Git (for cloning the repository)
- 4GB+ RAM (for model operations)
- Optional: NVIDIA GPU with CUDA support for faster training
β‘ One-Minute Setup
# 1. Clone and enter directory
git clone https://github.com/MetehanYasar11/ultralytics_mcp_server.git
cd ultralytics_mcp_server
# 2. Create environment and install dependencies
conda env create -f environment.yml
conda activate ultra-dev
# 3. Start the server
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
# 4. Test it works (in another terminal)
curl http://localhost:8000/
π Verify Installation
After setup, verify everything works:
# Check health endpoint
curl http://localhost:8000/
# Expected: {"status":"healthy","message":"Ultralytics API is running",...}
# View interactive API documentation
open http://localhost:8000/docs
# Test a simple prediction
curl -X POST "http://localhost:8000/predict" \
-H "Content-Type: application/json" \
-d '{
"model": "yolov8n.pt",
"source": "https://ultralytics.com/images/bus.jpg",
"conf": 0.5,
"save": true
}'
π What Just Happened?
- Environment Setup: Created isolated conda environment with PyTorch CPU support
- Dependency Installation: Installed Ultralytics, FastAPI, and all required packages
- Server Start: Launched FastAPI server with auto-reload for development
- API Test: Made a prediction request using a pre-trained YOLOv8 nano model
π’ Real-time SSE with n8n
Live streaming updates for YOLO operations with Server-Sent Events (SSE)
Using SSE in n8n
- Drag MCP Client Tool β set SSE Endpoint
http://host.docker.internal:8092/sse/train
(or compose DNSultra-api:8000/sse/train
). - In Settings β OpenAPI URL use
http://host.docker.internal:8092/openapi.json
. - Set Timeout
0
to keep stream open. - Run workflow β live epoch/loss lines appear in the node's execution log.
π― Available SSE Endpoints:
/sse
- MCP handshake endpoint with tool discovery and keep-alive/sse/train
- Real-time training progress with epoch updates/sse/predict
- Live prediction results/sse/val
- Validation metrics streaming/sse/export
- Export progress updates/sse/track
- Object tracking stream/sse/benchmark
- Performance testing results/sse/solution
- Solution execution logs
π SSE Examples:
MCP Handshake:
curl -N "http://localhost:8092/sse"
# Output:
# data: {"tools": ["train", "val", "predict", "export", "track", "benchmark"], "info": "Ultralytics MCP ready"}
# : ping
# : ping
# (continues with keep-alive pings every 15s)
Training with Live Progress:
curl -N "http://localhost:8092/sse/train?data=coco128.yaml&epochs=1&device=cpu"
# Output:
# data: Ultralytics YOLOv8.0.196 π Python-3.11.5 torch-2.1.1
# data:
# data: train: Scanning /datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|ββββββββββ| 128/128
# data: val: Scanning /datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|ββββββββββ| 128/128
# data:
# data: Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
# data: 1/1 0.12G 1.325 2.009 1.268 89 640: 100%|ββββββββββ| 8/8
# data: [COMPLETED] Process finished successfully
π OpenAPI Documentation: http://localhost:8092/docs#/default/sse_endpoint_sse__op__get
π§ͺ Running Tests
Our comprehensive test suite ensures reliability across all operations.
πββοΈ Quick Test
# Run all tests (recommended)
python run_tests.py
# View test progress with details
pytest tests/test_flow.py -v -s
# Run only fast tests (skip training)
python run_tests.py quick
π¬ Test Categories
Test Type | Command | Duration | What it Tests |
---|---|---|---|
Unit Tests | pytest tests/test_unit.py | ~10s | Individual functions |
Integration | pytest tests/test_flow.py | ~5min | Complete workflows |
Quick Check | python run_tests.py quick | ~30s | Endpoints only |
Full Suite | python run_tests.py | ~5min | Everything including training |
π Understanding Test Output
tests/test_flow.py::TestUltralyticsFlow::test_health_check β
PASSED
tests/test_flow.py::TestUltralyticsFlow::test_01_train_model β
PASSED
tests/test_flow.py::TestUltralyticsFlow::test_02_validate_model β
PASSED
tests/test_flow.py::TestUltralyticsFlow::test_03_predict_with_model β
PASSED
# ... more tests
======================== 9 passed in 295.15s ========================
The integration test performs a complete YOLO workflow:
- π₯ Health Check - Verify API is responsive
- ποΈ Model Training - Train YOLOv8n for 1 epoch on COCO128
- π Model Validation - Validate the trained model
- π― Prediction - Run inference on a test image
- οΏ½ File Verification - Check all expected outputs were created
CI/CD Workflow
The project uses GitHub Actions for continuous integration and deployment. See for the complete configuration.
Workflow Jobs
-
π§ͺ Test Job
- Sets up Conda environment with caching
- Runs pytest with coverage reporting
- Uploads coverage to Codecov
-
π³ Build Job (on success)
- Builds Docker image with multi-stage optimization
- Pushes to GitHub Container Registry
- Supports multi-platform builds (amd64, arm64)
-
π Security Job
- Runs Trivy vulnerability scanner
- Uploads SARIF results to GitHub Security
-
π Integration Job
- Tests complete API workflow
- Validates endpoint responses
- Checks health and documentation endpoints
Workflow Triggers
- Push to
main
ordevelop
branches - Pull Requests to
main
branch - Manual workflow dispatch
Caching Strategy
# Conda packages cached by environment.yml hash
key: conda-${{ runner.os }}-${{ hashFiles('environment.yml') }}
# Docker layers cached using GitHub Actions cache
cache-from: type=gha
cache-to: type=gha,mode=max
Docker Deployment
Quick Deploy
# Using Docker Compose (recommended)
docker-compose up -d
# Check service status
docker-compose ps
# View logs
docker-compose logs -f ultra-api
Production Deployment
# Production configuration
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# With monitoring stack
docker-compose -f docker-compose.yml -f docker-compose.prod.yml --profile monitoring up -d
Environment Configuration
# Copy environment template
cp .env.example .env
# Edit configuration
nano .env
Key Variables:
ULTRA_API_KEY
: API authentication keyCUDA_VISIBLE_DEVICES
: GPU selectionMEMORY_LIMIT
: Container memory limit
Service Access
Once deployed, access the service at:
- API: http://localhost:8000
- Docs: http://localhost:8000/docs
- Prometheus (if enabled): http://localhost:9090
- Grafana (if enabled): http://localhost:3000
For detailed Docker configuration, see .
π API Reference & Examples
π― Core Operations
Operation | Endpoint | Purpose | Example Use Case |
---|---|---|---|
Train | POST /train | Train custom models | Training on your dataset |
Validate | POST /val | Model performance testing | Check accuracy metrics |
Predict | POST /predict | Object detection/classification | Real-time inference |
Export | POST /export | Format conversion | Deploy to mobile/edge |
Track | POST /track | Object tracking in videos | Surveillance, sports analysis |
Benchmark | POST /benchmark | Performance testing | Hardware optimization |
π Request/Response Format
All endpoints return a standardized response structure:
{
"run_id": "abc123-def456-ghi789",
"command": "yolo train model=yolov8n.pt data=coco128.yaml epochs=10",
"return_code": 0,
"stdout": "Training completed successfully...",
"stderr": "",
"metrics": {
"mAP50": 0.95,
"mAP50-95": 0.73,
"precision": 0.89,
"recall": 0.84,
"training_time": 1200.5
},
"artifacts": [
"runs/train/exp/weights/best.pt",
"runs/train/exp/weights/last.pt",
"runs/train/exp/results.csv"
],
"success": true,
"timestamp": "2025-07-12T10:30:00Z"
}
π Example Operations
1. Training a Custom Model
curl -X POST "http://localhost:8000/train" \
-H "Content-Type: application/json" \
-d '{
"model": "yolov8n.pt",
"data": "coco128.yaml",
"epochs": 50,
"imgsz": 640,
"batch": 16,
"device": "0",
"extra_args": {
"patience": 10,
"save_period": 5,
"cos_lr": true
}
}'
2. Real-time Prediction
curl -X POST "http://localhost:8000/predict" \
-H "Content-Type: application/json" \
-d '{
"model": "yolov8n.pt",
"source": "path/to/image.jpg",
"conf": 0.25,
"iou": 0.7,
"save": true,
"save_txt": true,
"save_conf": true
}'
3. Model Export for Deployment
curl -X POST "http://localhost:8000/export" \
-H "Content-Type: application/json" \
-d '{
"model": "runs/train/exp/weights/best.pt",
"format": "onnx",
"dynamic": true,
"simplify": true,
"opset": 11
}'
4. Video Object Tracking
curl -X POST "http://localhost:8000/track" \
-H "Content-Type: application/json" \
-d '{
"model": "yolov8n.pt",
"source": "path/to/video.mp4",
"tracker": "bytetrack.yaml",
"conf": 0.3,
"save": true
}'
π Common Parameters Reference
Parameter | Type | Default | Description | Example |
---|---|---|---|---|
model | string | required | Model path or name | "yolov8n.pt" |
data | string | - | Dataset YAML path | "coco128.yaml" |
source | string | - | Input source | "image.jpg" , "video.mp4" , "0" (webcam) |
epochs | integer | 100 | Training epochs | 50 |
imgsz | integer | 640 | Image size | 320 , 640 , 1280 |
device | string | "cpu" | Compute device | "cpu" , "0" , "0,1" |
conf | float | 0.25 | Confidence threshold | 0.1 to 1.0 |
iou | float | 0.7 | IoU threshold for NMS | 0.1 to 1.0 |
batch | integer | 16 | Batch size | 1 , 8 , 32 |
save | boolean | false | Save results | true , false |
extra_args | object | {} | Additional YOLO args | {"patience": 10} |
π§ͺ Testing & Quality Assurance
π¬ Comprehensive Test Suite
Our testing infrastructure ensures reliability across all YOLO operations:
# Run all tests with conda environment
conda activate ultra-dev
pytest tests/ -v
# Run specific test categories
pytest tests/test_flow.py -v # Core workflow tests
pytest tests/test_mcp_train.py -v # Training specific tests
pytest tests/test_mcp_predict.py -v # Prediction tests
pytest tests/test_mcp_export.py -v # Export functionality tests
# Generate coverage report
pytest tests/ --cov=app --cov-report=html
π Test Coverage
Component | Tests | Coverage | Description |
---|---|---|---|
Core Flow | 9 tests | 95%+ | Complete trainβvalidateβpredict workflow |
Training | 5 tests | 98% | Model training with various configurations |
Prediction | 4 tests | 97% | Inference on images, videos, webcam |
Export | 3 tests | 95% | Model format conversion (ONNX, TensorRT) |
Tracking | 3 tests | 92% | Object tracking in video streams |
Benchmark | 2 tests | 90% | Performance testing and profiling |
π¦ CI/CD Pipeline
# Automated testing on every commit
Workflow: Test Suite
βββ Environment Setup (Conda + PyTorch CPU)
βββ Dependency Installation
βββ Linting & Code Quality (flake8, black)
βββ Unit Tests (pytest)
βββ Integration Tests
βββ Security Scanning (bandit)
βββ Docker Build & Test
βββ Documentation Validation
π Example Test Run
$ pytest tests/test_flow.py::test_complete_workflow -v
tests/test_flow.py::test_complete_workflow PASSED [100%]
======================== Test Results ========================
β
Train: Model trained successfully (epochs: 2)
β
Validate: mAP50 = 0.847, mAP50-95 = 0.621
β
Predict: 3 objects detected with confidence > 0.5
β
Export: ONNX model exported (size: 12.4MB)
β
Cleanup: Temporary files removed
Duration: 45.2s | Memory: 2.1GB | CPU: Intel i7
=================== 1 passed in 45.23s ===================
See for detailed test documentation.
π n8n Integration
n8n MCP Client Setup
SSE Endpoint : http://host.docker.internal:8092/sse
OpenAPI URL : http://host.docker.internal:8092/openapi.json
Manifest URL : http://host.docker.internal:8092/mcp/manifest.json
Tools : train Β· val Β· predict Β· export Β· track Β· benchmark
Timeout : 0
π€ MCP Handshake Protocol
The /sse
endpoint now serves as a Model Context Protocol (MCP) handshake endpoint:
-
Initial Connection: When you connect to
/sse
, it immediately sends a tool discovery message:data: {"tools": ["train", "val", "predict", "export", "track", "benchmark"], "info": "Ultralytics MCP ready"}
-
Keep-Alive: After the handshake, it sends ping comments every 15 seconds to maintain the connection:
: ping
-
Tool Discovery: MCP clients can discover available tools via:
- Manifest Endpoint:
GET /mcp/manifest.json
- Static tool definitions - SSE Handshake:
GET /sse
- Dynamic tool discovery with live connection
- Manifest Endpoint:
Streaming in n8n
- Drag MCP Client Tool β SSE Endpoint
http://host.docker.internal:8092/sse/train
(or/sse/predict
). - OpenAPI URL
http://host.docker.internal:8092/openapi.json
. - Timeout 0, Auth None.
- Run workflow β live epoch/loss lines appear in execution log (see GIF).
Available SSE Endpoints:
/sse
- MCP handshake endpoint with tool discovery and keep-alive/sse/train
- Real-time training progress with epoch updates/sse/predict
- Live prediction results/sse/val
- Validation metrics streaming/sse/export
- Export progress updates/sse/track
- Object tracking stream/sse/benchmark
- Performance testing results/sse/solution
- Solution execution logs
Example SSE Training Stream:
curl -N "http://localhost:8092/sse/train?data=coco128.yaml&epochs=1&device=cpu"
# Output:
# data: Ultralytics YOLOv8.0.196 π Python-3.11.5 torch-2.1.1
# data: Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
# data: 1/1 0.12G 1.325 2.009 1.268 89 640: 100%|ββββββββββ| 8/8
# data: [COMPLETED] Process finished successfully
For detailed integration examples, see .
1. Environment Setup
Add the Ultralytics API URL to your n8n environment:
# In your n8n environment
export ULTRA_API_URL=http://localhost:8000
# Or in Docker Compose
environment:
- ULTRA_API_URL=http://ultralytics-api:8000
2. Install UltralyticsMCPTool
# Navigate to the tool directory
cd tools/UltralyticsMCPTool
# Install dependencies
npm install
# Build the tool
npm run build
# Link for global usage
npm link
3. n8n Node Configuration
Create a custom n8n node or use the HTTP Request node:
// n8n Custom Node Example
import UltralyticsMCPTool from 'ultralytics-mcp-tool';
const tool = new UltralyticsMCPTool(process.env.ULTRA_API_URL);
// Train a model
const result = await tool.train({
model: 'yolov8n.pt',\n data: 'coco128.yaml',\n epochs: 10
});
4. Workflow Examples
Image Classification Workflow:
- Trigger: Webhook receives image
- Ultralytics: Predict objects
- Logic: Process results
- Output: Send notifications
Training Pipeline:
- Schedule: Daily trigger
- Ultralytics: Train model
- Validate: Check performance
- Deploy: Update production model
5. MCP Integration
// Get available tools
const manifest = UltralyticsMCPTool.manifest();
console.log('Available operations:', manifest.tools.map(t => t.name));
// Execute with different channels
const httpResult = await tool.execute('predict', params, 'http');
const stdioResult = await tool.execute('predict', params, 'stdio');
// Real-time updates with SSE
tool.trainSSE(params, {
onProgress: (data) => updateWorkflowStatus(data),
onComplete: (result) => triggerNextNode(result)
});
For detailed integration examples, see .
π³ Docker Deployment
π Quick Docker Setup
# Clone and build
git clone https://github.com/your-username/ultralytics-mcp-server.git
cd ultralytics-mcp-server
# Build and run with Docker Compose
docker-compose up -d
# Verify deployment
curl http://localhost:8000/docs
π Docker Configuration
Production-ready setup:
# Dockerfile highlights
FROM python:3.11-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY app/ ./app/
COPY models/ ./models/
# Expose port and run
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Docker Compose services:
# docker-compose.yml
version: '3.8'
services:
ultralytics-api:
build: .
ports:
- "8000:8000"
volumes:
- ./models:/app/models
- ./data:/app/data
- ./runs:/app/runs
environment:
- YOLO_CACHE_DIR=/app/cache
- YOLO_SETTINGS_DIR=/app/settings
restart: unless-stopped
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./ssl:/etc/nginx/ssl
depends_on:
- ultralytics-api
π§ Environment Variables
Variable | Default | Description |
---|---|---|
YOLO_CACHE_DIR | /tmp/yolo | Model cache directory |
YOLO_SETTINGS_DIR | /tmp/settings | Settings directory |
API_HOST | 0.0.0.0 | API host binding |
API_PORT | 8000 | API port |
LOG_LEVEL | INFO | Logging level |
MAX_WORKERS | 4 | Uvicorn workers |
MODEL_DIR | /app/models | Model storage path |
π Production Deployment
# Production deployment with SSL
docker-compose -f docker-compose.prod.yml up -d
# Health check
curl -f http://localhost:8000/health || exit 1
# Scale services
docker-compose up -d --scale ultralytics-api=3
# Monitor logs
docker-compose logs -f ultralytics-api
π§ API Documentation
Response Format
All endpoints return a standardized response:
{
"run_id": "uuid-string",
"command": "yolo train model=yolov8n.pt...",
"return_code": 0,
"stdout": "command output",
"stderr": "error output",
"metrics": {
"mAP50": 0.95,
"precision": 0.89,
"training_time": 1200
},
"artifacts": [
"runs/train/exp/weights/best.pt",
"runs/train/exp/results.csv"
],
"success": true,
"timestamp": "2024-01-01T12:00:00Z"
}
Error Handling
{
"error": "Validation Error",
"details": "Model file not found: invalid_model.pt",
"timestamp": "2024-01-01T12:00:00Z"
}
π‘οΈ Security & Authentication
# API Key authentication
curl -H "X-API-Key: your-api-key-here" \
-X POST "http://localhost:8000/predict" \
-d '{"model": "yolov8n.pt", "source": "image.jpg"}'
# JWT Token authentication
curl -H "Authorization: Bearer your-jwt-token" \
-X POST "http://localhost:8000/train" \
-d '{"model": "yolov8n.pt", "data": "dataset.yaml"}'
π Health & Monitoring
# Health check endpoint
curl http://localhost:8000/health
# Response: {"status": "healthy", "version": "1.0.0", "uptime": 3600}
# Metrics endpoint
curl http://localhost:8000/metrics
# Response: Prometheus-formatted metrics
# Status endpoint with system info
curl http://localhost:8000/status
# Response: {"gpu": "available", "memory": "8GB", "models_loaded": 3}
π€ Contributing Guidelines
We welcome contributions! Please follow these guidelines:
Development Setup
-
Fork and Clone
git clone https://github.com/your-username/ultralytics-mcp-server.git cd ultralytics-mcp-server
-
Create Environment
conda env create -f environment.yml conda activate ultra-dev
-
Install Development Tools
pip install black isort flake8 mypy pytest-cov
Code Standards
- Python: Follow PEP 8, use Black for formatting
- TypeScript: Use ESLint and Prettier
- Documentation: Update README.md and docstrings
- Tests: Maintain 80%+ test coverage
Pre-commit Checks
# Format code
black app/ tests/
isort app/ tests/
# Lint code
flake8 app/ tests/
mypy app/
# Run tests
pytest --cov=app
Pull Request Process
-
Create Feature Branch
git checkout -b feature/your-feature-name
-
Make Changes
- Write code following standards
- Add/update tests
- Update documentation
-
Test Changes
pytest -v python run_tests.py
-
Submit PR
- Clear description of changes
- Reference related issues
- Ensure CI passes
Issue Reporting
When reporting issues, include:
- Environment: OS, Python version, dependencies
- Steps: Minimal reproduction steps
- Expected: What should happen
- Actual: What actually happens
- Logs: Error messages and stack traces
Feature Requests
For new features:
- Use Case: Why is this needed?
- Proposal: How should it work?
- Impact: Who benefits from this?
- Implementation: Any technical considerations?
π License & Support
π License
This project is licensed under the MIT License - see the file for details.
Key permissions:
- β Commercial use
- β Modification
- β Distribution
- β Private use
π Getting Help
Resource | Link | Purpose |
---|---|---|
π API Docs | http://localhost:8000/docs | Interactive API documentation |
π Issues | GitHub Issues | Bug reports & feature requests |
π¬ Discussions | GitHub Discussions | Questions & community chat |
π Ultralytics | Official Docs | YOLO model documentation |
π§ MCP Protocol | Specification | MCP standard reference |
π― Quick Support Checklist
Before asking for help:
- Check the for common issues
- Search existing GitHub Issues
- Test with the latest version
- Include environment details in your issue
When reporting bugs:
# Include this information
OS: Windows 11 / macOS 14 / Ubuntu 22.04
Python: 3.11.x
Conda env: ultra-dev
PyTorch: 2.5.1+cpu
Error: [paste complete error message]
π Acknowledgments
Component | Thanks To | For |
---|---|---|
π― YOLO Models | Ultralytics | Revolutionary object detection |
π FastAPI | Sebastian Ramirez | Lightning-fast API framework |
π§ Pydantic | Samuel Colvin | Data validation & settings |
π³ Docker | Docker Inc | Containerization platform |
π§ͺ pytest | pytest-dev | Testing framework |
π Conda | Anaconda | Package management |
π Built with β€οΈ for the Computer Vision Community π
β Star this repo | π΄ Fork & contribute | π’ Share with friends
Empowering developers to build intelligent computer vision applications with ease π