Labellerr MCP Server
A Model Context Protocol (MCP) server that provides a comprehensive interface to the Labellerr SDK for managing annotation projects, datasets, and monitoring operations through AI assistants like Claude Desktop and Cursor.
Features
- Project Management - Create, list, update, and track annotation projects
- Dataset Management - Create datasets, upload files/folders, and query information
- Annotation Tools - Upload pre-annotations, export data, and download results
- Monitoring & Insights - Real-time progress tracking and system health monitoring
- Query Capabilities - Search projects, get statistics, and analyze operations
22 specialized tools available across 5 categories to streamline your annotation workflow.
Installation
Prerequisites
- Node.js 16 or higher
- npm or yarn
- Labellerr API credentials (API Key, API Secret, Client ID)
Setup
- Clone the repository:
git clone https://github.com/1sarthakbhardwaj/labellerr-mcp-server.git
cd labellerr-mcp-server
- Install dependencies:
npm install
- Configure environment variables:
cp .env.example .env
Edit .env and add your Labellerr credentials:
LABELLERR_API_KEY=your_api_key_here
LABELLERR_API_SECRET=your_api_secret_here
LABELLERR_CLIENT_ID=your_client_id_here
Getting Credentials: Contact Labellerr support or email support@labellerr.com to obtain your API credentials.
Configuration
Option 1: Using with Claude Desktop
Add to your Claude Desktop configuration file:
Location: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
{
  "mcpServers": {
    "labellerr": {
      "command": "node",
      "args": ["/absolute/path/to/labellerr-mcp-server/src/index.js"],
      "env": {
        "LABELLERR_API_KEY": "your_api_key",
        "LABELLERR_API_SECRET": "your_api_secret",
        "LABELLERR_CLIENT_ID": "your_client_id"
      }
    }
  }
}
Important: Replace /absolute/path/to/ with the full path to your installation directory.
After configuration:
- Restart Claude Desktop completely
- The Labellerr tools will be available in your conversations
- Ask Claude to list your projects or check system health
Option 2: Using with Cursor
Add to your Cursor MCP configuration file:
Location: ~/.cursor/mcp.json (macOS/Linux) or %APPDATA%\Cursor\mcp.json (Windows)
{
  "mcpServers": {
    "labellerr": {
      "command": "node",
      "args": ["/absolute/path/to/labellerr-mcp-server/src/index.js"],
      "env": {
        "LABELLERR_API_KEY": "your_api_key",
        "LABELLERR_API_SECRET": "your_api_secret",
        "LABELLERR_CLIENT_ID": "your_client_id"
      }
    }
  }
}
Important: Replace /absolute/path/to/ with the full path to your installation directory.
After configuration:
- Restart Cursor completely (Quit and reopen)
- The Labellerr tools will be available in the AI assistant
- Try asking: "List all my Labellerr projects"
Verifying Installation
Test the server is working:
# Start the server
npm start
# In another terminal, test the protocol
echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | node src/index.js
You should see a JSON response listing all 22 available tools.
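To go a step further, you can invoke a single tool over stdin. The snippet below is a minimal sketch that assumes the server accepts a direct tools/call request the same way it accepts the tools/list request above; monitor_system_health is used because it takes no arguments:
# Call one tool directly (monitor_system_health requires no arguments)
echo '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"monitor_system_health","arguments":{}},"id":2}' | node src/index.js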
Usage
Starting the Server Standalone
# Production mode
npm start
# Development mode (with auto-reload)
npm run dev
Using with AI Assistants
Once configured with Claude Desktop or Cursor, you can interact naturally:
Project Management:
- "List all my Labellerr projects"
- "Create a new image classification project for product categorization"
- "What's the progress of project XYZ?"
Dataset Operations:
- "Upload images from /path/to/folder"
- "List all my datasets"
- "Create a new dataset for video annotation"
Monitoring:
- "Show me system health"
- "Check the progress of my active projects"
- "What operations have been performed?"
Exports:
- "Export annotations in COCO format"
- "Check status of export ABC123"
- "Download completed export"
Current Status
✅ Fully Working (21 tools)
- Project Management: List, get details, update rotation
- Dataset Management: Create, upload, list, query
- Annotation Operations: Upload pre-annotations, export, download
- Monitoring: Job status, progress, system health
- Query & Search: Statistics, history, search
⚠️ In Progress (1 tool)
- Project Creation - Implementation complete but encountering an API 400 error during dataset creation
- File upload to GCS: ✅ Implemented
- Dataset creation: ⚠️ Getting 400 error
- Template creation: ✅ Implemented
- Project finalization: ✅ Implemented
- See Issue #1 for details
Available Tools
The server provides 22 specialized tools:
Project Management (4 tools)
- project_create - Create projects with annotation guidelines
- project_list - List all projects
- project_get - Get detailed project information
- project_update_rotation - Update rotation configuration
Dataset Management (5 tools)
- dataset_create - Create new datasets
- dataset_upload_files - Upload individual files
- dataset_upload_folder - Upload entire folders
- dataset_list - List all datasets
- dataset_get - Get dataset information
Annotation Operations (5 tools)
- annotation_upload_preannotations - Upload pre-annotations (sync)
- annotation_upload_preannotations_async - Upload pre-annotations (async)
- annotation_export - Create annotation export
- annotation_check_export_status - Check export status
- annotation_download_export - Get export download URL
Monitoring & Analytics (4 tools)
- monitor_job_status - Monitor background job status
- monitor_project_progress - Track project progress
- monitor_active_operations - List active operations
- monitor_system_health - Check system health
Query & Search (4 tools)
- query_project_statistics - Get detailed project stats
- query_dataset_info - Get dataset information
- query_operation_history - View operation history
- query_search_projects - Search projects by name/type
For detailed parameters and examples, see the Detailed Tool Reference below.
Supported Data Types
- image - JPEG, PNG, TIFF
- video - MP4
- audio - MP3, WAV
- document - PDF
- text - TXT
Annotation Types
- BoundingBox - Rectangle annotations for object detection
- polygon - Polygon shapes for segmentation
- dot - Point annotations
- radio - Single choice selection
- dropdown - Dropdown selection
- boolean - Yes/No selection
- input - Text input field
- select - Multiple choice selection
Export Formats
- json - Standard JSON format
- coco_json - COCO dataset format
- csv - Comma-separated values
- png - Image masks
Limits
- Maximum 2,500 files per folder upload
- Maximum 2.5 GB total folder size
- Batch processing: 15 MB per batch, 900 files max
Example Workflows
1. Create an Object Detection Project
{
  "project_name": "Vehicle Detection",
  "dataset_name": "Traffic Dataset",
  "data_type": "image",
  "created_by": "user@example.com",
  "annotation_guide": [
    {
      "question": "Detect Vehicles",
      "option_type": "BoundingBox",
      "required": true,
      "options": [{"option_name": "#ff0000"}]
    }
  ],
  "folder_to_upload": "/path/to/images"
}
2. Monitor Project Progress
Ask your AI assistant: "Show me the progress of my annotation projects"
The server will return:
- Total files
- Annotated count
- Reviewed count
- Completion percentage
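For orientation, the returned progress data might look roughly like this; the field names below are illustrative, not the server's exact payload:
{
  "project_id": "proj_abc123",
  "total_files": 1200,
  "annotated": 850,
  "reviewed": 640,
  "completion_percentage": 53.3
}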
3. Export Annotations
{
  "project_id": "proj_abc123",
  "export_name": "Training Export",
  "export_format": "coco_json",
  "statuses": ["accepted", "reviewed"]
}
4. Search Projects
Ask: "Find all projects related to 'vehicle' or 'traffic'"
The server will search project names and return matching results.
Detailed Tool Reference
Project Management Tools
project_create
Create a new annotation project.
Parameters:
- project_name (string, required) - Name of the project
- dataset_name (string, required) - Name of the dataset
- data_type (string, required) - Type: image/video/audio/document/text
- created_by (string, required) - Creator's email
- annotation_guide (array, required) - Annotation questions/guidelines
- dataset_description (string, optional) - Dataset description
- folder_to_upload (string, optional) - Path to folder with files
- files_to_upload (array, optional) - Array of file paths
- rotation_config (object, optional) - Rotation configuration
- autolabel (boolean, optional) - Enable auto-labeling
project_list
List all projects for the client.
Returns: Array of projects with metadata
project_get
Get detailed information about a specific project.
Parameters:
- project_id (string, required) - ID of the project
project_update_rotation
Update rotation configuration for a project.
Parameters:
project_id
(string, required) - ID of the projectrotation_config
(object, required) - New rotation settings
Dataset Management Tools
dataset_create
Create a new dataset.
Parameters:
- dataset_name (string, required) - Name of the dataset
- data_type (string, required) - Type of data
- dataset_description (string, optional) - Description
dataset_upload_files
Upload individual files to a dataset.
Parameters:
- files (array, required) - Array of file paths
- data_type (string, required) - Type of data
dataset_upload_folder
Upload all files from a folder.
Parameters:
- folder_path (string, required) - Path to folder
- data_type (string, required) - Type of data
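For example, a dataset_upload_folder call could take arguments like the following (the folder path is a placeholder):
{
  "folder_path": "/path/to/images",
  "data_type": "image"
}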
dataset_list
List all datasets (linked and unlinked).
Parameters:
- data_type (string, optional) - Filter by data type (default: "image")
dataset_get
Get detailed information about a dataset.
Parameters:
- dataset_id (string, required) - ID of the dataset
Annotation Tools
annotation_upload_preannotations
Upload pre-annotations (synchronous).
Parameters:
- project_id (string, required) - ID of the project
- annotation_format (string, required) - Format: json/coco_json/csv/png
- annotation_file (string, required) - Path to annotation file
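For example (the project ID and file path are placeholders):
{
  "project_id": "proj_abc123",
  "annotation_format": "coco_json",
  "annotation_file": "/path/to/annotations.json"
}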
annotation_upload_preannotations_async
Upload pre-annotations (asynchronous).
Parameters:
- Same as annotation_upload_preannotations
annotation_export
Create an export of project annotations.
Parameters:
- project_id (string, required) - ID of the project
- export_name (string, required) - Name for the export
- export_format (string, required) - Format for export
- statuses (array, required) - Statuses to include
- export_description (string, optional) - Description
annotation_check_export_status
Check the status of export jobs.
Parameters:
- project_id (string, required) - ID of the project
- export_ids (array, required) - Array of export IDs
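For example (IDs are placeholders):
{
  "project_id": "proj_abc123",
  "export_ids": ["ABC123"]
}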
annotation_download_export
Get download URL for a completed export.
Parameters:
- project_id (string, required) - ID of the project
- export_id (string, required) - ID of the export
Monitoring Tools
monitor_job_status
Monitor the status of a background job.
Parameters:
- job_id (string, required) - ID of the job
monitor_project_progress
Get progress statistics for a project.
Parameters:
- project_id (string, required) - ID of the project
monitor_active_operations
List all active operations and their status.
Returns: List of active operations with timestamps
monitor_system_health
Check the health and status of the MCP server.
Returns: System status, connectivity, active projects count
Query Tools
query_project_statistics
Get detailed statistics for a project.
Parameters:
- project_id (string, required) - ID of the project
query_dataset_info
Get detailed information about a dataset.
Parameters:
- dataset_id (string, required) - ID of the dataset
query_operation_history
Query the history of operations performed.
Parameters:
- limit (number, optional) - Max number of operations (default: 10)
- status (string, optional) - Filter by status: success/failed/in_progress
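For example, to list the last 20 failed operations:
{
  "limit": 20,
  "status": "failed"
}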
query_search_projects
Search for projects by name or type.
Parameters:
- query (string, required) - Search query string
Troubleshooting
Server won't start
- Verify Node.js version (requires 16+)
- Check environment variables are set correctly
- Ensure port is not in use
Tools return errors
- Verify Labellerr API credentials are correct
- Check network connectivity
- Review operation history for error details
AI assistant can't find tools
- Verify configuration file path is correct
- Use absolute paths, not relative paths
- Restart the AI assistant completely after configuration
- Check that credentials are set in the config file
Debug Mode
Set LOG_LEVEL=debug in your .env file for detailed logging.
Development
Project Structure
labellerr-mcp-server/
├── src/
│   ├── index.js                  # Main server entry point
│   ├── labellerr-client.js       # Labellerr API client
│   └── tools/
│       └── index.js              # Tool definitions
├── package.json                  # Dependencies and scripts
├── .env.example                  # Environment template
├── claude_desktop_config.json    # Claude configuration example
├── LICENSE                       # MIT License
└── README.md                     # This file
Adding New Tools
- Define the tool schema in src/tools/index.js (see the sketch below)
- Implement the handler in src/index.js (handleCallTool method)
- Add the client method in src/labellerr-client.js if needed
- Update documentation
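As a starting point, a new tool entry could look like the sketch below. The shape (name, description, inputSchema) follows the standard MCP tool definition; the tool name dataset_delete and the exact way src/tools/index.js aggregates its definitions are assumptions, so adapt the snippet to the existing code:
// Hypothetical tool schema for illustration only -- not one of the 22 existing tools.
// Adjust to match how src/tools/index.js already declares and exports its tools.
const datasetDeleteTool = {
  name: 'dataset_delete',
  description: 'Delete a dataset by ID',
  inputSchema: {
    type: 'object',
    properties: {
      dataset_id: {
        type: 'string',
        description: 'ID of the dataset to delete'
      }
    },
    required: ['dataset_id']
  }
};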
Resources
- Labellerr Documentation: docs.labellerr.com
- MCP Protocol: modelcontextprotocol.io
- Support Email: support@labellerr.com
- GitHub Issues: github.com/1sarthakbhardwaj/labellerr-mcp-server/issues
License
MIT License - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Made with ❤️ for the Labellerr community