mandoline-mcp-server

mandoline-ai/mandoline-mcp-server


Mandoline MCP Server enables AI assistants to evaluate and improve their own performance via the Model Context Protocol, exposing Mandoline's metrics and evaluations as MCP tools.
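One way to explore the server is to connect a client built on the official MCP TypeScript SDK and enumerate its tools. The sketch below is illustrative only: the launch command, build path, and MANDOLINE_API_KEY environment variable are assumptions about how the server is started, not documented values.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio. The command, script path, and the
// MANDOLINE_API_KEY environment variable are placeholders, not documented values.
const transport = new StdioClientTransport({
  command: "node",
  args: ["path/to/mandoline-mcp-server/build/index.js"],
  env: { MANDOLINE_API_KEY: process.env.MANDOLINE_API_KEY ?? "" },
});

const client = new Client(
  { name: "example-client", version: "0.1.0" },
  { capabilities: {} }
);

await client.connect(transport);

// List the tools the server exposes (create_metric, create_evaluation, ...).
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```

Each entry returned by listTools also carries an inputSchema describing the arguments that tool accepts, which is the authoritative source for the parameter shapes sketched later in this listing.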

Tools

Functions exposed to the LLM to take actions. An illustrative sketch of invoking these tools from an MCP client follows the list.

create_metric: Define custom evaluation criteria for your specific tasks

batch_create_metrics: Create multiple evaluation metrics in one operation

get_metric: Retrieve details about a specific metric

get_metrics: Browse your metrics with filtering and pagination

update_metric: Modify existing metric definitions

create_evaluation: Score prompt/response pairs against your metrics

batch_create_evaluations: Evaluate the same content against multiple metrics

get_evaluation: Retrieve evaluation results and scores

get_evaluations: Browse evaluation history with filtering and pagination

update_evaluation: Add metadata or context to evaluations
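As a rough illustration of how the metric and evaluation tools fit together, the sketch below reuses the client from the earlier example to create a metric and then score a prompt/response pair against it. The argument names (name, description, metricId, prompt, response) are guesses inferred from the tool descriptions above, not a confirmed schema; check each tool's inputSchema for the actual shape.

```typescript
// Define a custom evaluation criterion. Field names are assumed, not confirmed.
const metricResult = await client.callTool({
  name: "create_metric",
  arguments: {
    name: "concise-explanations",
    description: "Rewards responses that explain a fix in as few words as possible.",
  },
});

// Score a prompt/response pair against that metric. Again, the argument shape
// (metricId, prompt, response) is inferred from the tool's description.
const evaluationResult = await client.callTool({
  name: "create_evaluation",
  arguments: {
    metricId: "<id returned by create_metric>",
    prompt: "Why does this loop never terminate?",
    response: "The counter is reset inside the loop body; move the reset above the loop.",
  },
});

console.log(metricResult.content);
console.log(evaluationResult.content);
```

The batch_* variants presumably accept arrays of the same argument objects so several metrics or evaluations can be created in one round trip, but that too should be confirmed against the server's published schemas.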

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources