prompt-tester

rt96-hub/prompt-tester

A simple MCP server that allows agents to test LLM prompts with different providers.

The MCP Prompt Tester is a server for testing language-model prompts across providers, currently OpenAI and Anthropic. It lets users configure system and user prompts, set generation parameters, and receive formatted responses or error messages. Setup is handled through environment variables or a .env file, keeping the server easy to run for developers. It provides tools for listing available providers, comparing prompts side by side across models, and managing multi-turn conversations, which makes it useful for agents that want to optimize prompt phrasing, compare model performance, and maintain context across a conversation.

Features

  • Test prompts with OpenAI and Anthropic models
  • Configure system prompts, user prompts, and other parameters
  • Get formatted responses or error messages
  • Easy environment setup with .env file support
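For the .env-based setup, a minimal file might look like the following. The variable names `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` follow the conventions of the providers' official SDKs and are assumptions here, not confirmed by this listing; check the server's own documentation for the exact names it reads.

```
# Hypothetical .env file -- variable names are assumed from the
# providers' SDK conventions, not confirmed by this server's docs.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```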

Tools

  1. list_providers

    Retrieves available LLM providers and their default models.

  2. test_comparison

    Compares multiple prompts side-by-side, allowing you to test different providers, models, and parameters simultaneously.

  3. test_multiturn_conversation

    Manages multi-turn conversations with LLM providers, allowing you to create and maintain stateful conversations.
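Since these are MCP tools, an agent would invoke them through the protocol's standard `tools/call` request. The sketch below builds such a request for `test_comparison`; the `tools/call` envelope is part of the MCP specification, but the argument names inside `arguments` (`comparisons`, `provider`, `model`, `system_prompt`, `user_prompt`) are illustrative assumptions — consult the server's published tool schema for the real input format.

```python
import json

# Hypothetical JSON-RPC "tools/call" request for the test_comparison tool.
# The shape of "arguments" below is an assumption for illustration only;
# the actual field names are defined by the server's tool schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "test_comparison",
        "arguments": {
            "comparisons": [
                {
                    "provider": "openai",
                    "model": "gpt-4o",
                    "system_prompt": "You are a helpful assistant.",
                    "user_prompt": "Summarize the benefits of unit testing.",
                },
                {
                    "provider": "anthropic",
                    "model": "claude-3-5-sonnet-20241022",
                    "system_prompt": "You are a helpful assistant.",
                    "user_prompt": "Summarize the benefits of unit testing.",
                },
            ]
        },
    },
}

# Serialize for transport (e.g., over stdio to the MCP server).
payload = json.dumps(request)
```

The same envelope applies to the other tools: `list_providers` would typically take no arguments, and `test_multiturn_conversation` would carry conversation state (again, with tool-specific argument names defined by the server).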