raisinbread/teleprompter

Teleprompter

Teleprompter Logo

An MCP server that manages and exposes tools to allow prompt re-use with LLMs.


Table of Contents

  • Features
  • MCP Configuration
  • Usage Examples
  • Testing
  • Contributing
  • License
  • Acknowledgements
Features

  • Prompt Storage & Reuse: Store, search, and retrieve prompt templates for LLMs.
  • MCP Server: Exposes prompt tools via the Model Context Protocol (MCP).
  • Prompt Variables: Supports template variables (e.g., {{name}}) for dynamic prompt generation.
  • Search: Fast full-text search over stored prompts using MiniSearch.
  • TypeScript: Modern, type-safe codebase.
  • Extensive Testing: Includes unit and integration tests with Vitest.
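
As a rough illustration of the prompt-variables feature, a {{variable_name}} placeholder can be filled in with a simple string substitution. The renderPrompt helper below is a hypothetical sketch, not Teleprompter's actual implementation:

```typescript
// Hypothetical helper illustrating {{variable}} substitution;
// not Teleprompter's actual internals.
function renderPrompt(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} with its value; leave unknown placeholders intact.
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match
  );
}

const filled = renderPrompt(
  "**Current mood:** {{mood}} while {{activity}}",
  { mood: "focused", activity: "writing" }
);
console.log(filled); // → **Current mood:** focused while writing
```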

MCP Configuration

To use Teleprompter with your LLM client, add this configuration:

{
  "mcpServers": {
    "teleprompter": {
      "command": "npx",
      "args": ["-y", "mcp-teleprompter"],
      "env": {
        "PROMPT_STORAGE_PATH": "/path/to/your/prompts-directory"
      }
    }
  }
}

Note: Replace /path/to/your/prompts-directory with the absolute path where you want prompts stored.


Usage Examples

Once configured, you can use Teleprompter with your LLM by using prompt tags in your conversations. Here's a detailed example that shows how it solves the problem of repeating complex instructions:

🎵 Music Discovery on Spotify

The Problem: Every time you want music recommendations, you have to remind your LLM of all your preferences and constraints:

  • "Don't suggest songs I already have in my playlists"
  • "Avoid explicit lyrics"
  • "Add songs to my queue for review, not directly to playlists"
  • "Focus on discovering new artists, not just popular hits"
  • "Consider my current activity and mood"
  • "Provide brief explanations for why each song fits"

The Solution: Create a prompt that captures all these instructions once.

Creating the prompt: Ask your LLM: "Create a prompt called 'spotify-discover' that helps me find new music with all my specific preferences and workflow requirements."

This creates a comprehensive template like:

I'm looking for music recommendations for Spotify based on:

**Current mood:** {{mood}}
**Activity/setting:** {{activity}}
**Preferred genres:** {{genres}}
**Recent artists I've enjoyed:** {{recent_artists}}

**Important constraints:**
- DO NOT suggest songs I already have in my existing playlists
- Avoid explicit lyrics (clean versions only)
- Focus on discovering new/lesser-known artists, not just popular hits
- Provide 5-7 song recommendations maximum

**Workflow:**
- Add recommendations to my Spotify queue (not directly to playlists)
- I'll review and save the ones I like to appropriate playlists later

**For each recommendation, include:**
- Artist and song name
- Brief explanation (1-2 sentences) of why it fits my current mood/activity
- Similar artists I might also enjoy

Please help me discover music that matches this vibe while following these preferences.

Using it:

>> spotify-discover

Now you just fill in your current mood and activity, and get perfectly tailored recommendations that follow all your rules, with no need to repeat your constraints every time.

🔄 Other Common Use Cases

📋 Work Ticket Management

  • Create prompts for JIRA/Linear ticket formatting with your team's specific requirements
  • Include standard fields, priority levels, acceptance criteria templates
  • Avoid repeating your company's ticket standards every time

📧 Email Templates

  • Customer support responses with your company's tone and required disclaimers
  • Follow-up sequences that match your communication style
  • Automated inclusion of signatures, links, and standard information

πŸ“ Code Review Guidelines

  • Technical review checklists with your team's specific standards
  • Security considerations and performance criteria
  • Documentation requirements and testing expectations

The common thread: stop repeating yourself. If you find yourself giving the same detailed instructions to your LLM repeatedly, create a prompt for it.

πŸ” Discovering Existing Prompts

You can search your prompt library:

Can you search my prompts for "productivity" or "task management"?

Or list all available prompts:

What prompts do I have available?

✏️ Manual Editing

Prompts are stored as simple markdown files in your PROMPT_STORAGE_PATH directory. You can also create and edit them directly with your favorite text editor:

  • Each prompt is saved as {id}.md in your prompts directory
  • Use {{variable_name}} syntax for template variables
  • Standard markdown formatting is supported
  • File changes are automatically picked up by the server
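
Because the storage format is just {id}.md files with {{variable}} placeholders, it is easy to inspect a prompt library with ordinary file tools. The sketch below (assumed behavior, not Teleprompter's source) enumerates prompts in a directory and the variables each one uses:

```typescript
// Sketch (assumed behavior, not Teleprompter's source) of enumerating
// prompts stored as {id}.md files and the {{variables}} each one uses.
import { mkdtempSync, writeFileSync, readdirSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, basename } from "node:path";

function listPrompts(dir: string): Map<string, string[]> {
  const prompts = new Map<string, string[]>();
  for (const file of readdirSync(dir)) {
    if (!file.endsWith(".md")) continue; // each prompt lives in {id}.md
    const body = readFileSync(join(dir, file), "utf8");
    // Unique {{variable_name}} placeholders, in order of first appearance
    const vars = [...new Set([...body.matchAll(/\{\{(\w+)\}\}/g)].map((m) => m[1]))];
    prompts.set(basename(file, ".md"), vars);
  }
  return prompts;
}

// Demo against a throwaway directory standing in for PROMPT_STORAGE_PATH
const dir = mkdtempSync(join(tmpdir(), "prompts-"));
writeFileSync(join(dir, "spotify-discover.md"), "Mood: {{mood}}, doing {{activity}}");
const prompts = listPrompts(dir);
console.log(prompts.get("spotify-discover")); // → [ 'mood', 'activity' ]
```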

💡 Best Practices

  1. Use descriptive IDs: Choose prompt IDs that clearly indicate their purpose (e.g., meeting-notes, code-review-checklist)

  2. Include helpful variables: Use {{variable_name}} for dynamic content that changes each time you use the prompt

  3. Organize by category: Consider using prefixes like task-, content-, analysis- to group related prompts
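
Applying these practices, a hand-written file such as code-review-checklist.md (a made-up example following the {id}.md convention described above) might look like:

```markdown
Review the following {{language}} code against our team standards:

{{code}}

**Check for:**
- Security issues (input validation, hard-coded secrets)
- Performance concerns and obvious inefficiencies
- Missing tests and documentation

Summarize findings as a prioritized list.
```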

Testing

Run all tests:

npm test

Run tests with coverage:

npm run test:coverage

Tests are written with Vitest. Coverage reports are generated in the coverage/ directory.


Contributing

Contributions are welcome! Please:

  • Follow the existing code style (see .prettierrc.json and .eslintrc.mjs).
  • Add tests for new features or bug fixes.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Acknowledgements


Made with ❤️ by John Anderson