Claude Prompts MCP Server


Hot-reloadable prompts, structured reasoning, and chain workflows for your AI assistant. Built for Claude, works everywhere.

Quick Start · What You Get · Syntax · Docs

Why

Stop copy-pasting prompts. This server turns your prompt library into a programmable engine.

  • Version Control — Prompts are Markdown files in git. Track changes, review diffs, branch experiments.
  • Hot Reload — Edit a template, run it immediately. No restarts.
  • Structured Execution — Not just text. The server parses operators, injects methodology, enforces quality gates, renders the final prompt.
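
For example, a single command can name a template, attach a methodology, and demand a quality check (the criterion text here is illustrative; operators are covered in the Syntax Reference below):

>>code_review @ReACT :: 'no false positives'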

Quick Start

MCP clients launch the server automatically—you just configure and connect.

Option 1: NPM (Fastest)

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"]
    }
  }
}

Restart Claude Desktop. Test with: prompt_manager(action: "list")

That's it. The client handles the rest.


Option 1b: NPM with Custom Prompts

Want your own prompts without cloning the repo? Create a workspace:

npx claude-prompts --init=~/my-prompts

This creates a workspace with starter prompts. Then point Claude Desktop to it:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"],
      "env": {
        "MCP_WORKSPACE": "/home/YOUR_USERNAME/my-prompts"
      }
    }
  }
}

Restart Claude Desktop. Your prompts are now hot-reloadable—edit them directly, or ask Claude to update them:

User: "Make the quick_review prompt also check for TypeScript errors"
Claude: prompt_manager(action:"update", id:"quick_review", ...)  # Updates automatically

See the Configuration section below for all options.


Option 2: From Source (For Customization)

Clone if you want to edit prompts, create custom frameworks or gates, or contribute:

git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server
npm install && npm run build

Then configure Claude Desktop to use your local build:

Windows:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "node",
      "args": ["C:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"]
    }
  }
}

Mac/Linux:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "node",
      "args": ["/path/to/claude-prompts-mcp/server/dist/index.js"]
    }
  }
}

Verify It Works

Restart Claude Desktop. In the input bar, type:

prompt_manager list

How It Works

Not a static file reader. It's a render pipeline with a feedback loop:

%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
    classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
    classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
    classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
    classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
    classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
    classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;

    linkStyle default stroke:#94a3b8,stroke-width:2px

    User["1. User sends command"]:::actor
    Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
    User --> Example --> Parse

    subgraph Server["MCP Server"]
        direction TB
        Parse["2. Parse operators"]:::process
        Inject["3. Inject framework + gates"]:::process
        Render["4. Render prompt"]:::process
        Decide{"6. Route verdict"}:::decision
        Parse --> Inject --> Render
    end
    Server:::server

    subgraph Client["Claude (Client)"]
        direction TB
        Execute["5. Run prompt + check gates"]:::client
    end
    Client:::clientbg

    Render -->|"Prompt with gate criteria"| Execute
    Execute -->|"Verdict + output"| Decide

    Decide -->|"PASS → render next step"| Render
    Decide -->|"FAIL → render retry prompt"| Render
    Decide -->|"Done"| Result["7. Return to user"]:::actor

The feedback loop:

  1. You send a command with operators (@framework, :: gates, --> chains)
  2. Server parses operators and injects methodology guidance + gate criteria
  3. Server returns the rendered prompt (gates appear as self-check instructions at the bottom)
  4. Claude executes the prompt and evaluates itself against the gate criteria
  5. Claude responds with a verdict (PASS/FAIL) and its output
  6. Server routes: renders next chain step (PASS), renders retry with feedback (FAIL), or returns final result (done)
The building blocks:

  • Templates: Markdown files with Nunjucks ({{var}}).
  • Frameworks: Structured thinking patterns (CAGEERF, ReACT, 5W1H, SCAMPER) that guide HOW Claude reasons through problems. When active, frameworks inject:
    • System prompt guidance: Step-by-step methodology instructions
    • Methodology gates: Auto-applied quality checks specific to the framework's phases
    • Tool overlays: Context-aware tool descriptions showing current methodology state
  • Guidance Styles: Instructional templates (analytical, procedural, creative, reasoning) in server/prompts/guidance/ that shape response format.
  • Gates: Quality criteria (e.g., "Must cite sources") injected into prompts for Claude to self-check. Use :: criteria inline or define in server/src/gates/definitions/.

Injection Control: Override defaults with modifiers: %guided forces framework injection, %clean skips all guidance, %lean keeps only gate checks. Configure default frequency in config.json under injection.system-prompt.frequency. See the Injection Target Modes section under Configuration for details.
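
A minimal sketch of that setting (the frequency value here is illustrative; check your config.json for the exact accepted values):

{
  "injection": {
    "system-prompt": { "enabled": true, "frequency": 1 }
  }
}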

What You Get

🔥 Hot Reload

Problem: Prompt iteration is slow. Edit file → restart server → test → repeat. And you're the one debugging prompt issues.

Solution: The server watches server/prompts/*.md for changes and reloads instantly. But the real value: just ask Claude to fix it. When a prompt underperforms, describe the issue—Claude diagnoses, updates the file via prompt_manager, and you test immediately. No manual editing, no restart.

User: "The code_review prompt is too verbose"
Claude: prompt_manager(action:"update", id:"code_review", ...)  # Claude fixes it
User: "Test it"
Claude: prompt_engine(command:">>code_review")                   # Runs updated version instantly

Expect: Claude iterates on prompts faster than you can. You describe the problem, Claude proposes and applies the fix, you validate. Tight feedback loop.


🔗 Chains

Problem: Complex tasks need multiple reasoning steps, but a single prompt tries to do everything at once.

Solution: Break work into discrete steps with -->. Each step's output becomes the next step's input. Add quality checks between steps.

analyze code --> identify issues --> propose fixes --> generate tests

Expect: The server executes steps sequentially, passing context forward. You see each step's output and can intervene if something goes wrong mid-chain.
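
The same chain can also be passed to the tool directly as one command string, following the prompt_engine call style used elsewhere in this README:

prompt_engine(command:"analyze code --> identify issues --> propose fixes --> generate tests")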


🧠 Frameworks

Problem: Claude's reasoning varies in structure. Sometimes it's thorough, sometimes it skips steps. You want consistent, methodical thinking.

Solution: Frameworks inject a thinking methodology into the system prompt. The LLM follows a defined reasoning pattern (e.g., "first gather context, then analyze, then plan, then execute"). Each framework also auto-injects quality gates specific to its phases.

@CAGEERF Review this architecture    # Injects structured planning methodology
@ReACT Debug this error              # Injects iterative reason-act-observe loops

Expect: Claude's response follows the methodology's structure. You'll see labeled phases in the output. The framework's gates validate each phase was completed properly.


🛡️ Gates

Problem: Claude returns plausible-sounding outputs, but you need specific criteria met—and you want Claude to verify this, not you.

Solution: Gates inject quality criteria into the prompt. Claude self-evaluates against these criteria and reports PASS/FAIL with reasoning. Failed gates can trigger retries or block the chain.

Summarize this document :: 'must be under 200 words' :: 'must include key statistics'

Expect: Claude's response includes a self-assessment section. If criteria aren't met, the server can auto-retry with feedback or pause for your decision.


✨ Judge Selection

Problem: You have multiple frameworks, styles, and gates available—but you're not sure which combination fits your task.

Solution: %judge presents Claude with your available resources. Claude analyzes your task and recommends (or auto-applies) the best combination.

%judge Help me refactor this legacy codebase

Expect: Claude returns a resource menu with recommendations, then makes a follow-up call with the selected operators applied.

Using Gates

Gates inject quality criteria into prompts. Claude self-checks against them and reports PASS/FAIL.

Inline — quick natural language checks:

Help me refactor this function :: 'keep it under 20 lines' :: 'add error handling'

With Framework — methodology + auto-gates:

@CAGEERF Explain React hooks :: 'include practical examples'

The framework injects its phase-specific gates automatically. Your inline gate (:: 'include practical examples') adds on top.

Chained — quality checks between steps:

Research the topic :: 'use recent sources' --> Summarize findings :: 'be concise' --> Create action items

| Gate Format | Syntax | Use Case |
| --- | --- | --- |
| Inline | :: 'criteria text' | Quick checks, readable commands |
| Named | :: {name, description} | Reusable gates with clear intent |
| Full | :: {name, criteria[], guidance} | Complex validation, multiple criteria |
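
For instance, the named form labels a check so it reads clearly in retries (a sketch following the table's schema; the exact field syntax may differ):

Help me refactor this function :: {name: "Brevity", description: "keep it under 20 lines"}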

Structured gates (programmatic):

prompt_engine({
  command: ">>code_review",
  gates: [
    {
      name: "Security Check",
      criteria: ["No hardcoded secrets", "Input validation on user data"],
      guidance: "Flag vulnerabilities with severity ratings",
    },
  ],
});

For the full gate schema, see the gate documentation.

Syntax Reference

The prompt_engine uses symbolic operators to compose workflows:

| Symbol | Name | What It Does |
| --- | --- | --- |
| >> | Prompt | Executes a template by ID (>>code_review) |
| --> | Chain | Pipes output to next step (step1 --> step2) |
| @ | Framework | Injects methodology + auto-gates (@CAGEERF) |
| :: | Gate | Adds quality criteria (:: 'cite sources') |
| % | Modifier | Toggles execution mode (%clean, %lean, %judge) |
| # | Style | Applies tone/persona preset (#analytical) |
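
Operators compose: a single command can stack a template, framework, gate, and style (an illustrative combination):

>>code_review @CAGEERF :: 'cite sources' #analytical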

Modifiers explained:

  • %clean — Skip all framework/gate injection (raw template only)
  • %lean — Skip framework guidance, keep gates only
  • %guided — Force framework injection even if disabled by frequency settings
  • %judge — Claude analyzes task and selects best resources automatically
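
In practice (illustrative commands):

%clean >>code_review    # raw template only
%lean >>code_review     # keep gate checks, skip methodology guidance
%guided >>code_review   # force framework injection
%judge >>code_review    # let Claude pick frameworks, styles, and gates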

Advanced Features

Gate Retry & Enforcement

The server manages gate failures automatically:

  • Retry Limits: Failed gates retry up to 2× (configurable) before pausing for input.
  • Enforcement Modes:
    • blocking — Must pass to proceed (Critical/High severity gates)
    • advisory — Logs warning, continues anyway (Medium/Low severity)
  • User Choice: On retry exhaustion, respond with retry, skip, or abort.
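
A custom definition in server/src/gates/definitions/ might pair criteria with an enforcement mode. This is a sketch only: name, criteria, and guidance mirror the structured gate schema shown earlier, while the enforcement field is an assumption rather than confirmed schema:

{
  "name": "security_review",
  "criteria": ["No hardcoded secrets", "Input validation on user data"],
  "guidance": "Flag vulnerabilities with severity ratings",
  "enforcement": "blocking"
}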

Examples

1. Judge-Driven Selection (Two-Call Pattern)

Not sure what style, framework, or gates to use? Let Claude analyze and decide.

# Phase 1: Get resource menu
prompt_engine(command:"%judge >>code_review")
# Claude sees available options and analyzes your task

# Phase 2: Claude calls back with selections
prompt_engine(command:">>code_review @CAGEERF :: security_review #style(analytical)")

The %judge modifier returns a resource menu. Claude analyzes the task, selects appropriate resources, and makes a follow-up call with inline operators.

2. Chained Reasoning

Multi-step workflows with quality checks at each stage:

Research AI trends :: 'use 2024 sources' --> Analyze implications --> Write executive summary :: 'keep under 500 words'

3. Iterative Prompt Refinement

Found an issue with a prompt? Ask Claude to fix it—changes apply immediately:

User: "The code_review prompt is too verbose, make it more concise"
Claude: prompt_manager(action:"update", id:"code_review", ...)

User: "Now test it"
Claude: prompt_engine(command:">>code_review")
# Uses the updated prompt instantly—no restart needed

This feedback loop lets you continuously improve prompts as you discover edge cases.

Configuration

Customize behavior via server/config.json. No rebuild required—just restart.

| Section | Setting | Default | Description |
| --- | --- | --- | --- |
| prompts | file | prompts/promptsConfig.json | Master config defining prompt categories and import paths. |
| prompts | registerWithMcp | true | Exposes prompts to Claude clients. Set false for internal-only mode. |
| frameworks | enableSystemPromptInjection | true | Auto-injects methodology guidance (CAGEERF, etc.) into system prompts. |
| gates | definitionsDirectory | src/gates/definitions | Path to custom quality gate definitions (JSON). |
| judge | enabled | true | Enables the built-in judge phase (%judge) that surfaces framework/style/gate options. |
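
Assembled from the table above, a minimal config.json override might look like this (a sketch; the nesting is assumed to follow the Section/Setting columns):

{
  "prompts": {
    "file": "prompts/promptsConfig.json",
    "registerWithMcp": true
  },
  "frameworks": { "enableSystemPromptInjection": true },
  "gates": { "definitionsDirectory": "src/gates/definitions" },
  "judge": { "enabled": true }
}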

Injection Target Modes (Advanced)

By default, framework guidance injects on both step execution and gate reviews. To customize WHERE injection occurs, add an injection section to your config:

{
  "injection": {
    "system-prompt": { "enabled": true, "target": "steps" },
    "gate-guidance": { "enabled": true, "target": "gates" }
  }
}

| Target | Behavior |
| --- | --- |
| both | Inject on steps and gate reviews (default) |
| steps | Inject only during normal step execution |
| gates | Inject only during gate review steps |

Applies to: system-prompt, gate-guidance, style-guidance

Docs

  • Execution pipeline deep dive
  • Full command reference
  • Creating templates and gates
  • Multi-step workflows
  • Quality validation

Contributing

See the contributing guide.

cd server
npm run test        # Run Jest
npm run typecheck   # Verify types
npm run validate:all # Full CI check

License

MIT