
Claude Prompts MCP Server


npm version • License: MIT • Model Context Protocol • Node.js

šŸš€ The Universal Model Context Protocol Server for Any MCP Client

Supercharge your AI workflows with battle-tested prompt engineering, intelligent orchestration, and lightning-fast hot-reload capabilities. Works seamlessly with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.

⚔ Quick Start • šŸŽÆ Features • šŸ“š Docs • šŸ› ļø Advanced


🌟 What Makes This Special? (v1.1.0 - "Intelligent Execution")

  • 🧠 Semantic Analysis Engine → Automatically detects execution types without manual configuration
  • šŸŽÆ Universal Prompt Execution → Single tool with intelligent mode detection and auto-execution
  • šŸ›”ļø Smart Quality Gates → Auto-assigned validation based on prompt complexity and type
  • šŸ”„ Zero-Configuration Reliability → No headers or manual setup required - just works intelligently
  • šŸ“Š Learning Analytics → System improves detection accuracy through usage patterns
  • šŸ”„ Intelligent Hot-Reload System → Update prompts instantly without restarts
  • šŸŽØ Advanced Template Engine → Nunjucks-powered with conditionals, loops, and dynamic data
  • ⚔ Multi-Phase Orchestration → Robust startup sequence with comprehensive health monitoring
  • šŸš€ Universal MCP Compatibility → Works flawlessly with Claude Desktop, Cursor, Windsurf, and any MCP client

Transform your AI assistant experience from scattered prompts to a truly intelligent execution engine that automatically understands and optimally executes any prompt across any MCP-compatible platform.

šŸš€ Revolutionary Interactive Prompt Management

šŸŽÆ The Future is Here: Manage Your AI's Capabilities FROM WITHIN the AI Conversation

This isn't just another prompt server – it's a living, breathing prompt ecosystem that evolves through natural conversation with your AI assistant. Imagine being able to:

# šŸ—£ļø Create new prompts by talking to your AI
"Hey Claude, create a new prompt called 'code_reviewer' that analyzes code for security issues"
→ Claude creates, tests, and registers the prompt instantly

# āœļø Refine prompts through conversation
"That code reviewer prompt needs to also check for performance issues"
→ Claude modifies the prompt and hot-reloads it immediately

# šŸ” Discover and iterate on your prompt library
>>listprompts
→ Browse your growing collection, then ask: "Improve the research_assistant prompt to be more thorough"

# 🧠 Execute prompts with zero configuration - system auto-detects everything
>>content_analysis my content
→ Automatic semantic analysis detects workflow type, applies quality gates, executes perfectly

🌟 Why This Changes Everything:

  • 🧠 True Intelligence: System understands prompts like a human - no configuration needed
  • šŸ”„ Self-Evolving System: Your AI assistant literally builds and improves its own capabilities in real-time
  • šŸŽ® Zero Friction: Never configure execution modes, quality gates, or headers - everything just works
  • ⚔ Instant Perfection: Create → Auto-detect → Execute optimally in one seamless flow
  • 🌱 Learning System: Detection accuracy improves through usage - gets smarter over time

This is what truly intelligent AI infrastructure looks like – where the system understands intent as naturally as reading human language.

⚔ Features & Reliability

šŸŽÆ Developer Experience

  • šŸ”„ One-Command Installation in under 60 seconds
  • ⚔ Hot-Reload Everything → prompts, configs, templates
  • šŸŽØ Rich Template Engine → conditionals, loops, data injection
  • šŸš€ Universal MCP Integration → works with Claude Desktop, Cursor, Windsurf, and any MCP client
  • šŸ“± Multi-Transport Support → STDIO for Claude Desktop + SSE/REST for web
  • šŸ› ļø Dynamic Management Tools → update, delete, reload prompts on-the-fly

šŸš€ Enterprise Architecture

  • šŸ—ļø Orchestration → phased startup with dependency management
  • šŸ”§ Robust Error Handling → graceful degradation with comprehensive logging
  • šŸ“Š Real-Time Health Monitoring → module status, performance metrics, diagnostics
  • šŸŽÆ Smart Environment Detection → works across development and production contexts
  • āš™ļø Modular Plugin System → extensible architecture for custom workflows
  • šŸ” Production-Ready Security → input validation, sanitization, error boundaries

šŸ› ļø Enhanced MCP Tools Suite (v1.1.0)

  • šŸŽÆ Universal Prompt Execution → execute_prompt tool with automatic mode detection and gate validation
  • šŸ›”ļø Quality Assurance Gates → Automatic content validation with intelligent retry mechanisms
  • šŸ“Š Execution Analytics → execution_analytics tool for performance monitoring and insights
  • šŸ”„ Step-by-Step Chain Execution → Optional confirmation between chain steps for quality control
  • šŸ“‹ List Prompts → listprompts to discover all available commands with enhanced usage examples
  • āœļø Update Prompts → Modify existing prompts through conversation with full validation and hot-reload
  • šŸ—‘ļø Delete Prompts → Remove prompts by asking your AI assistant - automatic file cleanup included
  • šŸ”§ Modify Sections → "Edit the description of my research prompt" → Done instantly
  • šŸ”„ Reload System → Force refresh through chat - no terminal access needed
  • āš™ļø Smart Argument Parsing → JSON objects, single arguments, or fallback to {{previous_message}}
  • šŸ”— Chain Execution → Multi-step workflow management with conversational guidance
  • šŸŽØ Conversational Creation → "Create a new prompt that..." → AI builds it for you interactively
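
For example, the argument parser accepts several input shapes for the same prompt (the argument names below are illustrative):

# Structured arguments as a JSON object
>>content_analysis {"content": "my draft post", "focus_area": "tone"}

# A single bare argument mapped to the prompt's primary parameter
>>content_analysis my draft post

# No argument at all: the server falls back to {{previous_message}} from the conversation
>>content_analysis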

šŸŽÆ One-Command Installation

Get your AI command center running in under a minute:

# Clone → Install → Launch → Profit! šŸš€
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start

šŸ”Œ Universal MCP Client Integration

Claude Desktop

Drop this into your claude_desktop_config.json:

{
  "mcpServers": {
    "claude-prompts-mcp": {
      "command": "node",
      "args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"],
      "env": {
        "MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\promptsConfig.json"
      }
    }
  }
}
Cursor, Windsurf & Other MCP Clients

Configure your MCP client to connect via STDIO transport:

  • Command: node
  • Args: ["path/to/claude-prompts-mcp/server/dist/index.js"]
  • Environment: MCP_PROMPTS_CONFIG_PATH=path/to/promptsConfig.json

šŸ’” Pro Tip: Use absolute paths for bulletproof integration across all MCP clients!

šŸŽ® Start Building Immediately (v1.1.0 Enhanced)

Your AI command arsenal is ready with enhanced reliability:

# 🧠 Discover your intelligent superpowers
>>listprompts

# šŸŽÆ Zero-config intelligent execution - system auto-detects everything
>>friendly_greeting name="Developer"
→ Auto-detected as template, returns personalized greeting

>>content_analysis my research data
→ Auto-detected as workflow, applies quality gates, executes analysis framework

>>notes my content
→ Auto-detected as chain, validates each step, executes sequence

# šŸ“Š Monitor intelligent detection performance
>>execution_analytics {"include_history": true}
→ See how accurately the system detects prompt types and applies gates

# šŸš€ Create prompts that just work (zero configuration)
"Create a prompt called 'bug_analyzer' that finds and explains code issues"
→ AI creates prompt, system auto-detects workflow type, assigns quality gates

# šŸ”„ Refine prompts through conversation (intelligence improves)
"Make the bug_analyzer prompt also suggest performance improvements"
→ Prompt updated, system re-analyzes, updates detection profile automatically

# 🧠 Build intelligent AI workflows
"Create a prompt chain that reviews code, validates output, tests it, then documents it"
→ Chain created, each step auto-analyzed, appropriate gates assigned automatically

# šŸŽ›ļø Manual override when needed (but rarely necessary)
>>execute_prompt {"command": ">>content_analysis data", "step_confirmation": true}
→ Force step confirmation for sensitive analysis

🌟 The Magic: Your prompt library becomes a living extension of your workflow, growing and adapting as you work with your AI assistant.

šŸ”„ Why Developers Choose This Server

⚔ Lightning-Fast Hot-Reload → Edit prompts, see changes instantly

Our sophisticated orchestration engine monitors your files and reloads everything seamlessly:

# Edit any prompt file → Server detects → Reloads automatically → Zero downtime
  • Instant Updates: Change templates, arguments, descriptions in real-time
  • Zero Restart Required: Advanced hot-reload system keeps everything running
  • Smart Dependency Tracking: Only reloads what actually changed
  • Graceful Error Recovery: Invalid changes don't crash the server
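
Conceptually, the reload loop is a file watcher plus a re-registration step. The sketch below is illustrative only (it is not the server's internal code) and assumes the chokidar package along with a hypothetical reloadPrompts() helper:

import { watch } from "chokidar";

// Hypothetical helper: re-parse changed prompt files, validate them, and swap them in atomically.
async function reloadPrompts(changedPath: string): Promise<void> {
  console.error(`Reloading prompts after change to ${changedPath}`);
  // ...re-read templates, re-register with the MCP server...
}

// Watch the prompts directory; only changed files trigger a reload.
const watcher = watch("prompts/**/*.{md,json}", { ignoreInitial: true });
watcher.on("change", (path) => {
  reloadPrompts(path).catch((err) => {
    // Invalid changes are logged instead of crashing the running server.
    console.error(`Hot-reload failed, keeping previous prompts: ${err}`);
  });
});
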
šŸŽØ Next-Gen Template Engine → Nunjucks-powered dynamic prompts

Go beyond simple text replacement with a full template engine:

Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.

{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}

{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
  • Conditional Logic: Smart prompts that adapt based on input
  • Loops & Iteration: Handle arrays and complex data structures
  • Template Inheritance: Reuse and extend prompt patterns
  • Real-Time Processing: Templates render with live data injection
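
Under the hood this is standard Nunjucks rendering. A minimal, self-contained sketch of how a template like the one above becomes a finished prompt (the context values are invented for the example):

import nunjucks from "nunjucks";

const template = `Analyze {{ content }} for {% if focus_area %}{{ focus_area }}{% else %}general{% endif %} insights.
{% for requirement in requirements %}- Consider: {{ requirement }}
{% endfor %}`;

// Render with live data; optional fields simply fall back to the else branch.
const prompt = nunjucks.renderString(template, {
  content: "the Q3 report",
  focus_area: "risk",
  requirements: ["budget variance", "headcount changes"],
});

console.log(prompt);
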
šŸ—ļø Enterprise-Grade Orchestration → Multi-phase startup with health monitoring

Built like production software with comprehensive architecture:

Phase 1: Foundation → Config, logging, core services
Phase 2: Data Loading → Prompts, categories, validation
Phase 3: Module Init → Tools, executors, managers
Phase 4: Server Launch → Transport, API, diagnostics
  • Dependency Management: Modules start in correct order with validation
  • Health Monitoring: Real-time status of all components
  • Performance Metrics: Memory usage, uptime, connection tracking
  • Diagnostic Tools: Built-in troubleshooting and debugging
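
A conceptual sketch of what such a phased startup could look like (the types and function names here are hypothetical, not the server's actual API):

// Each phase only runs once the previous one has completed successfully.
type Phase = { name: string; run: () => Promise<void> };

async function startServer(phases: Phase[]): Promise<void> {
  for (const phase of phases) {
    console.error(`Starting phase: ${phase.name}`);
    // A failure here halts startup with a clear error instead of a half-initialized server.
    await phase.run();
  }
}

await startServer([
  { name: "Foundation", run: async () => { /* load config, set up logging */ } },
  { name: "Data Loading", run: async () => { /* read prompts and categories, validate */ } },
  { name: "Module Init", run: async () => { /* register MCP tools and executors */ } },
  { name: "Server Launch", run: async () => { /* connect STDIO/SSE transport, start diagnostics */ } },
]);
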
šŸ”„ Intelligent Prompt Chains → Multi-step AI workflows

Create sophisticated workflows where each step builds on the previous:

{
  "id": "content_analysis_chain",
  "name": "Content Analysis Chain",
  "isChain": true,
  "chainSteps": [
    {
      "stepName": "Extract Key Points",
      "promptId": "extract_key_points",
      "inputMapping": { "content": "original_content" },
      "outputMapping": { "key_points": "extracted_points" }
    },
    {
      "stepName": "Analyze Sentiment",
      "promptId": "sentiment_analysis",
      "inputMapping": { "text": "extracted_points" },
      "outputMapping": { "sentiment": "analysis_result" }
    }
  ]
}
  • Visual Step Planning: See your workflow before execution
  • Input/Output Mapping: Data flows seamlessly between steps
  • Error Recovery: Failed steps don't crash the entire chain
  • Flexible Execution: Run chains or individual steps as needed
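
The inputMapping and outputMapping fields describe how data flows through the chain: each step reads its arguments from a shared context and writes its results back under new names. A simplified illustration of that idea (not the server's actual executor, and the exact semantics may differ):

type ChainStep = {
  promptId: string;
  inputMapping: Record<string, string>;   // prompt argument -> context key
  outputMapping: Record<string, string>;  // prompt output  -> context key
};

async function runChain(
  steps: ChainStep[],
  context: Record<string, unknown>,
  executePrompt: (id: string, args: Record<string, unknown>) => Promise<Record<string, unknown>>,
): Promise<Record<string, unknown>> {
  for (const step of steps) {
    // Build the step's arguments from the shared context.
    const args = Object.fromEntries(
      Object.entries(step.inputMapping).map(([arg, key]) => [arg, context[key]]),
    );
    const result = await executePrompt(step.promptId, args);
    // Copy the step's outputs back into the context for later steps.
    for (const [output, key] of Object.entries(step.outputMapping)) {
      context[key] = result[output];
    }
  }
  return context;
}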

šŸ“Š System Architecture

graph TB
    A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
    B --> C[🧠 Orchestration Engine]
    C --> D[šŸ“ Prompt Manager]
    C --> E[šŸ› ļø MCP Tools Manager]
    C --> F[āš™ļø Config Manager]
    D --> G[šŸŽØ Template Engine]
    E --> H[šŸ”§ Management Tools]
    F --> I[šŸ”„ Hot Reload System]

    style C fill:#ff6b35
    style D fill:#00ff88
    style E fill:#0066cc

🌐 MCP Client Compatibility

This server implements the Model Context Protocol (MCP) standard and works with any compatible client:

āœ… Tested & Verified

  • šŸŽÆ Claude Desktop → Full integration support
  • šŸš€ Cursor & Windsurf → Native MCP compatibility

šŸ”Œ Transport Support

  • šŸ“” STDIO → Primary transport for desktop clients
  • 🌐 Server-Sent Events (SSE) → Web-based clients and integrations
  • šŸ”— HTTP Endpoints → Basic endpoints for health checks and data queries

šŸŽÆ Integration Features

  • šŸ”„ Auto-Discovery → Clients detect tools automatically
  • šŸ“‹ Tool Registration → Dynamic capability announcement
  • ⚔ Hot Reload → Changes appear instantly in clients
  • šŸ› ļø Error Handling → Graceful degradation across clients

šŸ’” Developer Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.

šŸ› ļø Advanced Configuration

āš™ļø Server Powerhouse (config.json)

Fine-tune your server's behavior:

{
  "server": {
    "name": "Claude Custom Prompts MCP Server",
    "version": "1.0.0",
    "port": 9090
  },
  "prompts": {
    "file": "promptsConfig.json",
    "registrationMode": "name"
  },
  "transports": {
    "default": "stdio",
    "sse": { "enabled": false },
    "stdio": { "enabled": true }
  }
}

šŸ—‚ļø Prompt Organization (promptsConfig.json)

Structure your AI command library:

{
  "categories": [
    {
      "id": "development",
      "name": "šŸ”§ Development",
      "description": "Code review, debugging, and development workflows"
    },
    {
      "id": "analysis",
      "name": "šŸ“Š Analysis",
      "description": "Content analysis and research prompts"
    },
    {
      "id": "creative",
      "name": "šŸŽØ Creative",
      "description": "Content creation and creative writing"
    }
  ],
  "imports": [
    "prompts/development/prompts.json",
    "prompts/analysis/prompts.json",
    "prompts/creative/prompts.json"
  ]
}

šŸš€ Advanced Features

šŸ”„ Multi-Step Prompt Chains → Build sophisticated AI workflows

Create complex workflows that chain multiple prompts together:

# Research Analysis Chain

## User Message Template

Research {{topic}} and provide {{analysis_type}} analysis.

## Chain Configuration

Steps: research → extract → analyze → summarize
Input Mapping: {topic} → {content} → {key_points} → {insights}
Output Format: Structured report with executive summary

Capabilities:

  • Sequential Processing: Each step uses output from previous step
  • Parallel Execution: Run multiple analysis streams simultaneously
  • Error Recovery: Graceful handling of failed steps
  • Custom Logic: Conditional branching based on intermediate results
šŸŽØ Advanced Template Features → Dynamic, intelligent prompts

Leverage the full power of Nunjucks templating:

# {{ title | title }} Analysis

## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}

## Requirements
{% for req in requirements %}
{{loop.index}}. **{{req.priority | upper}}**: {{req.description}}
   {% if req.examples %}
   Examples: {% for ex in req.examples %}{{ex}}{% if not loop.last %}, {% endif %}{% endfor %}
   {% endif %}
{% endfor %}

## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}

Template Features:

  • Filters & Functions: Transform data on-the-fly
  • Conditional Logic: Smart branching based on input
  • Loops & Iteration: Handle complex data structures
  • Template Inheritance: Build reusable prompt components
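
Filters such as title, upper, and trim are built into Nunjucks; a filter like summary in the example above would be a custom filter registered on the template environment. A brief sketch of how such a filter could be added (the truncation logic is just an example):

import nunjucks from "nunjucks";

const env = new nunjucks.Environment();

// Hypothetical "summary" filter: truncate long text to a fixed length.
env.addFilter("summary", (text: string, maxLength: number = 120) =>
  text.length <= maxLength ? text : `${text.slice(0, maxLength)}…`,
);

const out = env.renderString(
  "Building upon previous analysis: {{ previous_analysis | summary }}",
  { previous_analysis: "A very long block of earlier findings..." },
);
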
šŸ”§ Real-Time Management Tools → Hot management without downtime

Manage your prompts dynamically while the server runs:

# Update prompts on-the-fly
>>update_prompt id="analysis_prompt" content="new template"

# Add new sections dynamically
>>modify_prompt_section id="research" section="examples" content="new examples"

# Hot-reload everything
>>reload_prompts reason="updated templates"

Management Capabilities:

  • Live Updates: Change prompts without server restart
  • Section Editing: Modify specific parts of prompts
  • Bulk Operations: Update multiple prompts at once
  • Rollback Support: Undo changes when things go wrong
šŸ“Š Production Monitoring → Enterprise-grade observability

Built-in monitoring and diagnostics for production environments:

// Health Check Response
{
  healthy: true,
  modules: {
    foundation: true,
    dataLoaded: true,
    modulesInitialized: true,
    serverRunning: true
  },
  performance: {
    uptime: 86400,
    memoryUsage: { rss: 45.2, heapUsed: 23.1 },
    promptsLoaded: 127,
    categoriesLoaded: 8
  }
}

Monitoring Features:

  • Real-Time Health Checks: All modules continuously monitored
  • Performance Metrics: Memory, uptime, connection tracking
  • Diagnostic Tools: Comprehensive troubleshooting information
  • Error Tracking: Graceful error handling with detailed logging
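
When the SSE/HTTP transport is enabled, the same health data can be polled over HTTP. The endpoint path below is hypothetical, so check your server configuration and docs for the actual route (this sketch assumes Node 18+ for the global fetch):

// Hypothetical health endpoint on the configured port (9090 in the sample config.json below).
const response = await fetch("http://localhost:9090/health");
const status = await response.json();

if (!status.healthy) {
  console.error("Server unhealthy:", status.modules);
}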

šŸ“š Documentation Hub

Guides in the repository cover:

  • Complete setup walkthrough with troubleshooting
  • Common issues, diagnostic tools, and solutions
  • A deep dive into the orchestration engine, modules, and data flow
  • Master prompt creation with examples
  • Build complex multi-step workflows
  • Dynamic management and hot-reload features
  • Complete MCP tools documentation
  • Planned features and development roadmap
  • Join our development community

šŸ¤ Contributing

We're building the future of AI prompt engineering! Join our community.

šŸ“„ License

Released under the MIT License - see the LICENSE file for details.


⭐ Star this repo if it's transforming your AI workflow!

Report Bug • Request Feature

Built with ā¤ļø for the AI development community