Claude Prompts MCP Server
The Universal Model Context Protocol Server for Any MCP Client
Supercharge your AI workflows with battle-tested prompt engineering, intelligent orchestration, and lightning-fast hot-reload capabilities. Works seamlessly with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
Quick Start • Features • Docs • Advanced
What Makes This Special? (v1.1.0 - "Intelligent Execution")
- Semantic Analysis Engine – Automatically detects execution types without manual configuration
- Universal Prompt Execution – A single tool with intelligent mode detection and auto-execution
- Smart Quality Gates – Validation auto-assigned based on prompt complexity and type
- Zero-Configuration Reliability – No headers or manual setup required; it just works
- Learning Analytics – The system improves detection accuracy through usage patterns
- Intelligent Hot-Reload System – Update prompts instantly without restarts
- Advanced Template Engine – Nunjucks-powered, with conditionals, loops, and dynamic data
- Multi-Phase Orchestration – A robust startup sequence with comprehensive health monitoring
- Universal MCP Compatibility – Works flawlessly with Claude Desktop, Cursor, Windsurf, and any MCP client
Transform your AI assistant experience from scattered prompts into a truly intelligent execution engine that automatically understands and optimally executes any prompt on any MCP-compatible platform.
Revolutionary Interactive Prompt Management
The Future Is Here: Manage Your AI's Capabilities From Within the AI Conversation
This isn't just another prompt server – it's a living, breathing prompt ecosystem that evolves through natural conversation with your AI assistant. Imagine being able to:
# Create new prompts by talking to your AI
"Hey Claude, create a new prompt called 'code_reviewer' that analyzes code for security issues"
→ Claude creates, tests, and registers the prompt instantly
# Refine prompts through conversation
"That code reviewer prompt needs to also check for performance issues"
→ Claude modifies the prompt and hot-reloads it immediately
# Discover and iterate on your prompt library
>>listprompts
→ Browse your growing collection, then ask: "Improve the research_assistant prompt to be more thorough"
# Execute prompts with zero configuration - the system auto-detects everything
>>content_analysis my content
→ Automatic semantic analysis detects the workflow type, applies quality gates, and executes
Why This Changes Everything:
- True Intelligence: The system understands prompts like a human; no configuration needed
- Self-Evolving System: Your AI assistant builds and improves its own capabilities in real time
- Zero Friction: Never configure execution modes, quality gates, or headers; everything just works
- Instant Results: Create → Auto-detect → Execute optimally in one seamless flow
- Learning System: Detection accuracy improves with usage, so it gets smarter over time
This is what truly intelligent AI infrastructure looks like: a system that understands intent as naturally as reading human language.
Features & Reliability
Developer Experience
Enterprise Architecture
Enhanced MCP Tools Suite (v1.1.0)
One-Command Installation
Get your AI command center running in under a minute:
# Clone → Install → Launch → Profit!
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start
Universal MCP Client Integration
Claude Desktop
Drop this into your claude_desktop_config.json:
{
"mcpServers": {
"claude-prompts-mcp": {
"command": "node",
"args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"],
"env": {
"MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\promptsConfig.json"
}
}
}
}
Cursor, Windsurf & Other MCP Clients
Configure your MCP client to connect via STDIO transport:
- Command: node
- Args: ["path/to/claude-prompts-mcp/server/dist/index.js"]
- Environment: MCP_PROMPTS_CONFIG_PATH=path/to/promptsConfig.json
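Many MCP clients accept a Claude-Desktop-style `mcpServers` JSON block (an assumption; consult your client's documentation for its exact config format), in which case the STDIO settings above translate to:

```json
{
  "mcpServers": {
    "claude-prompts-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/claude-prompts-mcp/server/dist/index.js"],
      "env": {
        "MCP_PROMPTS_CONFIG_PATH": "/absolute/path/to/claude-prompts-mcp/server/promptsConfig.json"
      }
    }
  }
}
```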
Pro Tip: Use absolute paths for reliable integration across all MCP clients!
Start Building Immediately (v1.1.0 Enhanced)
Your AI command arsenal is ready with enhanced reliability:
# Discover your intelligent superpowers
>>listprompts
# Zero-config intelligent execution - the system auto-detects everything
>>friendly_greeting name="Developer"
→ Auto-detected as a template; returns a personalized greeting
>>content_analysis my research data
→ Auto-detected as a workflow; applies quality gates and executes the analysis framework
>>notes my content
→ Auto-detected as a chain; validates each step and executes the sequence
# Monitor intelligent detection performance
>>execution_analytics {"include_history": true}
→ See how accurately the system detects prompt types and applies gates
# Create prompts that just work (zero configuration)
"Create a prompt called 'bug_analyzer' that finds and explains code issues"
→ The AI creates the prompt; the system auto-detects the workflow type and assigns quality gates
# Refine prompts through conversation
"Make the bug_analyzer prompt also suggest performance improvements"
→ The prompt is updated; the system re-analyzes it and updates its detection profile automatically
# Build intelligent AI workflows
"Create a prompt chain that reviews code, validates output, tests it, then documents it"
→ The chain is created; each step is auto-analyzed and appropriate gates are assigned automatically
# Manual override when needed (but rarely necessary)
>>execute_prompt {"command": ">>content_analysis data", "step_confirmation": true}
→ Force step confirmation for sensitive analysis
The Magic: Your prompt library becomes a living extension of your workflow, growing and adapting as you work with your AI assistant.
Why Developers Choose This Server
Lightning-Fast Hot-Reload – Edit prompts and see the changes instantly
Our orchestration engine monitors your files and reloads everything seamlessly:
# Edit any prompt file → server detects it → reloads automatically → zero downtime
- Instant Updates: Change templates, arguments, and descriptions in real time
- Zero Restarts Required: The hot-reload system keeps everything running
- Smart Dependency Tracking: Only reloads what actually changed
- Graceful Error Recovery: Invalid changes don't crash the server
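The dependency-tracking idea can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not the server's actual implementation: it assumes a one-prompt-per-`.md`-file convention and a simple in-memory registry, and reloads only the entry whose file changed.

```typescript
import * as path from "node:path";

// Hypothetical in-memory registry: prompt id -> template text.
const registry = new Map<string, string>();

// On a file-change event, re-read and re-register only the affected prompt,
// leaving every other entry (and the running server) untouched.
function onFileChanged(file: string, readFile: (f: string) => string): string {
  const id = path.basename(file, ".md"); // assumed naming convention
  registry.set(id, readFile(file));
  return id;
}

// A change to code_reviewer.md touches only that one registry entry.
onFileChanged(
  "prompts/development/code_reviewer.md",
  () => "Review {{code}} for security issues."
);
```

In the real server the change event would come from a file watcher and `readFile` would hit disk; the point is that a reload is scoped to the changed prompt rather than the whole library.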
Next-Gen Template Engine – Nunjucks-powered dynamic prompts
Go beyond simple text replacement with a full template engine:
Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.
{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}
{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
- Conditional Logic: Smart prompts that adapt based on input
- Loops & Iteration: Handle arrays and complex data structures
- Template Inheritance: Reuse and extend prompt patterns
- Real-Time Processing: Templates render with live data injection
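At its core, template rendering substitutes argument values into `{{...}}` slots; Nunjucks layers conditionals, loops, filters, and inheritance on top. A toy stand-in for just the substitution step (illustrative only, not the server's code):

```typescript
// Replace {{name}} placeholders from an argument map; unknown placeholders
// are left intact so missing arguments are easy to spot.
function renderPlaceholders(
  template: string,
  args: Record<string, string>
): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key: string) =>
    key in args ? args[key] : match
  );
}

const prompt = renderPlaceholders(
  "Analyze {{content}} for {{focus_area}} insights.",
  { content: "the Q3 report" }
);
// prompt === "Analyze the Q3 report for {{focus_area}} insights."
```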
Enterprise-Grade Orchestration – Multi-phase startup with health monitoring
Built like production software, with a comprehensive architecture:
Phase 1: Foundation → config, logging, core services
Phase 2: Data Loading → prompts, categories, validation
Phase 3: Module Init → tools, executors, managers
Phase 4: Server Launch → transport, API, diagnostics
- Dependency Management: Modules start in correct order with validation
- Health Monitoring: Real-time status of all components
- Performance Metrics: Memory usage, uptime, connection tracking
- Diagnostic Tools: Built-in troubleshooting and debugging
Intelligent Prompt Chains – Multi-step AI workflows
Create sophisticated workflows where each step builds on the previous one:
{
"id": "content_analysis_chain",
"name": "Content Analysis Chain",
"isChain": true,
"chainSteps": [
{
"stepName": "Extract Key Points",
"promptId": "extract_key_points",
"inputMapping": { "content": "original_content" },
"outputMapping": { "key_points": "extracted_points" }
},
{
"stepName": "Analyze Sentiment",
"promptId": "sentiment_analysis",
"inputMapping": { "text": "extracted_points" },
"outputMapping": { "sentiment": "analysis_result" }
}
]
}
- Visual Step Planning: See your workflow before execution
- Input/Output Mapping: Data flows seamlessly between steps
- Error Recovery: Failed steps don't crash the entire chain
- Flexible Execution: Run chains or individual steps as needed
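To make the `inputMapping`/`outputMapping` idea concrete, here is a hedged sketch of how data could flow between steps through a shared context. The helper names (`runChain`, `ChainStep`, the `run` stand-in for actual prompt execution) are hypothetical, and the sketch assumes `inputMapping` maps a step argument to a context key while `outputMapping` maps a step result key back into the context:

```typescript
type Context = Record<string, unknown>;

interface ChainStep {
  promptId: string;
  inputMapping: Record<string, string>;  // step argument -> context key
  outputMapping: Record<string, string>; // step result key -> context key
  run: (args: Context) => Context;       // stand-in for prompt execution
}

// Run steps in order; each one reads remapped inputs from the shared
// context and writes its outputs back under the names later steps expect.
function runChain(steps: ChainStep[], initial: Context): Context {
  const ctx: Context = { ...initial };
  for (const step of steps) {
    const args: Context = {};
    for (const [arg, key] of Object.entries(step.inputMapping)) {
      args[arg] = ctx[key];
    }
    const result = step.run(args);
    for (const [resKey, ctxKey] of Object.entries(step.outputMapping)) {
      ctx[ctxKey] = result[resKey];
    }
  }
  return ctx;
}

// Mirrors the two-step chain above, with dummy run functions.
const chainResult = runChain(
  [
    {
      promptId: "extract_key_points",
      inputMapping: { content: "original_content" },
      outputMapping: { key_points: "extracted_points" },
      run: (a) => ({ key_points: "key points of " + String(a.content) }),
    },
    {
      promptId: "sentiment_analysis",
      inputMapping: { text: "extracted_points" },
      outputMapping: { sentiment: "analysis_result" },
      run: () => ({ sentiment: "positive" }),
    },
  ],
  { original_content: "the draft post" }
);
// chainResult.analysis_result === "positive"
```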
System Architecture
graph TB
A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
B --> C[Orchestration Engine]
C --> D[Prompt Manager]
C --> E[MCP Tools Manager]
C --> F[Config Manager]
D --> G[Template Engine]
E --> H[Management Tools]
F --> I[Hot Reload System]
style C fill:#ff6b35
style D fill:#00ff88
style E fill:#0066cc
MCP Client Compatibility
This server implements the Model Context Protocol (MCP) standard and works with any compatible client:
Tested & Verified
Transport Support
Integration Features
Developer Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.
Advanced Configuration
Server Powerhouse (config.json)
Fine-tune your server's behavior:
{
"server": {
"name": "Claude Custom Prompts MCP Server",
"version": "1.0.0",
"port": 9090
},
"prompts": {
"file": "promptsConfig.json",
"registrationMode": "name"
},
"transports": {
"default": "stdio",
"sse": { "enabled": false },
"stdio": { "enabled": true }
}
}
Prompt Organization (promptsConfig.json)
Structure your AI command library:
{
"categories": [
{
"id": "development",
"name": "Development",
"description": "Code review, debugging, and development workflows"
},
{
"id": "analysis",
"name": "Analysis",
"description": "Content analysis and research prompts"
},
{
"id": "creative",
"name": "Creative",
"description": "Content creation and creative writing"
}
],
"imports": [
"prompts/development/prompts.json",
"prompts/analysis/prompts.json",
"prompts/creative/prompts.json"
]
}
Advanced Features
Multi-Step Prompt Chains – Build sophisticated AI workflows
Create complex workflows that chain multiple prompts together:
# Research Analysis Chain
## User Message Template
Research {{topic}} and provide {{analysis_type}} analysis.
## Chain Configuration
Steps: research → extract → analyze → summarize
Input Mapping: {topic} → {content} → {key_points} → {insights}
Output Format: Structured report with executive summary
Capabilities:
- Sequential Processing: Each step uses output from previous step
- Parallel Execution: Run multiple analysis streams simultaneously
- Error Recovery: Graceful handling of failed steps
- Custom Logic: Conditional branching based on intermediate results
Advanced Template Features – Dynamic, intelligent prompts
Leverage the full power of Nunjucks templating:
# {{ title | title }} Analysis
## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}
## Requirements
{% for req in requirements %}
{{loop.index}}. **{{req.priority | upper}}**: {{req.description}}
{% if req.examples %}
Examples: {% for ex in req.examples %}{{ex}}{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}
{% endfor %}
## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}
Template Features:
- Filters & Functions: Transform data on-the-fly
- Conditional Logic: Smart branching based on input
- Loops & Iteration: Handle complex data structures
- Template Inheritance: Build reusable prompt components
Real-Time Management Tools – Live prompt management without downtime
Manage your prompts dynamically while the server runs:
# Update prompts on-the-fly
>>update_prompt id="analysis_prompt" content="new template"
# Add new sections dynamically
>>modify_prompt_section id="research" section="examples" content="new examples"
# Hot-reload everything
>>reload_prompts reason="updated templates"
Management Capabilities:
- Live Updates: Change prompts without server restart
- Section Editing: Modify specific parts of prompts
- Bulk Operations: Update multiple prompts at once
- Rollback Support: Undo changes when things go wrong
Production Monitoring – Enterprise-grade observability
Built-in monitoring and diagnostics for production environments:
// Health Check Response
{
healthy: true,
modules: {
foundation: true,
dataLoaded: true,
modulesInitialized: true,
serverRunning: true
},
performance: {
uptime: 86400,
memoryUsage: { rss: 45.2, heapUsed: 23.1 },
promptsLoaded: 127,
categoriesLoaded: 8
}
}
Monitoring Features:
- Real-Time Health Checks: All modules continuously monitored
- Performance Metrics: Memory, uptime, connection tracking
- Diagnostic Tools: Comprehensive troubleshooting information
- Error Tracking: Graceful error handling with detailed logging
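One plausible way to derive the top-level `healthy` flag from a response like the one above (the field names follow the sample response; the aggregation rule itself is an assumption, not documented server behavior):

```typescript
interface ModuleStatus {
  foundation: boolean;
  dataLoaded: boolean;
  modulesInitialized: boolean;
  serverRunning: boolean;
}

// The server is considered healthy only when every module flag is true.
function isHealthy(modules: ModuleStatus): boolean {
  return Object.values(modules).every(Boolean);
}

const healthy = isHealthy({
  foundation: true,
  dataLoaded: true,
  modulesInitialized: true,
  serverRunning: true,
});
// healthy === true
```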
Documentation Hub
- Complete setup walkthrough with troubleshooting
- Common issues, diagnostic tools, and solutions
- A deep dive into the orchestration engine, modules, and data flow
- Mastering prompt creation, with examples
- Building complex multi-step workflows
- Dynamic management and hot-reload features
- Complete MCP tools documentation
- Planned features and the development roadmap
- Joining our development community
Contributing
We're building the future of AI prompt engineering! Join our community:
- Found a bug? Open an issue
- Have an idea? Start a discussion
- Want to contribute? Check our contribution guidelines
- Need help? Visit our documentation
License
Released under the terms described in the LICENSE file.
Star this repo if it's transforming your AI workflow!
Report Bug • Request Feature
Built with ❤️ for the AI development community