MCP LLM Generator v2 🤖
🚀 Production-ready Model Context Protocol (MCP) server with advanced LLM text generation, context memory management, and intelligent template systems, built with the TypeScript MCP SDK.
New in v2: 67% token reduction optimization, context memory tools, personality-driven consultants, and associative memory for enhanced AI workflows.
✨ Key Features
🔧 Core Tools
- llm-generate - Direct LLM text generation via the MCP sampling protocol
- template-execute - Execute sophisticated templates with smart parameter substitution
- template-manage - Full CRUD operations for reusable prompt templates
- template-to-params - Convert templates into parameters for LLM text generation
- context-chat - Personality-driven conversations with persistent memory
- context-manage - Create, update, and manage AI consultant contexts
- memory-store - Store and organize knowledge with associative linking
📚 Smart Resources
- template-list - Dynamic discovery of available templates
- template-detail - Comprehensive template information with validation
- context-history - Access conversation histories and consultant insights
🎯 Advanced Prompts
- explain-template - Multi-style explanations (beginner, technical, expert)
- review-template - Intelligent code review and analysis
- consultant-prompt - Access to 12+ specialized AI consultants
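These prompts are exposed through the standard MCP prompts API. A minimal sketch of requesting one with the TypeScript SDK, assuming an already-connected client (see Usage Examples below) and assuming explain-template accepts the topic/style/audience parameters used later in this README:
// Request a rendered prompt via the standard MCP prompts API.
// The argument names here are assumptions based on the template-execute
// example in the Usage Examples section.
const prompt = await client.getPrompt({
  name: "explain-template",
  arguments: { topic: "recursion", style: "beginner", audience: "developers" },
});
// prompt.messages holds the rendered conversation, ready to pass to an LLM.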
🚀 Quick Start
Global Installation (Recommended)
# Install globally for easy access
npm install -g @mako10k/mcp-llm-generator
# Verify installation
mcp-llm-generator --version
Local Development Setup
# Clone the repository
git clone https://github.com/mako10k/mcp-sampler.git
cd mcp-sampler
# Install dependencies
npm install
# Build TypeScript
npm run build
# Start the server
npm start
🔌 MCP Client Integration
Add to your MCP client configuration (e.g., ~/.claude/mcp.json):
{
"servers": {
"mcp-llm-generator": {
"command": "mcp-llm-generator",
"type": "stdio"
}
}
}
For local development:
{
"servers": {
"mcp-llm-generator": {
"command": "node",
"args": ["/path/to/mcp-sampler/build/index.js"],
"type": "stdio"
}
}
}
📖 Usage Examples
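The examples below assume an MCP client that is already connected to the server over stdio. A minimal connection sketch using the official TypeScript SDK (assuming the @modelcontextprotocol/sdk package):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the globally installed server and connect over stdio.
const transport = new StdioClientTransport({ command: "mcp-llm-generator" });
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);
Note that the SDK's callTool takes a single object ({ name, arguments }); the examples below use a shorthand callTool(name, args) form for readability.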
🤖 Basic LLM Text Generation
// Generate text with custom parameters
await client.callTool("llm-generate", {
messages: [
{
role: "user",
content: {
type: "text",
text: "Explain quantum computing in simple terms"
}
}
],
maxTokens: 500,
temperature: 0.7
});
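The tool result follows the standard MCP shape: an array of content parts. A sketch of pulling the generated text out of the call above (assuming text parts):
// Capture the result of the call shown above.
const result = await client.callTool("llm-generate", { /* same arguments as above */ });

// Tool results carry an array of content parts; text parts expose `text`.
for (const part of result.content) {
  if (part.type === "text") {
    console.log(part.text); // the generated text
  }
}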
📝 Template System
// Execute predefined templates
await client.callTool("template-execute", {
templateName: "explain-template",
args: {
topic: "machine learning",
style: "beginner",
audience: "developers"
}
});
// Manage templates
await client.callTool("template-manage", {
action: "add",
template: {
name: "code-review",
systemPrompt: "You are an expert code reviewer...",
userMessage: "Review this code: {code}",
parameters: { code: "Code to review" }
}
});
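Substitution uses the {name} placeholders shown in userMessage above. A minimal sketch of how that style of substitution behaves (illustrative only; the server's actual template engine may differ):
// Replace {param} placeholders with values from args, leaving unknown
// placeholders untouched (a hypothetical helper, not the server's code).
function renderTemplate(userMessage: string, args: Record<string, string>): string {
  return userMessage.replace(/\{(\w+)\}/g, (match, key) =>
    key in args ? args[key] : match
  );
}

renderTemplate("Review this code: {code}", { code: "const x = 1;" });
// => "Review this code: const x = 1;"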
🧠 Context Memory & Consultants
// Create a specialized consultant
await client.callTool("context-manage", {
action: "create",
name: "Security Expert",
personality: "Expert cybersecurity consultant with 15+ years experience",
maxTokens: 1000,
temperature: 0.3
});
// Chat with consultant
await client.callTool("context-chat", {
contextId: "security-expert-id",
message: "Review this authentication system for vulnerabilities",
maintainPersonality: true
});
// Store important insights
await client.callTool("memory-store", {
content: "JWT tokens should expire within 15 minutes for high-security applications",
scope: "security/authentication",
tags: ["jwt", "security", "best-practices"]
});
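Scopes such as "security/authentication" are hierarchical, so stored knowledge can later be grouped by prefix. A sketch of that matching rule (an assumption about how hierarchical scoping is intended to behave):
// An entry matches a query scope when it equals it or nests beneath it.
function inScope(entryScope: string, queryScope: string): boolean {
  return entryScope === queryScope || entryScope.startsWith(queryScope + "/");
}

inScope("security/authentication", "security"); // true
inScope("security", "security/authentication"); // false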
📚 Resource Discovery
// List available templates
const templates = await client.readResource({
uri: "mcp-llm-generator://template-list"
});
// Get detailed template information
const template = await client.readResource({
uri: "mcp-llm-generator://template-detail/explain-template"
});
// Access consultant history
const history = await client.readResource({
uri: "mcp-llm-generator://context-history/security-expert-id"
});
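readResource returns a contents array; a sketch of decoding the template list, assuming the server serializes it as JSON text (not confirmed by this README):
const listing = await client.readResource({
  uri: "mcp-llm-generator://template-list",
});
for (const item of listing.contents) {
  // Text resources expose a `text` field; binary resources use `blob`.
  if ("text" in item && typeof item.text === "string") {
    console.log(JSON.parse(item.text));
  }
}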
🛡️ Security & Safety
🔒 Database File Protection
This project uses SQLite databases containing sensitive data including consultant personalities, conversation histories, and associative memory networks. These files must never be committed to version control.
Multi-Layer Protection System
- .gitignore - Prevents database files from being tracked
- Pre-commit hooks - Automatically block commits containing sensitive files
- Clear error handling - Provides immediate feedback on security violations
⚠️ Critical Security Notes
- Never commit: context-memory.db, *.db, *.db-wal, *.db-shm files
- Contains: 12+ consultant personalities, conversation data, memory associations
- Risk: Loss of these files means losing valuable AI consultant expertise
🔧 Secure Development Setup
# Install with automatic security hooks
npm install
# If you encounter database commit errors:
git reset HEAD context-memory.db
git reset HEAD *.db *.db-wal *.db-shm
# Verify protection is active
npm run lint
🔐 Production Security Best Practices
- Regular security audits with npm audit
- Dependency vulnerability scanning via GitHub Dependabot
- MIT license ensures open-source transparency
- No network access required for core functionality
- Input validation using Zod schemas throughout
🏗️ Architecture & Design
Core Components
- LLM Integration - Direct text generation using MCP sampling protocol
- Template Engine - Reusable prompt templates with intelligent parameter substitution
- Context Memory - Persistent conversation and consultant management
- Associative Memory - Smart knowledge linking and discovery
- Resource Management - Dynamic access to templates, contexts, and metadata
- Type Safety - Full TypeScript implementation with comprehensive Zod validation
📊 Performance Optimizations (v2)
- 67% Token Reduction - Optimized prompt engineering and response formatting
- Smart Caching - Template and context caching for improved response times (see the sketch after this list)
- Memory Efficiency - Associative linking reduces redundant data storage
- Lazy Loading - Dynamic resource loading for faster startup times
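A minimal sketch of the caching idea behind these optimizations (illustrative only, not the server's actual implementation):
// Cache loaded templates/contexts with a time-to-live so repeated
// executions skip disk reads without serving stale data forever.
const cache = new Map<string, { value: unknown; loadedAt: number }>();

function getCached(name: string, ttlMs = 60_000): unknown | undefined {
  const hit = cache.get(name);
  if (hit && Date.now() - hit.loadedAt < ttlMs) return hit.value;
  cache.delete(name); // expired or never cached
  return undefined;
}

function putCached(name: string, value: unknown): void {
  cache.set(name, { value, loadedAt: Date.now() });
}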
📚 API Reference
Core Tools
llm-generate
Generate text using LLM via MCP sampling protocol.
Parameters:
- messages (required) - Array of conversation messages
- maxTokens (optional, default: 500) - Maximum tokens to generate
- temperature (optional, default: 0.7) - Sampling temperature (0.0-1.0)
- systemPrompt (optional) - Custom system prompt
template-execute
Execute predefined templates with parameter substitution.
Parameters:
- templateName (required) - Name of template to execute
- args (required) - Object with template parameter values
- maxTokens (optional, default: 500) - Token limit
- temperature (optional, default: 0.7) - Sampling temperature
context-chat
Chat with personality-driven consultants with persistent memory.
Parameters:
- contextId (required) - Unique consultant context identifier
- message (required) - User message to send
- maintainPersonality (optional, default: true) - Keep consultant personality
memory-store
Store knowledge with automatic associative linking.
Parameters:
- content (required) - Content to store
- scope (optional, default: "user/default") - Hierarchical organization scope
- tags (optional) - Array of descriptive tags
- category (optional) - Content category
Resources
template-list
URI: mcp-llm-generator://template-list
Returns dynamic list of all available templates with metadata.
template-detail/{name}
URI: mcp-llm-generator://template-detail/{templateName}
Returns comprehensive template information including parameters and validation rules.
context-history/{id}
URI: mcp-llm-generator://context-history/{contextId}
Returns conversation history and consultant insights.
🔧 Troubleshooting
Common Issues
❌ "Command not found: mcp-llm-generator"
Solution:
# Reinstall globally
npm install -g @mako10k/mcp-llm-generator
# Or check npm global bin path
npm config get prefix
❌ "Cannot connect to MCP server"
Solutions:
1. Verify MCP client configuration:
   { "command": "mcp-llm-generator", "type": "stdio" }
2. Test the server directly:
   mcp-llm-generator
   # Should output: Context Memory System initialized successfully
3. Check Node.js compatibility:
   node --version  # Should be ≥18.0.0
❌ "Template not found" errors
Solutions:
1. List available templates:
   await client.readResource({ uri: "mcp-llm-generator://template-list" })
2. Add the missing template:
   await client.callTool("template-manage", { action: "add", template: { /* template definition */ } })
❌ "Database file locked" errors
Solutions:
- Ensure no other MCP server instances are running
- Check file permissions: ls -la context-memory.db*
- Restore permissions if needed: chmod 644 context-memory.db
❌ "Memory allocation errors"
Solutions:
- Reduce the maxTokens parameter
- Clear old conversation history:
  await client.callTool("conversation-manage", { action: "clear", contextId: "your-context-id" })
🐛 Debug Mode
Enable detailed logging:
DEBUG=mcp-llm-generator:* mcp-llm-generator
📞 Support
- Issues: GitHub Issues
- Security: See the 🛡️ Security & Safety section above
- Discussions: GitHub Discussions
🚀 Migration from v1
Breaking Changes in v2
- Scoped package name: mcp-llm-generator → @mako10k/mcp-llm-generator
- New tools: context-chat, context-manage, and memory-store require client updates
- Enhanced templates: Additional optional parameters for better control
Migration Steps
1. Update the global installation:
   npm uninstall -g mcp-llm-generator
   npm install -g @mako10k/mcp-llm-generator
2. Update your MCP client configuration:
   { "command": "mcp-llm-generator" }  // Updated command
3. Template compatibility: Existing templates continue to work with new optional parameters
🤝 Contributing
We welcome contributions! Please see our contribution guidelines:
Development Setup
git clone https://github.com/mako10k/mcp-sampler.git
cd mcp-sampler
npm install
npm run build
npm test
Code Standards
- TypeScript: Full type safety with strict mode
- ESLint: Code quality and consistency
- Prettier: Automated code formatting
- Husky: Pre-commit hooks for quality gates
- Conventional Commits: Semantic commit messages
Testing
npm test # Run all tests
npm run test:watch # Watch mode for development
npm run test:coverage # Coverage report
Release Process
- Update the version in package.json
- Update CHANGELOG.md with new features
- Create a GitHub release with semantic versioning
- Automated npm publish via GitHub Actions
📝 Changelog
See CHANGELOG.md for detailed version history and breaking changes.
📄 License
MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Model Context Protocol - Built on the innovative MCP framework
- TypeScript Community - For excellent tooling and type safety
- Open Source Contributors - For making this project possible
Made with ❤️ for the AI development community
Star this repo if you find it useful! Your support helps us continue improving MCP LLM Generator.