🎬 AI-Powered YouTube Shorts Generator
An intelligent n8n workflow that automatically creates and uploads YouTube Shorts from Reddit content. Uses AI to generate engaging short-form videos with text-to-speech, background footage, and music - all automated via MCP (Model Context Protocol) server integration.
✨ Features
- Multi-Source Content - Creates videos from r/Jokes, r/LifeProTips, and r/Stories
- AI Content Adaptation - GPT-4 transforms Reddit posts into video-friendly scripts
- Automated Video Generation - MCP server creates videos with TTS and background footage
- Background Video Selection - Automatically fetches relevant Pexels videos
- Music Selection - AI picks appropriate background music tags
- Text-to-Speech - Converts scripts to natural voice narration
- Subtitle Generation - Automatic subtitle overlay on videos
- YouTube Integration - Direct upload to YouTube with metadata
- Status Monitoring - Polls video generation status until ready
- Batch Processing - Handles multiple posts simultaneously
- Local AI Support - Works with Ollama for cost-effective generation
- Flexible Models - Supports GPT-4, GPT-4o-mini, and local models
🎯 Use Case
Perfect for content creators who want to:
- Build a YouTube Shorts channel automatically
- Repurpose Reddit content into video format
- Create engaging short-form content at scale
- Monetize viral Reddit posts
- Save time on video editing and production
- Maintain consistent upload schedules
🏗️ Architecture
Three Content Pipelines
Pipeline 1: r/Jokes
RSS Feed → Map Fields → Aggregate → AI Agent → Wait Loop → Check Status → Download → Upload to YouTube
Pipeline 2: r/LifeProTips
RSS Feed → Map Fields → Aggregate → AI Agent → Wait Loop → Check Status → Download → Upload to YouTube
Pipeline 3: r/Stories
RSS Feed → Map Fields → Aggregate → AI Agent → Wait Loop → Check Status → Download → Upload to YouTube
Custom Video Generation
Configure → Get Music Tags → Generate Content → Pick Music → Start Generation → Wait Loop → Check Status → Download → Upload
🚀 Quick Start
Prerequisites
- Docker installed on your system
- n8n instance (local or cloud)
- Pexels API Key
- OpenAI API Key
- YouTube OAuth credentials
- Reddit RSS access (no auth needed)
Step 1: Run MCP Server
# Run the short-video-maker MCP server
docker run -it --rm \
--name short-video-maker \
-p 3123:3123 \
-e LOG_LEVEL=debug \
-e PEXELS_API_KEY=your_pexels_api_key \
gyoridavid/short-video-maker:latest-tiny
# Server will be available at http://localhost:3123
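Before moving on, it can help to confirm the server responds. The music-tags endpoint (documented under MCP Server Details below) makes a quick sanity check:
# The server should answer with a JSON list of available music tags
curl http://localhost:3123/api/music-tags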
Step 2: Import Workflow
# In n8n:
# 1. Go to Workflows → Import from File
# 2. Upload "youtube_shorts_with_mcp_server.json"
# 3. All nodes will be imported
Step 3: Configure Credentials
Set up authentication for:
Service | Type | Required For |
---|---|---|
OpenAI API | API Key | AI content generation |
YouTube OAuth2 | OAuth2 | Video uploads |
Pexels API | API Key | Background videos (in Docker) |
Step 4: Update Server URL
// In "Configure" node (for custom generation)
SERVER_URL: "http://host.docker.internal:3123"
// In MCP Client nodes
sseEndpoint: "http://host.docker.internal:3123/mcp/sse"
Note: If you are not using Docker Desktop, replace host.docker.internal with:
- localhost for a local setup
- your server's IP address for a remote setup
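If you are unsure which hostname the n8n container can actually reach, a quick test from inside the container helps. This is only a sketch: it assumes your n8n container is named n8n and that BusyBox wget is available in the image (true for the official Alpine-based n8n image).
# Hypothetical container name "n8n"; adjust to match your setup
docker exec n8n wget -qO- http://host.docker.internal:3123/api/music-tags
# A JSON list of music tags means n8n can reach the MCP server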
Step 5: Enable n8n Binary Data Mode
# Add to your n8n environment variables
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
# This prevents 400 errors when uploading to YouTube
Step 6: Test the Workflow
# 1. Click "Test workflow" button
# 2. Workflow fetches top Reddit posts
# 3. AI generates video scripts
# 4. MCP server creates videos
# 5. Videos are uploaded to YouTube
📊 Workflow Structure
Reddit Content Workflows
Each Reddit source follows this pattern:
1. Get Top Posts
- Fetches weekly top posts via RSS
- No authentication required
- Returns titles and content
2. Map Fields
- Extracts title and content
- Removes "submitted by" text
- Formats for AI processing
3. Aggregate
- Combines all posts into single array
- Prepares for batch processing
4. Generate Video (AI Agent)
- Uses GPT-4 or GPT-4o-mini
- Connects to MCP server via tools
- Adapts content for video format
- Returns video ID and title
5. Wait Loop
- Pauses execution (webhook-based)
- Allows video generation to complete
- Configurable wait time
6. Check Status
- Polls MCP server for video status
- Returns current generation state
7. If Ready
- Checks if status === "ready"
- True: Downloads video
- False: Loops back to Wait
8. Download & Upload
- Downloads generated video
- Uploads to YouTube with metadata
- Sets category and region
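Steps 1-3 can be reproduced outside n8n when debugging feed issues. A rough sketch (not part of the workflow) that pulls the weekly top posts from r/Jokes and prints the entry titles:
# Reddit's Atom feed expects a User-Agent header on RSS requests
curl -sA "shorts-generator-debug/1.0" "https://www.reddit.com/r/Jokes/top/.rss?t=week" \
  | grep -o "<title>[^<]*</title>" \
  | sed "s/<[^>]*>//g"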
Custom Video Generation
1. Configure
- Sets MCP server URL
- Centralizes configuration
2. Get Music Tags
- Fetches available music tags from server
- Used for background music selection
3. Generate Content
- AI creates video scenes
- Each scene has text and search terms
- Returns structured JSON
4. Pick Music
- AI selects appropriate music tag
- Based on video content and available tags
5. Start Generation
- POSTs to MCP server API
- Includes scenes and config
- Returns video ID
6. Wait & Check Loop
- Similar to Reddit workflows
- Monitors generation progress
7. Download & Upload
- Same as Reddit workflows
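The same custom flow can also be exercised directly against the MCP server's HTTP API (see MCP Server Details below). A minimal sketch, assuming the server from Step 1 is listening on localhost:3123:
# 1. Fetch the available music tags and pick one for the config
curl -s http://localhost:3123/api/music-tags
# 2. Start generation with one scene; the response contains a videoId
curl -s -X POST http://localhost:3123/api/short-video \
  -H "Content-Type: application/json" \
  -d '{"scenes":[{"text":"Here is an interesting fact...","searchTerms":["space","galaxy","stars"]}],"config":{"paddingBack":1500,"music":"ambient"}}'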
🎨 Customization
Change Reddit Sources
// In RSS Feed Read nodes
url: "https://www.reddit.com/r/YourSubreddit/top/.rss?t=week"
// Available time filters: hour, day, week, month, year, all
Modify Post Selection
// In "Generate video" node prompt
{{ $json.data[8].title }} // Change index [0-24] for different posts
// Get multiple posts
{{ $json.data.slice(0, 5) }} // First 5 posts
Adjust AI Models
// For OpenAI
model: "gpt-4o-mini" // Fast and cheap
model: "gpt-4-turbo" // Balanced
model: "gpt-4" // Best quality
// For Ollama (local)
model: "mistral-small3.1:24b-instruct-2503-q8_0"
model: "llama3:70b"
Customize Video Prompts
// In AI Agent nodes
text: `<Instruction>
Turn the following joke into a video.
Make it engaging and easy to understand.
Add humor and timing.
</Instruction>
<Joke>
{{ $json.data[8].contentSnippet }}
</Joke>`
Change Video Configuration
// In "Start generating the video" node
{
"scenes": [...],
"config": {
"paddingBack": 1500, // Silence at end (ms)
"music": "upbeat", // Music tag
"voiceSpeed": 1.0, // TTS speed
"subtitleStyle": "bold" // Subtitle formatting
}
}
Modify YouTube Settings
// In YouTube upload nodes
title: "{{ $json.output.videoTitle }}"
regionCode: "US" // Change region
categoryId: "24" // 24 = Entertainment
privacy: "public" // public, unlisted, private
Adjust Wait Times
// In Wait nodes
// Default uses webhook callback
// For fixed time:
waitTime: 60 // seconds
resumeAfter: 60
🛠️ MCP Server Details
Endpoints
Endpoint | Method | Purpose |
---|---|---|
/mcp/sse | GET | MCP protocol connection |
/api/short-video | POST | Create new video |
/api/short-video/:id | GET | Download video |
/api/short-video/:id/status | GET | Check generation status |
/api/music-tags | GET | List available music |
Video Generation API
Request:
{
"scenes": [
{
"text": "Here's an interesting fact...",
"searchTerms": ["space", "galaxy", "stars"]
}
],
"config": {
"paddingBack": 1500,
"music": "ambient"
}
}
Response:
{
"videoId": "abc123",
"status": "processing"
}
Status Response
{
"status": "processing" | "ready" | "failed",
"progress": 45,
"eta": 120
}
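Putting the status and download endpoints together, a shell loop like the following mimics the Wait → Check Status → If Ready → Download pattern used in the workflows (a sketch only; it assumes jq is installed and VIDEO_ID holds the id returned by the create call):
VIDEO_ID="abc123"   # replace with the videoId from the POST response
while true; do
  STATUS=$(curl -s "http://localhost:3123/api/short-video/$VIDEO_ID/status" | jq -r '.status')
  echo "status: $STATUS"
  [ "$STATUS" = "ready" ] && break
  [ "$STATUS" = "failed" ] && { echo "generation failed"; exit 1; }
  sleep 30   # roughly the workflow's Wait step
done
# Download the finished video
curl -s -o short.mp4 "http://localhost:3123/api/short-video/$VIDEO_ID"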
🐛 Troubleshooting
YouTube Upload Failures (400 Error)
# Solution: Enable filesystem binary mode
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
# In Docker Compose:
environment:
- N8N_DEFAULT_BINARY_DATA_MODE=filesystem
YouTube API Quota Limits
# YouTube limits daily API uploads
# Free tier: 6 videos per day
# Solution: Disable YouTube nodes for testing
# Or: Apply for quota increase
MCP Server Connection Failed
# Check server is running:
docker ps | grep short-video-maker
# Check logs:
docker logs short-video-maker
# Test endpoint:
curl http://localhost:3123/api/music-tags
Video Generation Stuck
# Check status manually:
curl http://localhost:3123/api/short-video/VIDEO_ID/status
# Restart MCP server:
docker restart short-video-maker
# Check Pexels API key is valid
AI Agent Not Finding MCP Tools
// Verify MCP Client node configuration:
sseEndpoint: "http://host.docker.internal:3123/mcp/sse"
include: "selected"
includeTools: ["create-short-video"]
Docker Desktop Host Access Issues
# Linux: Use host network mode
docker run --network host ...
# Or: Use container IP
docker inspect short-video-maker | grep IPAddress
📈 Performance Optimization
Reduce AI Costs
// Use GPT-4o-mini instead of GPT-4
model: "gpt-4o-mini" // ~10x cheaper
// Use Ollama for free local inference
model: "mistral-small3.1:24b-instruct-2503-q8_0"
Faster Video Generation
// Use smaller video resolution in MCP server
// Reduce scene count per video
scenes: [...].slice(0, 2) // Only 2 scenes
// Use shorter text per scene
Batch Processing
// Process multiple posts in parallel
// Split into separate workflow executions
// Use n8n's batch processing features
🔒 Security Considerations
- Store all API keys securely in n8n credentials
- Never commit credentials to version control
- Use environment variables for sensitive data
- Restrict MCP server access (firewall rules)
- Monitor YouTube API quota usage
- Review generated content before uploading
- Comply with Reddit's API terms of service
- Respect YouTube's community guidelines
💰 Cost Estimation
Per Video (approximate)
GPT-4o-mini (script): $0.001-0.005
Pexels API: Free (with attribution)
MCP Server: Free (self-hosted)
YouTube Upload: Free
Total: ~$0.001-0.005 per video
GPT-4 (higher quality): $0.01-0.05 per video
Monthly (100 videos)
With GPT-4o-mini: $0.10-0.50/month
With GPT-4: $1-5/month
With Ollama: $0/month (electricity only)
📝 YouTube Upload Limitations
- Daily Quota: ~6 uploads per day by default (10,000 quota units/day ÷ ~1,600 units per upload)
- File Size: max 256 GB per video
- Duration: Shorts must be ≤60 seconds
- API Limits: 10,000 quota units/day, with each upload costing ~1,600 units
💡 Content Ideas
Subreddit Sources
- r/Jokes - Funny short jokes
- r/LifeProTips - Useful life advice
- r/Stories - Short narratives
- r/explainlikeimfive - Simple explanations
- r/todayilearned - Interesting facts
- r/ShowerThoughts - Mind-bending thoughts
- r/technicallythetruth - Clever observations
- r/clevercomebacks - Witty responses
Video Styles
- Joke format with punchline timing
- Life hack demonstrations
- Story narration with suspense
- Educational fact delivery
- Motivational quote videos
- Fun fact compilations
🌐 MCP Server Resources
- GitHub: short-video-maker
- npm: short-video-maker
- Docker Hub: gyoridavid/short-video-maker
- Documentation: Available in GitHub repo
📊 Workflow Metrics to Track
- Videos generated per day
- YouTube upload success rate
- Average video generation time
- AI token usage and costs
- MCP server uptime
- Video view counts
- Subscriber growth
- API quota remaining
📄 License
This workflow is open source; see the repository for license details.
🤝 Contributing
Contributions, issues, and feature requests are welcome!
- Fork the project
- Create your feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add multiple subreddit support')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
👨‍💻 Author
Redoanuzzaman
- GitHub: @redoanuzzaman
- Email: redoanuzzaman707@gmail.com
- Website: redoan.dev
🙏 Acknowledgments
- gyoridavid for the short-video-maker MCP server
- n8n community for workflow automation tools
- OpenAI for GPT models
- Pexels for free stock footage
- Reddit for content source
💖 Show Your Support
Give a ⭐️ if this workflow helps you create amazing YouTube Shorts!
📞 Support
Need help?
- Check MCP Server Issues
- Visit n8n Community Forum
- Review YouTube API Docs
- Open an issue in this repository
Made with 🎬 and AI automation
Last Updated: October 2025