Nebula MCP Server
AI Video Generation MCP Server with multi-provider pipeline supporting OpenAI, Replicate, HuggingFace, Fal AI, and more.
Overview
Nebula is a Model Context Protocol (MCP) server that provides a complete AI video generation pipeline. It orchestrates multiple AI providers to transform text prompts into professional-quality videos with synchronized audio, music, and visual effects.
Features
- Multi-Provider AI Integration: OpenAI GPT-4o-mini, Replicate FLUX, HuggingFace, Fal AI, Voice AI, Suno AI
- Complete Video Pipeline: Script generation → Scene breakdown → Voice synthesis → Image generation → Video assembly
- Advanced Audio Processing: Voice synthesis, music generation, and lip-sync technology
- Professional Output: 1080p video with customizable frame rates and audio quality
- MCP Protocol: Native integration with Claude and other MCP-compatible tools
- Async Processing: High-performance concurrent processing of pipeline steps
Requirements
System Dependencies
- Python: 3.11 or higher
- FFmpeg: Required for video processing
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt update && sudo apt install ffmpeg

# Windows
# Download from https://ffmpeg.org/download.html
Storage Requirements
- Disk Space: 100GB recommended for video cache
- Memory: 8GB+ RAM for optimal performance
- Network: Stable internet connection for AI provider APIs
Installation
1. Clone and Setup
# Clone the repository
git clone https://github.com/nohavewho/nebula-mcp-server.git
cd nebula-mcp-server
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Or install as package
pip install -e .
2. Environment Configuration
# Copy environment template
cp .env.example .env
# Edit .env file with your API keys
nano .env
Required API keys:
- OPENAI_API_KEY: OpenAI GPT-4o-mini for script generation
- VOICE_AI_API_KEY: Voice AI for voice synthesis
- REPLICATE_API_TOKEN: Replicate for FLUX image generation
- HUGGINGFACE_API_KEY: HuggingFace for FLUX dev models
- FAL_AI_KEY: Fal AI for additional processing
- SUNO_API_KEY: Suno AI for music generation (optional)
- VEO3_API_KEY: Veo3 for video generation (when available)
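A quick way to confirm the keys above are set before starting the server is a small preflight check. The key names below are taken from this list; the split into required and optional mirrors the annotations above and may need adjusting for your deployment.

```python
import os

# Key names as documented above; SUNO/VEO3 are marked optional in this README.
REQUIRED_KEYS = [
    "OPENAI_API_KEY",
    "VOICE_AI_API_KEY",
    "REPLICATE_API_TOKEN",
    "HUGGINGFACE_API_KEY",
    "FAL_AI_KEY",
]
OPTIONAL_KEYS = ["SUNO_API_KEY", "VEO3_API_KEY"]

def missing_keys(env=os.environ):
    """Return the required keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    missing = missing_keys()
    if missing:
        print("Missing required keys:", ", ".join(missing))
    else:
        print("All required API keys configured")
```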
3. Verify Installation
# Test Python imports
python -c "import mcp, openai, httpx; print('Dependencies OK')"
# Check FFmpeg
ffmpeg -version
# Test environment loading
python -c "from dotenv import load_dotenv; load_dotenv(); import os; print('Env OK' if os.getenv('OPENAI_API_KEY') else 'Missing API keys')"
API Usage
Nebula MCP Server provides multiple ways to interact with the video generation pipeline:
1. MCP Protocol (stdio)
Configure in your MCP client (e.g., Claude Desktop):
{
"mcpServers": {
"nebula": {
"command": "python",
"args": ["/path/to/nebula-mcp-server/server.py"],
"env": {
"PYTHONPATH": "/path/to/nebula-mcp-server"
}
}
}
}
2. HTTP API (Railway Deployment)
Access the server through RESTful endpoints:
# Base URL
https://your-railway-app.railway.app
# Health check
curl https://your-railway-app.railway.app/health
# List available tools
curl https://your-railway-app.railway.app/tools
# Execute a tool
curl -X POST https://your-railway-app.railway.app/tools/hello_world \
-H "Content-Type: application/json" \
-d '{"parameters": {}}'
3. MCP JSON-RPC 2.0 Protocol
Standard MCP protocol over HTTP for full compatibility:
# List tools
curl -X POST https://your-railway-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/list",
"params": {},
"id": 1
}'
# Call a tool
curl -X POST https://your-railway-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "server_status",
"arguments": {}
},
"id": 2
}'
4. Direct Python Integration
from nebula import VideoPipeline
# Initialize pipeline
pipeline = VideoPipeline()
# Generate video from prompt
result = await pipeline.generate(
prompt="Create a 30-second video about space exploration",
duration=30,
style="cinematic"
)
print(f"Video generated: {result.output_path}")
API Reference
HTTP Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | / | Server information and available endpoints |
| GET | /health | Health check with system status |
| GET | /status | Detailed server status information |
| GET | /tools | List all available tools with metadata |
| POST | /tools/{tool_name} | Execute a specific tool |
| GET | /templates | List video generation templates |
| POST | /templates/instantiate | Create pipeline from template |
| GET | /mcp | MCP server information |
| POST | /mcp/execute | Execute MCP commands via HTTP |
| POST | /mcp/jsonrpc | MCP JSON-RPC 2.0 protocol endpoint |
MCP JSON-RPC 2.0 Methods
| Method | Description | Parameters |
|---|---|---|
| tools/list | List all available tools | {} |
| tools/call | Execute a specific tool | {"name": "tool_name", "arguments": {...}} |
Available Tools
The server provides 17 tools across multiple categories:
Core Tools
- hello_world - Test server functionality
- server_status - Get detailed server status
- list_tools - List all available tools
Template Management
- list_templates - Browse video generation templates
- get_template_info - Get detailed template information
- instantiate_template - Create runnable pipeline from template
- search_templates - Search templates by keywords
- get_template_recommendations - Get recommended templates
Utility Tools
- hash_text - Generate cryptographic hashes
- json_analyzer - Parse and analyze JSON data
- registry_info - Get registry statistics (authenticated)
Video Processing Tools
(Additional tools automatically discovered from the registry)
Example Requests & Responses
HTTP API Examples
Get Server Status:
curl https://your-app.railway.app/status
Response:
{
"status": "ok",
"details": "Nebula MCP Server Status:\n\nServer: nebula v1.0.0\nDebug Mode: false\nCache Directory: ./cache\nOutput Directory: ./output\nMax Concurrent Jobs: 3\nAvailable Tools: 17\n\nAPI Keys Configured:\n- OpenAI: ā
\n- Replicate: ā
\n- HuggingFace: ā
\n- Voice AI: ā
\n- Fal AI: ā
",
"environment": {
"railway": true,
"region": "us-west1",
"deployment_id": "d1234567-89ab-cdef-0123-456789abcdef",
"service_name": "nebula-mcp"
}
}
List Templates:
curl "https://your-app.railway.app/templates?category=educational&limit=5"
Execute Tool:
curl -X POST https://your-app.railway.app/tools/hash_text \
-H "Content-Type: application/json" \
-d '{
"parameters": {
"text": "hello world",
"hash_type": "sha256"
}
}'
Response:
{
"tool": "hash_text",
"status": "success",
"result": "SHA256 hash of 'hello world': b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9"
}
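The digest in the response above can be reproduced locally with Python's hashlib, which makes a handy sanity check on the hash_text tool's output:

```python
import hashlib

def sha256_hex(text, encoding="utf-8"):
    """Compute the SHA-256 hex digest that hash_text reports for a string."""
    return hashlib.sha256(text.encode(encoding)).hexdigest()

print(sha256_hex("hello world"))
# b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
```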
MCP JSON-RPC Examples
List Tools:
curl -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/list",
"params": {},
"id": 1
}'
Response:
{
"jsonrpc": "2.0",
"result": {
"tools": [
{
"name": "hash_text",
"description": "Generate various types of hashes for input text",
"inputSchema": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "Text to hash"
},
"hash_type": {
"type": "string",
"description": "Type of hash to generate",
"enum": ["md5", "sha1", "sha256", "sha512"],
"default": "sha256"
},
"encoding": {
"type": "string",
"description": "Text encoding to use",
"enum": ["utf-8", "ascii", "latin-1"],
"default": "utf-8"
}
},
"required": ["text"]
}
}
]
},
"id": 1
}
Call Template Tool:
curl -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "get_template_info",
"arguments": {
"template_id": "educational_01"
}
},
"id": 3
}'
Error Handling
HTTP API Errors
Standard HTTP status codes with JSON error responses:
{
"error": "Tool 'nonexistent_tool' not found",
"tool": "nonexistent_tool",
"status": "error"
}
MCP JSON-RPC Errors
Standard JSON-RPC 2.0 error format:
{
"jsonrpc": "2.0",
"error": {
"code": -32601,
"message": "Method not found"
},
"id": 1
}
Error Codes:
- -32700: Parse error (invalid JSON)
- -32600: Invalid Request (missing required fields)
- -32601: Method not found
- -32602: Invalid params (parameter validation failed)
- -32603: Internal error (server-side error)
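Client code usually converts these error responses into exceptions rather than checking every response by hand. A minimal sketch (the exception class is illustrative, not part of the server):

```python
class JsonRpcError(Exception):
    """Raised when a JSON-RPC response carries an error object."""
    def __init__(self, code, message):
        super().__init__(f"JSON-RPC {code}: {message}")
        self.code = code

# Standard JSON-RPC 2.0 error names, matching the code table above.
_ERROR_NAMES = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

def unwrap(response):
    """Return response['result'], or raise JsonRpcError on an error response."""
    if "error" in response:
        err = response["error"]
        message = err.get("message", _ERROR_NAMES.get(err.get("code"), "Unknown"))
        raise JsonRpcError(err["code"], message)
    return response.get("result")
```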
Client Integration Guides
Python Client (aiohttp)
import aiohttp
import asyncio
import json
from typing import Dict, Any, List, Optional
class NebulaClient:
"""Python client for Nebula MCP Server."""
def __init__(self, base_url: str = "https://your-app.railway.app"):
self.base_url = base_url.rstrip('/')
self.session = None
async def __aenter__(self):
self.session = aiohttp.ClientSession()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.session:
await self.session.close()
# HTTP API Methods
async def get_status(self) -> Dict[str, Any]:
"""Get server status."""
async with self.session.get(f"{self.base_url}/status") as response:
return await response.json()
async def list_tools_http(self) -> Dict[str, Any]:
"""List tools via HTTP API."""
async with self.session.get(f"{self.base_url}/tools") as response:
return await response.json()
async def execute_tool_http(self, tool_name: str, parameters: Dict[str, Any] = None) -> Dict[str, Any]:
"""Execute tool via HTTP API."""
data = {"parameters": parameters or {}}
async with self.session.post(
f"{self.base_url}/tools/{tool_name}",
json=data
) as response:
return await response.json()
# MCP JSON-RPC Methods
async def _jsonrpc_request(self, method: str, params: Dict[str, Any] = None, request_id: int = 1) -> Dict[str, Any]:
"""Send JSON-RPC request."""
payload = {
"jsonrpc": "2.0",
"method": method,
"params": params or {},
"id": request_id
}
async with self.session.post(
f"{self.base_url}/mcp/jsonrpc",
json=payload
) as response:
return await response.json()
async def list_tools(self) -> List[Dict[str, Any]]:
"""List all available tools via MCP JSON-RPC."""
response = await self._jsonrpc_request("tools/list")
if "result" in response:
return response["result"]["tools"]
elif "error" in response:
raise Exception(f"JSON-RPC Error: {response['error']['message']}")
return []
async def call_tool(self, name: str, arguments: Dict[str, Any] = None) -> Any:
"""Call a tool via MCP JSON-RPC."""
params = {
"name": name,
"arguments": arguments or {}
}
response = await self._jsonrpc_request("tools/call", params)
if "result" in response:
return response["result"]
elif "error" in response:
raise Exception(f"JSON-RPC Error: {response['error']['message']}")
return None
# High-level helper methods
async def generate_hash(self, text: str, hash_type: str = "sha256") -> str:
"""Generate hash using hash_text tool."""
result = await self.call_tool("hash_text", {
"text": text,
"hash_type": hash_type
})
# Extract hash from text response
content = result["content"][0]["text"]
return content.split(": ")[-1]
async def get_template_info(self, template_id: str) -> str:
"""Get detailed template information."""
result = await self.call_tool("get_template_info", {
"template_id": template_id
})
return result["content"][0]["text"]
async def search_templates(self, query: str) -> str:
"""Search for templates."""
result = await self.call_tool("search_templates", {
"query": query
})
return result["content"][0]["text"]
# Usage example
async def main():
async with NebulaClient("https://your-app.railway.app") as client:
# Get server status
status = await client.get_status()
print(f"Server Status: {status['status']}")
# List available tools
tools = await client.list_tools()
print(f"Available tools: {len(tools)}")
# Generate a hash
hash_value = await client.generate_hash("hello world", "sha256")
print(f"SHA256 Hash: {hash_value}")
# Search templates
templates = await client.search_templates("educational")
print("Educational templates:", templates)
# Run the example
if __name__ == "__main__":
asyncio.run(main())
JavaScript Client (Node.js)
const https = require('https');
const http = require('http');
class NebulaClient {
constructor(baseUrl = 'https://your-app.railway.app') {
this.baseUrl = baseUrl.replace(/\/$/, '');
}
async request(method, path, data = null) {
return new Promise((resolve, reject) => {
const url = new URL(`${this.baseUrl}${path}`);
const isHttps = url.protocol === 'https:';
const lib = isHttps ? https : http;
const options = {
hostname: url.hostname,
port: url.port || (isHttps ? 443 : 80),
path: url.pathname + url.search,
method: method,
headers: {
'Content-Type': 'application/json',
'User-Agent': 'NebulaClient/1.0.0'
}
};
if (data) {
const postData = JSON.stringify(data);
options.headers['Content-Length'] = Buffer.byteLength(postData);
}
const req = lib.request(options, (res) => {
let responseData = '';
res.on('data', (chunk) => {
responseData += chunk;
});
res.on('end', () => {
try {
const parsed = JSON.parse(responseData);
resolve(parsed);
} catch (e) {
resolve(responseData);
}
});
});
req.on('error', (e) => {
reject(e);
});
if (data) {
req.write(JSON.stringify(data));
}
req.end();
});
}
// HTTP API Methods
async getStatus() {
return await this.request('GET', '/status');
}
async listToolsHttp() {
return await this.request('GET', '/tools');
}
async executeToolHttp(toolName, parameters = {}) {
return await this.request('POST', `/tools/${toolName}`, {
parameters: parameters
});
}
// MCP JSON-RPC Methods
async jsonrpcRequest(method, params = {}, id = 1) {
const payload = {
jsonrpc: '2.0',
method: method,
params: params,
id: id
};
const response = await this.request('POST', '/mcp/jsonrpc', payload);
if (response.error) {
throw new Error(`JSON-RPC Error: ${response.error.message}`);
}
return response.result;
}
async listTools() {
const result = await this.jsonrpcRequest('tools/list');
return result.tools;
}
// Note: 'arguments' is a reserved identifier in strict-mode (class) code,
// so the parameter is named args here.
async callTool(name, args = {}) {
return await this.jsonrpcRequest('tools/call', {
name: name,
arguments: args
});
}
// High-level helper methods
async generateHash(text, hashType = 'sha256') {
const result = await this.callTool('hash_text', {
text: text,
hash_type: hashType
});
// Extract hash from text response
const content = result.content[0].text;
return content.split(': ').pop();
}
async getTemplateInfo(templateId) {
const result = await this.callTool('get_template_info', {
template_id: templateId
});
return result.content[0].text;
}
async searchTemplates(query) {
const result = await this.callTool('search_templates', {
query: query
});
return result.content[0].text;
}
}
// Usage example
async function main() {
const client = new NebulaClient('https://your-app.railway.app');
try {
// Get server status
const status = await client.getStatus();
console.log('Server Status:', status.status);
// List available tools
const tools = await client.listTools();
console.log('Available tools:', tools.length);
// Generate a hash
const hash = await client.generateHash('hello world', 'sha256');
console.log('SHA256 Hash:', hash);
// Search templates
const templates = await client.searchTemplates('educational');
console.log('Educational templates:', templates);
} catch (error) {
console.error('Error:', error.message);
}
}
// Run if this file is executed directly
if (require.main === module) {
main();
}
module.exports = NebulaClient;
cURL Examples Collection
Basic Server Information:
# Health check
curl -s https://your-app.railway.app/health | jq .
# Detailed status
curl -s https://your-app.railway.app/status | jq .
# Server information
curl -s https://your-app.railway.app/ | jq .
Tool Management:
# List all tools (HTTP)
curl -s https://your-app.railway.app/tools | jq '.tools[] | {name, description, category}'
# List all tools (JSON-RPC)
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/list",
"params": {},
"id": 1
}' | jq '.result.tools[] | {name, description}'
# Execute hello_world tool
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "hello_world",
"arguments": {}
},
"id": 2
}' | jq '.result.content[0].text'
# Generate MD5 hash
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "hash_text",
"arguments": {
"text": "hello world",
"hash_type": "md5"
}
},
"id": 3
}' | jq -r '.result.content[0].text'
Template Management:
# List all templates
curl -s https://your-app.railway.app/templates | jq .
# Search educational templates
curl -s "https://your-app.railway.app/templates?category=educational&limit=3" | jq .
# Get specific template info
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "get_template_info",
"arguments": {
"template_id": "educational_01"
}
},
"id": 4
}' | jq -r '.result.content[0].text'
# Search templates by keyword
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "search_templates",
"arguments": {
"query": "marketing"
}
},
"id": 5
}' | jq -r '.result.content[0].text'
Error Handling Examples:
# Test invalid method
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "invalid/method",
"params": {},
"id": 6
}' | jq .
# Test invalid tool name
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "nonexistent_tool",
"arguments": {}
},
"id": 7
}' | jq .
# Test missing parameters
curl -s -X POST https://your-app.railway.app/mcp/jsonrpc \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "get_template_info",
"arguments": {}
},
"id": 8
}' | jq .
Migration Guide
From MCP stdio to HTTP API
If you're currently using the MCP server via stdio and want to migrate to the HTTP API:
Before (MCP stdio):
- Direct function calls through MCP protocol
- stdio communication channel
- Local process management
After (HTTP API):
- RESTful endpoints for tool execution
- Standard HTTP status codes
- Cloud deployment ready
- Multiple client language support
Migration Steps:
-
Update your client configuration:
# Old: MCP stdio client
# from mcp import Client

# New: HTTP client
import aiohttp
client = NebulaClient("https://your-app.railway.app")
-
Update tool invocations:
# Old: Direct MCP tool call
# result = await mcp_client.call_tool("hash_text", {"text": "hello"})

# New: HTTP API call
result = await client.call_tool("hash_text", {"text": "hello"})
-
Handle response format changes:
# MCP returns TextContent objects
# HTTP returns JSON with a content array
text_result = result["content"][0]["text"]
From HTTP API to MCP JSON-RPC 2.0
For better compatibility and standardization:
Benefits of JSON-RPC:
- Standard protocol specification
- Better error handling
- Full MCP compatibility
- Future-proof design
Update your requests:
# Old HTTP endpoint
curl -X POST https://your-app.railway.app/tools/hash_text
# New JSON-RPC endpoint
curl -X POST https://your-app.railway.app/mcp/jsonrpc \
-d '{"jsonrpc": "2.0", "method": "tools/call", "params": {...}}'
Architecture
Pipeline Steps
- Script Generation: AI-powered script writing with GPT-4o-mini
- Scene Breakdown: Intelligent scene detection and timing
- Voice Synthesis: High-quality voice generation with Voice AI
- Duration Synchronization: Precise audio-visual timing alignment
- Footage Research: Context-aware stock footage selection
- Image Generation: FLUX-powered visual content creation
- Video Generation: Advanced video synthesis with Veo3/alternatives
- Lip Synchronization: Realistic lip-sync technology
- Music Generation: AI-composed background music with Suno
- Video Assembly: Professional editing and final output
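The ten steps above run as an ordered pipeline, each step reading the state produced by the ones before it. The asyncio sketch below is purely conceptual: the step functions are placeholders, and the real engine.py orchestrator will differ in structure and naming.

```python
import asyncio

# Placeholder steps: each receives the accumulated state dict and returns an update.
async def generate_script(state):
    return {"script": f"Script for: {state['prompt']}"}

async def break_down_scenes(state):
    return {"scenes": [state["script"][:20]]}

async def assemble_video(state):
    return {"output_path": "./output/final.mp4"}

# In the real pipeline this list would hold all ten steps in order.
STEPS = [generate_script, break_down_scenes, assemble_video]

async def run_pipeline(prompt):
    """Run the steps in order, threading a shared state dict through them."""
    state = {"prompt": prompt}
    for step in STEPS:
        state.update(await step(state))
    return state

result = asyncio.run(run_pipeline("space exploration"))
print(result["output_path"])
```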
Provider Integrations
- OpenAI: Script generation, content analysis
- Voice AI: Voice synthesis and audio processing
- Replicate: FLUX image generation models
- HuggingFace: FLUX dev and custom models
- Fal AI: Additional image/video processing
- Veo3: Advanced video generation (when available)
- Suno AI: Music composition and audio enhancement
Configuration
Pipeline Settings
# Video output settings
DEFAULT_VIDEO_RESOLUTION=1920x1080
DEFAULT_VIDEO_FPS=30
DEFAULT_AUDIO_BITRATE=128k
# Performance settings
MAX_CONCURRENT_JOBS=3
API_TIMEOUT=300
VIDEO_PROCESSING_TIMEOUT=1800
PIPELINE_TIMEOUT=3600
# Storage settings
CACHE_DIR=./cache
OUTPUT_DIR=./output
MAX_CACHE_SIZE_GB=100
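Settings like these can be read from the environment with typed fallbacks. A small helper sketch (defaults mirror the values shown above; the server's actual config loading may differ):

```python
import os

def env_int(name, default):
    """Read an integer setting from the environment, falling back to default."""
    raw = os.getenv(name)
    return int(raw) if raw else default

def env_str(name, default):
    """Read a string setting from the environment, falling back to default."""
    return os.getenv(name) or default

# Defaults taken from the settings block above.
MAX_CONCURRENT_JOBS = env_int("MAX_CONCURRENT_JOBS", 3)
API_TIMEOUT = env_int("API_TIMEOUT", 300)
CACHE_DIR = env_str("CACHE_DIR", "./cache")
```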
Advanced Configuration
See .env.example for complete configuration options including:
- Provider-specific settings
- Quality presets
- Performance tuning
- Debug options
Development
Project Structure
nebula-mcp-server/
├── server.py        # Main MCP server entry point
├── pipeline/        # Core pipeline implementation
│   ├── engine.py    # Pipeline orchestrator
│   ├── steps/       # Individual pipeline steps
│   └── state.py     # State management
├── providers/       # AI service integrations
├── tools/           # MCP tool definitions
├── utils/           # Utility modules
└── tests/           # Test suite
Running Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=. --cov-report=html
# Run specific test module
pytest tests/test_pipeline.py -v
Code Quality
# Format code
black .
# Lint code
flake8 .
# Type checking
mypy .
Troubleshooting
Common Issues
1. FFmpeg not found
- Ensure FFmpeg is installed and in PATH
- Test with: ffmpeg -version
2. API key errors
- Verify all required keys in .env
- Check API key validity and quotas
3. Memory issues
- Reduce MAX_CONCURRENT_JOBS
- Increase system RAM or swap
- Clear cache directory
4. Slow processing
- Check network connection
- Verify API provider status
- Monitor system resources
Debug Mode
Enable debug logging:
DEBUG=true
This provides detailed pipeline execution logs and API interaction traces.
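One common way to wire a DEBUG flag into Python logging looks like the sketch below; the server's actual logging setup may differ.

```python
import logging
import os

def configure_logging(env=os.environ):
    """Enable verbose logging when DEBUG=true (or 1/yes) is set."""
    debug = env.get("DEBUG", "").lower() in ("1", "true", "yes")
    level = logging.DEBUG if debug else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    return level

level = configure_logging({"DEBUG": "true"})
```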
Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Commit changes: git commit -m 'Add amazing feature'
- Push the branch: git push origin feature/amazing-feature
- Open a pull request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Issues: GitHub Issues
- Documentation: Read the Docs
- Discord: Community Server
Acknowledgments
- OpenAI for GPT-4o-mini and API infrastructure
- Replicate for FLUX model hosting
- HuggingFace for open-source model ecosystem
- Voice AI for voice synthesis technology
- The MCP Protocol team for the foundational framework