berkbirkan/sora-mcp
# OpenAI Video MCP Server

A Python package that implements a Model Context Protocol (MCP) server for the OpenAI Video API. It can be installed via `pip`, supports both `stdio` and HTTP (remote) transports, and ships with a Docker image for container deployments.
## Features
- PyPI package: installable with `pip install openai-video-mcp`.
- Transport flexibility: use `stdio` for native MCP clients or expose JSON-RPC over HTTP.
- Docker ready: build a container with the included Dockerfile.
- Configurable defaults: configure via `.env` or environment variables, including the default model and output directory.
- Single-purpose tool: `generate_video` forwards prompts to the OpenAI Video API and stores the generated assets locally.
## Requirements
- Python 3.10 or newer
- Valid OpenAI API key with video-generation access
- Internet connectivity for OpenAI API calls
## Installation
```shell
pip install openai-video-mcp
```
Or from source:
```shell
git clone https://github.com/yourusername/openai-video-mcp.git
cd openai-video-mcp
pip install .
```
The package automatically installs dependencies such as `python-dotenv`, `typer`, `fastapi`, `uvicorn`, and `openai`.
## Configuration
The server reads the following environment variables (they can live in a `.env` file):
| Variable | Description | Required | Default |
|---|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | Yes | — |
| `OPENAI_BASE_URL` | Custom API endpoint (e.g., a proxy) | No | `https://api.openai.com/v1` |
| `OPENAI_ORGANIZATION` | Organization identifier | No | — |
| `OPENAI_PROJECT` | Project identifier | No | — |
| `OPENAI_REQUEST_TIMEOUT` | API request timeout (seconds) | No | `600` |
| `OPENAI_VIDEO_OUTPUT_DIR` | Directory where videos are written | No | `~/.cache/openai-video-mcp` |
| `OPENAI_VIDEO_MODEL` | Default video model | No | `gpt-4o-mini-video-preview` |
| `OPENAI_VIDEO_POLL_INTERVAL` | Poll interval for long-running jobs (seconds) | No | `1.5` |
CLI flags override values loaded from the environment.
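For reference, a minimal `.env` using these variables might look like this (the key and paths are placeholders):

```
OPENAI_API_KEY=sk-your-key-here
OPENAI_VIDEO_MODEL=gpt-4o-mini-video-preview
OPENAI_VIDEO_OUTPUT_DIR=/tmp/openai-videos
OPENAI_REQUEST_TIMEOUT=600
```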
## Standard Input/Output (`stdio`) Mode

MCP clients communicate over `stdio` by default. Start the server with:
```shell
export OPENAI_API_KEY=sk-...
openai-video-mcp stdio
```
This command speaks LSP-style framing (`Content-Length` headers). A sample `tools/list` request looks like:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```
The server responds with the schema for the `generate_video` tool.
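As a sketch of the framing, a client would prefix each JSON-RPC message with a `Content-Length` header, LSP-style (the helper name here is illustrative, not part of the package):

```python
import json


def frame_message(payload: dict) -> bytes:
    """Wrap a JSON-RPC payload in LSP-style Content-Length framing."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body


# Frame the tools/list request shown above.
msg = frame_message({"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}})
```

Writing `msg` to the server's stdin (and parsing the same framing on stdout) is all a minimal `stdio` client needs.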
## Remote HTTP Mode
Expose the MCP server over JSON-RPC HTTP:
```shell
export OPENAI_API_KEY=sk-...
openai-video-mcp remote --host 0.0.0.0 --port 8000
```
The HTTP service provides a single `POST /rpc` endpoint. Send JSON-RPC 2.0 requests and receive JSON responses.
### Example Request
```shell
curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
      "name": "generate_video",
      "arguments": {
        "prompt": "A neon city gliding over waves",
        "duration": 6,
        "size": "1280x720",
        "filename": "neon-city"
      }
    }
  }'
```
The response includes attachment metadata pointing to files on disk:
```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      {"type": "text", "text": "Generated 1 video file in /Users/.../.cache/openai-video-mcp."}
    ],
    "attachments": [
      {
        "type": "resource",
        "uri": "file:///Users/.../.cache/openai-video-mcp/neon-city.mp4",
        "mimeType": "video/mp4",
        "name": "neon-city.mp4",
        "metadata": {
          "bytes": 1234567,
          "file_id": "file-abc123"
        }
      }
    ]
  }
}
```
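The same call can be made from Python with only the standard library; this is a sketch of a client for the `/rpc` endpoint above (the helper names are illustrative):

```python
import json
import urllib.request


def build_request(prompt: str, request_id: int = 1) -> dict:
    """Assemble a JSON-RPC 2.0 tools/call payload for generate_video."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "generate_video", "arguments": {"prompt": prompt}},
    }


def call_tool(prompt: str, url: str = "http://localhost:8000/rpc") -> dict:
    """POST the payload to the server's /rpc endpoint and return the parsed response."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Additional `generate_video` arguments (duration, size, filename, and so on) can be merged into the `arguments` dictionary as shown in the curl example.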
## `generate_video` Tool
| Parameter | Type | Description |
|---|---|---|
| `prompt` | string (required) | Text prompt describing the video. |
| `model` | string | Override the default OpenAI video model. |
| `duration` | number | Clip duration (seconds). |
| `size` | string | Resolution (e.g., `1280x720`). |
| `format` | string | Output container format (`mp4`, `webm`, ...). |
| `fps` | integer | Frames per second, when supported. |
| `aspect_ratio` | string | e.g., `16:9`. |
| `seed` | integer | Seed for deterministic generations. |
| `audio` | object | Free-form structure forwarded to the API. |
| `filename` | string | Filename stem for saving output (extension inferred). |
The server either decodes base64 payloads or downloads from `files.content`, then writes the result to the configured directory.
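As a minimal sketch of the base64 branch described above, assuming the payload arrives as a base64 string and the extension is inferred from the requested format (the function name and signature are illustrative, not the package's actual internals):

```python
import base64
from pathlib import Path


def save_video(b64_payload: str, stem: str, output_dir: str, fmt: str = "mp4") -> Path:
    """Decode a base64 video payload and write it into the output directory,
    deriving the filename from the stem and the container format."""
    out_dir = Path(output_dir).expanduser()
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{stem}.{fmt}"
    path.write_bytes(base64.b64decode(b64_payload))
    return path
```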
Note: The OpenAI Video API is evolving quickly. Keep an eye on upstream changes in the SDK or API schema.
## Docker Usage
Build the image with the provided Dockerfile:
```shell
docker build -t openai-video-mcp .
```
Run the container:
```shell
docker run --rm \
  -e OPENAI_API_KEY=sk-... \
  -v $(pwd)/outputs:/outputs \
  openai-video-mcp \
  remote --host 0.0.0.0 --port 8000 --output-dir /outputs
```
In this example, the container's `/outputs` directory is bind-mounted to the local `outputs/` directory, so generated videos appear on the host.
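The same container setup could be sketched as a Docker Compose service (the service name and mount are illustrative, assuming the Dockerfile above):

```yaml
services:
  openai-video-mcp:
    build: .
    command: remote --host 0.0.0.0 --port 8000 --output-dir /outputs
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    ports:
      - "8000:8000"
    volumes:
      - ./outputs:/outputs
```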
## IDE Integration

### Claude Desktop
- Install the package (`pip install openai-video-mcp`) and verify the CLI with `openai-video-mcp --help`.
- Create or edit the Claude configuration file (macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`, Windows: `%APPDATA%\Claude\claude_desktop_config.json`).
- Add an MCP server entry similar to:
```json
{
  "mcpServers": {
    "openai-video": {
      "command": "openai-video-mcp",
      "args": ["stdio"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "OPENAI_VIDEO_OUTPUT_DIR": "/Users/you/.cache/openai-video-mcp"
      }
    }
  }
}
```
- Restart Claude Desktop. The `openai-video` tool will appear in the tools sidebar once the server initializes successfully.
Use the `env` block to pass sensitive credentials instead of hard-coding them into the configuration file.
### Cursor
- Install the package in the environment Cursor will launch (global Python or virtualenv).
- Open Cursor → `Settings` → `MCP Servers` (or run the command `Cursor: Edit MCP Servers` from the command palette).
- Add a new server configuration:
```json
{
  "name": "openai-video",
  "command": "openai-video-mcp",
  "args": ["stdio"],
  "env": {
    "OPENAI_API_KEY": "sk-...",
    "OPENAI_VIDEO_OUTPUT_DIR": "/Users/you/.cache/openai-video-mcp"
  }
}
```
- Save the configuration and relaunch Cursor. The agent panel should list the `openai-video` MCP tool, and invoking it will surface the `generate_video` command.
If you prefer to run the HTTP transport instead, keep `openai-video-mcp` as the command, set the arguments to `["remote", "--host", "127.0.0.1", "--port", "8000"]`, and point the IDE to `http://127.0.0.1:8000/rpc`.
## Development
```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
```
Suggested quality checks:
```shell
ruff check openai_video_mcp
pytest
```
The repository contains core setup files; feel free to add extra tooling as needed.
## Versioning and Release
- Update the version number in `pyproject.toml`.
- Publish to PyPI:

```shell
rm -rf dist/
python -m build
python -m twine upload dist/*
```

- For GitHub releases, include `README.md`, `LICENSE`, `Dockerfile`, and the package code. Share release notes via GitHub Releases.
## License

This project is distributed under the terms in the accompanying `LICENSE` file.