sora-mcp

berkbirkan/sora-mcp


OpenAI Video MCP Server

A Python package that implements a Model Context Protocol (MCP) server for the OpenAI Video API. It can be installed via pip, supports both stdio and HTTP (remote) transports, and ships with a Docker image for container deployments.

Features

  • PyPI package: installable with pip install openai-video-mcp.
  • Transport flexibility: use stdio for native MCP clients or expose JSON-RPC over HTTP.
  • Docker ready: build a container with the included Dockerfile.
  • Configurable defaults: configure via .env or environment variables, including default model and output directory.
  • Single purpose tool: generate_video forwards prompts to the OpenAI Video API and stores the generated assets locally.

Requirements

  • Python 3.10 or newer
  • Valid OpenAI API key with video-generation access
  • Internet connectivity for OpenAI API calls

Installation

pip install openai-video-mcp

Or from source:

git clone https://github.com/yourusername/openai-video-mcp.git
cd openai-video-mcp
pip install .

The package automatically installs dependencies such as python-dotenv, typer, fastapi, uvicorn, and openai.

Configuration

The server reads the following environment variables (they can live in a .env file):

  • OPENAI_API_KEY (required): OpenAI API key.
  • OPENAI_BASE_URL: Custom API endpoint (e.g., a proxy). Default: https://api.openai.com/v1.
  • OPENAI_ORGANIZATION: Organization identifier. Optional.
  • OPENAI_PROJECT: Project identifier. Optional.
  • OPENAI_REQUEST_TIMEOUT: API request timeout in seconds. Default: 600.
  • OPENAI_VIDEO_OUTPUT_DIR: Directory where videos are written. Default: ~/.cache/openai-video-mcp.
  • OPENAI_VIDEO_MODEL: Default video model. Default: gpt-4o-mini-video-preview.
  • OPENAI_VIDEO_POLL_INTERVAL: Poll interval for long-running jobs, in seconds. Default: 1.5.

CLI flags override values loaded from the environment.
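
For local development, these values can live in a .env file in the working directory (all values below are placeholders, not real credentials):

```
# .env -- loaded automatically via python-dotenv
OPENAI_API_KEY=sk-...
OPENAI_VIDEO_MODEL=gpt-4o-mini-video-preview
OPENAI_VIDEO_OUTPUT_DIR=/Users/you/.cache/openai-video-mcp
OPENAI_REQUEST_TIMEOUT=600
```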

Standard Input/Output (stdio) Mode

MCP clients communicate over stdio by default. Start the server with:

export OPENAI_API_KEY=sk-...
openai-video-mcp stdio

The server speaks LSP-style framing (Content-Length headers) over stdin/stdout. A sample tools/list request looks like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}

The server responds with the schema for the generate_video tool.
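
For manual testing outside an MCP client, the framing can be produced by hand. This is a minimal sketch, assuming the server reads Content-Length-framed JSON-RPC messages on stdin as described above:

```python
import json


def frame(message: dict) -> bytes:
    """Wrap a JSON-RPC message in LSP-style Content-Length framing."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n%s" % (len(body), body)


# The tools/list request from the example above, ready to pipe to the server.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
print(frame(request).decode("utf-8"))
```

Piping the framed bytes into openai-video-mcp stdio should yield a framed response containing the generate_video schema.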

Remote HTTP Mode

Expose the MCP server over JSON-RPC HTTP:

export OPENAI_API_KEY=sk-...
openai-video-mcp remote --host 0.0.0.0 --port 8000

The HTTP service provides a single POST /rpc endpoint. Send JSON-RPC 2.0 requests and receive JSON responses.

Example Request

curl -X POST http://localhost:8000/rpc \
  -H "Content-Type: application/json" \
  -d '{
        "jsonrpc": "2.0",
        "id": 42,
        "method": "tools/call",
        "params": {
          "name": "generate_video",
          "arguments": {
            "prompt": "A neon city gliding over waves",
            "duration": 6,
            "size": "1280x720",
            "filename": "neon-city"
          }
        }
      }'

The response includes attachment metadata pointing to files on disk:

{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      {"type": "text", "text": "Generated 1 video file in /Users/.../.cache/openai-video-mcp."}
    ],
    "attachments": [
      {
        "type": "resource",
        "uri": "file:///Users/.../.cache/openai-video-mcp/neon-city.mp4",
        "mimeType": "video/mp4",
        "name": "neon-city.mp4",
        "metadata": {
          "bytes": 1234567,
          "file_id": "file-abc123"
        }
      }
    ]
  }
}

generate_video Tool

  • prompt (string, required): Text prompt describing the video.
  • model (string): Override the default OpenAI video model.
  • duration (number): Clip duration in seconds.
  • size (string): Resolution, e.g., 1280x720.
  • format (string): Output container format (mp4, webm, ...).
  • fps (integer): Frames per second, when supported.
  • aspect_ratio (string): Aspect ratio, e.g., 16:9.
  • seed (integer): Seed for deterministic generations.
  • audio (object): Free-form structure forwarded to the API.
  • filename (string): Filename stem for saving output (extension inferred).

The server either decodes base64 payloads or downloads from files.content, then writes the result to the configured directory.
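
The base64 path can be sketched as follows. This is illustrative only: the EXTENSIONS mapping and the save_video helper are hypothetical names, not part of the package's public API.

```python
import base64
from pathlib import Path

# Hypothetical mapping from MIME type to file extension (for illustration).
EXTENSIONS = {"video/mp4": ".mp4", "video/webm": ".webm"}


def save_video(b64_data: str, mime_type: str, stem: str, out_dir: str) -> Path:
    """Decode a base64 video payload and write it under out_dir."""
    ext = EXTENSIONS.get(mime_type, ".bin")
    path = Path(out_dir) / f"{stem}{ext}"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(base64.b64decode(b64_data))
    return path
```

The filename parameter supplies the stem, and the extension follows from the container format, matching the "extension inferred" behavior described above.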

Note: The OpenAI Video API is evolving quickly. Keep an eye on upstream changes in the SDK or API schema.

Docker Usage

Build the image with the provided Dockerfile:

docker build -t openai-video-mcp .

Run the container:

docker run --rm \
  -e OPENAI_API_KEY=sk-... \
  -v $(pwd)/outputs:/outputs \
  openai-video-mcp \
  remote --host 0.0.0.0 --port 8000 --output-dir /outputs

In this example, the container writes generated videos to /outputs, which is bind-mounted to the local outputs/ directory.

IDE Integration

Claude Desktop

  1. Install the package (pip install openai-video-mcp) and verify the CLI with openai-video-mcp --help.
  2. Create or edit the Claude configuration file (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json, Windows: %APPDATA%\Claude\claude_desktop_config.json).
  3. Add an MCP server entry similar to:
    {
      "mcpServers": {
        "openai-video": {
          "command": "openai-video-mcp",
          "args": ["stdio"],
          "env": {
            "OPENAI_API_KEY": "sk-...",
            "OPENAI_VIDEO_OUTPUT_DIR": "/Users/you/.cache/openai-video-mcp"
          }
        }
      }
    }
    
  4. Restart Claude Desktop. The openai-video tool will appear in the tools sidebar once the server initializes successfully.

Use the env block to pass sensitive credentials instead of hard-coding them into the configuration file.

Cursor

  1. Install the package in the environment Cursor will launch (global Python or virtualenv).
  2. Open Cursor → Settings → MCP Servers (or run the command Cursor: Edit MCP Servers from the command palette).
  3. Add a new server configuration:
    {
      "name": "openai-video",
      "command": "openai-video-mcp",
      "args": ["stdio"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "OPENAI_VIDEO_OUTPUT_DIR": "/Users/you/.cache/openai-video-mcp"
      }
    }
    
  4. Save the configuration and relaunch Cursor. The agent panel should list the openai-video MCP tool, and invoking it will surface the generate_video command.

If you prefer to run the HTTP transport instead, keep the command as openai-video-mcp, set the arguments to ["remote", "--host", "127.0.0.1", "--port", "8000"], and point the IDE to http://127.0.0.1:8000/rpc.

Development

python3 -m venv .venv
source .venv/bin/activate
pip install -e .[dev]

Suggested quality checks:

ruff check openai_video_mcp
pytest

The repository contains core setup files; feel free to add extra tooling as needed.

Versioning and Release

  • Update the version number in pyproject.toml.

  • Publish to PyPI:

    rm -rf dist/
    python -m build
    python -m twine upload dist/*
    
  • For GitHub releases, include README.md, LICENSE, Dockerfile, and the package code. Share release notes via GitHub Releases.

License

This project is distributed under the .