# openai-responses-mcp

A lightweight MCP server that adopts the OpenAI Responses API as its reasoning core.

`web_search` is always allowed, and the model decides autonomously whether to perform searches. Use it from MCP clients such as Claude Code or Claude Desktop via stdio.

**Important**: This repository is for local/personal use only. It is not intended for publishing to any registry.

**Important**: The canonical specification is `docs/spec.md`. Refer to it for details.
## Repository Structure

- `src/`: TypeScript sources
- `scripts/`: Verification/utility scripts (`mcp-smoke*`, `clean.js`, etc.)
- `config/`: `config.yaml.example` (configuration sample), `policy.md.example` (External System Policy sample)
- `docs/`: Canonical specification, references, and verification procedures
  - `spec.md`: Canonical specification
  - `reference/`: Configuration, setup, and integration references
  - `verification.md`: E2E verification procedures
- `README.md`: Project overview / quick start
- `LICENSE`: License
- `package.json`, `package-lock.json`: npm configuration and dependency lock
- `tsconfig.json`: TypeScript configuration
- `.gitignore`: Git exclusion settings
## Features (Overview)

- Responses API compliant (official JS SDK `openai`)
- Search delegated to the model (`web_search` always allowed)
- Structured output (`text`, `used_search`, `citations[]`, `model`)
- System Policy is code-side SSOT (`src/policy/system-policy.ts`)
- MCP stdio implementation (`initialize` / `tools/list` / `tools/call`)
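The stdio transport speaks JSON-RPC with `Content-Length` framing. As a minimal sketch, the frame for a `tools/list` request can be built like this (the request body is illustrative; a real session sends `initialize` first — see `docs/spec.md` for the actual schema):

```shell
# Build a Content-Length-framed MCP request (illustrative body).
body='{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
printf 'Content-Length: %s\r\n\r\n%s' "${#body}" "$body"
```

Piping such frames into `node build/index.js --stdio` should yield `Content-Length`-framed responses on stdout.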
## Requirements

- Node.js v20 or higher (recommended: v24)
- npm (bundled with Node)
- OpenAI API key (passed via environment variable) or a Codex CLI session logged in via `codex login`
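To confirm the runtime meets the version requirement, a quick shell check (a sketch; assumes `node` is on `PATH`):

```shell
# Extract the major version from `node -v` (e.g. v24.1.0 -> 24) and compare.
major=$(node -v | sed 's/^v\([0-9]*\).*/\1/')
if [ "$major" -ge 20 ]; then
  echo "Node v$major OK"
else
  echo "Node v$major is too old; v20 or higher is required" >&2
fi
```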
Security/Privacy defaults:

- No local interaction history is written (`server.history.enabled=false`).
- Non-debug logging is suppressed by default (`server.quiet=true`).
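These defaults map to the following YAML keys (a sketch inferred from the flags above; confirm the exact key names against `config/config.yaml.example`):

```yaml
server:
  history:
    enabled: false   # no local interaction history written
  quiet: true        # suppress non-debug logging
```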
## Minimal Setup (Local Only)

- Required setting: either `codex login` (recommended) or the environment variable `OPENAI_API_KEY` (no YAML needed)
- Build locally, then start the stdio server:

```shell
npm ci && npm run build
node build/index.js --stdio
```

YAML can be added later (default path: macOS/Linux `~/.config/openai-responses-mcp/config.yaml`, Windows `%APPDATA%\openai-responses-mcp\config.yaml`).
## For Users (Using as MCP)

Refer to this section when using the server from MCP clients.
### 1) Example registration with Claude Code (local path)

Add the following entry to `~/.claude.json`:

```json
{
  "mcpServers": {
    "openai-responses": {
      "command": "node",
      "args": ["/ABS/PATH/openai-responses-mcp/build/index.js", "--stdio"],
      "env": { "OPENAI_API_KEY": "sk-..." }
    }
  }
}
```

Alternatively, register via the Claude Code CLI:

```shell
claude mcp add -s user -t stdio openai-responses -e OPENAI_API_KEY=sk-xxxx -- node /ABS/PATH/openai-responses-mcp/build/index.js --stdio
```
### 2) Example registration with OpenAI Codex

Add the following entry to `~/.codex/config.toml` (note: the server is a local build, so it is launched with `node` and an absolute path, matching the Claude Code example above):

```toml
[mcp_servers.openai-responses]
command = "node"
args = ["/ABS/PATH/openai-responses-mcp/build/index.js", "--stdio"]
# env block optional; Codex CLI auth.json is reused automatically when present
```
### 3) Instruction examples for CLAUDE.md or AGENTS.md

```markdown
### Problem-solving Policy
When encountering problems or implementation difficulties during development:
1. **Always consult openai-responses MCP**
   - Consultation is the highest priority and mandatory
   - Never implement based on independent judgment
2. **Always ask questions in English**
   - All questions to openai-responses MCP should be written in English
3. **Research alternative methods and latest best practices**
   - Use openai-responses MCP to collect solution methods and latest best practices
4. **Consider multiple solution approaches**
   - Do not immediately decide on one method; compare multiple options before deciding on a policy
5. **Document solutions**
   - After problem resolution, record procedures and solutions for quick response to recurrences
```
### 4) Direct execution (local)

```shell
export OPENAI_API_KEY="sk-..."
node /ABS/PATH/openai-responses-mcp/build/index.js --stdio --debug ./_debug.log --config ~/.config/openai-responses-mcp/config.yaml
```
### 5) Configuration (YAML optional)

Default path: macOS/Linux `~/.config/openai-responses-mcp/config.yaml`, Windows `%APPDATA%\openai-responses-mcp\config.yaml`

Minimal example:

```yaml
model_profiles:
  answer:
    model: gpt-5
    reasoning_effort: medium
    verbosity: medium
request:
  timeout_ms: 300000
  max_retries: 3
```

Sample: `config/config.yaml.example`
Optional external policy:

```yaml
policy:
  system:
    source: file
    path: ~/.config/openai-responses-mcp/policy.md
    merge: append  # replace | prepend | append
```

Sample: `config/policy.md.example`
### 6) Codex CLI Authentication Reuse

- When Codex CLI (`codex`) is installed and authenticated, the MCP server automatically reuses `$CODEX_HOME/auth.json` (default `~/.codex/auth.json`).
- Behavior summary:
  - Validates file permissions (enforces 0600 on Unix when `codex.permissionStrict: true`).
  - Transparently refreshes stale tokens via `codex auth refresh --json`.
  - Auto-runs `codex login --json` with bounded retries when credentials are missing.
  - Falls back to `OPENAI_API_KEY` within `auth.json` or environment variables when auto recovery fails.
- To disable, set `codex.enabled: false` in YAML or via CLI/ENV overrides.
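The permission check described above can be reproduced by hand. A sketch for Unix (the `stat` invocation differs between GNU and BSD, so both variants are tried):

```shell
# Check that Codex CLI credentials exist and carry 0600 permissions,
# mirroring the strict-permission validation described above.
auth="${CODEX_HOME:-$HOME/.codex}/auth.json"
if [ -f "$auth" ]; then
  perms=$(stat -c '%a' "$auth" 2>/dev/null || stat -f '%Lp' "$auth")
  if [ "$perms" = "600" ]; then
    echo "permissions OK"
  else
    echo "warning: $auth is $perms, expected 600"
  fi
else
  echo "no auth.json found; run 'codex login' or set OPENAI_API_KEY"
fi
```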
### 7) Logging and Debug

- Debug ON (console output): `--debug` / `DEBUG=1|true` / YAML `server.debug: true` (priority: CLI > ENV > YAML, unified determination)
- Debug ON (console + file mirror): `--debug ./_debug.log` or `DEBUG=./_debug.log`
- Debug OFF: only minimal operational logging

Additional notes (YAML control):

- `server.debug: true|false` (applies to all modules even when set only in YAML)
- `server.debug_file: <path|null>` (mirrors stderr to a file when specified)
### 8) Automated Monitoring (Optional)

- Run `npm run health` to emit a JSON health summary with exit codes (`0` healthy, `10` degraded, `1` catastrophic).
- Sample systemd service (`/etc/systemd/system/openai-responses-health.service`):

```ini
[Unit]
Description=OpenAI Responses MCP health probe
OnFailure=openai-responses-alert@%i.service

[Service]
Type=oneshot
ExecStart=/usr/bin/npm --prefix /path/to/openai-responses-mcp run health
SuccessExitStatus=0 10
```

- Timer (`/etc/systemd/system/openai-responses-health.timer`):

```ini
[Unit]
Description=Run OpenAI Responses MCP health probe every 5 minutes

[Timer]
OnBootSec=2m
OnUnitActiveSec=5m

[Install]
WantedBy=timers.target
```

- Define `openai-responses-alert@.service` to notify you (email, push, restart) when exit code `1` indicates a catastrophic failure. Degraded runs (exit code `10`) are logged but treated as success via `SuccessExitStatus`.
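Outside systemd, the same exit-code convention can be handled in a wrapper script. A sketch (the `health_status` helper is hypothetical, not part of the project):

```shell
# Classify `npm run health` exit codes: 0 healthy, 10 degraded, else catastrophic.
health_status() {
  case "$1" in
    0)  echo healthy ;;
    10) echo degraded ;;
    *)  echo catastrophic ;;
  esac
}

# Typical use (commented out; adjust the project path for your install):
#   npm --prefix /path/to/openai-responses-mcp run health
#   health_status "$?"
health_status 10
```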
## For Developers (Clone and Develop)

### 1) Fetch and Build

```shell
git clone https://github.com/<your-org>/openai-responses-mcp.git
cd openai-responses-mcp
npm i
npm run build
```
### 2) Smoke Test (MCP Framing)

```shell
npm run mcp:smoke | tee /tmp/mcp-smoke.out
grep -c '^Content-Length:' /tmp/mcp-smoke.out   # OK when the count is 3 or more
```
### 3) Local Startup (stdio)

```shell
export OPENAI_API_KEY="sk-..."
node build/index.js --stdio --debug ./_debug.log
```
### 4) Demo (sample query to OpenAI)

```shell
npm run mcp:quick -- "Today's temperature in Tokyo"
npm run mcp:smoke:ldjson   # NDJSON-compatible connectivity check
```
### 5) Documentation (references)

- Canonical specification: `docs/spec.md`
- References: `docs/reference/config-reference.md` / `docs/reference/client-setup-claude.md`
- Verification procedure: `docs/verification.md`
## Notes for Maintainers

- This repository is private/local. Do not publish to any registry.
## Troubleshooting (Key Points)

- Missing API key: `OPENAI_API_KEY` is not set. Verify environment variables.
- `Cannot find module build/index.js`: build not run; execute `npm run build`.
- Framing mismatch: run `npm run mcp:smoke` to confirm, and rebuild as needed.
- Frequent 429/5xx responses: adjust `request.max_retries` / `request.timeout_ms` (YAML).
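The first two checks above can be scripted. A sketch, run from the repository root:

```shell
# Quick pre-flight diagnostics for the most common failures.
[ -n "${OPENAI_API_KEY:-}" ] || echo "OPENAI_API_KEY is not set"
[ -f build/index.js ] || echo "build/index.js is missing; run: npm run build"
```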
## License

MIT

## Notes

openai-responses-mcp development notes, built with both Codex and Claude Code.