VividNightmareUnleashed/claude-o3pro-mcp
Claude o3-pro MCP Integration
An MCP (Model Context Protocol) server that enables Claude Code to leverage OpenAI's o3-pro model for complex reasoning tasks.
Overview
This MCP server provides a bridge between Claude Code and OpenAI's o3-pro model, allowing you to:
- Delegate complex problems to o3-pro's deep reasoning capabilities
- Check status of long-running o3-pro requests
- Track costs with per-request pricing in cents
- Leverage Claude's codebase navigation with o3-pro's problem-solving
Key Features
- Smart Context Preparation: Claude gathers all relevant code and context, then packages it optimally for o3-pro
- Two-Phase Processing: Create the request and retrieve results separately to handle long processing times (see the sketch after this list)
- Cost Tracking: Shows estimated and actual costs per request
- Reasoning Summaries: Optional display of o3-pro's thought process
- Environment Awareness: Automatically includes working directory, git status, and project type
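The two-phase flow corresponds to background mode in OpenAI's Responses API: the create call returns an ID immediately, and the result is fetched later. Below is a minimal sketch of the create side, assuming the openai Node SDK and a background request; the README does not show the server's actual implementation.

import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Phase 1: submit the request in background mode; this returns right away
// with an ID rather than blocking for minutes while o3-pro reasons.
const response = await client.responses.create({
  model: "o3-pro",
  input: "Analyze this algorithm: ...",
  background: true,
});

console.log(response.id, response.status); // e.g. "resp_abc123..." "queued"

Phase 2 (retrieval) is sketched under How It Works below.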
Installation
- Clone this repository:
git clone https://github.com/yourusername/claude-o3pro-mcp.git
cd claude-o3pro-mcp
- Install dependencies:
npm install
- Build the project:
npm run build
- Create a .env file based on .env.example:
cp .env.example .env
- Add your OpenAI API key to .env:
OPENAI_API_KEY=sk-your-openai-api-key-here
Configuration
Add the MCP server to Claude Code:
claude mcp add file:///path/to/claude-o3pro-mcp/dist/index.js
Or add it to your Claude Code configuration manually:
{
"mcpServers": {
"o3pro": {
"command": "node",
"args": ["/path/to/claude-o3pro-mcp/dist/index.js"],
"env": {
"OPENAI_API_KEY": "sk-your-key-here"
}
}
}
}
Usage
Tools Available
- @pro - Create an o3-pro request for deep analysis
  - Returns a response ID immediately
  - Claude will automatically gather relevant code and context before calling o3-pro
  - Simply describe your problem and Claude will handle the rest
- @pro_retrieve - Check status and retrieve results
  - Use the response_id from the pro tool
  - Call repeatedly until status shows 'completed'
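Internally, an MCP server exposes these as named tools. Here is a sketch of how they might be registered with the official TypeScript SDK (@modelcontextprotocol/sdk); the tool names match this README, but the helper functions are hypothetical placeholders for the server's real logic.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical helpers standing in for the server's real logic.
declare function createBackgroundRequest(query: string): Promise<string>;
declare function checkStatus(responseId: string): Promise<string>;

const server = new McpServer({ name: "o3pro", version: "1.0.0" });

server.tool(
  "pro",
  "Create an o3-pro request for deep analysis",
  { query: z.string() },
  async ({ query }) => ({
    content: [{ type: "text", text: `Response ID: ${await createBackgroundRequest(query)}` }],
  }),
);

server.tool(
  "pro_retrieve",
  "Check status and retrieve results of an o3-pro request",
  { response_id: z.string() },
  async ({ response_id }) => ({
    content: [{ type: "text", text: await checkStatus(response_id) }],
  }),
);

await server.connect(new StdioServerTransport());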
Best Practices
- Let Claude gather context first: o3-pro excels at reasoning but not at navigating codebases. Let Claude collect all relevant files and context before invoking o3-pro.
- Use for genuinely complex problems: With costs of $20/million input tokens and $80/million output tokens, reserve o3-pro for problems that truly benefit from deep reasoning.
- Provide complete context: Since o3-pro has a 200k-token context limit, ensure all necessary information is included in your query (see the sketch after this list).
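With that 200k-token ceiling in mind, it helps to estimate context size before submitting. A rough sketch using the common ~4-characters-per-token heuristic (an approximation, not o3-pro's real tokenizer):

const CONTEXT_LIMIT = 200_000; // o3-pro's context window, per the note above

// Crude estimate: roughly 4 characters per token for English text and code.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function fitsInContext(query: string, fileContents: string[]): boolean {
  const total =
    estimateTokens(query) +
    fileContents.reduce((sum, contents) => sum + estimateTokens(contents), 0);
  return total <= CONTEXT_LIMIT;
}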
Example Workflow
User: I need to optimize this graph algorithm for finding shortest paths in a weighted directed graph with negative edges
Claude: I'll examine your current implementation and gather the relevant context to send to o3-pro for deep analysis.
[Claude automatically:
1. Navigates the codebase
2. Reads relevant files
3. Understands the structure
4. Calls @pro with the complete context]
🎯 o3-pro Request Created
📋 Response ID: resp_abc123...
💰 Estimated Cost: 45.0 cents
⏱️ Status: queued
💡 Next step: Use the pro_retrieve tool with response_id "resp_abc123..." to check the status and retrieve the result.
User: Check the status
Claude: [Calls @pro_retrieve with the response ID]
🧠 o3-pro is thinking deeply... Elapsed: 2m 34s
User: [After a few minutes] Check again
Claude: [Calls @pro_retrieve again]
🎯 o3-pro Analysis Complete
📋 Response ID: resp_abc123...
⏱️ Reasoning Time: 5m 23s
💰 Request Cost: 52.3 cents
📝 Response:
[Detailed solution with optimized algorithm]
How It Works
1. You describe the problem to Claude
2. Claude gathers context - navigates your codebase, reads relevant files
3. Claude calls @pro - automatically provides all gathered context to o3-pro
4. o3-pro reasons deeply - uses its full capability on the complete problem
5. You get the solution - with cost and reasoning time information
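Step 5 boils down to polling the response ID until the run finishes. A sketch of what repeated @pro_retrieve calls amount to, again assuming the openai Node SDK's responses.retrieve and its status field (the polling interval is an arbitrary choice):

import OpenAI from "openai";

const client = new OpenAI();

async function waitForResult(responseId: string, intervalMs = 10_000): Promise<string> {
  for (;;) {
    const resp = await client.responses.retrieve(responseId);
    if (resp.status === "completed") return resp.output_text ?? "";
    if (resp.status === "failed" || resp.status === "cancelled") {
      throw new Error(`Request ${responseId} ended with status ${resp.status}`);
    }
    // Still queued or in progress: wait before polling again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}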
Environment Variables
- OPENAI_API_KEY (required): Your OpenAI API key
- MAX_COST_PER_SESSION: Maximum cost per session in USD (default: 10.00)
- TIMEOUT_SECONDS: Maximum time in seconds for o3-pro requests (default: 300)
- CACHE_TTL_MINUTES: Cache duration in minutes for similar queries (default: 60)
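A sketch of how these variables might be loaded with defaults applied; the names and defaults come from the list above, but the loading code itself is illustrative:

import "dotenv/config"; // reads .env into process.env

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is required");
}

const config = {
  apiKey: process.env.OPENAI_API_KEY,
  maxCostPerSession: Number(process.env.MAX_COST_PER_SESSION ?? "10.00"), // USD
  timeoutSeconds: Number(process.env.TIMEOUT_SECONDS ?? "300"),
  cacheTtlMinutes: Number(process.env.CACHE_TTL_MINUTES ?? "60"),
};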
Cost Information
o3-pro pricing (as of 2025):
- Input: $20 per million tokens
- Output: $80 per million tokens (includes reasoning tokens)
Average request cost: $0.50 - $5.00 depending on problem complexity
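At those rates, per-request cost is a linear function of token counts. A worked sketch of the arithmetic (the formula follows directly from the prices above; the server's exact accounting may differ):

// $20 per 1M input tokens, $80 per 1M output tokens (reasoning tokens
// count as output), reported in cents as the tools do.
function requestCostCents(inputTokens: number, outputTokens: number): number {
  const dollars = (inputTokens * 20 + outputTokens * 80) / 1_000_000;
  return Math.round(dollars * 1000) / 10; // cents, one decimal place
}

// Example: 100k input + 50k output tokens = $2.00 + $4.00 = 600.0 cents.
console.log(requestCostCents(100_000, 50_000)); // 600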
Development
# Run in development mode
npm run dev
# Build for production
npm run build
# Start production server
npm start
Architecture
- MCP Server: Handles communication with Claude Code
- OpenAI Client: Manages o3-pro Responses API interactions
- Context Preparer: Optimizes context for o3-pro's strengths
- Progress Handler: Provides real-time status updates
- Cost Tracker: Monitors and limits spending
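The Cost Tracker's limiting behavior (and the "Cost limit exceeded" error in Troubleshooting below) could be as simple as this sketch, assuming a running per-session total checked against MAX_COST_PER_SESSION:

class CostTracker {
  private spentUsd = 0;

  constructor(private readonly limitUsd: number) {}

  // Called before each o3-pro request with its estimated cost.
  assertWithinBudget(estimatedUsd: number): void {
    if (this.spentUsd + estimatedUsd > this.limitUsd) {
      throw new Error(`Cost limit exceeded: session limit is $${this.limitUsd.toFixed(2)}`);
    }
  }

  // Called after a request completes with its actual cost.
  record(actualUsd: number): void {
    this.spentUsd += actualUsd;
  }
}

const tracker = new CostTracker(Number(process.env.MAX_COST_PER_SESSION ?? "10.00"));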
Troubleshooting
- "Cost limit exceeded": Increase
MAX_COST_PER_SESSION
in.env
- Timeout errors: Increase
TIMEOUT_SECONDS
for very complex problems - No progress updates: Ensure your Claude Code version supports MCP notifications
License
MIT