GPT MCP Server
An MCP (Model Context Protocol) server that brings OpenAI GPT capabilities to Claude Code and other MCP clients. Built following Anthropic's official MCP guidelines.
v2.0.0 - Now using OpenAI's Responses API (v1/responses) for full gpt-5.1-codex support!
Why GPT + Claude?
While Claude excels at many tasks, GPT offers unique capabilities:
- Different Training Data - Alternative perspective from different model training
- Reasoning Models - Access to GPT's reasoning effort levels (none/low/medium/high)
- Model Variety - Access to GPT-4, GPT-4.1, GPT-5.1, GPT-5.1-Codex and future models
- Second Opinion - Get a different AI's take on complex problems
Features
| Tool | Description |
|---|---|
| `gpt_generate` | Simple text generation with input prompts |
| `gpt_messages` | Multi-turn structured conversations |
| `gpt_status` | Server status and configuration check |
Default Model: gpt-5.1-codex (configurable via GPT_MODEL env var)
API: OpenAI Responses API (v1/responses) - supports all GPT models including gpt-5.1-codex
Why Responses API?
| Feature | Chat Completions | Responses API |
|---|---|---|
| `gpt-5.1-codex` support | ❌ No | ✅ Yes |
| All GPT models | ✅ Yes | ✅ Yes |
| Built-in web search | ❌ No | ✅ Yes |
| Built-in file search | ❌ No | ✅ Yes |
The Responses API is OpenAI's newest interface, optimized for agentic coding tasks.
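For orientation, a raw call to the Responses API looks roughly like the sketch below. This is illustrative, not this server's code: the endpoint and payload shape follow OpenAI's published `v1/responses` interface, and `buildRequest`/`callResponses` are hypothetical helper names.

```typescript
// Minimal sketch of a direct v1/responses request (illustrative helpers,
// not part of this server's implementation).
type Effort = "none" | "low" | "medium" | "high";

interface ResponsesRequest {
  model: string;
  input: string;
  instructions?: string;
  reasoning?: { effort: Effort };
}

// Build a request payload with the server's documented defaults.
function buildRequest(input: string, effort: Effort = "low"): ResponsesRequest {
  return { model: "gpt-5.1-codex", input, reasoning: { effort } };
}

// Node 18+ ships a global fetch, matching this project's prerequisite.
async function callResponses(apiKey: string, req: ResponsesRequest): Promise<unknown> {
  const res = await fetch("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  return res.json();
}

console.log(JSON.stringify(buildRequest("Explain tail-call optimization", "high")));
```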
Reasoning Control
GPT-5.x models support configurable reasoning depth via reasoning_effort:
| Value | Behavior |
|---|---|
| (not set) | Default: `low` - adaptive reasoning enabled |
| `none` | Disable reasoning (like GPT-4.1, fastest) |
| `low` | Light reasoning (fast, server default) |
| `medium` | Moderate reasoning depth |
| `high` | Deep reasoning, best for complex tasks |
Note: This server defaults to `low`, the minimum level supported by gpt-5.1-codex. Use `none` for maximum speed, `high` for complex analysis.
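For example, a `gpt_generate` call that requests deep reasoning could pass arguments like the following (illustrative prompt and values):

```json
{
  "input": "Review this locking strategy for race conditions",
  "reasoning_effort": "high"
}
```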
Response Format
Both generation tools support response_format parameter:
| Value | Description |
|---|---|
| `markdown` | Human-readable markdown (default) |
| `json` | Structured JSON for programmatic use |
Response Limits
Responses are automatically truncated at 25,000 characters to prevent token overflow. If truncation occurs, a warning is appended to the response.
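The truncation behavior can be pictured as the sketch below. The 25,000-character limit matches the documentation above; the exact warning wording is an assumption, not the server's literal text.

```typescript
const CHARACTER_LIMIT = 25_000; // matches the documented 25,000-character cap

// Truncate a response and append a warning, mirroring the documented behavior.
// The warning text here is illustrative.
function truncateResponse(text: string, limit: number = CHARACTER_LIMIT): string {
  if (text.length <= limit) return text;
  return text.slice(0, limit) +
    `\n\n[Warning: response truncated at ${limit} characters]`;
}
```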
Quick Start
Prerequisites
- Node.js 18+ (download from nodejs.org)
- OpenAI API Key (create one on the OpenAI Platform)
Installation
```bash
# Clone the repository
git clone https://github.com/george7979/gpt-mcp-server.git
cd gpt-mcp-server

# Install dependencies
npm install

# Build the server
npm run build
```
Configuration
Option 1: Quick Install (Recommended)
Use Claude Code's built-in command:
```bash
claude mcp add gpt-mcp-server node /absolute/path/to/gpt-mcp-server/dist/index.js -e OPENAI_API_KEY=your-api-key-here
```
Tip: Run `pwd` in the gpt-mcp-server directory to get the absolute path.
To install globally (available in all projects):
```bash
claude mcp add gpt-mcp-server node /path/to/dist/index.js -e OPENAI_API_KEY=your-key --scope user
```
Option 2: Manual Configuration
Add to your Claude Code MCP settings file (`~/.claude.json`):

```json
{
  "mcpServers": {
    "gpt-mcp-server": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/gpt-mcp-server/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "GPT_MODEL": "gpt-5.1-codex"
      }
    }
  }
}
```

`GPT_MODEL` is optional and is validated at startup.
Option 3: VS Code with Claude Extension
Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "gpt-mcp-server": {
      "type": "stdio",
      "command": "node",
      "args": ["${workspaceFolder}/path/to/gpt-mcp-server/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "GPT_MODEL": "gpt-5.1-codex"
      }
    }
  }
}
```

Here too, `GPT_MODEL` is optional and is validated at startup.
Note: Replace the path with your actual installation location. You can find it with `pwd` in the gpt-mcp-server directory.
Verify Installation
Restart Claude Code after configuration. You should see the GPT tools available:
- `gpt_generate` - Generate text using the OpenAI GPT API
- `gpt_messages` - Multi-turn conversation with GPT
- `gpt_status` - Check server status and configuration
Usage Examples
Simple Generation
Ask GPT: "Explain the difference between async and await in JavaScript"
Multi-turn Conversation
Have a conversation with GPT about software architecture,
maintaining context across multiple exchanges.
Check Configuration
Use `gpt_status` to see which model is active, which API is in use, and whether a fallback occurred.
Tool Reference
gpt_generate
Generate text from a single prompt.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `input` | string | Yes | The prompt or question |
| `model` | string | No | Model to use (default: `gpt-5.1-codex`) |
| `instructions` | string | No | System instructions |
| `reasoning_effort` | string | No | `none`/`low`/`medium`/`high` (GPT-5.x reasoning control) |
| `response_format` | string | No | `markdown` (default) or `json` |
| `temperature` | number | No | Randomness 0-2 (default: 1) |
| `max_output_tokens` | number | No | Maximum output length |
| `top_p` | number | No | Nucleus sampling 0-1 |
gpt_messages
Multi-turn conversation with message history.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `messages` | array | Yes | Array of `{role, content}` objects |
| `model` | string | No | Model to use (default: `gpt-5.1-codex`) |
| `instructions` | string | No | System instructions |
| `reasoning_effort` | string | No | `none`/`low`/`medium`/`high` (GPT-5.x reasoning control) |
| `response_format` | string | No | `markdown` (default) or `json` |
| `temperature` | number | No | Randomness 0-2 |
| `max_output_tokens` | number | No | Maximum output length |
Message format:
```json
{
  "role": "user" | "assistant",
  "content": "message text"
}
```
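Putting it together, a complete `gpt_messages` argument object might look like this (illustrative conversation and values):

```json
{
  "messages": [
    { "role": "user", "content": "What are the trade-offs of event sourcing?" },
    { "role": "assistant", "content": "Event sourcing gives you a full audit log..." },
    { "role": "user", "content": "How does that interact with GDPR deletion requests?" }
  ],
  "instructions": "You are a pragmatic software architect.",
  "reasoning_effort": "medium"
}
```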
gpt_status
Check server status and configuration.
| Parameter | Type | Required | Description |
|---|---|---|---|
| (none) | - | - | No parameters required |
Returns:
- `active_model` - Currently active model
- `configured_model` - Model from `GPT_MODEL` env var (if set)
- `fallback_model` - Default fallback model
- `fallback_used` - Whether fallback was triggered due to an invalid model
- `default_reasoning` - Default `reasoning_effort` level (`low`)
- `character_limit` - Maximum response character limit (25000)
- `server_version` - Server version
- `api_type` - OpenAI API type (Responses API, `v1/responses`)
- `api_key_configured` - Whether `OPENAI_API_KEY` is set
Development
```bash
# Development with hot reload
npm run dev

# Build TypeScript
npm run build

# Run compiled server
npm start

# Test with MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js
```
Troubleshooting
"OPENAI_API_KEY environment variable is required"
Make sure your Claude Code configuration includes the env block with your API key.
"Invalid API key"
- Verify your key at OpenAI Platform
- Make sure there are no extra spaces or quotes around the key
- Check your API key has sufficient credits
"API quota exceeded"
Check your billing at OpenAI Platform. You may need to add credits.
Tools not appearing in Claude Code
- Verify the path in your configuration is correct (use absolute path)
- Make sure you ran `npm run build`
- Restart Claude Code after configuration changes
Model validation and fallback
If you configure an invalid model via GPT_MODEL, the server automatically falls back to gpt-5.1-codex. The warning is logged to stderr but may not be visible in Claude Code.
To check your current configuration status:
- Use the `gpt_status` tool - it shows the active model, API type, and whether fallback occurred
- Run Claude Code with the `--verbose` flag to see MCP server logs
Project Structure
```
gpt-mcp-server/
├── src/
│   └── index.ts        # Server implementation (Responses API)
├── dist/               # Compiled output
├── docs/
│   ├── PRD.md          # Product requirements
│   ├── PLAN.md         # Implementation roadmap
│   └── TECH.md         # Technical specification
├── package.json
├── tsconfig.json
├── .env.example        # Environment template
├── README.md           # This file
└── CLAUDE.md           # AI assistant context
```
Version History
- v2.0.0 - Migrated to the Responses API (`v1/responses`), enabling full `gpt-5.1-codex` support
- v1.1.0 - Added `response_format`, improved error handling
- v1.0.0 - Initial release with Chat Completions API
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - see the `LICENSE` file for details.
Acknowledgments
- Anthropic - MCP Protocol and Claude
- OpenAI - GPT API and Responses API
- Model Context Protocol - Protocol specification
Made with Claude Code following Anthropic's MCP guidelines