GPT MCP Server


An MCP (Model Context Protocol) server that brings OpenAI GPT capabilities to Claude Code and other MCP clients. Built following Anthropic's official MCP guidelines.

v2.0.0 - Now using OpenAI's Responses API (v1/responses) for full gpt-5.1-codex support!

Why GPT + Claude?

While Claude excels at many tasks, GPT offers unique capabilities:

  • Different Training Data - Alternative perspective from different model training
  • Reasoning Models - Access to GPT's reasoning effort levels (none/low/medium/high)
  • Model Variety - Access to GPT-4, GPT-4.1, GPT-5.1, GPT-5.1-Codex and future models
  • Second Opinion - Get a different AI's take on complex problems

Features

| Tool | Description |
|------|-------------|
| `gpt_generate` | Simple text generation with input prompts |
| `gpt_messages` | Multi-turn structured conversations |
| `gpt_status` | Server status and configuration check |

Default Model: gpt-5.1-codex (configurable via GPT_MODEL env var)

API: OpenAI Responses API (v1/responses) - supports all GPT models including gpt-5.1-codex

Why Responses API?

| Feature | Chat Completions | Responses API |
|---------|------------------|---------------|
| gpt-5.1-codex support | ❌ No | ✅ Yes |
| All GPT models | ✅ Yes | ✅ Yes |
| Built-in web search | ❌ No | ✅ Yes |
| Built-in file search | ❌ No | ✅ Yes |

The Responses API is OpenAI's newest interface, optimized for agentic coding tasks.
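As a rough sketch, a Responses API call via the official `openai` npm package boils down to a small request payload. The helper below only builds that payload; the actual network call (commented out) assumes the SDK's `client.responses.create` method and `output_text` field:

```typescript
// Sketch of a minimal Responses API request payload (an assumption based
// on the API shape described above, not this server's exact code).
type ResponsesRequest = {
  model: string;
  input: string;
  instructions?: string;
};

function buildRequest(
  input: string,
  model = "gpt-5.1-codex",
  instructions?: string
): ResponsesRequest {
  const req: ResponsesRequest = { model, input };
  if (instructions) req.instructions = instructions; // omit when unset
  return req;
}

// Actual call, assuming the official `openai` package:
// const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// const res = await client.responses.create(buildRequest("Hello"));
// console.log(res.output_text);
```

The single `input` string maps onto the `gpt_generate` tool; `instructions` plays the role of a system prompt.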

Reasoning Control

GPT-5.x models support configurable reasoning depth via reasoning_effort:

| Value | Behavior |
|-------|----------|
| (not set) | Default: `low` - adaptive reasoning enabled |
| `none` | Disable reasoning (like GPT-4.1, fastest) |
| `low` | Light reasoning (fast, server default) |
| `medium` | Moderate reasoning depth |
| `high` | Deep reasoning, best for complex tasks |

Note: This server defaults to low as the minimum supported level for gpt-5.1-codex. Use none for maximum speed, high for complex analysis.
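In code, the defaulting rule above can be sketched as a tiny helper that attaches a `reasoning: { effort }` object to the request payload (the exact request shape is an assumption based on the Responses API; the default of `low` mirrors this server's documented behavior):

```typescript
type Effort = "none" | "low" | "medium" | "high";

// Apply the tool's reasoning_effort parameter to an outgoing payload,
// falling back to the server default of "low" when unset.
function applyReasoning(
  payload: Record<string, unknown>,
  effort?: Effort
): Record<string, unknown> {
  return { ...payload, reasoning: { effort: effort ?? "low" } };
}
```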

Response Format

Both generation tools support response_format parameter:

| Value | Description |
|-------|-------------|
| `markdown` | Human-readable markdown (default) |
| `json` | Structured JSON for programmatic use |

Response Limits

Responses are automatically truncated at 25,000 characters to prevent token overflow. If truncation occurs, a warning is appended to the response.
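The truncation rule is simple enough to sketch directly; note the warning text below is illustrative, not necessarily the server's exact wording:

```typescript
// Documented limit: responses are cut at 25,000 characters.
const CHARACTER_LIMIT = 25000;

function truncateResponse(text: string): string {
  if (text.length <= CHARACTER_LIMIT) return text;
  // Append a warning so the client knows output was cut off.
  return (
    text.slice(0, CHARACTER_LIMIT) +
    "\n\n[Warning: response truncated at 25,000 characters]"
  );
}
```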

Quick Start

Prerequisites

  • Node.js and npm
  • An OpenAI API key

Installation

```bash
# Clone the repository
git clone https://github.com/george7979/gpt-mcp-server.git
cd gpt-mcp-server

# Install dependencies
npm install

# Build the server
npm run build
```

Configuration

Option 1: Quick Install (Recommended)

Use Claude Code's built-in command:

```bash
claude mcp add gpt-mcp-server node /absolute/path/to/gpt-mcp-server/dist/index.js -e OPENAI_API_KEY=your-api-key-here
```

Tip: Run pwd in the gpt-mcp-server directory to get the absolute path.

To install globally (available in all projects):

```bash
claude mcp add gpt-mcp-server node /path/to/dist/index.js -e OPENAI_API_KEY=your-key --scope user
```

Option 2: Manual Configuration

Add to your Claude Code MCP settings file (~/.claude.json):

```json
{
  "mcpServers": {
    "gpt-mcp-server": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/gpt-mcp-server/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "GPT_MODEL": "gpt-5.1-codex"
      }
    }
  }
}
```

GPT_MODEL is optional and validated at startup. JSON does not allow comments, so omit the key entirely to use the default model.

Option 3: VS Code with Claude Extension

Add to .vscode/mcp.json

```json
{
  "servers": {
    "gpt-mcp-server": {
      "type": "stdio",
      "command": "node",
      "args": ["${workspaceFolder}/path/to/gpt-mcp-server/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "GPT_MODEL": "gpt-5.1-codex"
      }
    }
  }
}
```

GPT_MODEL is optional and validated at startup.

Note: Replace the path with your actual installation location. You can find it with pwd in the gpt-mcp-server directory.

Verify Installation

Restart Claude Code after configuration. You should see the GPT tools available:

gpt_generate - Generate text using OpenAI GPT API
gpt_messages - Multi-turn conversation with GPT
gpt_status   - Check server status and configuration

Usage Examples

Simple Generation

Ask GPT: "Explain the difference between async and await in JavaScript"

Multi-turn Conversation

Have a conversation with GPT about software architecture,
maintaining context across multiple exchanges.

Check Configuration

Use gpt_status to see which model is active, which API is in use, and whether a fallback occurred.

Tool Reference

gpt_generate

Generate text from a single prompt.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | string | Yes | The prompt or question |
| `model` | string | No | Model to use (default: gpt-5.1-codex) |
| `instructions` | string | No | System instructions |
| `reasoning_effort` | string | No | none/low/medium/high (GPT-5.x reasoning control) |
| `response_format` | string | No | markdown (default) or json |
| `temperature` | number | No | Randomness 0-2 (default: 1) |
| `max_output_tokens` | number | No | Maximum output length |
| `top_p` | number | No | Nucleus sampling 0-1 |
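A server handling these arguments would need to range-check the numeric parameters before forwarding them. This is an illustrative sketch based on the documented ranges, not the server's actual validation code:

```typescript
// Validate gpt_generate's numeric parameters against their documented ranges.
function validateParams(p: { temperature?: number; top_p?: number }): void {
  if (p.temperature !== undefined && (p.temperature < 0 || p.temperature > 2)) {
    throw new Error("temperature must be between 0 and 2");
  }
  if (p.top_p !== undefined && (p.top_p < 0 || p.top_p > 1)) {
    throw new Error("top_p must be between 0 and 1");
  }
}
```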

gpt_messages

Multi-turn conversation with message history.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | array | Yes | Array of {role, content} objects |
| `model` | string | No | Model to use (default: gpt-5.1-codex) |
| `instructions` | string | No | System instructions |
| `reasoning_effort` | string | No | none/low/medium/high (GPT-5.x reasoning control) |
| `response_format` | string | No | markdown (default) or json |
| `temperature` | number | No | Randomness 0-2 |
| `max_output_tokens` | number | No | Maximum output length |

Message format:

```
{
  "role": "user" | "assistant",
  "content": "message text"
}
```
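Since only `user` and `assistant` roles are accepted, a server would typically validate the history before passing it to the API. A minimal sketch (the role check is based on the format above; the pass-through to the Responses API `input` field is an assumption):

```typescript
type Message = { role: string; content: string };

// Validate a gpt_messages history: only user/assistant roles are allowed.
// On success, the array can be forwarded as the Responses API `input`.
function toInput(messages: Message[]): Message[] {
  for (const m of messages) {
    if (m.role !== "user" && m.role !== "assistant") {
      throw new Error(`unsupported role: ${m.role}`);
    }
  }
  return messages;
}
```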

gpt_status

Check server status and configuration.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| (none) | - | - | No parameters required |

Returns:

  • active_model - Currently used model
  • configured_model - Model from GPT_MODEL env var (if set)
  • fallback_model - Default fallback model
  • fallback_used - Whether fallback was triggered due to invalid model
  • default_reasoning - Default reasoning_effort level (low)
  • character_limit - Maximum response character limit (25000)
  • server_version - Server version
  • api_type - OpenAI API type (Responses API (v1/responses))
  • api_key_configured - Whether OPENAI_API_KEY is set

Development

```bash
# Development with hot reload
npm run dev

# Build TypeScript
npm run build

# Run compiled server
npm start

# Test with MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js
```

Troubleshooting

"OPENAI_API_KEY environment variable is required"

Make sure your Claude Code configuration includes the env block with your API key.

"Invalid API key"

  1. Verify your key at OpenAI Platform
  2. Make sure there are no extra spaces or quotes around the key
  3. Check your API key has sufficient credits

"API quota exceeded"

Check your billing at OpenAI Platform. You may need to add credits.

Tools not appearing in Claude Code

  1. Verify the path in your configuration is correct (use absolute path)
  2. Make sure you ran npm run build
  3. Restart Claude Code after configuration changes

Model validation and fallback

If you configure an invalid model via GPT_MODEL, the server automatically falls back to gpt-5.1-codex. The warning is logged to stderr but may not be visible in Claude Code.

To check your current configuration status:

  1. Use the gpt_status tool - it shows active model, API type, and whether fallback occurred
  2. Run Claude Code with --verbose flag to see MCP server logs
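The validate-and-fall-back behavior can be sketched as follows; the list of accepted models here is purely illustrative (the real server maintains its own), and the stderr message wording is hypothetical:

```typescript
const FALLBACK_MODEL = "gpt-5.1-codex";
// Hypothetical allow-list for illustration only.
const KNOWN_MODELS = ["gpt-4.1", "gpt-5.1", "gpt-5.1-codex"];

// Resolve the active model from GPT_MODEL, falling back on invalid values.
function resolveModel(envModel?: string): { active: string; fallbackUsed: boolean } {
  if (!envModel) return { active: FALLBACK_MODEL, fallbackUsed: false };
  if (KNOWN_MODELS.includes(envModel)) {
    return { active: envModel, fallbackUsed: false };
  }
  // Logged to stderr, so it may not surface in Claude Code's UI.
  console.error(`Unknown model "${envModel}", falling back to ${FALLBACK_MODEL}`);
  return { active: FALLBACK_MODEL, fallbackUsed: true };
}
```

The `fallbackUsed` flag is what `gpt_status` would surface so the silent stderr warning is still discoverable.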

Project Structure

```
gpt-mcp-server/
├── src/
│   └── index.ts          # Server implementation (Responses API)
├── dist/                 # Compiled output
├── docs/
│   ├── PRD.md            # Product requirements
│   ├── PLAN.md           # Implementation roadmap
│   └── TECH.md           # Technical specification
├── package.json
├── tsconfig.json
├── .env.example          # Environment template
├── README.md             # This file
└── CLAUDE.md             # AI assistant context
```

Version History

  • v2.0.0 - Migrated to Responses API (v1/responses), enabling full gpt-5.1-codex support
  • v1.1.0 - Added response_format, improved error handling
  • v1.0.0 - Initial release with Chat Completions API

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see the LICENSE file for details.

Made with Claude Code following Anthropic's MCP guidelines