keyhoffman/llm-handoff-from-claude-mcp
LLM Handoff MCP Server
An MCP (Model Context Protocol) server that allows Claude to query other LLMs (ChatGPT, Perplexity, Gemini) for verification and comparison without manual copy-pasting.
The Problem
When working on complex technical problems with Claude, you often want to verify Claude's responses or get alternative perspectives from other LLMs. The typical workflow involves:
- Having a detailed conversation with Claude about a problem
- Manually copying the context and question
- Opening ChatGPT, Perplexity, or Gemini in separate tabs
- Pasting the context and question into each
- Waiting for responses
- Manually comparing the different answers
This is tedious, time-consuming, and breaks your flow when you're deep in a technical discussion.
The Solution
This MCP server eliminates the copy-paste workflow by allowing Claude to directly query other LLMs on your behalf. When you want verification or alternative perspectives, you simply tell Claude "ask the other LLMs what they think about this" and it automatically:
- Takes the current conversation context
- Formulates an appropriate prompt with all the relevant details
- Queries ChatGPT, Perplexity, and/or Gemini in parallel
- Returns all responses formatted for easy comparison
- Lets you continue the conversation with all perspectives in one place
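The fan-out-and-compare step above can be sketched roughly as follows. This is a minimal TypeScript sketch, not the server's actual internals: the `Provider` type and `askAll` function are illustrative names, and real provider clients would wrap the OpenAI, Perplexity, and Gemini APIs.

```typescript
// Sketch: query several LLM providers in parallel and format the answers
// for side-by-side comparison. `Provider` and `askAll` are illustrative names.
type Provider = (prompt: string) => Promise<string>;

async function askAll(
  prompt: string,
  providers: Record<string, Provider>,
): Promise<string> {
  const names = Object.keys(providers);
  // allSettled: a failure in one provider must not sink the others.
  const results = await Promise.allSettled(
    names.map((name) => providers[name](prompt)),
  );
  // One heading per provider so the answers are easy to compare.
  return results
    .map((res, i) =>
      res.status === "fulfilled"
        ? `## ${names[i]}\n${res.value}`
        : `## ${names[i]}\nError: ${String(res.reason)}`,
    )
    .join("\n\n");
}
```

In a real server the formatted string would be returned as the MCP tool result; here the providers can be any async functions, which also makes the fan-out easy to test with stubs.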
Available Tools
ask_chatgpt(prompt)
- Query ChatGPT directly
ask_perplexity(prompt)
- Query Perplexity directly
ask_gemini(prompt)
- Query Gemini directly
ask_all_llms(prompt)
- Query all available LLMs in parallel (most useful)
The server only creates tools for LLMs you have API keys configured for.
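That key-based gating can be sketched like this. The environment variable names match `env.example`; the `toolsFor` helper and `KEY_TO_TOOL` map are hypothetical, included only to show the idea:

```typescript
// Sketch: expose a tool only when its API key is set in the environment.
// Key names come from env.example; `toolsFor` and `KEY_TO_TOOL` are illustrative.
const KEY_TO_TOOL: Record<string, string> = {
  OPENAI_API_KEY: "ask_chatgpt",
  PERPLEXITY_API_KEY: "ask_perplexity",
  GEMINI_API_KEY: "ask_gemini",
};

function toolsFor(env: Record<string, string | undefined>): string[] {
  const single = Object.entries(KEY_TO_TOOL)
    .filter(([key]) => Boolean(env[key]))
    .map(([, tool]) => tool);
  // ask_all_llms only makes sense if at least one backend is configured.
  return single.length > 0 ? [...single, "ask_all_llms"] : [];
}
```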
Setup
1. Install Dependencies and Build
npm install
npm run build
2. Configure API Keys
Copy the example environment file and add your API keys:
cp env.example .env
Edit .env with your actual API keys:
OPENAI_API_KEY=sk-your-actual-openai-key
PERPLEXITY_API_KEY=your-actual-perplexity-key
GEMINI_API_KEY=your-actual-gemini-key
You don't need all three: the server works with whichever API keys you have configured.
3. Add to Claude Desktop
- Open Claude Desktop
- Go to Settings → Developer
- Click Edit Config next to "Local MCP servers"
- Add this configuration to your claude_desktop_config.json:
{
  "mcpServers": {
    "llm-handoff": {
      "command": "node",
      "args": [
        "/Users/yourusername/path/to/llm-handoff-mcp/dist/server.js"
      ],
      "env": {}
    }
  }
}
Replace the path with the actual path to your project directory.
- Save the config file and restart Claude Desktop
4. Usage
Once configured, you can use natural language to invoke the tools:
- "Ask the other LLMs what they think about this"
- "Get ChatGPT and Perplexity's take on this problem"
- "Verify this solution with other models"
- "Cross-check this with all available LLMs"
Claude will automatically include the relevant conversation context when querying the other LLMs.
Updating
When you make changes to the code:
npm run build
Then restart Claude Desktop to use the updated version.