LLM Integration
MCPHub provides OpenRouter-compatible LLM endpoints, allowing you to use your favorite AI models with the same familiar API interface you already know and love.
Quick Start
🚀 Get Started in Minutes
MCPHub's LLM API is fully compatible with OpenRouter's interface. If you've used OpenRouter before, you can switch to MCPHub with just a simple endpoint change!
MCPHub Endpoint
https://api.mcphub.com/v1
OpenRouter Compatible
Same API interface, same parameters, same response format
Using the OpenAI SDK
The easiest way to get started is using the official OpenAI SDK, just like with OpenRouter. Simply point it to MCPHub's endpoint:
Python
```python
from openai import OpenAI

# Initialize the client with the MCPHub endpoint
client = OpenAI(
    base_url="https://api.mcphub.com/v1",
    api_key="<YOUR_MCPHUB_API_KEY>",
)

# Make a chat completion request
completion = client.chat.completions.create(
    model="openai/gpt-4o-2024-05-13",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! What model are you?"},
    ],
)

print(completion.choices[0].message.content)
```
TypeScript/JavaScript
```typescript
import OpenAI from 'openai';

// Initialize the client with the MCPHub endpoint
const client = new OpenAI({
  baseURL: 'https://api.mcphub.com/v1',
  apiKey: '<YOUR_MCPHUB_API_KEY>',
});

// Make a chat completion request
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4o-2024-05-13',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello! What model are you?' },
  ],
});

console.log(completion.choices[0].message.content);
```
Using the API Directly
You can also make direct HTTP requests to the MCPHub API using any HTTP client:
cURL
```bash
curl -X POST "https://api.mcphub.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_MCPHUB_API_KEY>" \
  -d '{
    "model": "openai/gpt-4o-2024-05-13",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello! What model are you?" }
    ]
  }'
```
Python (requests)
```python
import requests
import json

response = requests.post(
    url="https://api.mcphub.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_MCPHUB_API_KEY>",
        "Content-Type": "application/json",
    },
    data=json.dumps({
        "model": "openai/gpt-4o-2024-05-13",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello! What model are you?"},
        ],
    }),
)

result = response.json()
print(result["choices"][0]["message"]["content"])
```
Available Models
MCPHub supports a wide range of popular LLM models. Here are some examples:
OpenAI Models
openai/gpt-4o-2024-05-13
openai/gpt-4o
openai/gpt-4o-mini
Other Popular Models
claude-3-5-sonnet-20240620
gemini-2.0-flash
deepseek-chat
📋 Get Full Model List
You can retrieve the complete list of available models using the models endpoint:
GET https://api.mcphub.com/v1/models
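As a sketch, assuming the endpoint returns the OpenAI-style list format (an object with a `data` array of model entries), you could fetch the list and extract the model IDs like this; the `fetch_models` helper is our own, not part of any SDK:

```python
import requests

def list_model_ids(models_response: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models payload."""
    return [entry["id"] for entry in models_response.get("data", [])]

def fetch_models(api_key: str, base_url: str = "https://api.mcphub.com/v1") -> list[str]:
    """Fetch the full model list from MCPHub (requires a valid API key)."""
    response = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    response.raise_for_status()
    return list_model_ids(response.json())

# The parsing step, shown against a sample payload in the assumed format:
sample = {"data": [{"id": "openai/gpt-4o"}, {"id": "deepseek-chat"}]}
print(list_model_ids(sample))  # ['openai/gpt-4o', 'deepseek-chat']
```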
Authentication
To use MCPHub's LLM API, you'll need an API key. Here's how to get one:
Create an Account
Sign up for a free MCPHub account if you haven't already.
Go to Settings
Navigate to Settings → API Keys in your MCPHub dashboard.
Generate API Key
Create a new API key and copy it to use in your applications.
🔐 Security Best Practices
- Never expose your API key in client-side code
- Use environment variables to store your API key
- Rotate your API keys regularly
- Only grant necessary permissions to your API keys
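For example, the environment-variable practice from the list above might look like this in Python (the variable name `MCPHUB_API_KEY` is just a convention, not something MCPHub requires):

```python
import os

def load_api_key(var_name: str = "MCPHUB_API_KEY") -> str:
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first")
    return key

# Usage (assumes the variable is exported in your shell):
# client = OpenAI(base_url="https://api.mcphub.com/v1", api_key=load_api_key())
```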
Streaming Responses
MCPHub supports streaming responses for real-time chat applications. Just add `stream: true` to your request:
```typescript
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4o-2024-05-13',
  messages: [
    { role: 'user', content: 'Tell me a story' },
  ],
  stream: true, // Enable streaming
});

for await (const chunk of completion) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}
```
Error Handling
MCPHub returns standard HTTP status codes and error messages compatible with OpenRouter's format:
Common Status Codes
- `200` - Success
- `400` - Bad Request
- `401` - Unauthorized
- `429` - Rate Limited
- `500` - Server Error
Error Response Format
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```
Python Error Handling Example
Rather than matching on error strings, you can catch the OpenAI SDK's typed exceptions:

```python
import openai

try:
    completion = client.chat.completions.create(
        model="openai/gpt-4o-2024-05-13",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(completion.choices[0].message.content)
except openai.AuthenticationError:
    print("Check your API key")
except openai.RateLimitError:
    print("Rate limit exceeded, please try again later")
except openai.APIError as e:
    print(f"Error: {e}")
```
Migrating from OpenRouter
Switching from OpenRouter to MCPHub is straightforward. Here's what you need to change:
Simple 2-Step Migration
1. Update the Base URL

Before: `base_url="https://openrouter.ai/api/v1"`
After: `base_url="https://api.mcphub.com/v1"`

2. Update Your API Key

Before: `api_key="sk-or-v1-..."`
After: `api_key="your-mcphub-api-key"`
✅ That's it! Everything else stays exactly the same: same models, same parameters, same response format.
Next Steps
📚 Explore More
- API Reference - Complete API documentation
- Integration Guide - Advanced integration patterns
- Security - Learn about our security practices
🚀 Get Started
- Generate API Key - Create your first API key
- Add Credits - Purchase credits for API usage
- Monitor Usage - Track your API requests