LLM Integration
MCPHub provides OpenRouter-compatible LLM endpoints, allowing you to use your favorite AI models with the same familiar API interface you already know and love.
Quick Start
🚀 Get Started in Minutes
MCPHub's LLM API is fully compatible with OpenRouter's interface. If you've used OpenRouter before, you can switch to MCPHub with just a simple endpoint change!
MCPHub Endpoint
https://api.mcphub.com/v1

OpenRouter Compatible
Same API interface, same parameters, same response format
Using the OpenAI SDK
The easiest way to get started is using the official OpenAI SDK, just like with OpenRouter. Simply point it to MCPHub's endpoint:
Python

```python
from openai import OpenAI

# Initialize the client with the MCPHub endpoint
client = OpenAI(
    base_url="https://api.mcphub.com/v1",
    api_key="<YOUR_MCPHUB_API_KEY>",
)

# Make a chat completion request
completion = client.chat.completions.create(
    model="gpt-5-nano",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! What model are you?"},
    ],
)

print(completion.choices[0].message.content)
```

TypeScript/JavaScript
```typescript
import OpenAI from 'openai';

// Initialize the client with the MCPHub endpoint
const client = new OpenAI({
  baseURL: 'https://api.mcphub.com/v1',
  apiKey: '<YOUR_MCPHUB_API_KEY>',
});

// Make a chat completion request
const completion = await client.chat.completions.create({
  model: 'gpt-5-nano',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello! What model are you?' },
  ],
});

console.log(completion.choices[0].message.content);
```

Using the API Directly
You can also make direct HTTP requests to the MCPHub API using any HTTP client:
cURL

```shell
curl -X POST "https://api.mcphub.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <YOUR_MCPHUB_API_KEY>" \
  -d '{
    "model": "gpt-5-nano",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello! What model are you?"}
    ]
  }'
```

Python (requests)
```python
import requests

response = requests.post(
    url="https://api.mcphub.com/v1/chat/completions",
    headers={
        "X-API-Key": "<YOUR_MCPHUB_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-5-nano",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello! What model are you?"},
        ],
    },
)

result = response.json()
print(result["choices"][0]["message"]["content"])
```

Available Models
MCPHub supports a wide range of popular LLM models. Here are some examples:
OpenAI Models
- gpt-5-nano
- openai/gpt-4o
- openai/gpt-4o-mini
Other Popular Models
- claude-3-5-sonnet-20240620
- gemini-2.0-flash
- deepseek-chat
📋 Get Full Model List
You can retrieve the complete list of available models using the models endpoint:
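As a sketch, assuming the endpoint returns the OpenAI-style `{"data": [...]}` envelope (a common convention for OpenRouter-compatible APIs, not shown explicitly on this page), fetching and flattening the list might look like:

```python
import requests

def extract_model_ids(payload: dict) -> list[str]:
    # Assumes the OpenAI/OpenRouter-style envelope: {"data": [{"id": ...}, ...]}
    return [model["id"] for model in payload.get("data", [])]

def list_models(api_key: str) -> list[str]:
    response = requests.get(
        "https://api.mcphub.com/v1/models",
        headers={"X-API-Key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    return extract_model_ids(response.json())
```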
GET https://api.mcphub.com/v1/models

Authentication
To use MCPHub's LLM API, you'll need an API key. Here's how to get one:
Create an Account
Sign up for a free MCPHub account if you haven't already.
Go to Settings
Navigate to Settings → API Keys in your MCPHub dashboard.
Generate API Key
Create a new API key and copy it to use in your applications.
🔐 Security Best Practices
- Never expose your API key in client-side code
- Use environment variables to store your API key
- Rotate your API keys regularly
- Only grant necessary permissions to your API keys
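Following the environment-variable advice above, a minimal helper might look like this (the variable name `MCPHUB_API_KEY` is an assumed convention, not an official one):

```python
import os

def get_api_key(env_var: str = "MCPHUB_API_KEY") -> str:
    """Read the MCPHub API key from the environment.

    Fails loudly instead of silently sending an empty key to the API.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running")
    return key
```

You can then initialize the client with `OpenAI(base_url="https://api.mcphub.com/v1", api_key=get_api_key())` so the key never appears in source code.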
Streaming Responses
MCPHub supports streaming responses for real-time chat applications. Just add `stream: true` to your request:
```typescript
const completion = await client.chat.completions.create({
  model: 'gpt-5-nano',
  messages: [
    { role: 'user', content: 'Tell me a story' }
  ],
  stream: true  // Enable streaming
});

for await (const chunk of completion) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}
```

Error Handling
MCPHub returns standard HTTP status codes and error messages compatible with OpenRouter's format:
Common Status Codes
- 200 - Success
- 400 - Bad Request
- 401 - Unauthorized
- 429 - Rate Limited
- 500 - Server Error
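The status codes above can be mapped to handling actions with a small lookup table. A minimal sketch (the retry suggestions reflect common API practice, not MCPHub-specific guarantees):

```python
# Map MCPHub/OpenRouter-style status codes to suggested actions.
STATUS_ACTIONS = {
    200: "success",
    400: "bad request -- check the request body",
    401: "unauthorized -- check your API key",
    429: "rate limited -- retry with backoff",
    500: "server error -- retry later",
}

def describe_status(code: int) -> str:
    """Return a short human-readable action for an HTTP status code."""
    return STATUS_ACTIONS.get(code, f"unexpected status {code}")
```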
Error Response Format
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Python Error Handling Example
```python
from openai import AuthenticationError, RateLimitError

# `client` is the OpenAI client configured for MCPHub in the Quick Start example.
try:
    completion = client.chat.completions.create(
        model="gpt-5-nano",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(completion.choices[0].message.content)
except AuthenticationError:
    print("Check your API key")
except RateLimitError:
    print("Rate limit exceeded, please try again later")
except Exception as e:
    print(f"Error: {e}")
```

Check Your Quota
MCPHub provides API endpoints to check your current credit balance and remaining quota. This is useful for monitoring usage and ensuring you have sufficient credits before making requests.
Quick Quota Check
Use the /quota endpoint for a quick overview of your remaining balance:
https://api.mcphub.com/quota

Returns your current balance and remaining quota.
```shell
curl -X GET "https://api.mcphub.com/quota" \
  -H "X-API-Key: <YOUR_MCPHUB_API_KEY>"
```
Response:

```json
{
  "username": "your-username",
  "balance": 150.50,
  "remaining_quota": 150.50,
  "used_quota": 0,
  "thinking": {
    "enabled": true,
    "default_budget": 16000,
    "models": ["claude-3-5-sonnet", "claude-3-opus", ...]
  }
}
```

Detailed Credit Balance
Use the /api/v3/auth/credit-balance endpoint for detailed balance information including expiring credits:
https://api.mcphub.com/api/v3/auth/credit-balance

Returns detailed credit balance with earning/spending totals and expiration info.
```shell
curl -X GET "https://api.mcphub.com/api/v3/auth/credit-balance" \
  -H "X-API-Key: <YOUR_MCPHUB_API_KEY>"
```
Response:

```json
{
  "user_id": "abc123",
  "current_balance": 150.50,
  "total_earned": 200.00,
  "total_spent": 49.50,
  "tier": "free",
  "expiring_credits": 10.00,
  "next_expiration": "2026-03-05T00:00:00"
}
```

| Field | Type | Description |
|---|---|---|
| user_id | string | Your unique user identifier |
| current_balance | float | Your current available credit balance |
| total_earned | float | Total credits earned (top-ups and grants) |
| total_spent | float | Total credits spent on API usage |
| tier | string | Account tier: free, premium, enterprise, or internal |
| expiring_credits | float | Credits expiring within the next 7 days (0.0 if none) |
| next_expiration | string | ISO timestamp of next credit expiration (empty if none) |
SDK Examples
Python
```python
import requests

response = requests.get(
    "https://api.mcphub.com/api/v3/auth/credit-balance",
    headers={"X-API-Key": "<YOUR_MCPHUB_API_KEY>"},
)
balance = response.json()

print(f"Current balance: {balance['current_balance']}")
print(f"Tier: {balance['tier']}")

if balance["expiring_credits"] > 0:
    print(f"Warning: {balance['expiring_credits']} credits expiring soon")
```

TypeScript/JavaScript
```typescript
const response = await fetch("https://api.mcphub.com/api/v3/auth/credit-balance", {
  headers: { "X-API-Key": "<YOUR_MCPHUB_API_KEY>" }
});
const balance = await response.json();

console.log(`Current balance: ${balance.current_balance}`);
console.log(`Tier: ${balance.tier}`);

if (balance.expiring_credits > 0) {
  console.log(`Warning: ${balance.expiring_credits} credits expiring soon`);
}
```

Insufficient Balance
If your balance is insufficient when making an LLM request, the API returns HTTP 402 Payment Required. You can top up credits from the Billing settings page.
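As a sketch of handling the 402 case: the helper below is illustrative, not part of any SDK, and it assumes that 402 responses carry the same OpenRouter-style error envelope shown in the Error Handling section.

```python
def handle_llm_response(status_code: int, body: dict) -> dict:
    """Interpret a parsed MCPHub LLM response, surfacing balance errors clearly.

    Assumes 402 bodies use the {"error": {"message": ...}} envelope.
    """
    if status_code == 402:  # Payment Required: insufficient credit balance
        message = body.get("error", {}).get("message", "insufficient balance")
        raise RuntimeError(
            f"Payment required: {message}. Top up on the Billing settings page."
        )
    if status_code != 200:
        raise RuntimeError(f"Request failed with HTTP {status_code}")
    return body
```

With `requests`, you would call it as `handle_llm_response(resp.status_code, resp.json())` after posting to the chat completions endpoint.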
Migrating from OpenRouter
Switching from OpenRouter to MCPHub is straightforward. Here's what you need to change:
Simple 2-Step Migration
1. Update the Base URL
Before: base_url="https://openrouter.ai/api/v1"
After:  base_url="https://api.mcphub.com/v1"

2. Update Your API Key

Before: api_key="sk-or-v1-..."
After:  api_key="your-mcphub-api-key"

✅ That's it! Everything else stays exactly the same - same models, same parameters, same response format.
Next Steps
📚 Explore More
- API Reference - Complete API documentation
- Integration Guide - Advanced integration patterns
- Security - Learn about our security practices
🚀 Get Started
- Generate API Key - Create your first API key
- Add Credits - Purchase credits for API usage
- Monitor Usage - Track your API requests