MCP Chat System
A practical example of integrating Model Context Protocol (MCP) with OpenAI's function calling API, featuring a custom HTTP server implementation.
Why this project? FastMCP primarily uses STDIO transport, which isn't ideal for HTTP-based clients. This project demonstrates how to build a complete HTTP transport layer for MCP, enabling RESTful API access and better integration with web applications.
📋 Table of Contents
- Architecture
- Key Features
- Quick Start
- Usage
- HTTP API Endpoints
- Technical Details
- Example Tools
- Troubleshooting
- License
🏗️ Architecture
This project demonstrates how to build a complete MCP-powered chat system with the following components:
- server.py: Base MCP server defining tools, resources, and prompts for sales analytics
- http_server.py: FastAPI-based HTTP server exposing MCP functionality via REST API
- http_client.py: HTTP client for communicating with the MCP server
- chat.py: Interactive REPL chat client integrating OpenAI with MCP over HTTP
✨ Key Features
- ✅ Proper tool call handling: Correctly formats OpenAI function calls for MCP
- ✅ HTTP transport: Custom HTTP implementation for MCP (FastMCP uses STDIO by default)
- ✅ Dynamic tool execution: LLM automatically decides when to use tools based on user queries
- ✅ Rich terminal UI: Beautiful formatting with colors, markdown, and loading animations
- ✅ Resource management: Static and templated resources
- ✅ Optional prompt templates: Reusable templates for multi-client scenarios (not required for single chatbot)
- ✅ Error handling: Comprehensive error handling and informative messages
🚀 Quick Start
Prerequisites
- Python 3.12+
- uv package manager
- OpenAI API key
Installation
- Clone the repository:
git clone <repository-url>
cd mcptests
- Install dependencies:
uv sync
- Create a .env file in the project root:
# .env
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4o-mini
MCP_BASE_URL=http://127.0.0.1:8001/mcp
DOCS_DB=biz.sqlite
# Optional server configuration
MCP_SERVER_NAME=biz-server
DEFAULT_TOP_N=5
Note: Never commit your .env file to version control. It contains sensitive API keys.
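For reference, here is a minimal sketch of how these variables might be loaded at startup. It assumes python-dotenv; the project's actual loading code may differ:
# Hypothetical startup snippet; assumes python-dotenv, actual code may differ
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # required, fails fast if missing
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")  # optional, with default
MCP_BASE_URL = os.getenv("MCP_BASE_URL", "http://127.0.0.1:8001/mcp")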
Running the System
Terminal 1 - Start the MCP Server:
chmod +x start_server.sh
./start_server.sh
Or manually:
uv run python http_server.py
Terminal 2 - Start the Chat Client:
chmod +x start_chat.sh
./start_chat.sh
Or manually:
uv run python chat.py
📚 Usage
Available Commands
- /tools - List available tools
- /resources - List available resources
- /read <uri> - Read a specific resource
- /prompt <name> key=value ... - Prepare a prompt template
- /exit or /quit - Exit the chat
Example Session
you> /tools
- find_products: Return products whose name or id contains the query...
- sales_between: Aggregate sales between [date_start, date_end]...
- top_products: Top-N products by sales amount...
you> Find products containing "laptop"
assistant> [Uses find_products tool and displays results]
you> /prompt summarize_sales date_start=2024-01-01 date_end=2024-01-31
✅ Prompt 'summarize_sales' prepared for the next message.
you> Analyze the sales
assistant> [Generates analysis using the prepared prompt]
🔌 HTTP API Endpoints
The HTTP server exposes the following endpoints:
- GET /mcp/tools - List all available tools
- GET /mcp/prompts - List all available prompts
- GET /mcp/resources - List all available resources
- POST /mcp/tools/call - Execute a tool: {"name": "tool_name", "arguments": {...}}
- POST /mcp/prompts/get - Get a prompt with parameters: {"name": "prompt_name", "arguments": {...}}
- GET /mcp/resources/read/{uri} - Read a resource
- GET /health - Health check
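As an illustration, a tool can be invoked directly over HTTP. The sketch below assumes the httpx library and a server running on the default port; the exact response shape depends on the server implementation:
# Hypothetical direct call to the tool endpoint; assumes httpx
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "http://127.0.0.1:8001/mcp/tools/call",
            json={"name": "find_products", "arguments": {"query": "laptop"}},
        )
        resp.raise_for_status()
        print(resp.json())  # tool result as serialized by the server

asyncio.run(main())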
🛠️ Technical Details
How It Works
┌─────────────────────────────────────────────────────────────────────────┐
│ INTERACTION FLOW │
└─────────────────────────────────────────────────────────────────────────┘
┌──────────┐
│ USER │ "What products do we have?"
└────┬─────┘
│
▼
┌────────────────────────────────────────────────────────────────┐
│ CHAT CLIENT (chat.py) │
│ │
│ 1️⃣ On Startup: │
│ • Connect to MCP server (http_client.py) │
│ • Fetch available tools → self.cached_tools │
│ • Convert to OpenAI format → self.oai_tools_spec │
│ │
│ 2️⃣ On User Message: │
│ messages = [ │
│ {"role": "system", "content": "You are..."}, │
│ {"role": "user", "content": "What products..."} │
│ ] │
│ │
│ ⚡ TOOLS INJECTION HAPPENS HERE: │
│ ┌──────────────────────────────────────────────────┐ │
│ │ openai.chat.completions.create( │ │
│ │ messages=messages, │ │
│ │ tools=self.oai_tools_spec ← 🔧 MCP TOOLS! │ │
│ │ ) │ │
│ └──────────────────────────────────────────────────┘ │
└────────────────────────┬───────────────────────────────────────┘
│
▼
┌─────────────────┐
│ OPENAI API │
│ (GPT-4o-mini) │
│ │
│ Analyzes: │
│ • User message │
│ • Available │
│ tools 🔧 │
│ │
│ Decides: │
│ "I need to use │
│ find_products"│
└────────┬────────┘
│
│ Returns tool_calls
▼
┌─────────────────────────────────────────────────────────────────┐
│ CHAT CLIENT (chat.py) │
│ │
│ 3️⃣ Receives tool_calls from OpenAI: │
│ { │
│ "tool_calls": [{ │
│ "function": { │
│ "name": "find_products", │
│ "arguments": '{"query": ""}' │
│ } │
│ }] │
│ } │
│ │
│ 4️⃣ Execute tools via HTTP: │
│ result = await mcp.call_tool("find_products", {"query":""}) │
└────────────────────────┬────────────────────────────────────────┘
│
│ POST /mcp/tools/call
▼
┌────────────────────────────────────────────────────────────────┐
│ HTTP SERVER (http_server.py) │
│ │
│ Receives: {"name": "find_products", "arguments": {...}} │
│ │
│ Calls: tool.fn(**args) ────────────────┐ │
└──────────────────────────────────────────┼─────────────────────┘
│
▼
┌─────────────────────────────┐
│ MCP SERVER (server.py) │
│ │
│ @mcp.tool │
│ def find_products(...): │
│ # Execute SQL │
│ # Return products │
└──────────────┬──────────────┘
│
Returns: [{"id": "p-100", ...}, ...]
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ CHAT CLIENT (chat.py) │
│ │
│ 5️⃣ Receives tool results │
│ │
│ 6️⃣ Sends back to OpenAI: │
│ messages = [ │
│ {"role": "system", ...}, │
│ {"role": "user", "content": "What products..."}, │
│ {"role": "assistant", "tool_calls": [...]}, │
│ {"role": "tool", "content": "[{products...}]"} ← 📊 │
│ ] │
└────────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────┐
│ OPENAI API │
│ (GPT-4o-mini) │
│ │
│ Synthesizes: │
│ "Here are the │
│ products: ..."│
└────────┬────────┘
│
│ Final response
▼
┌─────────────────────────────────────────────────────────────────┐
│ CHAT CLIENT (chat.py) │
│ │
│ 7️⃣ Displays formatted response with Rich │
└────────────────────────┬────────────────────────────────────────┘
│
▼
┌──────────┐
│ USER │ Sees beautiful formatted response
└──────────┘
Key Points
- Tools are injected ONCE at startup - fetched from MCP server and cached
- Tools are sent with EVERY message to OpenAI as available options
- OpenAI decides which tools to call based on the user query
- Chat executes the tools via HTTP to MCP server
- Results flow back through the same path to create the final response
Understanding Tools vs Prompts
🔧 Tools (The Core Functionality)
- What they are: Functions that the LLM can call automatically
- When to use: Always needed for dynamic data and actions
- Who decides: The LLM (GPT) decides when to call them based on user queries
- Example flow:
User: "Show me the products" → LLM decides to call find_products() → Tool executes and returns data → LLM synthesizes natural response
💬 Prompts (Optional Templates)
- What they are: Pre-configured message templates with specific instructions
- When to use:
- ✅ Multiple applications consuming the same MCP server (web app, Slack bot, API)
- ✅ Standardizing responses across different teams/tools
- ✅ Clients without their own LLM (using MCP prompts as instructions)
- ❌ NOT needed for a single chatbot with GPT (like this example)
- How to use: Manually activated with the /prompt command
- Example use case:
# Useful for internal tools with multiple consumers:
Web Dashboard ─┐
Slack Bot ─────┼─→ MCP Server (consistent prompts)
Mobile App ────┘
💡 For Building a ChatGPT-style Assistant
If you're building a conversational assistant (like this project), you only need Tools:
- ✅ Tools provide the data and actions
- ✅ GPT handles the conversation and decides when to use tools
- ❌ Prompts are optional (mainly for demonstration/multi-client scenarios)
The prompts in this project serve as examples of MCP's capabilities, but aren't required for the chat to work.
Code-Level Implementation
Where Tools Get Injected
Step 1: Startup - Fetch Tools from MCP Server
# chat.py - ChatHost.start()
async def start(self):
self.mcp = MCPHttpClient(MCP_BASE_URL)
# Fetch tools from MCP server via HTTP
self.cached_tools = await self.mcp.list_tools()
# Returns: [{"name": "find_products", "description": "...", "inputSchema": {...}}, ...]
# Convert to OpenAI function calling format
self.oai_tools_spec = as_openai_tools(self.cached_tools)
# Converts to: [{"type": "function", "function": {"name": "...", "parameters": {...}}}, ...]
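The as_openai_tools helper itself is not shown in this README; the following is a plausible sketch of the conversion, assuming MCP tool descriptors carry name, description, and inputSchema as in the comment above:
# Plausible sketch of as_openai_tools; the project's actual helper may differ
def as_openai_tools(mcp_tools: list[dict]) -> list[dict]:
    """Map MCP tool descriptors to OpenAI function-calling specs."""
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                # MCP's inputSchema is JSON Schema, which is exactly what
                # OpenAI expects in "parameters"
                "parameters": t.get("inputSchema", {"type": "object", "properties": {}}),
            },
        }
        for t in mcp_tools
    ]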
Step 2: Every Message - Send Tools to OpenAI
# chat.py - ChatHost.chat_round()
async def chat_round(self, user_text: str):
messages = [
{"role": "system", "content": "You are..."},
{"role": "user", "content": user_text}
]
# 🔧 TOOLS INJECTED HERE - sent to OpenAI with every message
first = self.oai.chat.completions.create(
model=OPENAI_MODEL,
messages=messages,
tools=self.oai_tools_spec, # ← MCP tools in OpenAI format
tool_choice="auto" # ← Let OpenAI decide when to use them
)
# OpenAI returns either:
# - Regular response, OR
# - Response with tool_calls
Step 3: Execute Tools When OpenAI Requests Them
# chat.py - ChatHost.chat_round()
first_msg = first.choices[0].message  # unwrap the response from the first call
if first_msg.tool_calls:
for tc in first_msg.tool_calls:
fn = tc.function.name # e.g., "find_products"
args = json.loads(tc.function.arguments) # e.g., {"query": ""}
# Execute tool on MCP server via HTTP
result = await self.mcp.call_tool(fn, args)
# ↓
# POST http://127.0.0.1:8001/mcp/tools/call
# {"name": "find_products", "arguments": {"query": ""}}
Correct Tool Call Handling
The system properly handles OpenAI's function calling by including tool_calls in the assistant message:
# Build assistant message with tool_calls (REQUIRED by OpenAI)
asst_msg = {"role": "assistant"}
if first_msg.content:
asst_msg["content"] = first_msg.content
if first_msg.tool_calls:
# Include tool_calls in assistant message (required by OpenAI API)
asst_msg["tool_calls"] = [
{
"id": tc.id,
"type": "function",
"function": {"name": tc.function.name, "arguments": tc.function.arguments}
} for tc in first_msg.tool_calls
]
convo.append(asst_msg)
# Then append tool results
for tc in first_msg.tool_calls:
result = await mcp.call_tool(...)
convo.append({
"role": "tool",
"tool_call_id": tc.id, # Must match the tool_call id
"content": json.dumps(result)
})
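Once the tool results are appended, the conversation goes back to OpenAI to synthesize the final answer. A sketch of that follow-up call, reusing the names from the snippets above:
# Follow-up call: OpenAI turns the tool results into the final answer
second = self.oai.chat.completions.create(
    model=OPENAI_MODEL,
    messages=convo,  # system + user + assistant(tool_calls) + tool results
    tools=self.oai_tools_spec,  # keep tools available for chained calls
)
final_text = second.choices[0].message.content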
HTTP Transport for MCP
Since FastMCP 2.x primarily uses STDIO transport, this project implements a custom HTTP layer:
- FastAPI server wraps MCP tools/prompts/resources
- HTTP client provides async interface to the server
- Chat client uses HTTP client instead of FastMCP's STDIO client
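To make this concrete, here is a minimal sketch of such a FastAPI wrapper. The tool registry and result shape are hypothetical; the real http_server.py may resolve tools from its FastMCP instance differently:
# Hypothetical sketch of the FastAPI layer in http_server.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical registry of tools registered on the FastMCP server;
# the real project populates this from its FastMCP instance
TOOLS: dict = {}

class ToolCallRequest(BaseModel):
    name: str
    arguments: dict = {}

@app.post("/mcp/tools/call")
async def call_tool(req: ToolCallRequest):
    tool = TOOLS.get(req.name)
    if tool is None:
        raise HTTPException(status_code=404, detail=f"Unknown tool: {req.name}")
    # As the flow diagram shows, the server ultimately runs tool.fn(**args)
    return {"result": tool.fn(**req.arguments)}

@app.get("/health")
async def health():
    return {"status": "ok"}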
🧪 Example Tools
The system includes sample business analytics tools:
- find_products: Search products by name or category
- sales_between: Aggregate sales data for a date range
- top_products: Get top N products by sales volume
- sales_report: Comprehensive sales report with KPIs
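For illustration, a tool definition in server.py could look like the sketch below. The sample data and matching logic are hypothetical; the flow diagram above indicates the real tools query a SQLite database:
# Illustrative tool definition; sample data is hypothetical, the real tools
# query the DOCS_DB SQLite database
from fastmcp import FastMCP

mcp = FastMCP("biz-server")

PRODUCTS = [{"id": "p-100", "name": "Laptop Pro"}]  # hypothetical sample data

@mcp.tool
def find_products(query: str = "") -> list[dict]:
    """Return products whose name or id contains the query."""
    q = query.lower()
    return [p for p in PRODUCTS if q in p["name"].lower() or q in p["id"].lower()]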
📝 Available Prompts (Optional)
Note: These prompts are optional examples. For a single chatbot with GPT (like this project), you only need the Tools above. Prompts are useful when multiple applications share the same MCP server.
- summarize_sales: Generate sales summary for a period
- sales_overview_json: Get JSON-formatted sales overview
- compare_periods_json: Compare two time periods
- category_insights_json: Category-specific analysis
- product_deepdive_markdown: Detailed product analysis
- merchandising_actions_json: Actionable merchandising recommendations
- natural_language_sales_summary: Human-friendly summary (supports multiple languages)
How to use: /prompt <name> key=value ... then ask your question
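For illustration, a prompt template can be registered with FastMCP's prompt decorator. This sketch is hypothetical and only mirrors the summarize_sales signature used in the example session above:
# Illustrative prompt template; the project's actual prompt text may differ
@mcp.prompt
def summarize_sales(date_start: str, date_end: str) -> str:
    """Generate a sales summary instruction for the given period."""
    return (
        f"Summarize sales between {date_start} and {date_end}. "
        "Highlight totals, top products, and notable trends."
    )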
🐛 Troubleshooting
"Failed to connect to MCP server"
- Ensure the server is running in Terminal 1
- Check that port 8001 is available: lsof -i :8001
- Verify the URL in .env: MCP_BASE_URL=http://127.0.0.1:8001/mcp
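A quick way to verify the server is reachable is to hit the health endpoint (httpx assumed):
# Quick connectivity check against the health endpoint; assumes httpx
import httpx
print(httpx.get("http://127.0.0.1:8001/health").json())  # expect a healthy status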
"Missing OPENAI_API_KEY"
- Create a .env file with your OpenAI API key
- Format: OPENAI_API_KEY=sk-...
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- FastMCP - The MCP framework this project builds upon
- Model Context Protocol - The protocol specification
- OpenAI - For the function calling API
🔗 Related Resources
- Model Context Protocol Specification
- FastMCP Documentation
- OpenAI Function Calling Guide
- FastAPI Documentation
📧 Support
- Check the repository for a quick setup guide
- Review the code comments and docstrings for implementation details
- Open an issue for questions or bug reports
Made with ❤️ as a practical example of MCP integration