ktgster/mcp_math_server
MCP Math Agent 🧮
A demonstration of a ReAct-style AI agent where tools are completely decoupled from the client code through the Model Context Protocol (MCP). This architecture showcases how an LLM agent can dynamically discover and use tools without having them hardcoded in the client implementation.
Purpose 🎯
This project demonstrates the power of tool-agent decoupling through MCP:
- Clean Separation: The agent client has zero knowledge of what tools exist until runtime
- Dynamic Discovery: Tools are discovered automatically when the agent connects to the server
- Protocol-Based: Communication happens through the standardized MCP protocol
- Flexible Architecture: You can add/remove/modify tools on the server without touching the client code
- ReAct Pattern: Shows how the Reasoning-Action cycle works with dynamically discovered tools
Features ✨
- ReAct Pattern: The agent follows a Thought-Action-Observation cycle to solve problems
- Multi-step Reasoning: Can chain multiple operations together to solve complex problems
- Dynamic Tool Discovery: Automatically discovers available tools from the MCP server
- Error Handling: Gracefully handles errors and provides helpful feedback
- Interactive CLI: Simple command-line interface for real-time interaction
Architecture 🏗️
The key innovation is the complete decoupling of tools from the agent:
```
┌─────────────┐   MCP    ┌─────────────┐          ┌─────────────┐
│   Client    │ ───────► │   Server    │ ───────► │    Tools    │
│   (Agent)   │ Protocol │  (FastMCP)  │          │   (Math)    │
└─────────────┘          └─────────────┘          └─────────────┘
       ↑                        ↑                        ↑
       │                        │                        │
       │                 Can be modified          Can be added/
       │                 independently           removed freely
       │
┌─────────────┐
│   Ollama    │  ← Agent has NO hardcoded tool knowledge
│  (Llama 3)  │    Tools are discovered at runtime!
└─────────────┘
```
Key Points:
- ✅ Client doesn't know what tools exist until it connects
- ✅ Server can be updated with new tools without changing client
- ✅ Multiple clients can connect to the same server
- ✅ Different servers can provide different tool sets
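Concretely, the client side of this architecture can be sketched as follows. This is an illustrative snippet assuming the agent is built on langchain-mcp-adapters (per the Acknowledgments); the endpoint URL and transport are placeholders that have to match how server.py is actually served.

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient

async def main():
    # The client only knows WHERE the server is - never WHAT tools it offers.
    client = MultiServerMCPClient(
        {
            "math": {
                "url": "http://127.0.0.1:8000/sse",  # placeholder endpoint
                "transport": "sse",                  # must match server.py
            }
        }
    )
    tools = await client.get_tools()  # discovery happens here, at runtime
    print(f"Discovered tools: {[t.name for t in tools]}")

asyncio.run(main())
```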
Available Math Tools 🛠️
- add(a, b): Add two numbers
- subtract(a, b): Subtract b from a
- multiply(a, b): Multiply two numbers
- divide(a, b): Divide a by b
- power(a, b): Raise a to the power of b
- sqrt(a): Compute square root
- factorial(n): Compute n!
- double_factorial(n): Compute n!! (double factorial)
- tetration(a, n): Compute tetration (a power tower of n copies of a, i.e. a^(a^(...^a)))
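For a sense of the server side, here is a minimal sketch of how a few of these tools could be defined with FastMCP. The function bodies and run settings are illustrative assumptions, not the project's actual server.py.

```python
import math

from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

@mcp.tool()
def factorial(n: int) -> int:
    """Compute n!."""
    return math.factorial(n)

@mcp.tool()
def tetration(a: float, n: int) -> float:
    """Power tower of n copies of a: a ** (a ** (... ** a))."""
    result = 1.0
    for _ in range(n):
        result = a ** result
    return result

if __name__ == "__main__":
    # Transport/host/port are assumed to match the client configuration.
    mcp.run(transport="sse", host="127.0.0.1", port=8000)
```

Only the tool names, signatures, and docstrings cross the protocol boundary; that metadata is exactly what the client discovers at runtime.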
Prerequisites 📋
- Python 3.8 or higher
- Ollama installed and running
- Llama 3 model downloaded
Installation 🚀
1. Clone the repository

   ```bash
   git clone <your-repo-url>
   cd mcp_math_server
   ```

2. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. Install Ollama (if not already installed)

   - Visit https://ollama.ai for installation instructions
   - Download the Llama 3 model:

     ```bash
     ollama pull llama3:latest
     ```
Usage 💻
1. Start the MCP Server

   ```bash
   python server.py
   ```

   The server will start on http://127.0.0.1:8000

2. In a new terminal, start the Agent

   ```bash
   python client.py
   ```

3. Interact with the Agent

   ```
   💬 You: Calculate 5 + 7
   🤖 Agent thinking...
   ----------------------------------------
   Thought: I need to add 5 and 7
   Action: add
   Action Input: {"a": 5, "b": 7}
   Observation: 12
   Thought: I now have the result
   Final Answer: The sum of 5 + 7 is 12
   ----------------------------------------
   ✅ The sum of 5 + 7 is 12
   ```
Example Queries 📝
Try these examples to see the agent in action:
1. Simple operations:
   - "Calculate 5 + 7"
   - "What's 10 factorial?"
   - "Calculate the square root of 144"

2. Multi-step operations:
   - "Calculate (5 + 3) * 2"
   - "What's 2 to the power of 8, then add 10"
   - "Calculate the factorial of 5 and then divide by 10"

3. Advanced operations:
   - "What's 2 tetrated to 3?"
   - "Calculate the double factorial of 7"
   - "What's the square root of (100 + 44)?"
How It Works 🔍
Tool-Agent Decoupling
The magic happens through the MCP protocol:
1. Connection Phase: Agent connects to the MCP server
2. Discovery Phase: Agent calls `client.get_tools()` to discover available tools
3. Dynamic Prompt Generation: Agent builds its prompt based on discovered tools
4. Runtime Execution: Agent uses tools without any hardcoded knowledge
```python
# The client discovers tools dynamically - no hardcoding!
tools = await client.get_tools()
print(f"✅ Connected! Available tools: {[t.name for t in tools]}")

# Prompt is built from discovered tools
system_prompt = f"""Available tools:
{chr(10).join([f'- {t.name}: {t.description}' for t in tools])}
"""
```
ReAct Cycle
1. User Input: You provide a natural language math question
2. Reasoning: The agent thinks about what operations are needed
3. Tool Selection: Selects from dynamically discovered tools
4. Execution: Calls the tool through the MCP protocol
5. Observation: Receives the result
6. Iteration: Decides whether to use another tool or provide the final answer
7. Response: Delivers the complete answer
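Put together, the cycle can be sketched as a short loop. This is a simplified illustration rather than the project's actual client.py; `llm` is assumed to be a LangChain chat model (e.g. ChatOllama), and the output parsing is deliberately naive:

```python
import json
import re

async def react_loop(llm, tools, question: str, max_steps: int = 5) -> str:
    """Simplified ReAct loop (illustrative; the real client.py may differ)."""
    tool_by_name = {t.name: t for t in tools}
    transcript = question
    for _ in range(max_steps):
        reply = await llm.ainvoke(transcript)            # 2. Reasoning
        text = reply.content
        if "Final Answer:" in text:                      # 7. Response
            return text.split("Final Answer:")[-1].strip()
        action = re.search(r"Action:\s*(\w+)", text)     # 3. Tool Selection
        args = re.search(r"Action Input:\s*(\{.*\})", text)
        if not (action and args):
            return text  # the model did not follow the ReAct format
        observation = await tool_by_name[action.group(1)].ainvoke(
            json.loads(args.group(1))                    # 4. Execution
        )
        transcript += f"\n{text}\nObservation: {observation}"  # 5./6. Observe, iterate
    return "Stopped after reaching the step limit."
```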
ReAct Cycle Example
```
User Query → Agent Thinks → Selects Tool → Executes → Observes Result
     ↑                                                       ↓
     └──────────────── Needs More Operations? ←──────────────┘
                                │ No
                                ↓
                          Final Answer
```
Why This Architecture Matters 🚀
Traditional Approach ❌
```python
# Tools are hardcoded in the client
def add(a, b): return a + b
def subtract(a, b): return a - b

# Client must be updated for new tools
```
MCP Approach ✅
```python
# Client discovers tools at runtime
tools = await client.get_tools()  # Could be ANY tools!

# Server can add new tools without client changes
```
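To make the contrast concrete: with MCP, adding a capability is a server-only change. Continuing the hypothetical FastMCP sketch from earlier (the `modulo` tool below is made up for illustration):

```python
# Hypothetical server-side addition, continuing the FastMCP sketch above:
@mcp.tool()
def modulo(a: float, b: float) -> float:
    """Return a modulo b."""
    return a % b

# No client change needed - the next get_tools() call discovers it too.
```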
Benefits
- Modularity: Complete separation of concerns
- Scalability: Multiple tool servers can be used simultaneously
- Maintainability: Update tools without touching agent code
- Reusability: Same agent works with different tool servers
- Protocol Standards: Follows the MCP specification for interoperability
Acknowledgments 🙏
- LangChain for the agent framework
- Ollama for local LLM inference
- FastMCP for the MCP server implementation
- Model Context Protocol for the protocol specification