🤖 MCP Agent System - Python Implementation
A practical implementation of Model Context Protocol (MCP) with Agent-to-Agent (A2A) communication using Python and Ollama. Perfect for learning and teaching AI agent architectures.
📖 Overview
This project demonstrates:
- MCP Server: Provides reusable tools (weather, time, calculator)
- Coordinator Agent: Plans and delegates tasks
- Worker Agent: Executes tasks using MCP tools
- A2A Communication: Agents collaborate to solve problems
Architecture
┌─────────────────┐
│  User Request   │
└────────┬────────┘
         │
         ▼
┌─────────────────────┐
│  Coordinator Agent  │ ◄── Plans & Decides
│    (Ollama LLM)     │
└─────────┬───────────┘
          │ A2A Communication
          ▼
┌─────────────────────┐
│    Worker Agent     │ ◄── Executes Tasks
│    (Ollama LLM)     │
└─────────┬───────────┘
          │
          ▼
┌─────────────────────┐
│     MCP Server      │ ◄── Tool Execution
│  (weather, time,    │
│   calculator)       │
└─────────────────────┘
🚀 Quick Start
Prerequisites
- Python 3.10 or higher
- Ollama installed and running
- Basic understanding of async Python
Installation
- Clone or create the project:
mkdir mcp-agents-demo
cd mcp-agents-demo
- Set up Python environment:
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install dependencies:
pip install mcp httpx
- Create the files:
  - Copy mcp_server.py (MCP server implementation)
  - Copy agents.py (Agent system)
  - Copy requirements.txt (a minimal example follows these setup steps)
- Install and run Ollama:
# Install Ollama (if not already installed)
# curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama server
ollama serve
# In another terminal, pull a model
ollama pull llama2   # or llama3, mistral, etc.
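For reference, a minimal requirements.txt matching the pip install step above (versions left unpinned here; pin them if you need reproducible installs):

mcp
httpx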
Running the Demo
python3 agents.py
You'll see:
- ✅ Coordinator analyzing requests
- ✅ Agent-to-agent communication
- ✅ MCP tool execution
- ✅ Complete results
📂 Project Structure
mcp-agents-demo/
├── mcp_server.py # MCP server with tools
├── agents.py # Two-agent system
├── requirements.txt # Python dependencies
└── README.md # This file
🎯 How It Works
1. MCP Server (mcp_server.py)
Provides three tools accessible via MCP protocol:
| Tool | Description | Arguments |
|---|---|---|
| get_weather | Get weather for a city | city: string |
| get_time | Get current time | timezone: string |
| calculate | Math operations | operation, a, b |
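For orientation, here is a condensed sketch of how a tool like get_weather can be registered and served with the low-level mcp Python API. The project's mcp_server.py follows this general shape (see "Add New MCP Tools" below), but the weather values here are made-up demo data:

import json

import mcp.types as types
from mcp.server import Server

server = Server("demo-tools")

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    # Advertise the tools this server exposes to connecting agents
    return [
        types.Tool(
            name="get_weather",
            description="Get weather for a city",
            inputSchema={
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        ),
    ]

@server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # Dispatch a tool call by name and return the result as text content
    if name == "get_weather":
        # Static demo data; a real implementation would query a weather API
        result = {"city": arguments["city"], "temperature": 23, "condition": "Sunny"}
        return [types.TextContent(type="text", text=json.dumps(result))]
    raise ValueError(f"Unknown tool: {name}")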
2. Coordinator Agent
Role: High-level task planning and delegation
Responsibilities:
- Analyze user requests
- Understand available tools
- Create execution plans
- Delegate to Worker Agent
Example Flow:
User: "What's the weather in Tokyo?"
↓
Coordinator: Analyzes → Identifies "get_weather" tool needed
↓
Delegates to Worker with plan
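The coordinator's "Calling Ollama" step is an HTTP request to the local Ollama server. A minimal sketch with httpx (the ask_ollama name and prompt handling are illustrative, not the project's exact code):

import httpx

OLLAMA_URL = "http://localhost:11434/api/generate"
OLLAMA_MODEL = "llama3"

async def ask_ollama(prompt: str) -> str:
    # Send a non-streaming completion request to the local Ollama server
    async with httpx.AsyncClient(timeout=60.0) as client:
        response = await client.post(
            OLLAMA_URL,
            json={"model": OLLAMA_MODEL, "prompt": prompt, "stream": False},
        )
        response.raise_for_status()
        return response.json()["response"]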
3. Worker Agent
Role: Task execution using MCP tools
Responsibilities:
- Receive tasks from Coordinator
- Select appropriate MCP tools
- Execute tool calls
- Return results
Example Flow:
Receives: "Get weather for Tokyo"
↓
Calls: get_weather(city="Tokyo")
↓
Returns: Weather data
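Behind the worker sits an MCP client session. A self-contained sketch of one tool call using the mcp SDK's stdio client (the project's MCPClientWrapper presumably wraps a session like this):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def demo_tool_call() -> None:
    # Launch mcp_server.py as a subprocess and speak MCP over stdio
    params = StdioServerParameters(command="python3", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("get_weather", {"city": "Tokyo"})
            print(result.content[0].text)  # JSON string produced by the server

asyncio.run(demo_tool_call())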
4. A2A Communication
Agents communicate by sharing:
- Task context
- Execution plans
- Conversation history
- Tool results
# Example A2A flow
coordinator_result = await coordinator.process(request)
worker_result = await worker.execute_task(
    coordinator_result["task"],
    coordinator_result["decision"]  # A2A context sharing
)
🔧 Configuration
Change Ollama Model
Edit agents.py:
OLLAMA_MODEL = "llama3" # Options: llama2, llama3, mistral, codellama
Customize Test Cases
Edit the test_requests list in agents.py:
test_requests = [
    "What is the weather in Tokyo?",
    "Calculate 15 divided by 3",
    "What time is it in Paris?",
    "Your custom request here",
]
Add New MCP Tools
In mcp_server.py, add to handle_list_tools():
types.Tool(
    name="your_tool",
    description="Tool description",
    inputSchema={
        "type": "object",
        "properties": {
            "param": {"type": "string"}
        },
        "required": ["param"]
    }
)
Then implement in handle_call_tool():
elif name == "your_tool":
    result = your_implementation(arguments)
    return [types.TextContent(type="text", text=json.dumps(result))]
📊 Example Output
======================================================================
🚀 Starting MCP Agent System (A2A Demo)
======================================================================
✅ Connected to MCP server
======================================================================
📝 TEST CASE 1: What is the weather in Tokyo?
======================================================================
🎯 COORDINATOR AGENT: Analyzing request...
Request: What is the weather in Tokyo?
🤖 Calling Ollama...
📋 Coordinator Analysis:
This request requires the get_weather tool...
----------------------------------------------------------------------
🔄 AGENT-TO-AGENT COMMUNICATION
Coordinator → Worker: Delegating task
⚙️ WORKER AGENT: Executing task...
Task: What is the weather in Tokyo?
📡 Calling MCP tool: get_weather(city='Tokyo')
✅ Tool Result: {
  "city": "Tokyo",
  "temperature": 23,
  "condition": "Sunny",
  "humidity": 65
}
======================================================================
✅ FINAL RESULT:
{
  "success": true,
  "result": "...",
  "tool": "get_weather"
}
======================================================================
🧪 Testing
Run Default Tests
python3 agents.py
Add Interactive Mode
Add to agents.py:
async def interactive_mode():
    mcp_client = MCPClientWrapper()
    await mcp_client.connect()
    coordinator = CoordinatorAgent(mcp_client)
    worker = WorkerAgent(mcp_client)
    print("Interactive mode. Type 'quit' to exit.\n")
    while True:
        user_input = input("Your request: ")
        if user_input.lower() in ['quit', 'exit']:
            break
        coord_result = await coordinator.process(user_input)
        worker_result = await worker.execute_task(
            coord_result["task"],
            coord_result["decision"]
        )
        print("\n✅ Result:")
        print(json.dumps(worker_result, indent=2))
        print()
    await mcp_client.close()

if __name__ == "__main__":
    asyncio.run(interactive_mode())
🐛 Troubleshooting
Ollama Connection Issues
Problem: Warning: Ollama not running
Solution:
# Start Ollama in a separate terminal
ollama serve
# Verify it's running
curl http://localhost:11434/api/tags
MCP Server Connection Failed
Problem: Cannot connect to MCP server
Solutions:
- Ensure mcp_server.py is in the same directory
- Check the Python path and file permissions
- Verify the mcp package is installed: pip install mcp
Module Import Errors
Problem: ModuleNotFoundError: No module named 'mcp'
Solution:
# Activate virtual environment
source venv/bin/activate
# Install/reinstall dependencies
pip install --upgrade mcp httpx
Model Not Found
Problem: Ollama model not available
Solution:
# List available models
ollama list
# Pull the model you need
ollama pull llama2
# or
ollama pull llama3
🎓 Learning Exercises
1. Add a Database Tool
   - Create a query_database tool in the MCP server
   - Add SQL query execution capability
   - Test with sample queries
2. Create a Validator Agent
   - Add a third agent to validate results
   - Implement result-checking logic
   - Chain: Coordinator → Worker → Validator
3. Implement Error Recovery (a starter retry sketch follows this list)
   - Add retry logic to the Worker Agent
   - Handle MCP tool failures gracefully
   - Log errors for debugging
4. Add Persistent Memory
   - Store conversation history
   - Enable context across sessions
   - Implement memory retrieval
5. Build a Web Interface
   - Create a FastAPI/Flask endpoint
   - Add a REST API for the agent system
   - Build a simple web UI
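As a starting point for exercise 3, a minimal retry wrapper. The call_tool_with_retry name and the mcp_client.call_tool interface are hypothetical; adapt them to your client wrapper:

import asyncio

async def call_tool_with_retry(mcp_client, name, arguments, retries=3, delay=1.0):
    # Retry a failing MCP tool call a few times with a fixed delay between attempts
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return await mcp_client.call_tool(name, arguments)
        except Exception as exc:
            last_error = exc
            print(f"Attempt {attempt}/{retries} failed: {exc}")
            if attempt < retries:
                await asyncio.sleep(delay)
    raise last_error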
🔬 Advanced Features
Enable Debug Logging
export MCP_DEBUG=1
python3 agents.py
Run with Different Models
# Edit agents.py to change OLLAMA_MODEL
# Then run specific tests
python3 agents.py
Benchmark Agent Performance
Add timing to agents.py:
import time
start = time.time()
result = await coordinator.process(request)
end = time.time()
print(f"Coordinator took {end - start:.2f}s")
📚 Key Concepts
Model Context Protocol (MCP)
- Standardized protocol for AI tool integration
- Enables tool reusability across different agents
- Separates tool implementation from agent logic
Agent-to-Agent Communication
- Agents share context and decisions
- Enables complex multi-agent workflows
- Maintains conversation history
Tool Calling Pattern
- Agent identifies needed tool
- Agent prepares tool arguments
- Tool executes via MCP
- Result returned to agent
- Agent processes and responds
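Schematically, those five steps can be wired together like this (an illustrative sketch; the JSON prompt format and the llm/mcp_client interfaces are assumptions, not the project's exact code):

import json

async def answer_request(request: str, llm, mcp_client) -> str:
    # Steps 1-2: the agent asks the LLM to pick a tool and prepare its arguments
    decision_text = await llm(
        'Reply with JSON {"tool": "...", "arguments": {...}} '
        f"for this request: {request}"
    )
    decision = json.loads(decision_text)
    # Step 3: the chosen tool executes via MCP
    result = await mcp_client.call_tool(decision["tool"], decision["arguments"])
    # Steps 4-5: the tool result goes back to the LLM for the final response
    return await llm(f"Request: {request}\nTool result: {result}\nRespond to the user.")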
🤝 Contributing
Ideas for contributions:
- Add more MCP tools (file operations, API calls, etc.)
- Improve LLM prompt engineering
- Add unit tests
- Create visualization dashboard
- Improve error handling
📝 License
This project is provided as-is for educational purposes. Feel free to use, modify, and distribute.
💬 Support
For questions or issues:
- Check the troubleshooting section
- Review MCP documentation
- Test with simplified examples
- Verify Ollama is running correctly
🎯 Next Steps
- ✅ Run the basic demo
- ✅ Understand agent communication flow
- ✅ Modify test cases
- ✅ Add custom tools
- ✅ Experiment with different models
- ✅ Build your own agent system!
Happy Learning! 🚀
Built for DevOps engineers working with AI agents and MCP.