vLLM + FastAPI + MCP Calendar System
A modular AI system that combines vLLM (for LLM inference), FastAPI (for REST APIs), and Model Context Protocol (MCP) to create an intelligent calendar management assistant with Google Calendar integration.
Architecture Overview
This system consists of three main components:
- vLLM Server: Local LLM inference using Qwen3-1.7B model
- MCP Server: Google Calendar integration with FastAPI MCP endpoints
- Agent: Intelligent assistant that coordinates between vLLM and MCP services
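In practice, a chat request hits the Agent API, the agent forwards the message to vLLM (advertising the MCP calendar tools), executes any tool calls the model emits against the MCP server, and returns the final answer. A minimal sketch of that loop, assuming the openai Python package; call_mcp_tool is a hypothetical stand-in for the MCP round trip, not the actual agent.py:

# Illustrative coordination loop, not the actual agent.py
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8080/v1", api_key="token-abc123")

def call_mcp_tool(name: str, arguments: str) -> str:
    # Hypothetical stand-in: forward the call to the MCP server, return the result
    raise NotImplementedError

def chat(message: str, tools: list) -> str:
    messages = [{"role": "user", "content": message}]
    msg = llm.chat.completions.create(
        model="qwen3-small", messages=messages, tools=tools
    ).choices[0].message
    while msg.tool_calls:  # the model asked to use a calendar tool
        messages.append(msg)
        for call in msg.tool_calls:
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": call_mcp_tool(call.function.name, call.function.arguments),
            })
        msg = llm.chat.completions.create(
            model="qwen3-small", messages=messages, tools=tools
        ).choices[0].message
    return msg.content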
Project Structure
calendar-agent/
├── agent/                     # AI Agent component
│   ├── agent.py               # Main agent logic
│   ├── Dockerfile             # Agent container
│   └── requirements.txt       # Python dependencies
├── mcp-server/                # MCP Server with Google Calendar
│   ├── app.py                 # FastAPI MCP server
│   ├── get_google_token.py    # Google OAuth setup
│   ├── Dockerfile             # MCP container
│   ├── requirements.txt       # Python dependencies
│   └── secrets/               # Google credentials directory
│       └── token.pickle       # Generated by OAuth flow
├── vllm/                      # vLLM inference server
│   └── Dockerfile             # vLLM container
├── tests/                     # Test scripts
│   ├── test_comprehensive.py  # Comprehensive system tests
│   ├── test_vllm.sh           # vLLM testing
│   ├── testmcp.py             # MCP testing
│   ├── send_request.sh        # API testing
│   └── requirements.txt       # Test dependencies
├── docker-compose.yml         # Multi-service orchestration
├── build.sh                   # Build script
├── start_system.sh            # Smart startup script
├── ngrok.sh                   # Tunnel setup
├── .gitignore                 # Git ignore rules
└── readme.md                  # This file
Quick Start
Prerequisites
- Docker and Docker Compose
- NVIDIA GPU with CUDA support (for vLLM)
- Google Cloud Platform account
- Python 3.8+ (for local development)
1. Google Console Setup
Step 1: Create Google Cloud Project
- Go to Google Cloud Console
- Create a new project or select existing one
- Enable the Google Calendar API:
- Go to "APIs & Services" > "Library"
- Search for "Google Calendar API"
- Click "Enable"
Step 2: Create OAuth 2.0 Credentials
- Go to "APIs & Services" > "Credentials"
- Click "Create Credentials" > "OAuth 2.0 Client IDs"
- Choose "Desktop application" as application type
- Give it a name (e.g., "Calendar MCP Client")
- Click "Create"
- Download the JSON file and rename it to credentials.json
Step 3: Place Credentials
- Create the secrets directory:
mkdir -p mcp-server/secrets
- Move the downloaded credentials.json to mcp-server/secrets/
2. Generate Google OAuth Token
Run the token generation script locally (not in Docker):
cd mcp-server
python get_google_token.py
This will:
- Open a browser window for Google OAuth
- Generate token.pickle and token.json in the secrets/ directory
- Store your authenticated session
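For reference, the flow that get_google_token.py runs looks roughly like this, a sketch assuming google-auth-oauthlib (the actual script may differ):

import pickle
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/calendar"]  # assumed scope

# Opens a browser window for consent, then caches the credentials locally
flow = InstalledAppFlow.from_client_secrets_file("secrets/credentials.json", SCOPES)
creds = flow.run_local_server(port=0)
with open("secrets/token.pickle", "wb") as f:
    pickle.dump(creds, f)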
3. Start the System
Option A: Smart Startup Script (Recommended)
./start_system.sh
This script will:
- Validate your environment and credentials
- Check system resources (memory, GPU)
- Build and start all services
- Verify all services are healthy
- Display service URLs and test commands
Option B: Manual Startup
# Build all containers
./build.sh
# Or start with Docker Compose
docker-compose up --build
The system will be available at:
- vLLM API: http://localhost:8080/v1
- MCP Server: http://localhost:7000/mcp
- Agent API: http://localhost:8088
Configuration
Environment Variables
Key configuration options in docker-compose.yml:
# vLLM Configuration
VLLM_MODEL: qwen/Qwen3-1.7B # Model to serve
VLLM_PORT: "8080" # API port
VLLM_API_KEY: token-abc123 # API key for authentication
# Agent Configuration
OPENAI_BASE_URL: http://vllm:8080/v1 # vLLM endpoint
MCP_HTTP_URL: http://mcp:7000/mcp # MCP endpoint
SERVED_MODEL: qwen3-small # Model name
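For context, the agent presumably wires these variables up along the following lines (illustrative only; the defaults shown are assumptions, not the actual agent.py):

import os
from openai import OpenAI

llm = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "http://vllm:8080/v1"),
    api_key=os.environ.get("VLLM_API_KEY", "token-abc123"),
)
mcp_url = os.environ.get("MCP_HTTP_URL", "http://mcp:7000/mcp")
model = os.environ.get("SERVED_MODEL", "qwen3-small")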
Model Configuration
To use a different model, modify the VLLM_MODEL environment variable in docker-compose.yml:
VLLM_MODEL: "Qwen/Qwen3-30B-A3B" # Larger model
Tool Call Configuration
The vLLM server is configured with tool-call support for MCP integration:
# vLLM Tool Call Settings (in vllm/Dockerfile)
--enable-auto-tool-choice # Enable automatic tool selection
--tool-call-parser hermes # Use Hermes parser for tool calls
--trust-remote-code # Allow custom code shipped with the model repo to run
Important Notes:
- The --tool-call-parser hermes flag is essential for proper MCP tool integration
- --enable-auto-tool-choice allows the model to automatically select appropriate tools
- Together, these settings enable the agent to use the calendar management tools via MCP (see the sketch below)
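To see the flags in action, you can send a tool-enabled request straight to the vLLM endpoint. A minimal sketch assuming the openai package; the tool schema here is illustrative, not the MCP server's actual schema:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="token-abc123")

tools = [{
    "type": "function",
    "function": {
        "name": "check_availability",  # hypothetical tool name
        "description": "Check whether a time slot is free",
        "parameters": {
            "type": "object",
            "properties": {
                "start": {"type": "string", "description": "ISO 8601 start time"},
                "end": {"type": "string", "description": "ISO 8601 end time"},
            },
            "required": ["start", "end"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3-small",
    messages=[{"role": "user", "content": "Am I free tomorrow at 2 PM?"}],
    tools=tools,
    tool_choice="auto",  # honored because vLLM runs with --enable-auto-tool-choice
)
print(resp.choices[0].message.tool_calls)  # parsed by the hermes tool-call parser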
Testing
Comprehensive System Tests (Recommended)
cd tests && python test_comprehensive.py
This will test:
- All service health checks
- Input validation and error handling
- Rate limiting functionality
- Agent chat functionality
- API endpoints and responses
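If you only need a quick standalone probe in the same spirit, something like this works (assumes the requests package; the endpoints are the ones listed in this README):

import requests

ENDPOINTS = {
    "vllm": "http://localhost:8080/v1/models",
    "mcp": "http://localhost:7000/health",
    "agent": "http://localhost:8088/health",
}

for name, url in ENDPOINTS.items():
    try:
        status = requests.get(url, timeout=5).status_code
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{name}: {status}")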
Individual Component Tests
Test vLLM Server
./tests/test_vllm.sh
Test MCP Server
cd tests
python testmcp.py
Test Agent API
# Send a chat message
curl -X POST http://localhost:8088/chat \
-H "Content-Type: application/json" \
-d '{"message": "Schedule a meeting tomorrow at 2 PM for 30 minutes"}'
Quick Health Checks
# Check vLLM health
curl http://localhost:8080/v1/models
# Check MCP health
curl http://localhost:7000/health
# Check agent health
curl http://localhost:8088/health
Available MCP Tools
The MCP server provides these calendar management tools:
- Check Availability - Verify if a time slot is free
- Get Free Slots - Find available time slots on a specific date
- Create Meeting - Schedule a new meeting with attendees
- Update Meeting - Modify existing meeting details
- List Meetings - View meetings in a time range
- Suggest Times - Get meeting time recommendations
- Delete Meeting - Remove scheduled meetings
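These tools can also be exercised directly, without going through the agent. A sketch using the MCP Python SDK over streamable HTTP; the tool name and arguments are illustrative, so check the server's tool list for the actual names and schemas:

import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    async with streamablehttp_client("http://localhost:7000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover the real tool names
            # Hypothetical call; use a real name and schema from the list above
            result = await session.call_tool(
                "check_availability",
                {"start": "2025-01-15T14:00:00", "end": "2025-01-15T14:30:00"},
            )
            print(result)

asyncio.run(main())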
External Access
ngrok Setup for External Access
To expose your local system to the internet (for testing or integration), the project includes an automated ngrok setup script.
Prerequisites
- ngrok Account: Sign up at https://ngrok.com/
- Authtoken: Get your authtoken from https://dashboard.ngrok.com/get-started/your-authtoken
Method 1: Environment Variable (Recommended)
Set your ngrok authtoken as an environment variable:
export NGROK_AUTHTOKEN='your_ngrok_authtoken_here'
Then run the ngrok script:
./ngrok.sh
Method 2: Interactive Setup
Simply run the script and it will prompt you for your authtoken:
./ngrok.sh
The script will:
- Check if ngrok is installed
- Prompt for authtoken if not set in environment
- Configure ngrok with tunnels for all services
- Start tunnels for MCP (7000), Agent (8088), and vLLM (8080)
- Display public URLs for external access
What You'll See
Setting up ngrok tunnels for external access
================================================
[INFO] Checking ngrok installation...
[INFO] ngrok is installed
[INFO] Using authtoken from environment variable
[INFO] Configuring ngrok...
[INFO] ngrok configuration created
[INFO] Starting ngrok tunnels...
[INFO] Starting tunnels for MCP (7000), Agent (8088), and vLLM (8080)...
[INFO] Waiting for tunnels to be ready...
[INFO] Getting tunnel URLs...
ngrok External Access URLs
==============================
MCP Server: https://abc123.ngrok-free.app
Agent API: https://def456.ngrok-free.app
vLLM Server: https://ghi789.ngrok-free.app
ngrok Dashboard: http://localhost:4040
Stop ngrok: pkill -f ngrok
[INFO] ngrok tunnels started successfully
ngrok Dashboard
Access the ngrok web interface to monitor your tunnels:
- URL: http://localhost:4040
- Features: Real-time tunnel monitoring, request inspection, traffic analysis
Security Notes
- Authtoken Security: Never commit your ngrok authtoken to version control
- Environment Variables: Use environment variables for CI/CD deployments
- Tunnel Access: ngrok tunnels are publicly accessible - use with caution in production
- Rate Limits: Free ngrok accounts have rate limits on concurrent connections
Stopping ngrok
# Stop all ngrok processes
pkill -f ngrok
# Or stop specific tunnels
ngrok stop mcp agent vllm
Integration with Startup Script
The start_system.sh script automatically calls ngrok.sh after all services are healthy:
./start_system.sh
# This will start all services and then run ngrok.sh automatically
Troubleshooting
Common Issues
Google Authentication Errors
- Ensure credentials.json is properly placed in mcp-server/secrets/
- Re-run python get_google_token.py if tokens expire
- Check that the Google Calendar API is enabled in your project
vLLM GPU Issues
- Verify NVIDIA drivers and CUDA are installed
- Check nvidia-smi output
- Ensure nvidia-container-toolkit is installed
Port Conflicts
- Check if ports 8080, 7000, or 8088 are already in use
- Modify ports in docker-compose.yml if needed
Memory Issues
- Increase shm_size in docker-compose.yml for large models
- Reduce model size if running on limited hardware
Logs and Debugging
View logs for specific services:
# View all logs
docker-compose logs
# View specific service logs
docker-compose logs vllm
docker-compose logs mcp
docker-compose logs agent
# Follow logs in real-time
docker-compose logs -f
Health Checks
The system includes health checks for all services:
# Check vLLM health
curl http://localhost:8080/v1/models
# Check MCP health
curl http://localhost:7000/health
# Check agent health
curl http://localhost:8088/health
Automated Testing and Validation
Run comprehensive tests:
cd tests && python test_comprehensive.py
Use the smart startup script for validation:
./start_system.sh
This will automatically validate your environment and check all services.
Security Considerations
- API Keys: Change default API keys in production
- Google Credentials: Keep the secrets/ directory secure and never commit it to version control
- Network Access: Use firewalls to restrict access to production deployments
- HTTPS: Use reverse proxy with SSL for production deployments
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
License
This project is licensed under the Apache License - see the LICENSE file for details.
Acknowledgments
- vLLM for efficient LLM inference
- FastAPI for the web framework
- Model Context Protocol for the MCP specification
- Google Calendar API for calendar integration