vLLM + FastAPI + MCP Calendar System

A modular AI system that combines vLLM (for LLM inference), FastAPI (for REST APIs), and Model Context Protocol (MCP) to create an intelligent calendar management assistant with Google Calendar integration.

๐Ÿ—๏ธ Architecture Overview

This system consists of three main components:

  • vLLM Server: Local LLM inference using the Qwen3-1.7B model
  • MCP Server: Google Calendar integration with FastAPI MCP endpoints
  • Agent: Intelligent assistant that coordinates between vLLM and MCP services
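
For orientation, here is a minimal, hypothetical sketch of the agent's outward-facing surface: a FastAPI app exposing the POST /chat endpoint used in the Testing section below. This is not the repository's actual agent.py; in the real agent the message is forwarded to vLLM and any tool calls are routed to the MCP server.

# Hypothetical sketch only; the repo's agent.py handles vLLM and MCP routing.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(req: ChatRequest) -> dict:
    # The real agent sends req.message to vLLM and executes any returned
    # tool calls against the MCP server before answering.
    return {"reply": f"received: {req.message}"}

@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}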

๐Ÿ“ Project Structure

calendar-agent/
├── agent/                   # AI Agent component
│   ├── agent.py             # Main agent logic
│   ├── Dockerfile           # Agent container
│   └── requirements.txt     # Python dependencies
├── mcp-server/              # MCP Server with Google Calendar
│   ├── app.py               # FastAPI MCP server
│   ├── get_google_token.py  # Google OAuth setup
│   ├── Dockerfile           # MCP container
│   ├── requirements.txt     # Python dependencies
│   └── secrets/             # Google credentials directory
│       └── token.pickle     # Generated by OAuth flow
├── vllm/                    # vLLM inference server
│   └── Dockerfile           # vLLM container
├── tests/                   # Test scripts
│   ├── test_comprehensive.py  # Comprehensive system tests
│   ├── test_vllm.sh         # vLLM testing
│   ├── testmcp.py           # MCP testing
│   ├── send_request.sh      # API testing
│   └── requirements.txt     # Test dependencies
├── docker-compose.yml       # Multi-service orchestration
├── build.sh                 # Build script
├── start_system.sh          # Smart startup script
├── ngrok.sh                 # Tunnel setup
├── .gitignore               # Git ignore rules
└── readme.md                # This file

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose
  • NVIDIA GPU with CUDA support (for vLLM)
  • Google Cloud Platform account
  • Python 3.8+ (for local development)

1. Google Console Setup

Step 1: Create Google Cloud Project

  1. Go to Google Cloud Console
  2. Create a new project or select an existing one
  3. Enable the Google Calendar API:
    • Go to "APIs & Services" > "Library"
    • Search for "Google Calendar API"
    • Click "Enable"

Step 2: Create OAuth 2.0 Credentials

  1. Go to "APIs & Services" > "Credentials"
  2. Click "Create Credentials" > "OAuth 2.0 Client IDs"
  3. Choose "Desktop application" as application type
  4. Give it a name (e.g., "Calendar MCP Client")
  5. Click "Create"
  6. Download the JSON file and rename it to credentials.json

Step 3: Place Credentials

  1. Create the secrets directory:
    mkdir -p mcp-server/secrets
    
  2. Move the downloaded credentials.json to mcp-server/secrets/

2. Generate Google OAuth Token

Run the token generation script locally (not in Docker):

cd mcp-server
python get_google_token.py

This will:

  • Open a browser window for Google OAuth
  • Generate token.pickle and token.json in the secrets/ directory
  • Store your authenticated session
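
For reference, here is a minimal sketch of what an installed-app token-generation flow like this typically looks like, assuming the standard google-auth-oauthlib package (the repository's get_google_token.py may differ in its details):

# Hedged sketch of an installed-app OAuth flow; paths and scope are assumptions.
import pickle
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/calendar"]

flow = InstalledAppFlow.from_client_secrets_file("secrets/credentials.json", SCOPES)
creds = flow.run_local_server(port=0)  # opens a browser window for Google consent

with open("secrets/token.pickle", "wb") as f:
    pickle.dump(creds, f)              # reused later by the MCP server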

3. Start the System

Option A: Smart Startup Script (Recommended)

./start_system.sh

This script will:

  • Validate your environment and credentials
  • Check system resources (memory, GPU)
  • Build and start all services
  • Verify all services are healthy
  • Display service URLs and test commands
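
The health-verification step boils down to polling each service until it responds. Below is a minimal Python sketch of that idea (the actual start_system.sh is a shell script and may check more; the snippet assumes the requests package):

# Polls the same endpoints listed under Health Checks below.
import time
import requests

SERVICES = {
    "vllm":  "http://localhost:8080/v1/models",   # may require the VLLM_API_KEY header
    "mcp":   "http://localhost:7000/health",
    "agent": "http://localhost:8088/health",
}

for name, url in SERVICES.items():
    for _ in range(30):                            # up to ~5 minutes per service
        try:
            if requests.get(url, timeout=5).status_code == 200:
                print(f"{name} is healthy")
                break
        except requests.RequestException:
            pass
        time.sleep(10)
    else:
        raise SystemExit(f"{name} did not become healthy")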

Option B: Manual Startup

# Build all containers
./build.sh

# Or start with Docker Compose
docker-compose up --build

The system will be available at:

  • Agent API: http://localhost:8088
  • MCP Server: http://localhost:7000
  • vLLM Server: http://localhost:8080

🔧 Configuration

Environment Variables

Key configuration options in docker-compose.yml:

# vLLM Configuration
VLLM_MODEL: qwen/Qwen3-1.7B    # Model to serve
VLLM_PORT: "8080"              # API port
VLLM_API_KEY: token-abc123     # API key for authentication

# Agent Configuration
OPENAI_BASE_URL: http://vllm:8080/v1  # vLLM endpoint
MCP_HTTP_URL: http://mcp:7000/mcp     # MCP endpoint
SERVED_MODEL: qwen3-small             # Model name
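
As an illustration, the agent side might consume these variables roughly as follows. This is a hedged sketch; the actual handling in agent.py may differ, and the defaults below simply mirror the docker-compose values above.

# Reads the environment variables listed above; anything else is an assumption.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "http://vllm:8080/v1"),
    api_key=os.environ.get("VLLM_API_KEY", "token-abc123"),
)
MODEL = os.environ.get("SERVED_MODEL", "qwen3-small")        # name the agent passes to vLLM
MCP_URL = os.environ.get("MCP_HTTP_URL", "http://mcp:7000/mcp")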

Model Configuration

To use a different model, modify the VLLM_MODEL environment variable in docker-compose.yml:

VLLM_MODEL: "Qwen/Qwen3-30B-A3B"  # Larger model 

Tool Call Configuration

The vLLM server is configured with tool-call support for MCP integration:

# vLLM Tool Call Settings (in vllm/Dockerfile)
--enable-auto-tool-choice    # Enable automatic tool selection
--tool-call-parser hermes    # Use Hermes parser for tool calls
--trust-remote-code         # Trust model's remote code execution

Important Notes:

  • The --tool-call-parser hermes is essential for proper MCP tool integration
  • The --enable-auto-tool-choice allows the model to automatically select appropriate tools
  • These settings enable the agent to use calendar management tools via MCP
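
You can exercise this tool-call path against vLLM's OpenAI-compatible API directly. The snippet below is a hedged example: the tool schema is illustrative rather than the MCP server's exact definition, and the model name assumes the default VLLM_MODEL (your deployment may expose it under the SERVED_MODEL name instead).

# Illustrative tool schema; requires --enable-auto-tool-choice and a tool-call parser.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="token-abc123")

tools = [{
    "type": "function",
    "function": {
        "name": "check_availability",              # hypothetical tool name
        "description": "Check whether a time slot is free",
        "parameters": {
            "type": "object",
            "properties": {
                "start": {"type": "string", "description": "ISO 8601 start time"},
                "end":   {"type": "string", "description": "ISO 8601 end time"},
            },
            "required": ["start", "end"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen/Qwen3-1.7B",
    messages=[{"role": "user", "content": "Am I free tomorrow at 2 PM?"}],
    tools=tools,
    tool_choice="auto",
)
print(resp.choices[0].message.tool_calls)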

🧪 Testing

Comprehensive System Tests (Recommended)

cd tests && python test_comprehensive.py

This will test:

  • All service health checks
  • Input validation and error handling
  • Rate limiting functionality
  • Agent chat functionality
  • API endpoints and responses

Individual Component Tests

Test vLLM Server

./tests/test_vllm.sh

Test MCP Server

cd tests
python testmcp.py

Test Agent API

# Send a chat message
curl -X POST http://localhost:8088/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Schedule a meeting tomorrow at 2 PM for 30 minutes"}'

Quick Health Checks

# Check vLLM health
curl http://localhost:8080/v1/models

# Check MCP health
curl http://localhost:7000/health

# Check agent health
curl http://localhost:8088/health

📋 Available MCP Tools

The MCP server provides these calendar management tools:

  1. Check Availability - Verify if a time slot is free
  2. Get Free Slots - Find available time slots on a specific date
  3. Create Meeting - Schedule a new meeting with attendees
  4. Update Meeting - Modify existing meeting details
  5. List Meetings - View meetings in a time range
  6. Suggest Times - Get meeting time recommendations
  7. Delete Meeting - Remove scheduled meetings
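
To see the tools the running server actually advertises, you can list them with the official MCP Python SDK. This is a hedged sketch: it assumes the streamable HTTP transport at /mcp; if the server exposes SSE instead, swap in mcp.client.sse.sse_client around the same session code.

# Lists tool names and descriptions from the MCP endpoint; the transport is an assumption.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("http://localhost:7000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())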

๐ŸŒ External Access

ngrok Setup for External Access

To expose your local system to the internet (for testing or integration), use the project's automated ngrok setup script.

Prerequisites

  1. ngrok Account: Sign up at https://ngrok.com/
  2. Authtoken: Get your authtoken from https://dashboard.ngrok.com/get-started/your-authtoken

Method 1: Environment Variable (Recommended)

Set your ngrok authtoken as an environment variable:

export NGROK_AUTHTOKEN='your_ngrok_authtoken_here'

Then run the ngrok script:

./ngrok.sh

Method 2: Interactive Setup

Simply run the script and it will prompt you for your authtoken:

./ngrok.sh

The script will:

  • Check if ngrok is installed
  • Prompt for authtoken if not set in environment
  • Configure ngrok with tunnels for all services
  • Start tunnels for MCP (7000), Agent (8088), and vLLM (8080)
  • Display public URLs for external access

What You'll See
๐ŸŒ Setting up ngrok tunnels for external access
================================================
[INFO] Checking ngrok installation...
[INFO] ngrok is installed โœ“
[INFO] Using authtoken from environment variable
[INFO] Configuring ngrok...
[INFO] ngrok configuration created โœ“
[INFO] Starting ngrok tunnels...
[INFO] Starting tunnels for MCP (7000), Agent (8088), and vLLM (8080)...
[INFO] Waiting for tunnels to be ready...
[INFO] Getting tunnel URLs...

๐ŸŒ ngrok External Access URLs
==============================
MCP Server:   https://abc123.ngrok-free.app
Agent API:    https://def456.ngrok-free.app
vLLM Server:  https://ghi789.ngrok-free.app

๐Ÿ“Š ngrok Dashboard: http://localhost:4040
๐Ÿ›‘ Stop ngrok: pkill -f ngrok

[INFO] ngrok tunnels started successfully โœ“
ngrok Dashboard

Access the ngrok web interface at http://localhost:4040 to monitor your tunnels.
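
The same tunnel information is available programmatically from the ngrok agent's local API (the data behind the 4040 dashboard). A small sketch, assuming the requests package:

# Reads the public tunnel URLs from the local ngrok agent API.
import requests

tunnels = requests.get("http://localhost:4040/api/tunnels", timeout=5).json()["tunnels"]
for t in tunnels:
    print(f"{t['name']}: {t['public_url']}")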

Security Notes

  • Authtoken Security: Never commit your ngrok authtoken to version control
  • Environment Variables: Use environment variables for CI/CD deployments
  • Tunnel Access: ngrok tunnels are publicly accessible - use with caution in production
  • Rate Limits: Free ngrok accounts have rate limits on concurrent connections

Stopping ngrok

# Stop all ngrok processes
pkill -f ngrok

# Or stop specific tunnels
ngrok stop mcp agent vllm

Integration with Startup Script

The start_system.sh script automatically calls ngrok.sh after all services are healthy:

./start_system.sh
# This will start all services and then run ngrok.sh automatically

๐Ÿ” Troubleshooting

Common Issues

  1. Google Authentication Errors

    • Ensure credentials.json is properly placed in mcp-server/secrets/
    • Re-run python get_google_token.py if tokens expire
    • Check that Google Calendar API is enabled in your project
  2. vLLM GPU Issues

    • Verify NVIDIA drivers and CUDA are installed
    • Check nvidia-smi output
    • Ensure nvidia-container-toolkit is installed
  3. Port Conflicts

    • Check if ports 8080, 7000, or 8088 are already in use
    • Modify ports in docker-compose.yml if needed
  4. Memory Issues

    • Increase shm_size in docker-compose.yml for large models
    • Reduce model size if running on limited hardware

Logs and Debugging

View logs for specific services:

# View all logs
docker-compose logs

# View specific service logs
docker-compose logs vllm
docker-compose logs mcp
docker-compose logs agent

# Follow logs in real-time
docker-compose logs -f

Health Checks

The system includes health checks for all services:

# Check vLLM health
curl http://localhost:8080/v1/models

# Check MCP health
curl http://localhost:7000/health

# Check agent health
curl http://localhost:8088/health

Automated Testing and Validation

Run comprehensive tests:

cd tests && python test_comprehensive.py

Use the smart startup script for validation:

./start_system.sh

This will automatically validate your environment and check all services.

🔒 Security Considerations

  • API Keys: Change default API keys in production
  • Google Credentials: Keep secrets/ directory secure and never commit to version control
  • Network Access: Use firewalls to restrict access to production deployments
  • HTTPS: Use reverse proxy with SSL for production deployments

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Submit a pull request

📄 License

This project is licensed under the Apache License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments