Time Service
A simple Go web service that provides the current server time through both a standard REST API and a Model Context Protocol (MCP) server interface over HTTP.
Features
- REST API: Simple endpoint to get current server time
- Named Locations: SQLite-backed storage for custom location management
- MCP Server: Model Context Protocol server with time-related tools
- Authentication & Authorization: OAuth2/OIDC with JWT-based claims authorization
- Structured Logging: JSON-formatted logs with slog
- Graceful Shutdown: Proper cleanup on termination signals
- Middleware Stack: Logging, recovery, authentication, and CORS support
- Prometheus Metrics: HTTP and MCP metrics including auth and database metrics
- Minimal Docker Image: Multi-stage build producing a small (~16MB) image
Quick Start
IMPORTANT: The server requires CORS configuration to start. For local development, use the ALLOW_CORS_WILDCARD_DEV=true environment variable.
Run Locally
HTTP Server Mode (for remote access)
# Download dependencies
make deps
# Run the server (development mode with wildcard CORS)
ALLOW_CORS_WILDCARD_DEV=true make run
The server will start on port 8080 (or the port specified in the PORT environment variable).
For production, always set explicit allowed origins:
ALLOWED_ORIGINS="https://example.com,https://app.example.com" make run
Stdio Mode (for Claude Code / MCP clients)
# Run in stdio mode for MCP communication
# Note: stdio mode doesn't require CORS configuration
go run cmd/server/main.go --stdio
This mode communicates via stdin/stdout using JSON-RPC, which is required for Claude Code and other local MCP clients.
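The stdio framing is plain JSON-RPC 2.0, one message per line. As an illustrative sketch (the envelope below is the standard JSON-RPC shape, not code from this repository), a client would write requests like this to the server's stdin:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest is the minimal JSON-RPC 2.0 envelope an MCP client sends.
type rpcRequest struct {
	JSONRPC string `json:"jsonrpc"`
	ID      int    `json:"id"`
	Method  string `json:"method"`
	Params  any    `json:"params,omitempty"`
}

// encodeRequest renders one request as a single line, ready to be
// written to the server's stdin.
func encodeRequest(id int, method string, params any) (string, error) {
	b, err := json.Marshal(rpcRequest{JSONRPC: "2.0", ID: id, Method: method, Params: params})
	return string(b), err
}

func main() {
	line, err := encodeRequest(1, "tools/list", nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(line) // this line would be written to the server's stdin
}
```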
Build Binary
make build
# Run with development CORS (local only)
ALLOW_CORS_WILDCARD_DEV=true ./bin/server
# Or with explicit origins (production)
ALLOWED_ORIGINS="https://example.com" ./bin/server
Run with Docker
# Build image (creates both v1.0.0 and latest tags)
make docker
# Run with versioned tag (recommended)
docker run -p 8080:8080 -e ALLOW_CORS_WILDCARD_DEV=true timeservice:v1.0.0
# Or run with latest tag (local dev only)
docker run -p 8080:8080 -e ALLOW_CORS_WILDCARD_DEV=true timeservice:latest
Production Note: Always use versioned tags (v1.0.0) or image digests (@sha256:...) for production deployments to ensure deterministic, reproducible deployments. The latest tag is mutable and should only be used for local development.
Run with Docker Compose (Hardened)
The project includes a hardened docker-compose.yml with security best practices:
docker-compose up
This configuration includes:
- Read-only root filesystem
- Dropped capabilities (ALL)
- No new privileges
- Resource limits
- Non-root user execution
- Tmpfs for writable directories
API Endpoints
1. Root Endpoint
Get service information:
curl http://localhost:8080/
Response:
{
"service": "timeservice",
"version": "1.0.0",
"endpoints": {
"time": "GET /api/time",
"locations": "GET /api/locations",
"location_detail": "GET /api/locations/{name}",
"location_time": "GET /api/locations/{name}/time",
"mcp": "POST /mcp",
"health": "GET /health"
},
"mcp_info": "Supports both stdio mode (--stdio flag) and HTTP transport (POST /mcp)"
}
2. Time Endpoint
Get the current server time:
curl http://localhost:8080/api/time
Response:
{
"current_time": "2025-10-17T15:30:45.123456Z",
"unix_time": 1760715045,
"timezone": "UTC",
"formatted": "2025-10-17T15:30:45Z"
}
3. Health Endpoint
Check service health:
curl http://localhost:8080/health
Response:
{
"status": "healthy",
"time": "2025-10-17T15:30:45Z"
}
MCP Server Endpoint
The service includes a Model Context Protocol (MCP) server that exposes time-related tools for AI agents and other clients.
List Available Tools
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/list"
}'
Response:
{
"result": {
"tools": [
{
"name": "get_current_time",
"description": "Get the current server time in various formats",
"inputSchema": {
"type": "object",
"properties": {
"format": {
"type": "string",
"description": "Time format (iso8601, unix, rfc3339, or custom Go format)",
"default": "iso8601"
},
"timezone": {
"type": "string",
"description": "IANA timezone (e.g., America/New_York, UTC)",
"default": "UTC"
}
}
}
},
{
"name": "add_time_offset",
"description": "Add a time offset to the current time",
"inputSchema": {
"type": "object",
"properties": {
"hours": {
"type": "number",
"description": "Hours to add (can be negative)",
"default": 0
},
"minutes": {
"type": "number",
"description": "Minutes to add (can be negative)",
"default": 0
},
"format": {
"type": "string",
"description": "Output format",
"default": "iso8601"
}
}
}
}
]
}
}
Call a Tool: Get Current Time
Get the current time in ISO8601 format (UTC):
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "get_current_time",
"arguments": {
"format": "iso8601",
"timezone": "UTC"
}
}
}'
Get the current time in a specific timezone:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "get_current_time",
"arguments": {
"format": "rfc3339",
"timezone": "America/New_York"
}
}
}'
Get the current Unix timestamp:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "get_current_time",
"arguments": {
"format": "unix"
}
}
}'
Call a Tool: Add Time Offset
Add 3 hours to the current time:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "add_time_offset",
"arguments": {
"hours": 3,
"minutes": 0,
"format": "rfc3339"
}
}
}'
Subtract 30 minutes from the current time:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "add_time_offset",
"arguments": {
"hours": 0,
"minutes": -30,
"format": "iso8601"
}
}
}'
Named Location Management
The service provides database-backed storage for managing named locations with their associated IANA timezones. This allows you to define custom location names (like "headquarters", "tokyo-office", "datacenter-west") and query the current time for those locations without remembering timezone strings.
Location Storage
- Database: SQLite with performance optimizations (WAL mode, 64MB cache)
- Schema: Case-insensitive location names, IANA timezone validation
- Persistence: Data stored in `data/timeservice.db` (configurable via `DB_PATH`)
- Auto-migrations: Schema automatically created and updated on startup
Location API Endpoints
Create a Location
Create a new named location (requires locations:write permission when auth is enabled):
curl -X POST http://localhost:8080/api/locations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"name": "headquarters",
"timezone": "America/New_York",
"description": "Company HQ in NYC"
}'
Response:
{
"id": 1,
"name": "headquarters",
"timezone": "America/New_York",
"description": "Company HQ in NYC",
"created_at": "2025-10-19T10:00:00Z",
"updated_at": "2025-10-19T10:00:00Z"
}
List All Locations
Get all configured locations:
curl http://localhost:8080/api/locations
Response:
{
"locations": [
{
"id": 1,
"name": "headquarters",
"timezone": "America/New_York",
"description": "Company HQ in NYC",
"created_at": "2025-10-19T10:00:00Z",
"updated_at": "2025-10-19T10:00:00Z"
},
{
"id": 2,
"name": "tokyo-office",
"timezone": "Asia/Tokyo",
"description": "Tokyo branch office",
"created_at": "2025-10-19T10:05:00Z",
"updated_at": "2025-10-19T10:05:00Z"
}
]
}
Get a Specific Location
Retrieve details for a named location:
curl http://localhost:8080/api/locations/headquarters
Response:
{
"id": 1,
"name": "headquarters",
"timezone": "America/New_York",
"description": "Company HQ in NYC",
"created_at": "2025-10-19T10:00:00Z",
"updated_at": "2025-10-19T10:00:00Z"
}
Get Current Time for a Location
Get the current time for a named location:
curl http://localhost:8080/api/locations/headquarters/time
Response:
{
"location": "headquarters",
"timezone": "America/New_York",
"current_time": "2025-10-19T06:30:45.123456-04:00",
"unix_time": 1760869845,
"formatted": "2025-10-19T06:30:45-04:00"
}
Update a Location
Update an existing location's timezone or description (requires locations:write permission):
curl -X PUT http://localhost:8080/api/locations/headquarters \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"timezone": "America/Los_Angeles",
"description": "Company HQ relocated to LA"
}'
Delete a Location
Remove a named location (requires locations:write permission):
curl -X DELETE http://localhost:8080/api/locations/headquarters \
-H "Authorization: Bearer $TOKEN"
Response:
{
"message": "location deleted successfully"
}
Location MCP Tools
The MCP server provides tools for managing locations through AI agents and other MCP clients.
Add Location Tool
Add a new named location:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "add_location",
"arguments": {
"name": "london-office",
"timezone": "Europe/London",
"description": "London branch office"
}
}
}'
List Locations Tool
List all configured locations:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "list_locations",
"arguments": {}
}
}'
Get Location Time Tool
Get current time for a named location:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "get_location_time",
"arguments": {
"name": "london-office",
"format": "rfc3339"
}
}
}'
Update Location Tool
Update an existing location:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "update_location",
"arguments": {
"name": "london-office",
"timezone": "Europe/Paris",
"description": "Relocated to Paris"
}
}
}'
Remove Location Tool
Remove a named location:
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{
"method": "tools/call",
"params": {
"name": "remove_location",
"arguments": {
"name": "london-office"
}
}
}'
Location Database Configuration
Configure the SQLite database location and performance settings:
| Variable | Default | Description |
|---|---|---|
| `DB_PATH` | `data/timeservice.db` | Path to SQLite database file |
| `DB_MAX_OPEN_CONNS` | `25` | Maximum open database connections |
| `DB_MAX_IDLE_CONNS` | `5` | Maximum idle connections in pool |
| `DB_CACHE_SIZE_KB` | `64000` | Cache size in KB (converted to pages internally) |
| `DB_WAL_MODE` | `true` | Enable Write-Ahead Logging for better concurrency |
Example with custom database path:
DB_PATH=/var/lib/timeservice/locations.db \
ALLOWED_ORIGINS="https://example.com" \
./bin/server
Performance Tuning:
# For high-traffic workloads
DB_MAX_OPEN_CONNS=50 \
DB_CACHE_SIZE_KB=128000 \
./bin/server
# For low-memory environments
DB_MAX_OPEN_CONNS=10 \
DB_CACHE_SIZE_KB=32000 \
./bin/server
Database Backup and Restore
The service includes a backup script for creating consistent database backups:
Create a Backup:
# Basic usage
./scripts/backup-db.sh data/timeservice.db backups/
# With custom retention (days)
RETENTION_DAYS=30 ./scripts/backup-db.sh data/timeservice.db backups/
The script uses SQLite's VACUUM INTO command to create optimized, consistent backups and automatically removes backups older than the retention period (default: 7 days).
Restore from Backup:
# Stop the service
docker-compose down
# Replace database with backup
cp backups/timeservice_20251020_094227.db data/timeservice.db
# Restart service
docker-compose up -d
Docker Compose Backup:
# Backup volume data
docker run --rm -v time-server_timeservice-data:/data -v $(pwd)/backups:/backup alpine \
tar czf /backup/timeservice-data-$(date +%Y%m%d).tar.gz -C /data .
# Restore volume data
docker run --rm -v time-server_timeservice-data:/data -v $(pwd)/backups:/backup alpine \
tar xzf /backup/timeservice-data-YYYYMMDD.tar.gz -C /data
Kubernetes Backup:
# Copy database from pod
kubectl cp timeservice-0:/app/data/timeservice.db ./timeservice-backup.db
# Restore to pod
kubectl cp ./timeservice-backup.db timeservice-0:/app/data/timeservice.db
kubectl rollout restart statefulset timeservice
For automated backups in Kubernetes, see k8s/README.md for CronJob examples.
Location Use Cases
Team Coordination:
# Add team member locations
curl -X POST .../api/locations -d '{"name":"alice-home","timezone":"America/New_York",...}'
curl -X POST .../api/locations -d '{"name":"bob-home","timezone":"Europe/London",...}'
# Check what time it is for Alice
curl .../api/locations/alice-home/time
Multi-Region Infrastructure:
# Define datacenter locations
curl -X POST .../api/locations -d '{"name":"us-east-dc","timezone":"America/New_York",...}'
curl -X POST .../api/locations -d '{"name":"eu-west-dc","timezone":"Europe/Dublin",...}'
curl -X POST .../api/locations -d '{"name":"ap-south-dc","timezone":"Asia/Singapore",...}'
# Check maintenance window times
curl .../api/locations/us-east-dc/time
International Business Hours:
# Store office locations
curl -X POST .../api/locations -d '{"name":"corporate","timezone":"America/Chicago",...}'
curl -X POST .../api/locations -d '{"name":"apac-support","timezone":"Asia/Tokyo",...}'
# Quickly check if offices are open
for loc in corporate apac-support; do
echo "$loc: $(curl -s .../api/locations/$loc/time | jq -r .formatted)"
done
Configuration
The service can be configured through environment variables. All configuration is validated at startup, and the server will fail to start if invalid values are provided.
Server Configuration
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `PORT` | `8080` | HTTP server port | 1-65535 |
| `HOST` | (empty: all interfaces) | Bind address | Any valid IP or hostname |
Logging Configuration
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `LOG_LEVEL` | `info` | Logging level | `debug`, `info`, `warn`, `warning`, `error` |
CORS Configuration
SECURITY-CRITICAL: CORS configuration is required for the server to start.
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `ALLOWED_ORIGINS` | REQUIRED | Allowed CORS origins (comma-separated) | Explicit origins like `https://example.com,https://app.example.com` |
| `ALLOW_CORS_WILDCARD_DEV` | (none) | Dev-only escape hatch to allow wildcard CORS | `true` to enable `*` origin (DEVELOPMENT ONLY) |
Security Notes:
- `ALLOWED_ORIGINS` is REQUIRED: The server will fail to start if `ALLOWED_ORIGINS` is not set, preventing accidental wildcard CORS in production.
- No wildcard default: There is no default value. You must explicitly configure allowed origins.
- Production: Always use explicit origins (e.g., `ALLOWED_ORIGINS="https://example.com,https://app.example.com"`).
- Development only: Use `ALLOW_CORS_WILDCARD_DEV=true` to enable wildcard CORS (`*`) for local development. This is a conscious opt-in that prevents accidental production exposure.
- Why this matters: Wildcard CORS (`*`) allows any website to make authenticated requests to your API, potentially stealing cookies, session tokens, and user data. This is a critical security vulnerability.
Example - Production (CORRECT):
ALLOWED_ORIGINS="https://example.com,https://app.example.com" ./bin/server
Example - Development (USE WITH CAUTION):
ALLOW_CORS_WILDCARD_DEV=true ./bin/server
What happens without configuration:
$ ./bin/server
Configuration error: invalid configuration: ALLOWED_ORIGINS is required. Set explicit origins (e.g., ALLOWED_ORIGINS="https://example.com") or use ALLOW_CORS_WILDCARD_DEV=true for development ONLY. Wildcard CORS (*) is a security vulnerability in production
Timeout Configuration
All timeout values use Go duration format (e.g., 10s, 1m, 500ms).
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `READ_TIMEOUT` | `10s` | Maximum duration for reading a request | Positive duration |
| `WRITE_TIMEOUT` | `10s` | Maximum duration for writing a response | Positive duration |
| `IDLE_TIMEOUT` | `60s` | Maximum idle time between requests | Positive duration |
| `READ_HEADER_TIMEOUT` | `5s` | Maximum duration for reading request headers | Positive duration |
| `SHUTDOWN_TIMEOUT` | `10s` | Maximum duration for graceful shutdown | Positive duration |
Resource Limits
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `MAX_HEADER_BYTES` | `1048576` (1MB) | Maximum size of request headers | 1-10485760 (1 byte - 10MB) |
Authentication & Authorization Configuration
SECURITY: The service supports OAuth2/OIDC authentication with JWT-based authorization using claims (roles, permissions, scopes). Authentication is opt-in for backward compatibility but strongly recommended for production.
| Variable | Default | Description | Valid Values |
|---|---|---|---|
| `AUTH_ENABLED` | `false` | Enable authentication (opt-in) | `true` or `false` |
| `OIDC_ISSUER_URL` | REQUIRED if auth enabled | OIDC provider URL | Valid HTTPS URL (e.g., `https://auth.example.com` or `https://login.microsoftonline.com/{tenant-id}/v2.0`) |
| `OIDC_AUDIENCE` | REQUIRED if auth enabled | Expected audience claim in JWT | Your service identifier (e.g., `timeservice` or `api://timeservice`) |
| `AUTH_PUBLIC_PATHS` | `/health,/,/metrics` | Comma-separated list of public paths (no auth) | Path patterns (e.g., `/health,/,/metrics`) |
| `AUTH_REQUIRED_ROLE` | (none) | Required role for all protected endpoints | Role name (e.g., `time-reader`) |
| `AUTH_REQUIRED_PERMISSION` | (none) | Required permission for all protected endpoints | Permission string (e.g., `time:read`) |
| `AUTH_REQUIRED_SCOPE` | (none) | Required OAuth2 scope for all protected endpoints | Scope string (e.g., `time:read`) |
| `OIDC_SKIP_EXPIRY_CHECK` | `false` | DANGEROUS: Skip token expiration check | `true` (dev only) |
| `OIDC_SKIP_CLIENT_ID_CHECK` | `false` | DANGEROUS: Skip audience validation | `true` (dev only) |
| `OIDC_SKIP_ISSUER_CHECK` | `false` | DANGEROUS: Skip issuer validation | `true` (dev only) |
| `ALLOW_HTTP_OIDC_DEV` | (none) | Allow HTTP (insecure) OIDC issuer for dev | `true` (dev only) |
Security Notes:
- Production recommendation: Always enable authentication in production with `AUTH_ENABLED=true`
- Provider-agnostic: Works with any OIDC-compliant provider (Auth0, Okta, Azure Entra ID, Keycloak, AWS Cognito, Google, etc.)
- Claims-based authorization: Fine-grained access control using JWT claims (roles, permissions, scopes)
- Stateless: No database lookups needed; all authorization data is in the JWT
- HTTPS required: OIDC issuer must use HTTPS in production (set `ALLOW_HTTP_OIDC_DEV=true` only for local testing)
Public Paths Explained:
- `/health` - Required for container health checks and load balancer probes
- `/` - Provides service discovery information (which endpoints exist)
- `/metrics` - Required for Prometheus scraping (monitoring tools don't typically use auth tokens)
CRITICAL: CORS and Auth Middleware Ordering
When authentication is enabled, the server requires proper middleware ordering to function correctly with browser clients:
- CORS middleware MUST come before Auth middleware in the chain
- Browser CORS preflight requests (OPTIONS) do not include the `Authorization` header
- If Auth runs before CORS, preflight requests receive 401 errors without CORS headers
- This causes browsers to block all requests to the API, making it unusable
The server is correctly configured with CORS → Auth ordering. If you modify the middleware chain in cmd/server/main.go, preserve this order or browser clients will break.
For a detailed explanation, see the project documentation.
Example - Production with Keycloak:
AUTH_ENABLED=true \
OIDC_ISSUER_URL="https://keycloak.example.com/realms/myrealm" \
OIDC_AUDIENCE="timeservice" \
AUTH_PUBLIC_PATHS="/health,/" \
AUTH_REQUIRED_ROLE="time-reader" \
ALLOWED_ORIGINS="https://app.example.com" \
./bin/server
Example - Production with Auth0:
AUTH_ENABLED=true \
OIDC_ISSUER_URL="https://your-tenant.auth0.com/" \
OIDC_AUDIENCE="https://timeservice.example.com" \
AUTH_PUBLIC_PATHS="/health,/,/metrics" \
AUTH_REQUIRED_SCOPE="time:read" \
ALLOWED_ORIGINS="https://app.example.com" \
./bin/server
Example - Production with Azure Entra ID:
AUTH_ENABLED=true \
OIDC_ISSUER_URL="https://login.microsoftonline.com/{tenant-id}/v2.0" \
OIDC_AUDIENCE="api://timeservice" \
AUTH_PUBLIC_PATHS="/health,/" \
AUTH_REQUIRED_ROLE="time-reader" \
ALLOWED_ORIGINS="https://app.example.com" \
./bin/server
Example - Development (local OIDC for testing):
AUTH_ENABLED=true \
OIDC_ISSUER_URL="http://localhost:8080/realms/test" \
OIDC_AUDIENCE="timeservice" \
ALLOW_HTTP_OIDC_DEV=true \
ALLOW_CORS_WILDCARD_DEV=true \
./bin/server
Authenticated API Request Example:
# Obtain JWT token from your OIDC provider first
TOKEN="eyJhbGc..."
# Make authenticated request
curl http://localhost:8080/api/time \
-H "Authorization: Bearer $TOKEN"
What happens when auth is disabled (default):
$ ./bin/server
{"level":"INFO","msg":"authentication disabled - all endpoints are unprotected","recommendation":"enable auth in production with AUTH_ENABLED=true"}
What happens when auth is enabled without required config:
$ AUTH_ENABLED=true ./bin/server
Configuration error: invalid configuration: OIDC_ISSUER_URL is required when AUTH_ENABLED=true
For detailed authentication setup instructions, provider examples, and security best practices, see the project's authentication documentation.
Configuration Examples
Basic Production Configuration:
PORT=8080 \
LOG_LEVEL=info \
ALLOWED_ORIGINS="https://example.com,https://app.example.com" \
READ_TIMEOUT=15s \
WRITE_TIMEOUT=15s \
make run
Development Configuration with Debug Logging:
PORT=3000 \
LOG_LEVEL=debug \
ALLOW_CORS_WILDCARD_DEV=true \
make run
High-Performance Configuration:
PORT=8080 \
ALLOWED_ORIGINS="https://api.example.com,https://app.example.com" \
READ_TIMEOUT=5s \
WRITE_TIMEOUT=5s \
IDLE_TIMEOUT=30s \
MAX_HEADER_BYTES=524288 \
make run
Docker Compose Configuration:
environment:
- PORT=8080
- LOG_LEVEL=info
- ALLOWED_ORIGINS=https://example.com,https://app.example.com
- READ_TIMEOUT=15s
- WRITE_TIMEOUT=15s
Configuration Validation
The server validates all configuration on startup and will exit with an error message if any values are invalid:
# Missing ALLOWED_ORIGINS example
$ ./bin/server
Configuration error: invalid configuration: ALLOWED_ORIGINS is required. Set explicit origins (e.g., ALLOWED_ORIGINS="https://example.com") or use ALLOW_CORS_WILDCARD_DEV=true for development ONLY. Wildcard CORS (*) is a security vulnerability in production
# Invalid port example
$ PORT=999999 ALLOWED_ORIGINS="https://example.com" ./bin/server
Configuration error: invalid configuration: invalid PORT 999999: must be between 1 and 65535
# Invalid timeout example
$ READ_TIMEOUT=-5s ALLOWED_ORIGINS="https://example.com" ./bin/server
Configuration error: invalid configuration: READ_TIMEOUT must be positive, got -5s
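The checks behind these messages are straightforward. An illustrative sketch of the port and origins validation (the real logic lives in `pkg/config`; the `validate` helper here is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validate mirrors the startup checks described above: a port in
// 1-65535 and a non-empty ALLOWED_ORIGINS value.
func validate(port, origins string) error {
	p, err := strconv.Atoi(port)
	if err != nil || p < 1 || p > 65535 {
		return fmt.Errorf("invalid PORT %s: must be between 1 and 65535", port)
	}
	if strings.TrimSpace(origins) == "" {
		return fmt.Errorf("ALLOWED_ORIGINS is required")
	}
	return nil
}

func main() {
	fmt.Println(validate("999999", "https://example.com")) // port out of range
	fmt.Println(validate("8080", ""))                      // missing origins
	fmt.Println(validate("8080", "https://example.com"))   // valid
}
```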
Configuration values are logged at startup (at INFO level) for debugging deployment issues:
{
"time": "2025-10-18T09:22:14Z",
"level": "INFO",
"msg": "configuration loaded",
"port": "8080",
"log_level": "INFO",
"allowed_origins": ["https://example.com","https://app.example.com"],
"read_timeout": 10000000000,
"write_timeout": 10000000000,
"idle_timeout": 60000000000
}
If wildcard CORS is detected (via ALLOW_CORS_WILDCARD_DEV=true), a warning will be logged:
{
"time": "2025-10-18T09:22:14Z",
"level": "WARN",
"msg": "wildcard CORS (*) is enabled - this is INSECURE for production",
"recommendation": "set explicit origins in ALLOWED_ORIGINS",
"dev_only": "use ALLOW_CORS_WILDCARD_DEV=true only in development"
}
Development
Project Structure
timeservice/
├── cmd/ # Command-line applications
│ ├── server/ # Main server application
│ │ └── main.go
│ └── healthcheck/ # Healthcheck utility
│ └── main.go
├── internal/ # Private application code
│ ├── handler/ # HTTP handlers
│ ├── mcpserver/ # MCP server implementation (using mcp-go SDK)
│ ├── middleware/ # HTTP middleware (CORS, logging, metrics, recovery)
│ └── testutil/ # Testing utilities
├── pkg/ # Public packages
│ ├── config/ # Configuration management
│ ├── metrics/ # Prometheus metrics
│ ├── model/ # Data models
│ └── version/ # Version information
├── k8s/ # Kubernetes deployment manifests
│ ├── deployment.yaml # K8s deployment with ServiceMonitor
│ └── prometheus.yml # Sample Prometheus configuration
├── docs/ # Documentation
│ └── TESTING.md # Testing guide
├── bin/ # Compiled binaries (gitignored)
│ ├── server # Main server binary
│ └── healthcheck # Healthcheck binary
├── run-mcp.sh # Helper script to run in stdio mode
├── Makefile # Build commands
├── Dockerfile # Multi-stage container image
├── docker-compose.yml # Docker Compose configuration
└── README.md
Available Make Commands
make help # Show available commands
make build # Build binary
make run # Run server
make test # Run tests
make fmt # Format code
make lint # Lint code
make clean # Remove build artifacts
make deps # Download dependencies
make docker # Build Docker image
Pre-commit Hooks
The project includes pre-commit hooks to enforce code quality and prevent committing binaries or coverage files.
Traditional Git Hooks
A pre-commit hook is automatically installed in .git/hooks/pre-commit that prevents committing:
- Binaries (`.exe`, `.dll`, `.so`, `.dylib`)
- Build artifacts in the `bin/` directory
- Test binaries (`.test`)
- Coverage files (`.out`, `.coverprofile`)
The hook runs automatically on every commit. If it detects forbidden files, it will:
- Block the commit
- Display which files matched forbidden patterns
- Provide instructions on how to fix the issue
Modern Pre-commit Framework (Optional)
For teams using the pre-commit framework, a .pre-commit-config.yaml is provided with additional checks:
Setup:
# Install pre-commit (if not already installed)
pip install pre-commit
# Install the git hooks
pre-commit install
# Run hooks manually on all files
pre-commit run --all-files
Included Checks:
- File size limits (max 500KB)
- Merge conflict detection
- YAML syntax validation
- Go formatting (`go fmt`)
- Go vetting (`go vet`)
- Go imports organization
- Go mod tidy
- Go build verification
- Go test execution
- Binary and coverage file prevention
.gitignore
The .gitignore file prevents accidentally adding:
- Build artifacts (`bin/`, `*.exe`, etc.)
- Test binaries (`*.test`)
- Coverage files (`*.out`, `coverage.html`)
- IDE files (`.idea/`, `.vscode/`)
- Environment files (`.env*`)
- OS files (`.DS_Store`)
- Temporary files (`tmp/`, `*.tmp`)
All developers should ensure their local builds respect these ignore rules.
Architecture
This service follows idiomatic Go web service patterns:
- Separation of Concerns: Handler → Service → Store layers (simplified for this example)
- Structured Logging: Using `log/slog` for structured, JSON-formatted logs
- Middleware Chain: Composable middleware for cross-cutting concerns
- Graceful Shutdown: Proper cleanup on SIGINT/SIGTERM
- Context Propagation: Request context passed through all layers
- Minimal Dependencies: Relies primarily on Go standard library
Using with Claude Desktop
To use this MCP server with Claude Desktop, add the following configuration to your Claude Desktop MCP settings file:
macOS/Linux: ~/.config/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"timeservice": {
"command": "/full/path/to/time-server/bin/server",
"args": ["--stdio"],
"description": "Time service providing current time and time offset calculations"
}
}
}
Replace /full/path/to/time-server with the actual absolute path to this project directory.
After adding the configuration:
1. Build the server: `make build`
2. Restart Claude Desktop
3. The timeservice tools will be available to Claude
You can verify it's working by asking Claude: "What time is it in Tokyo right now?"
Available MCP Tools
Time Tools:
- `get_current_time` - Get current server time in various formats and timezones
  - Parameters: `format` (iso8601, unix, unixmilli, rfc3339), `timezone` (IANA timezone name)
- `add_time_offset` - Add hours/minutes offset to current time
  - Parameters: `hours` (number), `minutes` (number), `format` (output format)

Location Management Tools:
- `add_location` - Add a named location with timezone
  - Parameters: `name` (string), `timezone` (IANA timezone), `description` (string, optional)
- `list_locations` - List all configured locations
  - Parameters: none
- `get_location_time` - Get current time for a named location
  - Parameters: `name` (string), `format` (output format, optional)
- `update_location` - Update an existing location
  - Parameters: `name` (string), `timezone` (IANA timezone, optional), `description` (string, optional)
- `remove_location` - Remove a named location
  - Parameters: `name` (string)
MCP Protocol
The Model Context Protocol (MCP) allows AI models to interact with tools and resources. This service implements an MCP server using the mcp-go SDK in two modes:
- Stdio mode (for Claude Desktop and local MCP clients): JSON-RPC over stdin/stdout
- HTTP mode (for remote access): JSON-RPC over HTTP POST using StreamableHTTPServer
MCP Methods
- `tools/list`: List all available tools
- `tools/call`: Call a specific tool with arguments
MCP Response Format
Successful response:
{
"result": { ... }
}
Error response:
{
"error": {
"code": 400,
"message": "error description"
}
}
Testing
Run the test suite:
make test
Run tests with race detector:
make test-race
Generate coverage report:
make test-coverage
Generate HTML coverage report:
make test-coverage-html
# Open coverage.html in your browser
CI/CD Pipeline
This project includes a comprehensive CI/CD pipeline using GitHub Actions that runs on every push and pull request.
GitHub Actions Workflow
The CI pipeline (.github/workflows/ci.yml) includes:
Test Job
- Multi-version testing: Tests against Go 1.22, 1.23, and 1.24
- Code formatting: Ensures code is formatted with
go fmt - Static analysis: Runs
go vetto catch common mistakes - Unit tests: Executes all tests with verbose output
- Race detection: Runs tests with the race detector enabled
- Coverage reporting: Generates and uploads coverage reports
- Codecov integration: Optional upload to Codecov for tracking coverage over time
Lint Job
- golangci-lint: Runs comprehensive linting with multiple linters enabled
- Timeout: 5-minute timeout for linting
- Parallel execution: Runs in parallel with tests
Build Job
- Binary compilation: Builds the server binary
- Artifact upload: Uploads binary as GitHub artifact (7-day retention)
- Size reporting: Reports binary size
Docker Job
- Image build: Builds Docker image using BuildKit
- Cache optimization: Uses GitHub Actions cache for faster builds
- Image testing: Validates the built image
- Size reporting: Reports final image size
Security Job
- Gosec scanner: Security-focused Go linter
- Trivy scanner: Vulnerability scanner for dependencies and code
- SARIF upload: Uploads security findings to GitHub Security tab
Local CI Simulation
Run all CI checks locally before pushing:
make ci-local
This runs:
- `make deps` - Download and verify dependencies
- `make fmt` - Format code
- `make vet` - Run go vet
- `make lint` - Run golangci-lint
- `make test-race` - Run tests with race detector
- `make test-coverage` - Generate coverage report
Linting Configuration
The project uses golangci-lint with a comprehensive configuration (.golangci.yml) that includes:
- Error checking: errcheck, gosec
- Code quality: gosimple, staticcheck, unused
- Style: gofmt, goimports, revive
- Performance: gocritic with performance checks
- Security: gosec with security-focused checks
- Best practices: bodyclose, nilerr, unconvert
Install golangci-lint:
# Linux/macOS
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin
# Or using Go
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
Run linting locally:
make lint
Coverage Artifacts
Coverage reports are automatically:
- Generated for each Go version tested
- Uploaded as GitHub Actions artifacts (30-day retention)
- Available for download from the Actions tab
- Optionally uploaded to Codecov for tracking trends
CI Badge
Add the CI status badge to your README (update the repository URL, and the workflow filename if it differs from `ci.yml`):
[![CI](https://github.com/yourorg/timeservice/actions/workflows/ci.yml/badge.svg)](https://github.com/yourorg/timeservice/actions)
Container Security Hardening
The Docker image has been hardened following security best practices:
Dockerfile Security Features
- Pinned Base Images
  - Uses specific versions: `golang:1.24-alpine` and `alpine:3.20`
  - Ensures reproducible builds and helps prevent supply chain attacks
- Minimal Attack Surface
  - Multi-stage build reduces final image size to ~16MB
  - Only includes necessary runtime dependencies (ca-certificates, tzdata)
  - No shell or unnecessary binaries in final image
- Non-Root User
  - Creates dedicated user `appuser` (UID 10001) and group `appgroup` (GID 10001)
  - All processes run as non-root by default
  - Application files owned by non-root user
- Timezone Data
  - Installed via `apk add --no-cache tzdata` in the builder stage
  - Copied from `/usr/share/zoneinfo` instead of using Go's embedded zoneinfo
  - Supports all IANA timezones without embedding them in the binary
- Build Security Flags
  - `-trimpath`: Removes absolute paths from the binary
  - `-w -s`: Strips debugging information
  - `-extldflags "-static"`: Creates a static binary (no dynamic dependencies)
  - `go mod verify`: Ensures dependencies haven't been tampered with
- Health Check
  - Built-in Docker HEALTHCHECK directive
  - Validates the container is functioning correctly
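Combined in the builder stage, the flags above might appear as a single RUN line. This is a sketch, not the project's actual Dockerfile; the output path `/server` is illustrative, while `./cmd/server` matches the entry point shown earlier:

```dockerfile
# Verify module integrity, then build a static, stripped, path-trimmed binary.
RUN go mod verify && \
    CGO_ENABLED=0 go build -trimpath \
      -ldflags='-w -s -extldflags "-static"' \
      -o /server ./cmd/server
```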
Runtime Security (Docker/Kubernetes)
The included docker-compose.yml and k8s/deployment.yaml demonstrate runtime hardening:
Docker Compose Features:
# Read-only root filesystem
read_only: true

# Drop all capabilities
cap_drop:
  - ALL

# Prevent privilege escalation
security_opt:
  - no-new-privileges:true

# Resource limits
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 128M
Kubernetes Security Context:
securityContext:
runAsNonRoot: true
runAsUser: 10001
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
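With `readOnlyRootFilesystem: true`, any path the process must write to (such as `/tmp`) needs an explicit writable mount, mirroring the `--tmpfs` flag in the Docker run example below. A sketch using an `emptyDir` volume; the volume name and mount here are illustrative, not taken from the actual manifest:

```yaml
volumes:
  - name: tmp
    emptyDir:
      medium: Memory
      sizeLimit: 10Mi
containers:
  - name: timeservice
    volumeMounts:
      - name: tmp
        mountPath: /tmp
```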
Container Image Scanning
The CI pipeline includes three scanning tools:
1. Trivy (Aqua Security)
   - Scans for OS and application vulnerabilities
   - Checks for misconfigurations
   - Results uploaded to GitHub Security tab
2. Grype (Anchore)
   - Multi-source vulnerability database
   - Catches CVEs across different sources
   - SARIF format for GitHub integration
3. Docker Scout
   - Official Docker vulnerability scanner
   - Integrated with Docker Hub CVE database
   - Provides remediation advice
All scan results are available in:
- GitHub Actions workflow logs
- GitHub Security → Code scanning alerts
- Downloadable SARIF artifacts
Running with Full Security
Docker Run:
docker run -d \
--name timeservice \
--read-only \
--cap-drop=ALL \
--security-opt=no-new-privileges:true \
--tmpfs /tmp:noexec,nosuid,size=10M \
-p 8080:8080 \
-e ALLOW_CORS_WILDCARD_DEV=true \
timeservice:v1.0.0
Docker Compose:
docker-compose up -d
Kubernetes:
# Build and tag image for production
docker build -t timeservice:v1.0.0 .
# Push to your container registry (update with your registry)
# docker tag timeservice:v1.0.0 your-registry.com/timeservice:v1.0.0
# docker push your-registry.com/timeservice:v1.0.0
# Deploy to Kubernetes
kubectl apply -f k8s/deployment.yaml
Note: The Kubernetes deployment uses image: timeservice:v1.0.0 for deterministic deployments. Update k8s/deployment.yaml with your registry URL and credentials if deploying to a real cluster.
Security Verification
Verify the container runs as non-root:
docker run --rm timeservice:v1.0.0 id
# Expected: uid=10001(appuser) gid=10001(appgroup)
Check image vulnerabilities:
docker scout cves timeservice:v1.0.0
# Or
trivy image timeservice:v1.0.0
Prometheus Observability
This service exposes Prometheus metrics for comprehensive observability and monitoring.
Metrics Endpoint
The /metrics endpoint exposes Prometheus-formatted metrics:
curl http://localhost:8080/metrics
Available Metrics
HTTP Metrics
| Metric | Type | Labels | Description |
|---|---|---|---|
| `timeservice_http_requests_total` | Counter | method, path, status | Total number of HTTP requests |
| `timeservice_http_request_duration_seconds` | Histogram | method, path | HTTP request duration in seconds |
| `timeservice_http_request_size_bytes` | Histogram | method, path | HTTP request size in bytes |
| `timeservice_http_response_size_bytes` | Histogram | method, path | HTTP response size in bytes |
| `timeservice_http_requests_in_flight` | Gauge | - | Number of HTTP requests currently being processed |
MCP Tool Metrics
| Metric | Type | Labels | Description |
|---|---|---|---|
| `timeservice_mcp_tool_calls_total` | Counter | tool, status | Total number of MCP tool calls |
| `timeservice_mcp_tool_call_duration_seconds` | Histogram | tool | MCP tool call duration in seconds |
| `timeservice_mcp_tool_calls_in_flight` | Gauge | - | Number of MCP tool calls currently being processed |
Application Metrics
| Metric | Type | Labels | Description |
|---|---|---|---|
| `timeservice_build_info` | Gauge | version, go_version | Build information (always 1) |
Standard Go Metrics
The service also exposes standard Go runtime metrics:
- `go_goroutines` - Number of goroutines
- `go_memstats_*` - Memory statistics
- `go_gc_*` - Garbage collection statistics
- `process_*` - Process statistics (CPU, memory, file descriptors)
Prometheus Configuration
Docker Compose
The docker-compose.yml includes labels for Prometheus service discovery:
labels:
- "prometheus.scrape=true"
- "prometheus.port=8080"
- "prometheus.path=/metrics"
Kubernetes
The Kubernetes deployment includes pod annotations for automatic Prometheus scraping:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
A ServiceMonitor resource is also provided for Prometheus Operator:
kubectl apply -f k8s/deployment.yaml
Standalone Prometheus
Example Prometheus configuration (k8s/prometheus.yml):
scrape_configs:
- job_name: 'timeservice'
static_configs:
- targets: ['localhost:8080']
metrics_path: '/metrics'
scrape_interval: 30s
Grafana Dashboards
Example Queries
Request Rate (requests/second):
rate(timeservice_http_requests_total[5m])
Request Duration (p95):
histogram_quantile(0.95, rate(timeservice_http_request_duration_seconds_bucket[5m]))
Error Rate:
rate(timeservice_http_requests_total{status=~"5.."}[5m])
/ rate(timeservice_http_requests_total[5m])
MCP Tool Success Rate:
rate(timeservice_mcp_tool_calls_total{status="success"}[5m])
/ rate(timeservice_mcp_tool_calls_total[5m])
In-Flight Requests:
timeservice_http_requests_in_flight
Creating a Dashboard
- Add your Prometheus server as a Grafana data source
- Create panels using the queries above
- Set up alerts for:
- High error rates (> 5%)
- High latency (p95 > 1s)
- Service down (no metrics scraped)
Monitoring Best Practices
Alerts
Recommended alerts:
High Error Rate:
- alert: HighErrorRate
expr: |
rate(timeservice_http_requests_total{status=~"5.."}[5m])
/ rate(timeservice_http_requests_total[5m]) > 0.05
for: 5m
annotations:
summary: "High error rate detected"
description: "Error rate is {{ $value | humanizePercentage }}"
High Latency:
- alert: HighLatency
expr: |
histogram_quantile(0.95,
rate(timeservice_http_request_duration_seconds_bucket[5m])
) > 1.0
for: 5m
annotations:
summary: "High latency detected"
description: "P95 latency is {{ $value }}s"
Service Down:
- alert: ServiceDown
expr: up{job="timeservice"} == 0
for: 1m
annotations:
summary: "Timeservice is down"
description: "Service has been down for more than 1 minute"
Recording Rules
Pre-compute common queries:
groups:
- name: timeservice
interval: 30s
rules:
- record: timeservice:http_requests:rate5m
expr: rate(timeservice_http_requests_total[5m])
- record: timeservice:http_request_duration:p95
expr: histogram_quantile(0.95, rate(timeservice_http_request_duration_seconds_bucket[5m]))
- record: timeservice:http_error_rate:rate5m
expr: |
rate(timeservice_http_requests_total{status=~"5.."}[5m])
/ rate(timeservice_http_requests_total[5m])
Testing Metrics
Generate test traffic:
# Start server
ALLOW_CORS_WILDCARD_DEV=true ./bin/server
# Generate requests
for i in {1..100}; do
curl -s http://localhost:8080/health > /dev/null
curl -s http://localhost:8080/api/time > /dev/null
done
# View metrics
curl http://localhost:8080/metrics | grep timeservice
Metrics Architecture
The metrics implementation follows Prometheus best practices:
- Automatic Instrumentation: HTTP middleware automatically tracks all requests
- Tool-Level Tracking: MCP tool calls are wrapped with metrics collection
- Cardinality Control: Labels are carefully chosen to prevent metric explosion
- Namespace: All metrics use the `timeservice` namespace to avoid conflicts
- Standard Buckets: Histograms use Prometheus default buckets for broad coverage
Documentation
Comprehensive documentation is available in the docs/ directory:
Core Documentation
- Version management practices and validation
- Testing strategy, coverage summary, and test organization
- Detailed coverage analysis and testing metrics
- Security practices, authentication, and threat model
- DevSecOps practices, security controls, and compliance
- System architecture and design decisions
- Critical CORS/Auth ordering explained
Architecture Decision Records (ADRs)
The docs/adr/ directory contains detailed architecture decisions:
- MCP-Go SDK adoption
- SQLite database selection
- Prometheus metrics strategy
- Structured logging with slog
- OAuth2/OIDC authorization
- MCP HTTP transport implementation
- Location-based time tracking feature
Implementation Plans
The docs/implementation-plans/ directory contains phased implementation guides for major features.
License
MIT License