jacobeverist/ts_api
Time Series API - Unified REST and MCP Server
A comprehensive time-series data API supporting both traditional REST endpoints and Model Context Protocol (MCP) server functionality. This unified implementation provides advanced analytics, forecasting, and anomaly detection capabilities with production-ready features.
Features
Core Functionality
- Dual Protocol Support: REST API and MCP server in unified application
- Time-Series Data: Query, aggregate, and analyze time-series data with flexible parameters
- Advanced Analytics: Statistical anomaly detection, trend forecasting, and data aggregation
- Sample Data Generation: Built-in synthetic data generators for testing and development
- Interactive Documentation: Swagger UI and ReDoc for REST API exploration
Phase 4 Enhancements
- Unified Entry Point: Single application supporting REST-only, MCP-only, or hybrid modes
- Configuration Management: YAML/JSON configuration with environment variable overrides
- Structured Logging: JSON and text logging with performance metrics
- Integration Tests: Comprehensive end-to-end testing suite
- Health Monitoring: Performance metrics and health check endpoints
- Graceful Shutdown: Proper signal handling and resource cleanup
Quick Start
Installation
- Clone and setup environment:
git clone <repository-url>
cd ts_api
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Running the Application
Option 1: Unified Entry Point (Recommended)
# REST API only
python app.py --mode rest
# MCP server only
python app.py --mode mcp
# Both REST and MCP (hybrid mode)
python app.py --mode hybrid
# With custom configuration
python app.py --config config.yaml --mode hybrid
Option 2: Legacy Entry Points
# REST API (legacy)
python main.py
# MCP server (legacy)
python mcp_server.py
Testing the Setup
- REST API: Visit http://localhost:8000/docs for interactive documentation
- Health Check: curl http://localhost:8000/health
- Sample Query: curl "http://localhost:8000/timeseries?metric=cpu_usage&limit=10"
Configuration
Configuration Files
Create a config.yaml file for customized settings:
mode: hybrid              # rest, mcp, or hybrid
debug: false
environment: production   # development, staging, production

logging:
  level: INFO                 # DEBUG, INFO, WARNING, ERROR, CRITICAL
  json_format: false          # Enable JSON structured logging
  file_path: logs/ts_api.log  # Log file path (optional)
  console_output: true        # Enable console logging

rest:
  host: 0.0.0.0
  port: 8000
  workers: 1                  # Number of worker processes

mcp:
  server_name: timeseries-server
  transport: stdio            # stdio or http
  http_port: 8001             # HTTP transport port (if enabled)

data:
  default_limit: 100          # Default query result limit
  max_limit: 10000            # Maximum allowed limit
  cache_enabled: true         # Enable data caching
  cache_ttl: 300              # Cache TTL in seconds

performance:
  enable_metrics: true          # Enable performance monitoring
  slow_query_threshold: 1.0     # Slow query threshold in seconds
  max_concurrent_requests: 100  # Maximum concurrent requests
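The cache_enabled / cache_ttl options imply a simple time-based cache. As a sketch of the idea (the `TTLCache` class here is an assumption for illustration, not the server's implementation):

```python
import time

class TTLCache:
    """Tiny time-based cache matching the cache_ttl setting (seconds)."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._store[key]  # entry expired; drop it
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl=300)
cache.set("cpu_usage:last24h", [1.0, 2.0])
```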
Environment Variables
Override configuration with environment variables:
export TS_API_MODE=hybrid
export TS_API_REST_PORT=8080
export TS_API_LOG_LEVEL=DEBUG
export TS_API_LOG_JSON=true
export TS_API_CACHE_ENABLED=false
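Override logic of this kind can be sketched as an overlay of `TS_API_*` variables onto a nested config dict. The mapping and `apply_env_overrides` helper below are illustrative assumptions, not the app's actual code:

```python
import os

# Map TS_API_* environment variables onto (section, key) config paths.
ENV_MAP = {
    "TS_API_MODE": ("mode",),
    "TS_API_REST_PORT": ("rest", "port"),
    "TS_API_LOG_LEVEL": ("logging", "level"),
    "TS_API_CACHE_ENABLED": ("data", "cache_enabled"),
}

def apply_env_overrides(config):
    """Overlay environment variables onto a nested config dict."""
    for var, path in ENV_MAP.items():
        raw = os.environ.get(var)
        if raw is None:
            continue
        # Coerce obvious types: booleans and integers.
        if raw.lower() in ("true", "false"):
            value = raw.lower() == "true"
        elif raw.isdigit():
            value = int(raw)
        else:
            value = raw
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config
```

For example, with `TS_API_REST_PORT=8080` set, `apply_env_overrides({"rest": {"port": 8000}})` returns a config whose `rest.port` is 8080.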
Command Line Options
python app.py --help
usage: app.py [-h] [--mode {rest,mcp,hybrid}] [--config CONFIG]
[--create-config FILE] [--host HOST] [--port PORT]
[--debug] [--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
[--validate-config]
Examples:
python app.py --mode hybrid --port 8080 --debug
python app.py --config production.yaml --log-level INFO
python app.py --create-config sample-config.yaml
python app.py --validate-config --config config.yaml
API Endpoints (REST)
Core Endpoints
GET / - Root
Returns welcome message and API information.
GET /health - Health Check
{
  "status": "ready",
  "mode": "hybrid",
  "uptime_seconds": 3600,
  "metrics": {
    "requests_total": 150,
    "requests_active": 2,
    "errors_total": 1
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
GET /metrics - Available Metrics
{
  "metrics": [
    {"name": "cpu_usage", "description": "CPU usage percentage (0-100)"},
    {"name": "memory_usage", "description": "Memory usage percentage (0-100)"},
    {"name": "temperature", "description": "Temperature in Celsius"}
  ]
}
Data Query Endpoints
GET /timeseries - Time-Series Data Query
Query time-series data with flexible parameters.
Parameters:
- metric (required): Metric name (cpu_usage, memory_usage, temperature)
- start_time (optional): Start time in ISO 8601 format
- end_time (optional): End time in ISO 8601 format
- limit (optional): Maximum points to return (default: 100, max: 10000)
- frequency (optional): Data frequency (1H, 30T, 1D, etc.) (default: 1H)
Example:
curl "http://localhost:8000/timeseries?metric=cpu_usage&start_time=2024-01-15T00:00:00&end_time=2024-01-16T00:00:00&limit=50&frequency=1H"
Response:
{
  "data": [
    {
      "timestamp": "2024-01-15T00:00:00",
      "value": 45.2,
      "metric": "cpu_usage"
    }
  ],
  "count": 25,
  "start_time": "2024-01-15T00:00:00",
  "end_time": "2024-01-16T00:00:00"
}
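The response shape above can be reproduced with a small synthetic generator, in the spirit of the built-in sample data generation; `generate_points` is a hypothetical helper for illustration, not the server's code:

```python
from datetime import datetime, timedelta
import math

def generate_points(metric, start, periods, step_hours=1):
    """Generate hourly synthetic points in the documented response shape."""
    points = []
    for i in range(periods):
        ts = start + timedelta(hours=i * step_hours)
        # Smooth synthetic signal kept in a plausible 0-100 range.
        value = 50 + 30 * math.sin(i / 6.0)
        points.append({
            "timestamp": ts.isoformat(),
            "value": round(value, 2),
            "metric": metric,
        })
    return points

data = generate_points("cpu_usage", datetime(2024, 1, 15), periods=25)
response = {
    "data": data,
    "count": len(data),
    "start_time": data[0]["timestamp"],
    "end_time": data[-1]["timestamp"],
}
```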
GET /timeseries/aggregate - Aggregated Data
Get aggregated time-series data using various statistical methods.
Parameters:
- metric (required): Metric name
- start_time (optional): Start time in ISO 8601 format
- end_time (optional): End time in ISO 8601 format
- aggregation (optional): Method (mean, sum, min, max, count) (default: mean)
- window (optional): Aggregation window (1H, 1D, 1W) (default: 1H)
Example:
curl "http://localhost:8000/timeseries/aggregate?metric=memory_usage&aggregation=mean&window=6H"
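Since the docs mention pandas resampling, a mean aggregation over a 6-hour window can be sketched as follows (synthetic input for illustration, not the server's data):

```python
import pandas as pd

# 24 hourly points; each value equals its hour index, for easy checking.
index = pd.date_range("2024-01-15", periods=24, freq="h")
series = pd.Series(range(24), index=index, dtype=float)

# Resample into 6-hour buckets and take the mean of each bucket.
aggregated = series.resample("6h").mean()
```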
MCP Server Tools
The MCP server provides advanced analytics tools for AI applications:
Available Tools
query_timeseries
Equivalent to REST /timeseries endpoint with same parameters.
get_metrics
Lists available metrics and their descriptions.
aggregate_data
Advanced data aggregation with pandas resampling.
detect_anomalies
Statistical anomaly detection using z-score or IQR methods.
Parameters:
- metric: Metric name to analyze
- method: Detection method (zscore, iqr)
- threshold: Sensitivity threshold (default: 2.0 for zscore, 1.5 for iqr)
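The zscore method can be sketched with the standard library alone; this illustrates the technique, not the server's exact implementation:

```python
from statistics import mean, stdev

def detect_anomalies_zscore(values, threshold=2.0):
    """Return indices of values whose z-score magnitude exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values)
            if abs((v - mu) / sigma) > threshold]

readings = [50, 51, 49, 50, 52, 48, 95, 50, 51]  # spike at index 6
```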
forecast_trend
Time-series forecasting using linear regression or moving averages.
Parameters:
- metric: Metric name to forecast
- method: Forecasting method (linear, moving_average)
- periods: Number of future periods (default: 24)
- frequency: Forecast frequency (1H, 1D, etc.)
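The linear method amounts to fitting a least-squares line through the history and extrapolating. A pure-Python sketch (`forecast_linear` is a hypothetical helper, shown with a shortened horizon for illustration):

```python
def forecast_linear(values, periods=24):
    """Fit y = slope*x + intercept by least squares, then extrapolate."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, values)) / denom
    intercept = y_mean - slope * x_mean
    # Future points continue at x = n, n+1, ...
    return [slope * (n + i) + intercept for i in range(periods)]

history = [10.0, 12.0, 14.0, 16.0]   # perfectly linear: slope 2
forecast = forecast_linear(history, periods=3)
```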
MCP Resources
The server also provides cached resource endpoints:
- timeseries://cpu_usage/last24h
- timeseries://memory_usage/last24h
- timeseries://temperature/last24h
- timeseries://cpu_usage/last7d
- timeseries://memory_usage/last7d
- timeseries://temperature/last7d
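These resource URIs follow a timeseries://&lt;metric&gt;/&lt;window&gt; pattern. A small parser sketch (the parsing logic is an assumption about the scheme, not taken from the server code):

```python
from urllib.parse import urlparse

VALID_WINDOWS = {"last24h", "last7d"}

def parse_resource_uri(uri):
    """Split timeseries://<metric>/<window> into its two parts."""
    parsed = urlparse(uri)
    if parsed.scheme != "timeseries":
        raise ValueError(f"unsupported scheme: {parsed.scheme}")
    metric = parsed.netloc          # e.g. "cpu_usage"
    window = parsed.path.lstrip("/")  # e.g. "last24h"
    if window not in VALID_WINDOWS:
        raise ValueError(f"unsupported window: {window}")
    return metric, window
```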
Development
Project Structure
ts_api/
├── app.py # Unified entry point
├── config.py # Configuration management
├── main.py # FastAPI REST application
├── mcp_server.py # MCP server implementation
├── mcp_client.py # MCP client for testing
├── test_integration.py # Integration tests
├── test_api.py # REST API tests (legacy)
├── requirements.txt # Python dependencies
├── CLAUDE.md # Development instructions
└── README.md # This file
Running Tests
Integration Tests (Recommended)
# Run comprehensive integration tests
python test_integration.py
# Run with pytest for better output
pytest test_integration.py -v
Legacy Tests
# Start REST server first
python main.py
# Run REST API tests in another terminal
python test_api.py
MCP Server Tests
# Test MCP server functionality
python test_mcp_server.py
# Test MCP client communication
python test_mcp_client.py
Performance Testing
The integration tests include performance benchmarks:
# Run performance tests
python test_integration.py
# Results include response time analysis:
# /health: avg=0.002s, max=0.005s
# /metrics: avg=0.003s, max=0.008s
# /timeseries: avg=0.050s, max=0.120s
Configuration Validation
# Validate configuration file
python app.py --validate-config --config config.yaml
# Create sample configuration
python app.py --create-config sample-config.yaml
Production Deployment
Docker Deployment
- Create Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000 8001
# Default to hybrid mode
CMD ["python", "app.py", "--mode", "hybrid", "--host", "0.0.0.0"]
- Build and run:
docker build -t ts-api .
docker run -p 8000:8000 -p 8001:8001 ts-api
Production Configuration
Create a production configuration file:
mode: hybrid
debug: false
environment: production

logging:
  level: INFO
  json_format: true
  file_path: /var/log/ts_api/app.log

rest:
  host: 0.0.0.0
  port: 8000
  workers: 4

performance:
  enable_metrics: true
  max_concurrent_requests: 200
  memory_limit_mb: 2048
Monitoring and Logging
Structured Logging
Enable JSON logging for production:
export TS_API_LOG_JSON=true
export TS_API_LOG_LEVEL=INFO
Health Monitoring
Set up monitoring on the health endpoint:
curl -f http://localhost:8000/health || exit 1
Performance Metrics
Access performance data via the health endpoint:
{
  "metrics": {
    "requests_total": 10450,
    "requests_active": 5,
    "errors_total": 12,
    "uptime_seconds": 86400
  }
}
Migration Guide
From Legacy REST API (main.py)
- Update the startup command:
  # Old
  python main.py
  # New
  python app.py --mode rest
- All existing endpoints remain compatible
- New health and config endpoints are available
From MCP Server (mcp_server.py)
- Update the startup command:
  # Old
  python mcp_server.py
  # New
  python app.py --mode mcp
- All MCP tools remain unchanged
- Enhanced logging and configuration support
To Hybrid Mode
- Use hybrid mode for both protocols:
  python app.py --mode hybrid
- Access the REST API on port 8000
- Access the MCP server via the STDIO transport
- Monitor both via the unified health endpoint
API Compatibility Matrix
| Feature | REST API | MCP Server | Hybrid Mode |
|---|---|---|---|
| Time-series query | ✅ /timeseries | ✅ query_timeseries | ✅ Both |
| Metrics listing | ✅ /metrics | ✅ get_metrics | ✅ Both |
| Data aggregation | ✅ /timeseries/aggregate | ✅ aggregate_data | ✅ Both |
| Anomaly detection | ❌ | ✅ detect_anomalies | ✅ (via MCP) |
| Trend forecasting | ❌ | ✅ forecast_trend | ✅ (via MCP) |
| Health monitoring | ✅ /health | ❌ | ✅ (via REST) |
| Configuration | ✅ /config (debug) | ❌ | ✅ (via REST) |
| Resource endpoints | ❌ | ✅ Resources | ✅ (via MCP) |
Performance Benchmarks
Based on integration tests with sample data:
| Endpoint | Avg Response Time | Max Response Time | Notes |
|---|---|---|---|
| /health | ~2ms | ~5ms | Minimal processing |
| /metrics | ~3ms | ~8ms | Static data |
| /timeseries (100 points) | ~50ms | ~120ms | Data generation + processing |
| /timeseries/aggregate | ~80ms | ~200ms | Pandas resampling |
| MCP tools | ~60ms | ~150ms | Similar to REST + serialization |
Memory Usage: ~50-100MB baseline, increases ~1MB per 1000 data points
Concurrent Requests: Tested up to 100 concurrent requests with stable performance
Troubleshooting
Common Issues
"Port already in use"
# Check what's using the port
lsof -i :8000
# Use different port
python app.py --port 8080
"Configuration validation errors"
# Validate your config file
python app.py --validate-config --config config.yaml
# Create a working sample
python app.py --create-config working-config.yaml
"MCP server connection issues"
- Ensure STDIO transport is properly configured
- Check that no other process is using STDIN/STDOUT
- Verify MCP client is compatible with protocol version
"Slow query performance"
- Reduce the limit parameter in queries
- Use aggregation for large time ranges
- Enable caching in configuration
- Monitor memory usage with psutil
Debug Mode
Enable debug mode for detailed logging:
python app.py --debug --log-level DEBUG
Health Check Troubleshooting
# Check application status
curl http://localhost:8000/health
# Expected response:
# {"status": "ready", "mode": "hybrid", ...}
# If status is "error", check logs for details
Contributing
- Code Style: Follow PEP 8 and existing patterns
- Testing: Add tests for new features in test_integration.py
- Documentation: Update README.md and docstrings
- Logging: Add structured logging for new components
Adding New Metrics
- Update the data generator in main.py and mcp_server.py:
  elif metric == "new_metric":
      values = generate_new_metric_data(date_range)
- Add the new metric to the /metrics endpoint and the MCP get_metrics tool
- Add tests in test_integration.py
Adding New MCP Tools
- Add a tool definition in handle_list_tools()
- Implement a handler function following existing patterns
- Add error handling and logging
- Update documentation in this README
License
This project is licensed under the MIT License. See LICENSE file for details.
Support
For issues and questions:
- Check the troubleshooting section
- Review logs with debug mode enabled
- Validate configuration files
- Run integration tests to verify functionality