TUNDR MCP Optimization Server

A high-performance optimization server implementing the Model Context Protocol (MCP) for mathematical optimization tasks, with a focus on Bayesian Optimization using Gaussian Processes. Designed for reliability, scalability, and ease of integration in production environments.

🌟 Features

🎯 Key Features

Bayesian Optimization
  • Multiple kernel support (Matern 5/2, RBF, Custom)
  • Parallel evaluation of multiple points
  • Constrained optimization support
  • Efficient global optimization of expensive black-box functions
Real-World Use Cases
  1. Hyperparameter Tuning

    • Optimize machine learning model hyperparameters with minimal trials
    • Supports both continuous and categorical parameters
    • Ideal for deep learning, XGBoost, and other ML frameworks
  2. Engineering Design Optimization

    • Optimize product designs with multiple competing objectives
    • Handle physical and operational constraints
    • Applications in aerospace, automotive, and manufacturing
  3. Scientific Research

    • Optimize experimental parameters in chemistry and physics
    • Minimize cost function evaluations in computationally expensive simulations
    • Adaptive experimental design
  4. Financial Modeling

    • Portfolio optimization under constraints
    • Algorithmic trading parameter optimization
    • Risk management parameter tuning
  5. Industrial Process Optimization

    • Optimize manufacturing processes
    • Energy consumption minimization
    • Yield improvement in production lines
  • Expected Improvement acquisition function, with support for Probability of Improvement and UCB (see the sketch after this list)
  • Support for both minimization and maximization problems
  • Parallel evaluation of multiple points
  • Constrained optimization support
  • MCP-compliant API endpoints
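
For intuition, Expected Improvement has a closed form under the Gaussian Process posterior. The following is a minimal, self-contained sketch (a hypothetical helper, not the server's internal code) of EI for a minimization problem, given the posterior mean mu and standard deviation sigma at a candidate point, the best observed value, and the trade-off parameter xi:

package main

import (
	"fmt"
	"math"
)

// expectedImprovement computes EI for a minimization problem.
// Illustrative sketch only; not the server's internal implementation.
func expectedImprovement(mu, sigma, best, xi float64) float64 {
	if sigma <= 0 {
		return 0 // no posterior uncertainty at this point
	}
	z := (best - mu - xi) / sigma
	cdf := 0.5 * (1 + math.Erf(z/math.Sqrt2)) // standard normal CDF
	pdf := math.Exp(-0.5*z*z) / math.Sqrt(2*math.Pi)
	return (best-mu-xi)*cdf + sigma*pdf
}

func main() {
	// A candidate with mean 0.5 and std 0.2, against a best observed value of 0.6.
	fmt.Println(expectedImprovement(0.5, 0.2, 0.6, 0.01))
}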

๐Ÿ› ๏ธ Robust Implementation

  • Comprehensive test coverage
  • Graceful error handling and recovery
  • Detailed structured logging with zap
  • Context-aware cancellation and timeouts (see the sketch below)
  • Memory-efficient matrix operations
  • MCP protocol compliance
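
As an illustration of context-aware cancellation, a caller can bound a run with a deadline; a minimal sketch, assuming an optimizer constructed as in the usage examples later in this README:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()

// Optimize returns early once the deadline fires; the error then
// wraps context.DeadlineExceeded.
result, err := optimizer.Optimize(ctx)
if err != nil {
	log.Fatalf("optimization stopped: %v", err)
}
fmt.Printf("best value: %f\n", result.BestSolution.Value)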

🚀 Performance Optimizations

  • Fast matrix operations with gonum
  • Efficient memory management with object pooling
  • Optimized Cholesky decomposition with fallback to SVD (see the sketch below)
  • Parallel batch predictions
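
The Cholesky-with-SVD-fallback strategy can be sketched with gonum's mat package. This is a simplified illustration, not the server's exact code; solveSPD is a hypothetical helper:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/mat"
)

// solveSPD solves A x = b for a symmetric positive (semi-)definite A.
// It tries a Cholesky factorization first and falls back to an SVD-based
// pseudo-inverse solve when A is numerically singular. Illustrative only.
func solveSPD(a *mat.SymDense, b *mat.VecDense) (*mat.VecDense, error) {
	n, _ := a.Dims()

	var chol mat.Cholesky
	if chol.Factorize(a) {
		x := mat.NewVecDense(n, nil)
		if err := chol.SolveVecTo(x, b); err == nil {
			return x, nil
		}
	}

	// Fallback: x = V * diag(1/s) * Uᵀ * b, zeroing tiny singular values.
	var svd mat.SVD
	if !svd.Factorize(a, mat.SVDThin) {
		return nil, fmt.Errorf("SVD factorization failed")
	}
	var u, v mat.Dense
	svd.UTo(&u)
	svd.VTo(&v)
	s := svd.Values(nil)

	ub := mat.NewVecDense(n, nil)
	ub.MulVec(u.T(), b)
	for i := 0; i < n; i++ {
		if s[i] > 1e-12 {
			ub.SetVec(i, ub.AtVec(i)/s[i])
		} else {
			ub.SetVec(i, 0)
		}
	}
	x := mat.NewVecDense(n, nil)
	x.MulVec(&v, ub)
	return x, nil
}

func main() {
	a := mat.NewSymDense(2, []float64{2, 1, 1, 2})
	b := mat.NewVecDense(2, []float64{1, 1})
	x, err := solveSPD(a, b)
	if err != nil {
		panic(err)
	}
	fmt.Printf("x = %v\n", mat.Formatted(x))
}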

📊 Monitoring & Observability

  • Prometheus metrics endpoint
  • Structured logging in JSON format
  • Distributed tracing support (OpenTelemetry)
  • Health check endpoints
  • Performance profiling endpoints

Features

  • Bayesian Optimization with Gaussian Processes

    • Multiple kernel support (Matern 5/2, RBF)
    • Expected Improvement acquisition function
    • Numerical stability with Cholesky decomposition and SVD fallback
    • Support for both minimization and maximization problems
    • Parallel evaluation of multiple points
  • Robust Implementation

    • Comprehensive test coverage (>85%)
    • Graceful error handling and recovery
    • Detailed logging with structured logging (zap)
    • Context-aware cancellation
  • API & Integration

    • JSON-RPC 2.0 over HTTP/2 interface
    • RESTful endpoints for common operations
    • OpenAPI 3.0 documentation
    • gRPC support (planned)
  • Monitoring & Observability

    • Prometheus metrics endpoint
    • Structured logging
    • Distributed tracing (OpenTelemetry)
    • Health checks
  • Scalability

    • Stateless design
    • Horizontal scaling support
    • Multiple storage backends (SQLite, PostgreSQL)
    • Caching layer (Redis)

🚀 Quick Start

MCP Protocol Support

This server implements the Model Context Protocol (MCP) for optimization tasks. The MCP provides a standardized way to:

  • Define optimization problems
  • Submit optimization tasks
  • Monitor optimization progress
  • Retrieve optimization results

The server exposes MCP-compatible endpoints for seamless integration with other MCP-compliant tools and services.
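
For example, a client can drive the JSON-RPC endpoint with nothing but the Go standard library. A minimal sketch, with the request shape taken from the JSON-RPC examples later in this document:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Build an optimization.start request matching the examples below.
	payload, err := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "optimization.start",
		"params": []interface{}{map[string]interface{}{
			"objective": "minimize",
			"parameters": []map[string]interface{}{
				{"name": "x", "type": "float", "bounds": []float64{0, 10}},
			},
		}},
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:8080/rpc", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Printf("response: %v\n", result)
}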

Prerequisites

  • Go 1.21 or later
  • Git (for version control)
  • Make (for development tasks)
  • (Optional) Docker and Docker Compose for containerized deployment

Installation

# Clone the repository
git clone https://github.com/copyleftdev/TUNDR.git
cd TUNDR

# Install dependencies
go mod download

# Build the server
go build -o bin/server ./cmd/server

Running the Server

# Start the server with default configuration
./bin/server

# Or with custom configuration
CONFIG_FILE=config/local.yaml ./bin/server

Using Docker

# Build the Docker image
docker build -t tundr/mcp-optimization-server .

# Run the container
docker run -p 8080:8080 tundr/mcp-optimization-server

📚 Documentation

MCP Integration

The server implements the following MCP-compatible endpoints:

REST API
  • POST /api/v1/optimize - Submit a new optimization task
  • GET /api/v1/status/{id} - Check the status of an optimization task
  • DELETE /api/v1/optimization/{id} - Cancel a running optimization task
JSON-RPC 2.0 Endpoint
  • POST /rpc - Unified endpoint for all JSON-RPC 2.0 operations
Available JSON-RPC Methods
  1. optimization.start - Start a new optimization

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "optimization.start",
      "params": [{
        "objective": "minimize",
        "parameters": [
          {"name": "x", "type": "float", "bounds": [0, 10]},
          {"name": "y", "type": "float", "bounds": [0, 10]}
        ]
      }]
    }
    
  2. optimization.status - Get status of an optimization

    {
      "jsonrpc": "2.0",
      "id": 2,
      "method": "optimization.status",
      "params": ["optimization_id"]
    }
    
  3. optimization.cancel - Cancel an optimization

    {
      "jsonrpc": "2.0",
      "id": 3,
      "method": "optimization.cancel",
      "params": ["optimization_id"]
    }
    
Error Responses

All endpoints return errors in the following format:

REST API
{
  "error": {
    "code": 400,
    "message": "Invalid input parameters"
  }
}
JSON-RPC 2.0
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params"
  }
}

Common error codes:

  • -32600 - Invalid Request
  • -32601 - Method not found
  • -32602 - Invalid params
  • -32603 - Internal error
  • -32000 to -32099 - Server error

API Reference

Check out the API Documentation for detailed information about the available methods and types.

Example: Basic Usage

package main

import (
	"context"
	"fmt"
	"math"
	
	"github.com/copyleftdev/TUNDR/internal/optimization"
	"github.com/copyleftdev/TUNDR/internal/optimization/bayesian"
)

func main() {
	// Define the objective function (to be minimized)
	objective := func(x []float64) (float64, error) {
		// Example: Rosenbrock function
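		// Its global minimum is f(1, 1) = 0.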
		return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2), nil
	}

	// Define parameter bounds
	bounds := [][2]float64{{-5, 5}, {-5, 5}}

	// Create optimizer configuration
	config := optimization.OptimizerConfig{
		Objective:      objective,
		Bounds:         bounds,
		MaxIterations:  50,
		NInitialPoints: 10,
	}

	// Create and run the optimizer
	optimizer, err := bayesian.NewBayesianOptimizer(config)
	if err != nil {
		panic(fmt.Sprintf("Failed to create optimizer: %v", err))
	}

	// Run the optimization
	result, err := optimizer.Optimize(context.Background())
	if err != nil {
		panic(fmt.Sprintf("Optimization failed: %v", err))
	}

	// Print results
	fmt.Printf("Optimal parameters: %v\n", result.BestSolution.Parameters)
	fmt.Printf("Optimal value: %f\n", result.BestSolution.Value)
	fmt.Printf("Number of iterations: %d\n", result.Iterations)
	fmt.Printf("Converged: %v\n", result.Converged)
}

Configuration

Create a config.yaml file to customize the server behavior:

server:
  port: 8080
  env: development
  timeout: 30s

logging:
  level: info
  format: json
  output: stdout

optimization:
  max_concurrent: 4
  default_kernel: "matern52"
  default_acquisition: "ei"
  
storage:
  type: "memory"  # or "postgres"
  dsn: ""  # Only needed for postgres

metrics:
  enabled: true
  path: "/metrics"
  namespace: "tundr"
  
tracing:
  enabled: false
  service_name: "mcp-optimization-server"
  endpoint: "localhost:4317"

🧪 Testing

Run the test suite:

# Run all tests
go test ./...

# Run tests with coverage
go test -coverprofile=coverage.out ./... && go tool cover -html=coverage.out

# Run benchmarks
go test -bench=. -benchmem ./...

๐Ÿค Contributing

Contributions are welcome! Please read our contributing guidelines for details on how to submit pull requests, report issues, or suggest new features.

📄 License

This project is part of the CopyleftDev ecosystem and is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.

📬 Contact

For questions or support, please open an issue or contact the maintainers at [email protected]

Installation

  1. Clone the repository:

    git clone https://github.com/tundr/mcp-optimization-server.git
    cd mcp-optimization-server
    
  2. Install dependencies:

    make deps
    
  3. Build the binary:

    make build
    

    This will create a tundr binary in the bin directory.

Configuration

Environment Variables

Create a .env file in the project root with the following variables:

# Application
ENV=development
LOG_LEVEL=info
HTTP_PORT=8080

# Database
DB_TYPE=sqlite  # sqlite or postgres
DB_DSN=file:data/tundr.db?cache=shared&_fk=1

# Authentication
JWT_KEY=your-secure-key-change-in-production

# Optimization
MAX_CONCURRENT_JOBS=10
JOB_TIMEOUT=30m

# Monitoring
METRICS_ENABLED=true
METRICS_PORT=9090

Configuration File

For more complex configurations, you can use a YAML configuration file (default: config/config.yaml):

server:
  env: development
  port: 8080
  shutdown_timeout: 30s

database:
  type: sqlite
  dsn: file:data/tundr.db?cache=shared&_fk=1
  max_open_conns: 25
  max_idle_conns: 5
  conn_max_lifetime: 5m

optimization:
  max_concurrent_jobs: 10
  job_timeout: 30m
  default_algorithm: bayesian
  
  bayesian:
    default_kernel: matern52
    default_noise: 1e-6
    max_observations: 1000
    
  cma_es:
    population_size: auto  # auto or number
    max_generations: 1000

monitoring:
  metrics:
    enabled: true
    port: 9090
    path: /metrics
  
  tracing:
    enabled: false
    endpoint: localhost:4317
    sample_rate: 0.1

logging:
  level: info
  format: json
  enable_caller: true
  enable_stacktrace: true

Running the Server

Development Mode

For development with hot reload:

make dev

Production Mode

Build and run the server:

make build
./bin/tundr serve --config config/production.yaml

Using Docker

# Build the Docker image
docker build -t tundr-optimization .

# Run the container
docker run -p 8080:8080 -v $(pwd)/data:/app/data tundr-optimization

The server will be available at http://localhost:8080

Usage Examples

Bayesian Optimization Example

package main

import (
	"context"
	"fmt"
	"log"
	"math"

	"github.com/tundr/mcp-optimization-server/internal/optimization"
	"github.com/tundr/mcp-optimization-server/internal/optimization/bayesian"
)

func main() {
	// Define the objective function (to be minimized)
	objective := func(x []float64) (float64, error) {
		// Example: Rosenbrock function
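		// Its global minimum is f(1, 1) = 0.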
		return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2), nil
	}

	// Define parameter bounds
	bounds := []optimization.Parameter{
		{Name: "x1", Min: -5.0, Max: 10.0},
		{Name: "x2", Min: -5.0, Max: 10.0},
	}

	// Create optimizer configuration
	config := optimization.OptimizerConfig{
		Objective:      objective,
		Parameters:     bounds,
		NInitialPoints: 10,
		MaxIterations:  50,
		Verbose:        true,
	}

	// Create and configure the optimizer
	optimizer, err := bayesian.NewBayesianOptimizer(config)
	if err != nil {
		log.Fatalf("Failed to create optimizer: %v", err)
	}

	// Run the optimization
	result, err := optimizer.Optimize(context.Background())
	if err != nil {
		log.Fatalf("Optimization failed: %v", err)
	}

	// Print results
	fmt.Printf("Best solution: %+v\n", result.BestSolution)
	fmt.Printf("Best value: %f\n", result.BestSolution.Value)
	fmt.Printf("Number of iterations: %d\n", len(result.History))
}

REST API Example

Start a new optimization job:

curl -X POST http://localhost:8080/api/v1/optimize \
  -H "Content-Type: application/json" \
  -d '{
    "algorithm": "bayesian",
    "objective": "minimize",
    "parameters": [
      {"name": "x1", "type": "float", "bounds": [-5.0, 10.0]},
      {"name": "x2", "type": "float", "bounds": [-5.0, 10.0]}
    ],
    "max_iterations": 100,
    "n_initial_points": 20,
    "metadata": {
      "name": "rosenbrock-optimization",
      "tags": ["test", "demo"]
    }
  }'

Check optimization status:

curl http://localhost:8080/api/v1/status/<job_id>

Configuration Reference

Bayesian Optimization Parameters

Parameter          Type    Default     Description
kernel             string  "matern52"  Kernel type ("matern52" or "rbf")
length_scale       float   1.0         Length scale parameter
noise              float   1e-6        Observation noise
xi                 float   0.01        Exploration-exploitation trade-off
n_initial_points   int     10          Number of initial random points
max_iterations     int     100         Maximum number of iterations
random_seed        int     0           Random seed (0 for time-based)

Environment Variables

Variable              Default             Description
ENV                   development         Application environment
LOG_LEVEL             info                Logging level
HTTP_PORT             8080                HTTP server port
DB_TYPE               sqlite              Database type (sqlite or postgres)
DB_DSN                file:data/tundr.db  Database connection string
JWT_KEY               (none)              Secret key for JWT authentication
MAX_CONCURRENT_JOBS   10                  Maximum concurrent optimization jobs
JOB_TIMEOUT           30m                 Maximum job duration
METRICS_ENABLED       true                Enable Prometheus metrics
METRICS_PORT          9090                Metrics server port
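
These variables are conventionally read with os.Getenv plus a fallback default; a minimal sketch (getenv is a hypothetical helper, and the server's actual loader may differ):

package main

import (
	"fmt"
	"os"
)

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	port := getenv("HTTP_PORT", "8080")
	dbType := getenv("DB_TYPE", "sqlite")
	fmt.Printf("port=%s db=%s\n", port, dbType)
}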

Advanced Usage

Custom Kernels

You can implement custom kernel functions by implementing the kernels.Kernel interface:

type Kernel interface {
    Eval(x, y []float64) float64
    Hyperparameters() []float64
    SetHyperparameters(params []float64) error
    Bounds() [][2]float64
}

Example custom kernel:

type MyCustomKernel struct {
    lengthScale float64
    variance    float64
}

func (k *MyCustomKernel) Eval(x, y []float64) float64 {
    // Implement your custom kernel function
    sumSq := 0.0
    for i := range x {
        diff := x[i] - y[i]
        sumSq += diff * diff
    }
    return k.variance * math.Exp(-0.5*sumSq/(k.lengthScale*k.lengthScale))
}

// Implement other required methods...
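
The remaining methods might look like the following; a minimal sketch for the hypothetical MyCustomKernel above, where the hyperparameter ordering and bounds are illustrative assumptions (fmt is assumed to be imported):

// Hyperparameters returns the tunable parameters in a fixed order
// (here: length scale, then variance). Illustrative layout.
func (k *MyCustomKernel) Hyperparameters() []float64 {
    return []float64{k.lengthScale, k.variance}
}

// SetHyperparameters updates the parameters from the same layout.
func (k *MyCustomKernel) SetHyperparameters(params []float64) error {
    if len(params) != 2 {
        return fmt.Errorf("expected 2 hyperparameters, got %d", len(params))
    }
    k.lengthScale, k.variance = params[0], params[1]
    return nil
}

// Bounds returns search bounds for each hyperparameter (assumed ranges).
func (k *MyCustomKernel) Bounds() [][2]float64 {
    return [][2]float64{{1e-3, 1e3}, {1e-3, 1e3}}
}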

Parallel Evaluation

The optimizer supports parallel evaluation of multiple points:

config := optimization.OptimizerConfig{
    Objective:      objective,
    Parameters:     bounds,
    NInitialPoints: 10,
    MaxIterations:  50,
    NJobs:         4,  // Use 4 parallel workers
}
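
Note that with NJobs greater than 1 the objective may be invoked from several goroutines at once, so any shared state it touches must be synchronized. A minimal sketch using an atomic evaluation counter (an illustrative pattern, not a required API; atomic.Int64 needs Go 1.19+):

var evals atomic.Int64 // import "sync/atomic"

objective := func(x []float64) (float64, error) {
    evals.Add(1) // safe under concurrent calls
    return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2), nil
}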

Callbacks

You can register callbacks to monitor the optimization process:

optimizer, err := bayesian.NewBayesianOptimizer(config)
if err != nil {
    log.Fatalf("Failed to create optimizer: %v", err)
}

// Add a callback that's called after each iteration
optimizer.AddCallback(func(result *optimization.OptimizationResult) {
    fmt.Printf("Iteration %d: Best value = %f\n", 
        len(result.History), 
        result.BestSolution.Value)
})

API Documentation

REST API

Start Optimization
POST /api/v1/optimize
Content-Type: application/json

{
  "algorithm": "bayesian",
  "objective": "minimize",
  "parameters": [
    {"name": "x", "type": "float", "bounds": [0, 10], "prior": "uniform"},
    {"name": "y", "type": "float", "bounds": [-5, 5], "prior": "normal", "mu": 0, "sigma": 1}
  ],
  "constraints": [
    {"type": "ineq", "expr": "x + y <= 10"}
  ],
  "options": {
    "max_iterations": 100,
    "n_initial_points": 20,
    "acquisition": "ei",
    "xi": 0.01,
    "kappa": 0.1
  },
  "metadata": {
    "name": "example-optimization",
    "tags": ["test"],
    "user_id": "user123"
  }
}
Get Optimization Status
GET /api/v1/status/:id

Response:

{
  "id": "job-123",
  "status": "running",
  "progress": 0.45,
  "best_solution": {
    "parameters": {"x": 1.2, "y": 3.4},
    "value": 0.123
  },
  "start_time": "2025-06-30T10:00:00Z",
  "elapsed_time": "1h23m45s",
  "iterations": 45,
  "metadata": {
    "name": "example-optimization",
    "tags": ["test"]
  }
}

JSON-RPC 2.0 API

The server also supports JSON-RPC 2.0 for more advanced use cases:

POST /rpc
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "optimization.start",
  "params": [
    {
      "algorithm": "bayesian",
      "objective": "minimize",
      "parameters": [
        {"name": "x", "type": "float", "bounds": [0, 10]},
        {"name": "y", "type": "float", "bounds": [-5, 5]}
      ],
      "options": {
        "max_iterations": 100,
        "n_initial_points": 20,
        "acquisition": "ei",
        "xi": 0.01
      }
    }
  ]
}

Performance Tuning

Memory Usage

For large-scale problems, you may need to adjust the following parameters:

  1. Batch Size: Process points in batches to limit memory usage (see the sketch after this list)
  2. GP Model: Use a sparse approximation for large datasets (>1000 points)
  3. Cholesky Decomposition: The default solver uses Cholesky decomposition with SVD fallback
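
The batch-size idea from point 1 amounts to scoring candidates in fixed-size chunks so intermediate buffers stay bounded. A generic sketch, where predict stands in for a hypothetical batch GP prediction function:

// evalInBatches scores candidate points in chunks of batchSize so that
// intermediate buffers stay bounded. Illustrative sketch only.
func evalInBatches(points [][]float64, batchSize int,
    predict func([][]float64) []float64) []float64 {

    out := make([]float64, 0, len(points))
    for start := 0; start < len(points); start += batchSize {
        end := start + batchSize
        if end > len(points) {
            end = len(points)
        }
        out = append(out, predict(points[start:end])...)
    }
    return out
}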

Parallelism

You can control the number of parallel workers:

config := optimization.OptimizerConfig{
    // ... other options ...
    NJobs: runtime.NumCPU(),  // Use all available CPUs
}

Caching

Enable caching of kernel matrix computations:

kernel := kernels.NewMatern52Kernel(1.0, 1.0)
kernel.EnableCache(true)  // Enable kernel cache

Monitoring and Observability

The server exposes Prometheus metrics at /metrics:

  • optimization_requests_total: Total optimization requests
  • optimization_duration_seconds: Duration of optimization jobs
  • optimization_iterations_total: Number of iterations per optimization
  • optimization_errors_total: Number of optimization errors
  • gp_fit_duration_seconds: Duration of GP model fitting
  • acquisition_evaluations_total: Number of acquisition function evaluations

Logging

Logs are structured in JSON format by default. The following log levels are available:

  • debug: Detailed debug information
  • info: General operational information
  • warn: Non-critical issues
  • error: Critical errors

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Workflow

# Run tests
make test

# Run linters
make lint

# Run benchmarks
make benchmark

# Format code
make fmt

# Generate documentation
make docs

License

Apache 2.0 - see the LICENSE file for details.

Acknowledgments

  • Gonum - Numerical computing libraries for Go
  • Zap - Blazing fast, structured, leveled logging
  • Chi - Lightweight, composable router for Go HTTP services
  • Testify - Toolkit with common assertions and mocks

Development

Building

make build

Testing

make test

Linting

make lint

Deployment

Docker

docker build -t tundr/mcp-optimization-server .
docker run -p 8080:8080 --env-file .env tundr/mcp-optimization-server

Kubernetes

See the deploy/kubernetes directory for example Kubernetes manifests.
