📡 TAS Model Context Protocol (MCP) Server

The TAS MCP Server is a high-performance, cloud-native event gateway and ingestion service that implements the Model Context Protocol to support RAG pipelines, event-driven architectures, and workflow orchestration across distributed AI systems.

🌟 Key Features

  • 🚀 Multi-Protocol Support: HTTP REST API and bidirectional gRPC streaming
  • 🔄 Smart Event Forwarding: Rule-based routing with condition evaluation
  • 🎯 Event Transformation: Template-based and programmatic event transformation
  • 🔌 Integration Ready: Native support for Argo Events, Kafka, webhooks, and more
  • 📚 MCP Server Registry: Comprehensive catalog of MCP servers and capabilities
  • 📊 Observability: Built-in metrics, health checks, and distributed tracing support
  • 🔒 Production Ready: Rate limiting, circuit breakers, and retry logic
  • ☁️ Cloud Native: Kubernetes-native with Helm charts and operators
  • 🎨 Extensible: Plugin architecture for custom forwarders and processors

🚀 Quick Start

Using Docker

# Run with Docker
docker run -p 8080:8080 -p 50051:50051 -p 8082:8082 ghcr.io/tributary-ai-services/tas-mcp:latest

# Or build locally
make docker
make docker-run

Using Docker Compose

# Start all services
make docker-compose

# View logs
docker-compose logs -f tas-mcp-server

Local Development

# Install dependencies
make init

# Run locally
make run

# Run with hot reload
make dev

📡 API Usage

HTTP API

# Ingest a single event
curl -X POST http://localhost:8080/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "event_id": "evt-123",
    "event_type": "user.created",
    "source": "auth-service",
    "data": "{\"user_id\": \"usr-456\", \"email\": \"user@example.com\"}"
  }'

# Batch event ingestion
curl -X POST http://localhost:8080/api/v1/events/batch \
  -H "Content-Type: application/json" \
  -d '[
    {"event_id": "evt-1", "event_type": "order.created", "source": "order-service", "data": "{}"},
    {"event_id": "evt-2", "event_type": "payment.processed", "source": "payment-service", "data": "{}"}
  ]'

# Health check
curl http://localhost:8082/health

gRPC API

// Example Go client
import (
    "context"
    "log"

    mcpv1 "github.com/tributary-ai-services/tas-mcp/gen/mcp/v1"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

ctx := context.Background()

// Connect without TLS; use real transport credentials in production
conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
    log.Fatalf("failed to connect: %v", err)
}
defer conn.Close()
client := mcpv1.NewMCPServiceClient(conn)

// Ingest a single event
resp, err := client.IngestEvent(ctx, &mcpv1.IngestEventRequest{
    EventId:   "evt-123",
    EventType: "user.action",
    Source:    "webapp",
    Data:      `{"action": "login"}`,
})

// Open a bidirectional event stream
stream, err := client.EventStream(ctx)

🔧 Configuration

Environment Variables

Variable             Default   Description
HTTP_PORT            8080      HTTP API server port
GRPC_PORT            50051     gRPC server port
HEALTH_CHECK_PORT    8082      Health check endpoint port
LOG_LEVEL            info      Logging level (debug, info, warn, error)
FORWARDING_ENABLED   false     Enable event forwarding
FORWARDING_WORKERS   5         Number of forwarding workers
MAX_EVENT_SIZE       1048576   Maximum event size in bytes (1 MB)
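
Any of these variables can be overridden at launch. A minimal sketch using the image from the Quick Start, assuming the server reads its settings straight from the container environment; the specific values are illustrative:

# Run with forwarding enabled and verbose logging (example values)
docker run \
  -p 8080:8080 -p 50051:50051 -p 8082:8082 \
  -e LOG_LEVEL=debug \
  -e FORWARDING_ENABLED=true \
  -e FORWARDING_WORKERS=10 \
  ghcr.io/tributary-ai-services/tas-mcp:latest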

Configuration File

{
  "HTTPPort": 8080,
  "GRPCPort": 50051,
  "LogLevel": "info",
  "forwarding": {
    "enabled": true,
    "targets": [
      {
        "id": "argo-events",
        "type": "webhook",
        "endpoint": "http://argo-events-webhook:12000",
        "rules": [
          {
            "conditions": [
              {"field": "event_type", "operator": "contains", "value": "critical"}
            ]
          }
        ]
      }
    ]
  }
}

🏗️ Architecture

The TAS MCP Server follows a modular, event-driven architecture:

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   HTTP API  │     │  gRPC API   │     │   Webhook   │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
                    ┌──────▼──────┐
                    │  Ingestion  │
                    │    Layer    │
                    └──────┬──────┘
                           │
                    ┌──────▼──────┐
                    │    Rules    │
                    │   Engine    │
                    └──────┬──────┘
                           │
       ┌───────────────────┼───────────────────┐
       │                   │                   │
┌──────▼──────┐     ┌──────▼──────┐     ┌──────▼──────┐
│  Forwarder  │     │  Transform  │     │   Metrics   │
│   (gRPC)    │     │   Engine    │     │  Collector  │
└─────────────┘     └─────────────┘     └─────────────┘

🚒 Deployment

Kubernetes

# Deploy to development
kubectl apply -k k8s/overlays/dev

# Deploy to production with auto-scaling
kubectl apply -k k8s/overlays/prod

# Check deployment status
kubectl get pods,svc,ingress -n tas-mcp-prod
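
To verify a rollout without exposing the service publicly, you can port-forward the health port and probe it locally. The service name below is an assumption; check the actual name with kubectl get svc -n tas-mcp-prod first:

# In one terminal: forward the (assumed) tas-mcp service's health port
kubectl -n tas-mcp-prod port-forward svc/tas-mcp 8082:8082

# In another terminal: probe the health endpoint
curl http://localhost:8082/health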

Helm Chart (Coming Soon)

helm repo add tas-mcp https://tributary-ai-services.github.io/helm-charts
helm install tas-mcp tas-mcp/tas-mcp-server

📚 MCP Server Registry

The TAS MCP project includes a comprehensive MCP Server Registry - a curated catalog of Model Context Protocol servers across different categories and use cases.

🗂️ Registry Categories

  • 🤖 AI Models - LLM integrations and model serving
  • 📡 Event Streaming - Real-time event processing and forwarding
  • 🔄 Workflow Orchestration - Complex workflow and agent coordination
  • 💾 Knowledge Bases - Vector stores and search capabilities
  • 🔧 Data Processing - ETL and data transformation services
  • 📊 Monitoring - Observability and metrics collection
  • 💬 Communication - Chat bots and messaging integrations
  • 🔍 Search - Web search and content discovery services
  • 🕷️ Web Scraping - Data extraction and automation tools
  • 🗃️ Database - Database integration and query services
  • 🛠️ Development Tools - Git, CI/CD, and development utilities

🚀 Using the Registry

# Browse the registry
jq '.servers[] | select(.category == "ai-models")' registry/mcp-servers.json

# Find servers by capability
jq '.servers[] | select(.capabilities[] | contains("event-streaming"))' registry/mcp-servers.json

# Get deployment information
jq '.servers[] | select(.name == "tas-mcp-server") | .deployment' registry/mcp-servers.json

# Find privacy-focused search servers
jq '.servers[] | select(.category == "search" and .privacy.noTracking == true)' registry/mcp-servers.json

# Find web scraping servers
jq '.servers[] | select(.category == "web-scraping")' registry/mcp-servers.json

# Find database integration servers
jq '.servers[] | select(.category == "database")' registry/mcp-servers.json

📋 Registry Features

  • JSON Schema Validation - Ensures data consistency and structure
  • Deployment Ready - Docker and Kubernetes deployment configurations
  • Access Models - Clear documentation of API access patterns
  • Capability Mapping - Searchable capability tags (enumerated in the sketch after this list)
  • Cost Information - Pricing and resource requirements
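
For a quick view of the capability tags, a small jq sketch (relying on the same capabilities field used in the queries above) enumerates every tag in the registry and how often it appears:

# List all capability tags with usage counts
jq -r '.servers[].capabilities[]' registry/mcp-servers.json | sort | uniq -c | sort -rn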

See the registry documentation and the integration guides for more detail.

🔌 Integrations

TAS MCP Federation Servers

The project includes several fully-integrated MCP servers ready for deployment (a standalone deployment sketch follows this list):

🔍 DuckDuckGo MCP Server
  • Privacy-focused web search with no tracking or data collection
  • News search with time filtering capabilities
  • Image search with advanced filters (size, color, type)
  • Content extraction from web pages
  • Deployment: deployments/docker-compose/duckduckgo-mcp/
🕷️ Apify MCP Server
  • Access to 5,000+ web scraping actors from the Apify platform
  • E-commerce, social media, and news scraping
  • Custom scraping configurations and data extraction
  • Dataset management and export capabilities
  • Deployment: deployments/docker-compose/apify-mcp/
🗃️ PostgreSQL MCP Server
  • Read-only database access with security-first design
  • Schema inspection and table metadata
  • Query execution with performance analysis
  • Connection pooling and health monitoring
  • Deployment: deployments/postgres-mcp/
🛠️ Git MCP Server
  • Repository interaction and automation
  • Branch management and commit operations
  • Status and diff operations
  • Working tree management
  • Based on official Model Context Protocol Git server
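
Each server above that lists a deployment path can also be started on its own. A minimal sketch, assuming each deployment directory ships a standard docker-compose.yml (the Apify and PostgreSQL directories listed above work the same way):

# Bring up a single federation server (DuckDuckGo shown)
cd deployments/docker-compose/duckduckgo-mcp
docker-compose up -d
docker-compose ps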

Full-Stack Deployment

# Deploy complete federation stack
cd deployments/docker-compose
docker-compose -f full-stack.yml up -d

# Check all services
docker-compose -f full-stack.yml ps

# View federation status
curl http://localhost:8080/api/v1/federation/servers | jq '.'

See the integration documentation for complete examples in Go.

Argo Events

See the Argo Events integration documentation for complete examples in Go, Python, and Node.js.

Kafka

{
  "type": "kafka",
  "endpoint": "kafka-broker:9092",
  "config": {
    "topic": "mcp-events",
    "batch_size": 100
  }
}
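
Events are published to the topic named in the config, so the topic may need to exist on the broker before forwarding starts. A hedged example using the standard Kafka CLI; partition and replication values are illustrative:

# Create the mcp-events topic referenced by the forwarding target
kafka-topics.sh --create \
  --topic mcp-events \
  --bootstrap-server kafka-broker:9092 \
  --partitions 3 \
  --replication-factor 1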

Prometheus Metrics

The server exposes Prometheus metrics at /api/v1/metrics:

  • mcp_events_total - Total events processed
  • mcp_events_forwarded_total - Events forwarded by target
  • mcp_forwarding_errors_total - Forwarding errors by target
  • mcp_event_processing_duration_seconds - Event processing latency
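
The same endpoint can be inspected directly; assuming the metrics path is served on the HTTP API port (8080), a quick check for the series listed above looks like this:

# Fetch the metrics endpoint and filter for the mcp_* series
curl -s http://localhost:8080/api/v1/metrics | grep '^mcp_'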

🧪 Testing

The project includes comprehensive test coverage across all packages:

# Run all unit tests
make test-unit

# Run integration tests
make test-integration

# Run benchmark tests
make test-benchmark

# Generate coverage report
make test-coverage

# Run all tests (unit + integration + benchmarks)
make test

# Lint code
make lint

# Format code
make fmt

Test Coverage

  • Config Package: 77.6% statement coverage
  • Forwarding Package: 60.1% statement coverage
  • Integration Tests: End-to-end event forwarding scenarios
  • Benchmark Tests: Performance testing for critical paths

Test Features

  • Table-driven tests for comprehensive scenario coverage
  • Mock HTTP servers for integration testing
  • Event matching and validation utilities
  • Concurrent testing patterns
  • Test utilities package for reusable helpers

🗺️ Roadmap

See our comprehensive roadmap for detailed development priorities, including:

  • 1,535+ MCP server federation across 12 categories from mcpservers.org
  • Universal MCP Orchestrator vision and implementation plan
  • Quarterly release schedule with progressive federation milestones
  • Community involvement opportunities and feedback channels
  • Technical implementation plans for massive-scale federation

🀝 Contributing

We welcome contributions! Please see the contributing guidelines for details.

Development Setup

See the development documentation for detailed instructions.

# Setup development environment
make init

# Run tests and linting
make test lint

# Submit changes
make fmt
git add .
git commit -m "feat: add new feature"

📚 Documentation

  • Development guide - Development setup and guidelines
  • API reference - Complete API documentation
  • Architecture overview - System design and architecture
  • Docker guide - Container deployment guide
  • Integration guide - Integration examples and tutorials

🔐 Security

  • Non-root container execution
  • TLS support for all protocols
  • Authentication via API keys or OAuth2
  • Rate limiting and DDoS protection
  • Regular security scanning with Trivy

Report security vulnerabilities to: security@tributary-ai-services.com

📊 Performance

  • Handles 10,000+ events/second per instance
  • Sub-millisecond forwarding latency
  • Horizontal scaling with Kubernetes HPA
  • Efficient memory usage with bounded buffers
  • Connection pooling for downstream services

🗺️ Roadmap

  • Helm chart for easy deployment
  • WebSocket support for real-time streaming
  • Event replay and time-travel debugging
  • GraphQL API for flexible queries
  • Built-in event store with retention policies
  • SDK libraries for popular languages
  • Terraform modules for cloud deployment

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🙏 Acknowledgments


Built with ❤️ by Tributary AI Services