NetMCP
Intelligent, Safe, and Autonomous Network Optimization via Model Context Protocol
Empower AI agents to optimize Linux network performance with 29 configuration cards, 5 specialized profiles, and enterprise-grade safety features
Features • Quick Start • Architecture • Profiles • Tools • Documentation
🎯 What is NetMCP?
NetMCP is an AI-native network configuration system that bridges the gap between high-level optimization goals and low-level Linux networking commands. It exposes a comprehensive suite of network optimization capabilities through the Model Context Protocol (MCP), allowing LLMs and AI agents to:
- 🔍 Discover system network configuration and performance metrics
- 📋 Plan optimizations using declarative, type-safe specifications
- ✅ Validate changes against policies and safety constraints
- 🔧 Render abstract plans into concrete system commands
- ⚡ Apply changes safely with automatic rollback on failure
Why NetMCP?
Traditional network optimization requires deep Linux expertise and manual command execution. NetMCP makes this accessible to AI agents by:
- Declarative Configuration: Describe what you want, not how to do it
- Policy-Driven Safety: All changes validated against configurable rules
- Atomic Operations: Changes applied atomically with checkpoint/rollback
- Research-Backed Profiles: 5 pre-configured profiles based on academic research
- Type-Safe: Pydantic schemas ensure data validity at every step
- Production-Ready: Comprehensive error handling and audit logging
✨ Key Features
🎴 29 Configuration Cards
Comprehensive coverage of Linux networking stack:
- 16 Sysctl Cards: TCP/IP tuning, buffer management, connection handling
- 8 Traffic Control Cards: Queue disciplines, shaping, network emulation
- 5 Firewall/Security Cards: Connection limiting, rate limiting, QoS, NAT
🎭 5 Optimization Profiles
| Profile | Focus | Use Case | Key Metrics |
|---|---|---|---|
| 🎮 Gaming | Ultra-low latency | FPS games, MOBAs | <20ms p95 latency |
| 📺 Streaming | Maximum throughput | Video delivery, CDN | Max bandwidth utilization |
| 📞 Video Calls | Balanced latency/throughput | Zoom, WebRTC | <150ms latency, 2+ Mbps |
| 📦 Bulk Transfer | Absolute max throughput | Backups, replication | >1Gbps sustained |
| 🖥️ Server | High concurrency + security | Web servers, APIs | 10K+ connections |
🛡️ Enterprise Safety Features
- Command Allowlisting: Only approved binaries can execute
- Checkpoint/Rollback: Automatic state snapshots before changes
- Schema Validation: Pydantic models enforce type safety
- Policy Enforcement: All changes validated against limits
- Audit Logging: Complete execution history and rationale
- Atomic Operations: All-or-nothing change application
🔧 Comprehensive Toolset
- 30+ Discovery Tools: Interface info, routing tables, DNS, latency, QoS, firewall rules
- Planning & Validation: Type-safe parameter plans with policy checking
- Safe Execution: Idempotent operations with automatic rollback
📋 Table of Contents
- Quick Start
- Architecture
- Optimization Profiles
- Configuration Cards
- MCP Tools Catalog
- Usage Examples
- Testing & Benchmarks
- Security
- Documentation
- Contributing
- License
🚀 Quick Start
Prerequisites
- OS: Linux (tested on Ubuntu 20.04+, Debian 11+)
- Python: 3.10 or higher
- Tools: `ip`, `sysctl`, `tc`, `nft` (nftables)
- Optional: NetworkManager (`nmcli`), wireless-tools, dnsutils
Installation
# Clone the repository
git clone https://github.com/SurriyaGokul/mcp-net-optimizer.git
cd mcp-net-optimizer
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
Running the MCP Server
# Start the server
python -m server.main
The server exposes all tools and resources via the Model Context Protocol.
MCP Client Integration
Configure your MCP-capable client (Claude Desktop, IDE with MCP support):
{
"mcpServers": {
"network-optimizer": {
"command": "python",
"args": ["-m", "server.main"],
"cwd": "/path/to/mcp-net-optimizer"
}
}
}
Quick Usage Example
from server.tools.validator import validate_change_plan
from server.tools.planner import render_change_plan
from server.tools.apply.apply import apply_rendered_plan
# Define optimization plan
plan = {
"iface": "eth0",
"profile": "gaming",
"changes": {
"sysctl": {
"net.ipv4.tcp_congestion_control": "bbr",
"net.ipv4.tcp_low_latency": "1",
"net.core.rmem_max": "16777216"
}
},
"rationale": ["Optimize for competitive gaming"]
}
# Validate → Render → Apply
validation = validate_change_plan(plan)
assert validation["ok"], validation["issues"]
rendered = render_change_plan(plan)
report = apply_rendered_plan(rendered, checkpoint_label="gaming")
print(f"✅ Applied: {report['applied']}")
print(f"📍 Checkpoint: {report['checkpoint_id']}")
🏗️ Architecture
Workflow Overview
NetMCP follows a 5-stage pipeline that transforms high-level optimization goals into safe, validated system changes:
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ DISCOVER │ -> │ PLAN │ -> │ VALIDATE │ -> │ RENDER │ -> │ APPLY │
└──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘
↓ ↓ ↓ ↓ ↓
Inspect Declare Check Generate Execute
System Intent Policies Commands Safely
State (Pydantic) & Limits & Scripts w/Rollback
1️⃣ Discover Phase
Purpose: Gather current system network configuration and performance metrics
Tools: 30+ discovery tools (no side effects)
- Network interfaces: `ip_info`, `eth_info`, `nmcli_status`
- Routing & neighbors: `ip_route`, `arp_table`, `ip_neigh`
- DNS: `resolvectl_status`, `dig`, `host`, `nslookup`
- Performance: `ping_host`, `traceroute`, `tracepath`
- QoS/Firewall: `tc_qdisc_show`, `nft_list_ruleset`, `iptables_list`
Output: Structured JSON with current state
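A minimal sketch of a baseline survey, assuming the discovery tools are importable as plain functions from `server/tools/discovery.py` (tool names come from the catalog below; the import path is an assumption):

```python
# Sketch: snapshot current state before planning any changes.
# Assumes the catalog's discovery tools are importable as plain functions.
from server.tools.discovery import ip_info_tool, ip_route_tool, tc_qdisc_show_tool

baseline = {
    "interfaces": ip_info_tool(),    # interface details, addresses, states
    "routes": ip_route_tool(),       # current routing table
    "qdiscs": tc_qdisc_show_tool(),  # active queue disciplines
}
```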
2️⃣ Plan Phase
Purpose: Define optimization goals declaratively using type-safe schemas
Input: ParameterPlan (Pydantic model)
{
"iface": "eth0",
"profile": "gaming", # gaming|streaming|video_calls|bulk_transfer|server
"changes": {
"sysctl": {...}, # TCP/IP parameters
"qdisc": {...}, # Queue discipline
"shaper": {...}, # Bandwidth shaping
"htb_classes": [...], # HTB class hierarchy
"netem": {...}, # Network emulation
"dscp": [...], # QoS marking
"connection_limits": [...], # Per-IP connection limits
"rate_limits": [...], # Packet rate limiting
"connection_tracking": {}, # Conntrack configuration
"nat_rules": [...], # NAT rules
"offloads": {...}, # NIC offloads
"mtu": 9000 # MTU size
},
"rationale": [...] # Human-readable justification
}
Key Features:
- Declarative: Specify what, not how
- Type-Safe: Pydantic validation
- Profile-Based: Inherit from research-backed presets
- Extensible: Mix-and-match optimizations
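Because ParameterPlan is a Pydantic model, a malformed plan fails fast at construction time, before any policy check runs; a minimal sketch (assuming `ParameterPlan` is exported from `server/schema/models.py` with the fields shown above):

```python
# Sketch: build a plan and let Pydantic reject bad types immediately.
from pydantic import ValidationError
from server.schema.models import ParameterPlan

try:
    plan = ParameterPlan(
        iface="eth0",
        profile="gaming",
        changes={"sysctl": {"net.ipv4.tcp_congestion_control": "bbr"}},
        rationale=["Low-latency congestion control for gaming"],
    )
except ValidationError as exc:
    print(exc)  # field-level type errors surface here
```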
3️⃣ Validate Phase
Purpose: Ensure changes are safe, within policy limits, and schema-compliant
Validation Layers:
- Schema Validation: Pydantic type checking
- Policy Enforcement: Check against `policy/limits.yaml` and `policy/validation_limits.yaml`
- Interface Validation: Verify target interface exists
- Config Card Validation: Ensure parameters match card specifications
Output: ValidationResult
{
"ok": true,
"issues": [], # Or list of error messages
"normalized_plan": {...} # Normalized ParameterPlan
}
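A minimal sketch of gating the pipeline on this result (field names as in the ValidationResult above):

```python
# Sketch: stop the pipeline unless validation passes.
from server.tools.validator import validate_change_plan

result = validate_change_plan(plan)
if not result["ok"]:
    for issue in result["issues"]:
        print(f"rejected: {issue}")    # e.g. a value outside policy limits
else:
    plan = result["normalized_plan"]   # continue with the normalized plan
```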
4️⃣ Render Phase
Purpose: Translate abstract ParameterPlan into concrete Linux commands
Transformation:
- Sysctl changes → `sysctl -w key=value` commands
- TC changes → Multi-line bash script with `tc qdisc/class/filter` commands
- Nftables → Complete nftables ruleset script
- Offloads → `ethtool -K` commands
- MTU → `ip link set dev <iface> mtu <value>`
Output: RenderedPlan
{
"sysctl_cmds": ["sysctl -w net.ipv4.tcp_congestion_control=bbr", ...],
"tc_script": "#!/bin/bash\ntc qdisc add...",
"nft_script": "table inet filter { ... }",
"ethtool_cmds": ["ethtool -K eth0 gro on", ...],
"ip_link_cmds": ["ip link set dev eth0 mtu 9000"]
}
Implementation: server/tools/planner.py - Complete support for all 29 config cards
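Since rendering has no side effects, the generated commands can be reviewed before anything touches the system; a minimal sketch:

```python
# Sketch: render and inspect the exact commands prior to applying.
from server.tools.planner import render_change_plan

rendered = render_change_plan(plan)
for cmd in rendered["sysctl_cmds"]:
    print(cmd)                # e.g. sysctl -w net.ipv4.tcp_congestion_control=bbr
print(rendered["tc_script"])  # full tc script, reviewable line by line
```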
5️⃣ Apply Phase
Purpose: Execute commands atomically with checkpoint/rollback on failure
Safety Features:
- Command Allowlisting: Only pre-approved binaries (`allowlist.yaml`)
- Checkpoint Creation: Snapshot current state before changes
- Sequential Execution: Apply commands in order, stop on first error
- Automatic Rollback: Restore checkpoint if any command fails
- Audit Logging: Record all commands and outcomes
Execution Flow:
1. Create checkpoint (save current state)
2. Execute sysctl commands
3. Execute tc script
4. Execute nftables script
5. Execute ethtool commands
6. Execute ip link commands
7. Success? Return ChangeReport
8. Failure? Rollback to checkpoint + return errors
Output: ChangeReport
{
"applied": true,
"dry_run": false,
"errors": [],
"checkpoint_id": "checkpoint_1699123456",
"notes": ["✓ sysctl -w ...", "✓ tc script applied", ...]
}
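A minimal sketch of the apply step; on failure the rollback has already run by the time the report is returned, and the checkpoint_id remains usable for a manual rollback later:

```python
# Sketch: atomic apply with the ChangeReport fields shown above.
from server.tools.apply.apply import apply_rendered_plan

report = apply_rendered_plan(rendered, checkpoint_label="pre_tuning")
if report["applied"]:
    print(f"applied; checkpoint saved as {report['checkpoint_id']}")
else:
    print("automatically rolled back:", report["errors"])
```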
Component Diagram
┌─────────────────────────────────────────────────────────────────┐
│ MCP Server (FastMCP) │
│ server/main.py + registry.py │
└────────────────────┬────────────────────────────────────────────┘
│
┌───────────┼───────────┐
│ │ │
┌────▼────┐ ┌───▼────┐ ┌───▼────┐
│Discovery│ │Planning│ │ Apply │
│ Tools │ │ Tools │ │ Tools │
└─────────┘ └────┬───┘ └───┬────┘
│ │
┌──────▼──────┐ │
│ Validator │ │
│ + Schemas │ │
└──────┬──────┘ │
│ │
┌──────▼──────────▼─────┐
│ Policy System │
│ (29 Config Cards) │
│ profiles.yaml │
│ limits.yaml │
└───────────────────────┘
Key Files
| Component | File | Purpose |
|---|---|---|
| Entry Point | server/main.py | FastMCP server initialization |
| Tool Registry | server/registry.py | Register MCP tools & resources |
| Schemas | server/schema/models.py | Pydantic models (ParameterPlan, RenderedPlan, Changes, etc.) |
| Discovery | server/tools/discovery.py | 30+ system introspection tools |
| Planner | server/tools/planner.py | ParameterPlan → RenderedPlan (all 29 cards) |
| Validator | server/tools/validator.py | Schema + policy validation |
| Apply - Sysctl | server/tools/apply/sysctl.py | Execute sysctl commands |
| Apply - TC | server/tools/apply/tc.py | Execute traffic control scripts |
| Apply - NFT | server/tools/apply/nft.py | Apply nftables rulesets |
| Apply - Iptables | server/tools/apply/iptables.py | Firewall/NAT/rate limiting |
| Apply - NIC | server/tools/apply/offloads.py | NIC offload configuration |
| Apply - MTU | server/tools/apply/mtu.py | MTU configuration |
| Apply - Orchestration | server/tools/apply/apply.py | Atomic change application |
| Checkpoints | server/tools/apply/checkpoints.py | State snapshot/rollback |
| Policy Loader | server/tools/util/policy_loader.py | Load config cards & profiles |
| Config Cards | policy/config_cards/*.yaml | 29 configuration card definitions |
| Profiles | policy/profiles.yaml | 5 optimization profile definitions |
| Limits | policy/limits.yaml | Global safety constraints |
| Command Allowlist | server/config/allowlist.yaml | Approved system binaries |
🎯 Optimization Profiles
NetMCP includes 5 research-backed profiles optimized for specific network workloads. Each profile activates a curated set of configuration cards based on academic research (BBR, TCP tuning studies) and industry best practices.
🎮 Gaming Profile
Goal: Ultra-low latency for competitive gaming (FPS, MOBAs, esports)
Target Metrics:
- Latency: <20ms p95 to game servers
- Jitter: <5ms variance
- Connection Setup: <50ms
Active Cards (13 total):
- `sysctl_tcp_congestion_control` → BBR: Low-latency congestion control
- `sysctl_tcp_low_latency` → 1: Prioritize latency over throughput
- `sysctl_tcp_fastopen` → 3: Reduce connection setup time by 1 RTT
- `sysctl_tcp_fin_timeout` → 10s: Fast connection recycling
- `sysctl_core_netdev_budget` → 600: Faster packet processing
- `sysctl_default_qdisc` → fq: Fair queuing for consistent delivery
- `sysctl_core_rmem_max`/`wmem_max` → 16MB: Moderate buffers
- `sysctl_tcp_rmem`/`wmem` → Tuned for responsiveness
- `sysctl_tcp_window_scaling`/`timestamps` → Enabled
- `tc_qdisc_type` → fq: Fair queueing discipline
Expected Results:
- ✅ 10-30% latency reduction vs defaults
- ✅ 15-25% faster connection setup (FastOpen)
- ✅ 30-50% faster socket reuse (reduced FIN timeout)
Best For: Valorant, CS:GO, League of Legends, Rocket League, online FPS/MOBA
📺 Streaming Profile
Goal: Maximum throughput for video delivery and content distribution
Target Metrics:
- Throughput: 90%+ link utilization
- Latency: <100ms acceptable
- Buffering: Minimal rebuffer events
Active Cards (12 total):
- `sysctl_tcp_congestion_control` → BBR: 2-25× throughput on bufferbloat-affected paths
- `sysctl_tcp_window_scaling` → 1: Enable large windows
- `sysctl_core_rmem_max`/`wmem_max` → 64MB: Large buffers for sustained bursts
- `sysctl_tcp_rmem`/`wmem` → Maximum values
- `sysctl_core_netdev_budget` → 600: Efficient packet processing
- `sysctl_default_qdisc` → fq: Fairness across streams
- `sysctl_tcp_window_scaling`/`timestamps` → Enabled
- `tc_qdisc_type` → htb: Hierarchical Token Bucket for shaping
- `tc_htb_rate`/`ceil` → Bandwidth guarantees and burst capacity
- `tc_htb_priority` → Traffic prioritization
Expected Results:
- ✅ 2-4× throughput on lossy/bufferbloat-affected networks
- ✅ 20-50% improvement on clean high-bandwidth links
- ✅ Smooth playback across diverse client connections
Best For: YouTube, Twitch streaming, CDN edge servers, video on demand, OTT platforms
📞 Video Calls Profile
Goal: Balance latency and throughput for real-time communications (WebRTC, VoIP)
Target Metrics:
- Latency: <150ms (ITU-T G.114 standard for interactive voice)
- Jitter: <30ms
- Bandwidth: 2-5 Mbps typical (720p-1080p video)
Active Cards (13 total):
- `sysctl_tcp_congestion_control` → BBR: Balanced latency/throughput
- `sysctl_tcp_fastopen` → 3: Faster signaling connections
- `sysctl_core_rmem_max`/`wmem_max` → 32MB: Moderate buffers
- `sysctl_tcp_rmem`/`wmem` → Balanced sizing
- `sysctl_default_qdisc` → fq: Low jitter
- `sysctl_tcp_window_scaling`/`timestamps` → Enabled
- `tc_qdisc_type` → fq: Fair queuing
- `iptables_QoS_marking` → DSCP EF/AF41: Prioritize media packets
  - UDP ports 3478, 3479, 19302 (STUN/TURN) → EF (Expedited Forwarding)
  - TCP port 443 (signaling) → AF41 (Assured Forwarding)
Expected Results:
- ✅ Latency within ITU-T standards (<150ms)
- ✅ 20-40% reduction in jitter
- ✅ Prioritized media traffic via QoS markings
- ✅ Improved call quality on congested networks
Best For: Zoom, Google Meet, Microsoft Teams, Slack Huddles, WebRTC applications, VoIP
📦 Bulk Transfer Profile
Goal: Absolute maximum throughput for large file transfers and data replication
Target Metrics:
- Throughput: >1 Gbps sustained on capable links
- Latency: Not a priority (100-300ms acceptable)
- Efficiency: Maximum link utilization
Active Cards (12 total):
- `sysctl_tcp_congestion_control` → BBR: Probe bandwidth aggressively
- `sysctl_tcp_window_scaling` → 1: Essential for high-BDP paths
- `sysctl_core_rmem_max`/`wmem_max` → 128MB: Maximum buffers
- `sysctl_tcp_rmem`/`wmem` → Tuned for large BDP (Bandwidth-Delay Product)
- `sysctl_core_netdev_budget` → 600: Efficient bulk processing
- `sysctl_default_qdisc` → fq: Fair sharing across flows
- `sysctl_tcp_timestamps` → 1: Enable PAWS for high-speed transfers
- `sysctl_tcp_window_scaling` → Enabled
- `tc_qdisc_type` → htb: Traffic shaping
- `tc_htb_rate`/`ceil` → Utilize full available bandwidth
Expected Results:
- ✅ 2-4× throughput on high-latency paths (100ms+)
- ✅ Sustained >1 Gbps on 10GbE links
- ✅ Efficient utilization of WAN links
- ✅ Optimal for high-BDP scenarios
Best For: Backups, database replication, rsync, scp, data center migrations, cloud uploads
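Bulk transfer is the one profile without a worked example in Usage Examples below; a minimal sketch mirroring the card values above (128MB buffers, window scaling, timestamps, BBR):

```python
# Sketch: a bulk-transfer plan built from the card values listed above.
plan = {
    "iface": "eth0",
    "profile": "bulk_transfer",
    "changes": {
        "sysctl": {
            "net.ipv4.tcp_congestion_control": "bbr",
            "net.ipv4.tcp_window_scaling": "1",  # essential for high-BDP paths
            "net.ipv4.tcp_timestamps": "1",      # PAWS on high-speed transfers
            "net.core.rmem_max": "134217728",    # 128MB
            "net.core.wmem_max": "134217728",
        }
    },
    "rationale": ["Maximize sustained throughput on a high-BDP replication link"],
}
```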
🖥️ Server Profile
Goal: High concurrency, DDoS protection, and security for production servers
Target Metrics:
- Connections: 10,000-1,000,000 concurrent
- Latency: <50ms under load
- Security: SYN flood protection, rate limiting, per-IP limits
Active Cards (17 total - most comprehensive):
- `sysctl_tcp_congestion_control` → BBR: Fair service across connections
- `sysctl_tcp_syncookies` → 1: SYN flood protection (RFC 4987)
- `sysctl_tcp_max_syn_backlog` → 8192: Large SYN queue
- `sysctl_tcp_fin_timeout` → 15s: Faster connection recycling
- `sysctl_ip_local_port_range` → 1024-65000: More ephemeral ports
- `sysctl_core_rmem_max`/`wmem_max` → 64MB: Balanced buffers
- `sysctl_tcp_rmem`/`wmem` → Server-optimized sizing
- `sysctl_default_qdisc` → fq: Fairness
- `sysctl_tcp_window_scaling`/`timestamps`/`fastopen` → Enabled
- `iptables_connection_limiting` → Per-IP connection limits
  - HTTP (80): 100 connections/IP
  - HTTPS (443): 100 connections/IP
- `iptables_rate_limiting` → 1000 packets/second with burst=50
- `iptables_connection_tracking` → Support 1M concurrent connections
  - `nf_conntrack_max` → 1000000
  - `nf_conntrack_tcp_timeout_established` → 432000 (5 days)
- `tc_qdisc_type` → fq: Fair queuing
Expected Results:
- ✅ Protection against SYN floods and DDoS
- ✅ Support 10K-1M concurrent connections
- ✅ Per-IP rate limiting and connection limiting
- ✅ Graceful degradation under attack
- ✅ Production-ready security posture
Best For: Web servers (nginx, Apache), API gateways, load balancers, public services, SaaS platforms
🧩 29 Configuration Cards
All 29 configuration cards are fully implemented in server/tools/planner.py and mapped to executable commands.
Sysctl Cards (16)
| Card ID | Parameter | Purpose | Values |
|---|---|---|---|
| `sysctl_tcp_congestion_control` | Congestion control algorithm | BBR (recommended), CUBIC, Reno | String |
| `sysctl_tcp_low_latency` | Prioritize latency over throughput | Gaming, real-time apps | 0/1 |
| `sysctl_tcp_fastopen` | TFO mode (reduce handshake) | Faster connections | 0-3 |
| `sysctl_tcp_fin_timeout` | TIME_WAIT socket timeout | Connection recycling | 10-60s |
| `sysctl_tcp_max_syn_backlog` | SYN queue size | High concurrency | 1024-8192 |
| `sysctl_tcp_syncookies` | SYN flood protection | DDoS defense | 0/1 |
| `sysctl_tcp_window_scaling` | Enable large TCP windows | High-BDP paths | 0/1 |
| `sysctl_tcp_timestamps` | RFC 1323 timestamps | PAWS, RTT | 0/1 |
| `sysctl_core_rmem_max` | Max receive buffer | Memory for RX | 16-128MB |
| `sysctl_core_wmem_max` | Max send buffer | Memory for TX | 16-128MB |
| `sysctl_tcp_rmem` | TCP RX buffer tuning | min/default/max | Bytes |
| `sysctl_tcp_wmem` | TCP TX buffer tuning | min/default/max | Bytes |
| `sysctl_core_netdev_budget` | Packet processing budget | RX efficiency | 300-600 |
| `sysctl_default_qdisc` | Default queueing discipline | fq, fq_codel, cake | String |
| `sysctl_ip_forward` | Enable IP forwarding | Routing, NAT | 0/1 |
| `sysctl_ip_local_port_range` | Ephemeral port range | More sockets | start-end |
Traffic Control Cards (8)
| Card ID | Function | Use Case | Parameters |
|---|---|---|---|
| `tc_qdisc_type` | Queue discipline | fq, fq_codel, cake, htb, netem | String |
| `tc_htb_rate` | HTB rate limit | Bandwidth guarantee | Mbps |
| `tc_htb_ceil` | HTB burst limit | Max bandwidth | Mbps |
| `tc_htb_priority` | HTB class priority | Traffic prioritization | 1-7 |
| `tc_default_class` | Default HTB class | Fallback classid | String |
| `tc_netem_delay` | Network emulation delay | Testing, simulation | ms |
| `tc_netem_loss` | Packet loss emulation | Testing | 0-100% |
| `shaper` | Bandwidth shaping | egress/ingress limits | Mbps |
Firewall/Security Cards (5)
| Card ID | Feature | Purpose | Configuration |
|---|---|---|---|
| `iptables_connection_limiting` | Per-IP connection limits | Prevent resource exhaustion | proto/port/limit/mask |
| `iptables_rate_limiting` | Packet rate limiting | DDoS mitigation | rate/burst |
| `iptables_connection_tracking` | Conntrack table size | High concurrency | max_connections/timeouts |
| `iptables_QoS_marking` | DSCP tagging | Traffic prioritization | match/dscp |
| `iptables_nat` | NAT rules | SNAT/DNAT | type/source/dest |
NIC Cards (2)
| Card ID | Feature | Purpose | Options |
|---|---|---|---|
| NIC Offloads | Hardware acceleration | CPU offload | GRO, GSO, TSO, LRO, RX checksums, TX checksums |
| MTU | Maximum transmission unit | Jumbo frames | 1500-9000 bytes |
📚 MCP Tools Catalog
🔍 Discovery Tools (30+)
No side effects - Safe to call anytime for system introspection.
Network Interfaces
- `ip_info_tool()`: Network interface details, IP addresses, states
- `eth_info_tool()`: Ethernet interface information
- `nmcli_status_tool()`: NetworkManager status and connections
- `iwconfig_tool()`: Wireless interface configuration
- `iwlist_scan_tool()`: WiFi network scan
Network Configuration
- `ip_route_tool()`: Routing table
- `ip_neigh_tool()`: Neighbor table (ARP cache)
- `arp_table_tool()`: Legacy ARP table view
- `resolvectl_status_tool()`: DNS resolver configuration
- `cat_resolv_conf_tool()`: /etc/resolv.conf contents
DNS & Connectivity
- `dig_tool(hostname)`: DNS resolution with dig
- `host_tool(hostname)`: DNS lookup with host
- `nslookup_tool(hostname)`: DNS query with nslookup
- `ping_host_tool(target, count)`: ICMP ping
- `traceroute_tool(target)`: Network path tracing
- `tracepath_tool(target)`: Path MTU discovery
Traffic Control & Firewall
- `tc_qdisc_show_tool()`: Traffic control queue disciplines
- `nft_list_ruleset_tool()`: nftables ruleset
- `iptables_list_tool()`: Legacy iptables rules
- `ss_summary_tool()`: Socket statistics
System Information
- `hostname_ips_tool()`: Hostname resolution
- `hostnamectl_tool()`: System hostname details
⚙️ Planning & Validation Tools
validate_change_plan_tool(parameter_plan: dict) → ValidationResult
Validate a ParameterPlan against schemas and policies.
Input: ParameterPlan dictionary
Output: ValidationResult with ok status, issues list, and normalized plan
Example:
validation = validate_change_plan_tool({
"iface": "eth0",
"profile": "gaming",
"changes": {"sysctl": {...}}
})
# Returns: {"ok": true, "issues": [], "normalized_plan": {...}}
render_change_plan_tool(plan: dict) → RenderedPlan
Render a ParameterPlan into concrete commands. No side effects.
Input: ParameterPlan dictionary
Output: RenderedPlan with sysctl_cmds, tc_script, nft_script, etc.
Example:
rendered = render_change_plan_tool(plan)
# Returns: {
# "sysctl_cmds": ["sysctl -w net.ipv4.tcp_congestion_control=bbr"],
# "tc_script": "tc qdisc add dev eth0 root fq",
# "nft_script": "...",
# "ethtool_cmds": [],
# "ip_link_cmds": []
# }
🔧 Execution Tools
apply_rendered_plan_tool(rendered_plan: dict, checkpoint_label?: str) → ChangeReport
Apply a RenderedPlan atomically with automatic rollback on failure.
Input: RenderedPlan dictionary, optional checkpoint label
Output: ChangeReport with applied status, errors, checkpoint_id, notes
Safety Features:
- Creates checkpoint before changes
- Applies all commands sequentially
- Automatic rollback on any failure
- Detailed execution log
Example:
report = apply_rendered_plan_tool(rendered, checkpoint_label="optimization_v1")
# Returns: {
# "applied": true,
# "errors": [],
# "checkpoint_id": "checkpoint_1699123456",
# "notes": ["✓ sysctl -w ...", "✓ tc script applied", ...]
# }
snapshot_checkpoint_tool(label?: str) → dict
Create a system state snapshot for rollback.
Input: Optional descriptive label
Output: checkpoint_id for later rollback
rollback_to_checkpoint_tool(checkpoint_id: str) → dict
Restore a previously saved system state.
Input: checkpoint_id from snapshot
Output: Rollback status and notes
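A minimal sketch pairing the two tools (assuming they are callable as Python functions and that the snapshot returns its checkpoint_id in a dict, as the ChangeReport does):

```python
# Sketch: snapshot, experiment, restore if results regress.
checkpoint = snapshot_checkpoint_tool(label="before_experiment")

# ... apply changes, run benchmarks ...

rollback_to_checkpoint_tool(checkpoint["checkpoint_id"])  # restore saved state
```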
🎯 Direct Apply Helpers
Low-level tools for specific operations:
- `set_sysctl_tool(kv: dict)`: Apply sysctl key-value pairs
- `apply_tc_script_tool(lines: list)`: Execute tc commands
- `apply_nft_ruleset_tool(ruleset: str)`: Apply nftables configuration
- `set_nic_offloads_tool(iface: str, flags: dict)`: Configure NIC offloads
- `set_mtu_tool(iface: str, mtu: int)`: Set interface MTU
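A minimal sketch combining two of the helpers (assuming they are callable as plain Python functions):

```python
# Sketch: targeted tweaks without building a full ParameterPlan.
set_sysctl_tool({"net.core.default_qdisc": "fq"})  # single sysctl change
set_mtu_tool("eth0", 9000)                         # jumbo frames on eth0
```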
📊 Usage Examples
Example 1: Gaming Optimization
Goal: Minimize latency for competitive gaming
plan = {
"iface": "eth0",
"profile": "gaming",
"changes": {
"sysctl": {
"net.ipv4.tcp_congestion_control": "bbr",
"net.ipv4.tcp_low_latency": "1",
"net.ipv4.tcp_fastopen": "3",
"net.core.rmem_max": "16777216",
"net.core.wmem_max": "16777216",
"net.core.default_qdisc": "fq"
},
"qdisc": {
"type": "fq",
"params": {}
}
},
"rationale": [
"BBR for low-latency congestion control",
"Small buffers to minimize bufferbloat",
"FQ qdisc for consistent packet delivery",
"FastOpen to reduce connection setup time"
]
}
# Validate → Render → Apply
validation = validate_change_plan(plan)
rendered = render_change_plan(plan)
report = apply_rendered_plan(rendered, checkpoint_label="gaming")
Expected Results:
- Latency reduction: 10-30%
- Connection setup time: 15-25% faster
- Jitter reduction: 20-40%
Example 2: Web Server Protection
Goal: Protect against DDoS while maintaining performance
plan = {
"iface": "eth0",
"profile": "server",
"changes": {
"sysctl": {
"net.ipv4.tcp_syncookies": "1",
"net.ipv4.tcp_max_syn_backlog": "8192",
"net.ipv4.tcp_fin_timeout": "15",
"net.ipv4.ip_local_port_range": "1024 65000"
},
"connection_limits": [
{
"protocol": "tcp",
"port": 80,
"limit": 100,
"mask": 32
},
{
"protocol": "tcp",
"port": 443,
"limit": 100,
"mask": 32
}
],
"rate_limits": [
{
"rate": "1000/second",
"burst": 50
}
],
"connection_tracking": {
"max_connections": 1000000,
"tcp_timeout_established": 432000
}
},
"rationale": [
"SYN cookies for SYN flood protection",
"Per-IP connection limits prevent resource exhaustion",
"Rate limiting prevents packet floods",
"Large conntrack table for high concurrency"
]
}
Expected Results:
- Protection against SYN floods
- Per-IP connection limiting (100 max)
- Rate limiting: 1000 packets/second with burst tolerance
- Support for 1M concurrent connections
Example 3: Video Streaming Server
Goal: Maximize throughput for content delivery
plan = {
"iface": "eth0",
"profile": "streaming",
"changes": {
"sysctl": {
"net.ipv4.tcp_congestion_control": "bbr",
"net.ipv4.tcp_window_scaling": "1",
"net.core.rmem_max": "67108864", # 64MB
"net.core.wmem_max": "67108864",
"net.ipv4.tcp_rmem": "8192 262144 67108864",
"net.ipv4.tcp_wmem": "8192 262144 67108864",
"net.core.netdev_budget": "600"
},
"qdisc": {
"type": "htb",
"params": {}
},
"shaper": {
"egress_mbit": 900, # 900 Mbps guaranteed
"ceil_mbit": 1000 # 1 Gbps burst
},
"htb_classes": [
{
"classid": "1:10",
"rate_mbit": 700,
"ceil_mbit": 900,
"priority": 1
}
]
},
"rationale": [
"BBR for maximum throughput on diverse networks",
"Large buffers for sustained high-bandwidth transfers",
"HTB for traffic shaping and bandwidth guarantees"
]
}
Expected Results:
- 2-25× throughput improvement on bufferbloat-affected paths
- Sustained >1 Gbps transfers on capable links
- Fair bandwidth distribution across connections
Example 4: Video Conferencing Optimization
Goal: Balance latency and throughput for WebRTC
plan = {
"iface": "eth0",
"profile": "video_calls",
"changes": {
"sysctl": {
"net.ipv4.tcp_congestion_control": "bbr",
"net.ipv4.tcp_fastopen": "3",
"net.core.rmem_max": "33554432", # 32MB
"net.core.wmem_max": "33554432"
},
"qdisc": {
"type": "fq",
"params": {}
},
"dscp": [
{
"match": {
"proto": "udp",
"dports": [3478, 3479, 19302] # STUN/TURN ports
},
"dscp": "EF" # Expedited Forwarding
},
{
"match": {
"proto": "tcp",
"dports": [443] # Signaling
},
"dscp": "AF41"
}
]
},
"rationale": [
"BBR balances latency and throughput for real-time media",
"DSCP EF marking prioritizes real-time media packets",
"FQ qdisc minimizes latency variance"
]
}
Expected Results:
- Latency within ITU-T G.114 standards (<150ms)
- Prioritized media traffic via QoS
- Reduced jitter and packet loss
🧪 Testing & Benchmarks
Running Tests
# Run all tests
./server/tests/run_all_tests.sh
# Integration tests
pytest server/tests/test_integration.py -v -s
# Real-world scenario tests
pytest server/tests/test_real_world_scenarios.py -v -s
# Comprehensive implementation test
python server/tests/test_comprehensive_implementation.py
# Demo of all capabilities
python server/tests/demo.py
Benchmarking
# Capture baseline performance
python server/tests/benchmark.py -o baseline.json
# Apply optimizations (e.g., gaming profile)
# ... apply your optimizations ...
# Measure after optimization
python server/tests/benchmark.py -o after.json
# Compare results
python server/tests/benchmark.py --compare baseline.json after.json
Test Coverage
✅ Discovery Tools: All 30+ tools tested for correctness and structured output
✅ Policy System: Validates loading of all 29 config cards and 5 profiles
✅ Schema Validation: Tests Pydantic models and policy enforcement
✅ Rendering: Verifies ParameterPlan → RenderedPlan translation for all 29 cards
✅ Scenarios: Tests gaming, server, QoS, VPN/NAT, DDoS hardening profiles
Benchmark Results
Baseline System (typical defaults):
- Latency to 8.8.8.8: ~18.36ms avg
- TCP congestion control: CUBIC
- Buffer sizes: 16MB rmem_max, 16MB wmem_max
- Default qdisc: fq_codel
Gaming Profile (optimized):
- Latency improvement: 10-30% reduction
- Connection setup: 15-25% faster (FastOpen)
- Socket reuse: 30-50% faster (reduced FIN timeout)
Streaming Profile (high throughput):
- Bandwidth: 2-4× improvement on lossy networks (BBR)
- Clean networks: +20-50% throughput
- Bufferbloat: 30-60% reduction under load
🛡️ Security
Command Allowlisting
All executable binaries are validated against server/config/allowlist.yaml:
allowed_binaries:
- /usr/bin/ip
- /usr/sbin/sysctl
- /usr/sbin/tc
- /usr/sbin/nft
- /usr/sbin/ethtool
# ... more binaries
Only pre-approved binaries can execute. Any attempt to run non-allowlisted commands is rejected.
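The gate can be as simple as resolving the binary and checking membership; a hypothetical sketch (`is_allowed` is illustrative, not the repository's actual function):

```python
# Hypothetical sketch of an allowlist gate; not the repository's actual code.
import shutil
import yaml

with open("server/config/allowlist.yaml") as f:
    ALLOWED = set(yaml.safe_load(f)["allowed_binaries"])

def is_allowed(argv: list[str]) -> bool:
    """Reject any command whose resolved binary is not allowlisted."""
    resolved = shutil.which(argv[0])  # resolves bare names and absolute paths
    return resolved in ALLOWED
```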
Schema Validation
All ParameterPlans are validated using Pydantic models:
- Type checking (int, str, list, dict)
- Range validation (e.g., buffer sizes 16-128MB)
- Enum validation (e.g., congestion control: bbr|cubic|reno)
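An illustrative Pydantic sketch of the rules above; the model and field names here are hypothetical, not copied from `server/schema/models.py`:

```python
# Hypothetical sketch of enum + range validation in the style described above.
from typing import Literal
from pydantic import BaseModel, Field

class SysctlLimits(BaseModel):
    # enum validation: only known congestion control algorithms pass
    tcp_congestion_control: Literal["bbr", "cubic", "reno"] = "bbr"
    # range validation: buffer sizes constrained to 16-128MB
    rmem_max: int = Field(default=16 * 2**20, ge=16 * 2**20, le=128 * 2**20)

SysctlLimits(tcp_congestion_control="bbr", rmem_max=64 * 2**20)  # passes
# SysctlLimits(rmem_max=1024) would raise a ValidationError
```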
Policy Enforcement
All changes checked against policy/limits.yaml and policy/validation_limits.yaml:
- Buffer size limits
- Rate limits
- Port ranges
- Connection limits
No Shell Injection
Commands executed as argv arrays, not shell strings:
# Safe
subprocess.run(["sysctl", "-w", "net.ipv4.tcp_congestion_control=bbr"])
# NOT used (shell=True is never used)
subprocess.run("sysctl -w net.ipv4.tcp_congestion_control=bbr", shell=True)
Checkpoint/Rollback
All changes create automatic snapshots:
- Snapshot current state before changes
- Apply changes sequentially
- On any error: automatic rollback to snapshot
- On success: keep checkpoint for manual rollback if needed
Audit Logging
Complete execution history:
- All commands executed
- Timestamps
- Success/failure status
- Rationale for changes
- Checkpoint IDs for rollback
📖 Documentation
Comprehensive Guides
- Complete learning guide covering:
  - System architecture (5-stage pipeline)
  - Component deep dives
  - Schema models and validation
  - Policy system and config cards
  - All 5 optimization profiles
  - Testing and benchmarking
  - Real-world examples
- Technical implementation details:
  - All 29 config cards status
  - Schema extensions (7 new models)
  - Planner.py rendering functions
  - Apply module implementations
  - Profile configurations
- Verification of planner.py:
  - All 29 cards mapped to system calls
  - Rendering function coverage
  - Command generation logic
  - Implementation completeness
Configuration Files
- `policy/config_cards/*.yaml`: 29 individual config card definitions
- `policy/profiles.yaml`: 5 optimization profiles with active_cards lists
- `policy/limits.yaml`: Global safety constraints
- `policy/validation_limits.yaml`: Schema validation limits
- `server/config/allowlist.yaml`: Command allowlist
API Documentation
See docstrings in:
- `server/schema/models.py`: All Pydantic models
- `server/tools/planner.py`: Rendering logic
- `server/tools/validator.py`: Validation logic
- `server/tools/apply/*.py`: Apply module implementations
🤝 Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Development Setup
# Clone your fork
git clone https://github.com/YOUR_USERNAME/mcp-net-optimizer.git
cd mcp-net-optimizer
# Install dev dependencies
pip install -r requirements.txt
pip install pytest pytest-cov black isort mypy
# Run tests
pytest server/tests/ -v
# Run linting
black server/
isort server/
mypy server/
Adding New Config Cards
- Create YAML definition in `policy/config_cards/`
- Add schema model to `server/schema/models.py`
- Extend `render_change_plan()` in `server/tools/planner.py`
- Create apply function in `server/tools/apply/`
- Add tests in `server/tests/`
- Update documentation
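As an illustration of steps 2 and 3, a hypothetical `sysctl_tcp_mtu_probing` card (every name below is invented for the example):

```python
# Hypothetical sketch for steps 2-3: a schema model plus its rendering branch.
from typing import Literal
from pydantic import BaseModel

class TcpMtuProbing(BaseModel):
    """Schema for a hypothetical sysctl_tcp_mtu_probing card."""
    mode: Literal[0, 1, 2] = 1  # 0=off, 1=on after ICMP black hole, 2=always

def render_tcp_mtu_probing(card: TcpMtuProbing) -> list[str]:
    """Branch that render_change_plan() would call for the new card."""
    return [f"sysctl -w net.ipv4.tcp_mtu_probing={card.mode}"]
```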
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.