MCP Copilot Lab
Opinionated Model Context Protocol (MCP) sandbox that shows how to graduate a JSON-RPC tutorial server into a modular, stateful copilot with guardrails, memory, automation, and a CLI chat surface.
See RECENT_FIXES.txt for a concise changelog.
What You Get
- Guardrailed MCP server (`mcp_server.py`) with sandboxed file access, safe math, and consistent `{"ok": ...}` responses.
- Tool packs for storage, templates, plans, watchers, alerts, diagnostics, and an MCP-aware CLI (`cli_chat.py`).
- Minimal tutorial server (`hello_mcp_server.py`) for baseline compatibility checks.
- Sample state under `artifacts/`, `logs/`, and `data/`, plus a cleanup helper (`scripts/clean.sh`) to reset the lab.
Why This Lab Exists
- Prove out patterns for running an MCP copilot in persistent, stateful environments.
- Ship ready-to-use tooling (kv storage, artifact handling, automation hooks) you can reuse.
- Offer a concise playground for experimenting with server-side guardrails and tool orchestration.
Key Components
- `mcp_server.py` – Feature-rich MCP server with structured responses, guardrails, and dynamic tool packs.
- `hello_mcp_server.py` – Minimal JSON-RPC baseline that mirrors the official SDK tutorial.
- `cli_chat.py` – Lightweight CLI that connects to the server and shows tool traces and token usage.
- `tools/` – Collection of modular tool packs (`kv_store`, `config`, `artifacts`, `plans`, `dynamic_plans`, `progress`, `watchers`, `alerts`, `templates`, etc.).
- `artifacts/`, `data/`, `logs/` – Sample state and storage roots used by the automation features (safe to remove when you want a clean slate).
Prerequisites
- Python 3.11 or newer
- uv for virtualenv and dependency management (or your preferred `pip` wrapper)
- Python packages: `mcp`, `openai`, and `python-dotenv`
- Access to an MCP-capable model (OpenAI example provided)
Quickstart
- Create the environment and install deps
uv venv
uv pip install mcp openai python-dotenv
- Configure your model + server defaults
echo "OPENAI_API_KEY=sk-..." > .env
echo "OPENAI_MODEL=gpt-4o-mini" >> .env
echo "MCP_SERVER=python mcp_server.py" >> .env # optional; CLI defaults to this command
# Toggle optional packs:
echo "MCP_ENABLE_DIAGNOSTICS=0" >> .env # disable heavy net/http/tls tools
echo "MCP_ENABLE_PCAP_TOOLS=0" >> .env # disable packet capture helpers
- Run a server
uv run python mcp_server.py # full lab with guardrails + tool packs
# or
python hello_mcp_server.py # stripped-down tutorial server
The CLI can also spawn mcp_server.py automatically via MCP_SERVER.
- Talk to it from the CLI
uv run python cli_chat.py
The CLI prints tool traces and token counts inline.
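If you prefer to drive the server from your own script instead of `cli_chat.py`, the official MCP Python SDK can spawn it over stdio. A minimal sketch follows; the `say_hello` argument name is an assumption, so check the `list_tools()` output for the real schema:

```python
# talk_to_lab.py -- minimal stdio client sketch using the official `mcp` SDK.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Same command the CLI uses by default (see MCP_SERVER above).
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            # "name" is a guessed parameter; inspect the tool schema first.
            result = await session.call_tool("say_hello", {"name": "lab"})
            print(result.content)

asyncio.run(main())
```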
Heads-up: the default sandbox lives under artifacts/, logs/, and data/. Override with MCP_SANDBOX_ROOT/MCP_ARTIFACTS_DIR in .env if you want a different location. Clear local state with bash scripts/clean.sh whenever you want a fresh lab.
Docker
Build and run the server inside a container with all optional diagnostic tools available:
docker build -t mcp-lab .
docker run --rm -it \
-p 8765:8765 \
-e OPENAI_API_KEY=... \
-e OPENAI_MODEL=gpt-4o-mini \
mcp-lab
The image uses /app/artifacts as the sandbox root. Mount a host directory if you want to persist artifacts:
docker run --rm -it \
-v "$(pwd)/artifacts:/app/artifacts" \
mcp-lab
Override MCP_SERVER in .env or your CLI session to point at the container if you are connecting from another host.
Feature Highlights
Guardrails & Configuration
- Each built-in tool returns `{"ok": bool, ...}` payloads so failures are explicit and machine-checkable (a minimal sketch follows below).
- Sandbox root defaults to `./artifacts` with extension and size limits (`.log`, `.txt`) for safer file I/O.
- Profile-aware settings live in `tools/config.py`, letting you toggle between `dev`, `customerA`, or custom profiles via environment overrides.
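For illustration, here is a minimal sketch of that guardrail pattern using the SDK's FastMCP decorator; the tool name, constants, and error messages are stand-ins, not the lab's actual code:

```python
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("guardrail-sketch")
SAFE_ROOT = Path("artifacts").resolve()      # sandbox root (illustrative)
ALLOWED_SUFFIXES = {".log", ".txt"}          # extension allowlist

@mcp.tool()
def read_sandboxed_file(path: str) -> dict:
    """Read a text file, refusing anything outside the sandbox."""
    target = (SAFE_ROOT / path).resolve()
    if not target.is_relative_to(SAFE_ROOT):
        return {"ok": False, "error": "path escapes sandbox"}
    if target.suffix not in ALLOWED_SUFFIXES:
        return {"ok": False, "error": f"extension {target.suffix!r} not allowed"}
    try:
        return {"ok": True, "text": target.read_text()}
    except OSError as exc:
        return {"ok": False, "error": str(exc)}

if __name__ == "__main__":
    mcp.run()
```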
Persistent Memory & Artifacts
- `tools/kv_store.py` offers JSON-backed list/get/set/delete helpers for lightweight agent memory (sketched below).
- `tools/artifacts.py` persists text/JSON/binary outputs and provides readable previews.
- `tools/tool_utils.py` standardises FastMCP response handling so nested tools can compose cleanly.
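To make the memory story concrete, a toy JSON-backed store in the spirit of `tools/kv_store.py` might look like this; the function names and `data/kv.json` path mirror the README's description, but the real module's internals may differ:

```python
import json
from pathlib import Path

KV_PATH = Path("data/kv.json")

def _load() -> dict:
    try:
        return json.loads(KV_PATH.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}  # self-heal on a missing or corrupt store

def kv_set(key: str, value) -> dict:
    data = _load()
    data[key] = value
    KV_PATH.parent.mkdir(parents=True, exist_ok=True)
    KV_PATH.write_text(json.dumps(data, indent=2))
    return {"ok": True, "key": key}

def kv_get(key: str) -> dict:
    data = _load()
    if key not in data:
        return {"ok": False, "error": f"unknown key {key!r}"}
    return {"ok": True, "value": data[key]}
```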
Automation & Ops Hooks
- `tools/plans.py` and `tools/dynamic_plans.py` orchestrate templated, multi-step workflows.
- `tools/progress.py` tracks log offsets to summarise only new bytes on each read (see the sketch after this list).
- `tools/watchers.py` reacts to file fingerprint changes; `tools/alerts.py` triggers when regex thresholds are exceeded.
- `tools/templates.py` turns KV/config/artifact data into incident updates or customer-facing summaries.
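The incremental-read idea behind the progress tools can be sketched in a few lines; the offsets file and function name below are assumptions, not the real `tools/progress.py` API:

```python
import json
from pathlib import Path

OFFSETS = Path("data/offsets.json")  # hypothetical sidecar holding per-file byte offsets

def read_new_bytes(log_path: str) -> dict:
    """Return only the bytes appended since the last call for this path."""
    offsets = json.loads(OFFSETS.read_text()) if OFFSETS.exists() else {}
    start = offsets.get(log_path, 0)
    with open(log_path, "rb") as fh:
        fh.seek(start)
        chunk = fh.read()
    offsets[log_path] = start + len(chunk)
    OFFSETS.parent.mkdir(parents=True, exist_ok=True)
    OFFSETS.write_text(json.dumps(offsets))
    return {"ok": True, "new_bytes": len(chunk), "text": chunk.decode(errors="replace")}
```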
Diagnostics & Incident Response
- `tools/cases.py` stores case timelines, artifacts, and exports, while `tools/report_bundles.py` zips recent evidence for handoff.
- `tools/http_diag.py`, `tools/tls_diag.py`, and `tools/net_diag.py` wrap curl/OpenSSL/ping/dig with redaction and audit trails (a toy redaction helper is sketched after this list).
- `tools/audit.py` and `tools/watch_dir_summary.py` produce append-only logs summarising monitoring runs.
- `tools/validators.py` provides quick checks for PEM chains, rate-limit JSON, and IdP metadata before escalation.
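As an illustration of the redaction step, a toy helper could scrub obvious secrets from command output before it is saved; the patterns and function name are assumptions, not the lab's actual implementation:

```python
import re

REDACTIONS = [
    # Mask bearer tokens and API keys before a diagnostic trace is persisted.
    (re.compile(r"(Authorization:\s*Bearer\s+)\S+", re.IGNORECASE), r"\1[REDACTED]"),
    (re.compile(r"(api[_-]?key=)\S+", re.IGNORECASE), r"\1[REDACTED]"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```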
Tool Catalogue
Core tools exported by mcp_server.py, grouped by theme:
- Utility & File Access: `say_hello`, `get_time`, `math_eval`, `search_files`, `read_file`, `summarize_logs`.
- State & Storage: `kv_set`, `kv_get`, `kv_delete`, `kv_list`, `save_text`, `save_json`, `save_bytes`, `list_artifacts`, `read_artifact`, `delete_artifact`.
- Configuration & Profiles: `config_load`, `config_set_profile`, `config_list_profiles`, `config_override`, and helpers surfaced from `tools/config.py`.
- Plans & Automation: `plan_summarize_logs`, `run_plan`, `dynamic_plan_create`, `dynamic_plan_run`, plus `watch_file_once`, `watch_file_poll`, `watch_dir_once`, `watch_dir_poll`, and `watch_dir_summary`.
- Progress Tracking & Alerts: `track_read`, `track_read_and_summarize`, `offset_read`, `offset_reset`, `alert_count_text`, `alert_track_and_save`, `alert_run_plan_if`.
- Case Management & Reporting: `case_create`, `case_get`, `case_list`, `case_note`, `case_attach_artifact`, `case_export`, alongside `bundle_latest` for artifact zips.
- Diagnostics: `http_get`, `tls_inspect`, `net_ping`, `net_trace`, `dns_lookup`, each saving redacted traces for later review.
- Security & Governance: `secret_set`, `secret_get`, `secret_list`, `role_set`, `role_get`, `audit_append`, plus validators (`validate_cert_chain`, `validate_rate_limits`, `validate_idp_metadata`).
- Templating: `tpl_render`, `tpl_list`, `tpl_delete`, and `gen_incident_update` for structured narrative output.
Every tool returns structured JSON so you can chain them safely from an agent or client.
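Because every payload carries the `ok` flag, an agent can gate each step on the previous one. A rough chaining sketch follows; the `key`/`value` argument names are guesses, and the JSON-text decoding assumes FastMCP's default text content, so confirm both via `list_tools()`:

```python
import json

async def remember_then_recall(session):
    # `session` is an initialized mcp.ClientSession, as in the Quickstart sketch.
    set_result = await session.call_tool("kv_set", {"key": "customer", "value": "acme"})
    payload = json.loads(set_result.content[0].text)  # assuming the result arrives as JSON text
    if not payload.get("ok"):
        raise RuntimeError(payload.get("error", "kv_set failed"))
    get_result = await session.call_tool("kv_get", {"key": "customer"})
    print(json.loads(get_result.content[0].text))
```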
Working Directories
- `artifacts/` – Sandbox root and default artifact directory for tool-generated output.
- `data/kv.json` – Persistent store backing the KV tools (auto-creates and self-heals on corruption).
- `logs/chat.log` – Sample log stream used by progress/summary tools.
- `sandbox/kv.json` – Legacy sandbox state retained for backward compatibility; new flows use `data/`.
Feel free to clear these directories whenever you want to reset the lab environment.
Development & Extension
- Install test tooling with `uv pip install pytest` and run `uv run pytest -q`.
- To add your own tool pack (see the sketch after this list):
  - Create `tools/<name>.py` with a `register_<name>_tools(mcp)` function.
  - Import and call the new registrar from `mcp_server.py`.
  - Reload your client; FastMCP will advertise the updated schema automatically.
- Refer to `RECENT_FIXES.txt` for a concise changelog of recent adjustments.
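A hypothetical tool pack following that convention (the pack name and tool are invented for the example):

```python
# tools/shout.py -- hypothetical pack illustrating the register_<name>_tools(mcp) hook.
def register_shout_tools(mcp):
    @mcp.tool()
    def shout(text: str) -> dict:
        """Toy tool that upper-cases its input in the lab's {"ok": ...} shape."""
        return {"ok": True, "text": text.upper()}

# mcp_server.py (excerpt) -- import and call the registrar so FastMCP advertises it:
#     from tools.shout import register_shout_tools
#     register_shout_tools(mcp)
```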
Troubleshooting
- Server hangs until Ctrl+C? Ensure every tool calls `respond(...)`; FastMCP logs helpful traces to stderr.
- 400 invalid tool name? Stick to `[a-zA-Z0-9_-]+`.
- Missing files? Use recursive glob patterns such as `**/*.log`.
- Sandbox permission errors? Update the `SAFE_ROOT` constant or export `MCP_SANDBOX_ROOT` so the server points at your local `artifacts/` directory.