estbndlt/pc-reviewer
PC-Reviewer is an MCP server that connects your local PC to a custom GPT model, providing guidance on optimizing your PC's performance.
PC Reviewer LLM (Custom GPT + MCP Connector)
A cautious, read-only PC reviewer that lets a Custom GPT on chatgpt.com inspect your machine (disk usage, large files, caches, Docker usage, top processes) and propose a prioritized cleanup plan. You keep full control: destructive actions are opt-in and gated per command.
Protocol
- Protocol: Model Context Protocol (MCP) via WebSocket (Remote MCP)
- Server: Python + FastAPI + websockets; cross-platform (macOS/Linux; Windows planned)
- Connector: ChatGPT Custom GPT → MCP Connector to this server
- Security model: read-only by default; optional exec.run gated behind explicit user approval & allowlists
Why this exists
Local disks fill up with build artifacts, caches, simulator data, containers, and logs. "Cleaner" apps are opaque and risky. This project keeps the LLM outside your machine and exposes only narrow, auditable tools that generate a plan you can verify and approve.
Features
Read-only tools
- fs.du(path, depth) → directory sizes
- fs.bigfiles(path, min_size, limit) → biggest files
- pkg.caches() → Homebrew / npm / pip cache sizes
- docker.df() → Docker size info
- proc.top(limit) → top memory/CPU processes
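Each tool is a thin, read-only wrapper around standard OS or library calls. A minimal sketch of what fs.bigfiles could look like using only the standard library (the function name and return shape are assumptions; the repo's src/tools/fs_tools.py is authoritative):
import os

def bigfiles(path: str, min_size: int = 500 * 1024 * 1024, limit: int = 50):
    """Read-only: walk `path` and return the largest files above `min_size` bytes."""
    hits = []
    for root, _dirs, files in os.walk(os.path.expanduser(path)):
        for name in files:
            full = os.path.join(root, name)
            try:
                size = os.path.getsize(full)
            except OSError:
                continue  # skip unreadable or vanished files
            if size >= min_size:
                hits.append({"path": full, "bytes": size})
    return sorted(hits, key=lambda f: f["bytes"], reverse=True)[:limit]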
Cleanup plans (LLM output) are YAML ranked by GB freed with rationale, risk level, dry-run commands, verification, and rollback.
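An illustrative plan entry, using the field names from the GPT instructions below (the values are made up):
- title: Clear Xcode DerivedData
  rationale: Build artifacts are regenerated on demand
  est_space_gb: 12.4
  risk: Low
  commands:
    macos: "du -sh ~/Library/Developer/Xcode/DerivedData  # report size first (dry run)"
  verification: Re-run fs.du on ~/Library/Developer/Xcode and compare
  rollback: None needed; Xcode rebuilds DerivedData on the next build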
Guardrails
- Deny deleting ~/Documents, ~/Pictures, ~/Desktop by default
- Dry-run/report-first commands
- Optional exec.run tool is off by default
- Auditability: every tool call and approved command is loggable
Architecture
flowchart TD
subgraph Remote
A["ChatGPT Custom GPT<br/>('PC Reviewer')<br/>- uses MCP tools"]
A --> C["YAML Cleanup Plan<br/>(sorted by GB freed)"]
end
subgraph Local["Your Computer"]
B["MCP Server<br/>(runs locally)<br/>- /mcp websocket<br/>- read-only tools"]
D["Your Machine<br/>(read-only first)"]
end
A <--> |wss| B
B --> |"OS utilities: du, find, ps, docker"| D
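In code terms, the local side reduces to a WebSocket route that dispatches tool calls to the read-only tool modules. A rough sketch (the registry and the tool function names are assumptions; src/mcp_server.py in the repo is the source of truth):
from fastapi import FastAPI, WebSocket
from src.tools import fs_tools, pkg_tools, docker_tools, proc_tools

app = FastAPI()

# Hypothetical tool registry mapping MCP tool names to read-only callables
TOOLS = {
    "fs.du": fs_tools.du,
    "fs.bigfiles": fs_tools.bigfiles,
    "pkg.caches": pkg_tools.caches,
    "docker.df": docker_tools.df,
    "proc.top": proc_tools.top,
}

@app.websocket("/mcp")
async def mcp(ws: WebSocket):
    await ws.accept()
    while True:
        req = await ws.receive_json()
        method, rid = req.get("method"), req.get("id")
        if method == "tools.list":
            await ws.send_json({"id": rid, "result": {"tools": list(TOOLS)}})
        elif method == "tools.call":
            params = req.get("params", {})
            data = TOOLS[params["name"]](**params.get("arguments", {}))
            await ws.send_json({"id": rid, "result": {"name": params["name"], "data": data}})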
Repo layout
pc-reviewer-mcp/
├── README.md
├── src/
│   ├── mcp_server.py           # WebSocket MCP endpoint + tool registry
│   ├── tools/
│   │   ├── fs_tools.py         # du, bigfiles
│   │   ├── pkg_tools.py        # brew/npm/pip caches
│   │   ├── docker_tools.py     # docker df
│   │   ├── proc_tools.py       # top processes (psutil)
│   │   └── exec_tool.py        # optional, disabled by default
│   └── logging_middleware.py   # request/response logging, redaction
├── requirements.txt
├── .env.example                # PORT, ALLOWLIST paths, etc.
├── Makefile                    # dev targets (run, lint, test)
├── scripts/
│   └── dev-tunnel.sh           # ngrok/cloudflared quick-start
├── tests/
│   ├── test_fs_tools.py
│   ├── test_pkg_tools.py
│   └── test_protocol.py
├── .github/
│   ├── ISSUE_TEMPLATE.md
│   └── workflows/ci.yml
└── LICENSE
Quick start
1. Prereqs
- Python 3.10+
- macOS or Linux (Windows support tracked in Issues)
- Optional: Docker Desktop (if you want docker.df)
- A tunneling tool (ngrok or cloudflared) if you run locally
2. Install & run
git clone https://github.com/your-org/pc-reviewer-mcp.git
cd pc-reviewer-mcp
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# Start server (defaults: 0.0.0.0:8765, ws route /mcp)
uvicorn src.mcp_server:app --host 0.0.0.0 --port 8765
Optional: expose it securely
# Example: cloudflared (no account needed for quick tunnel)
cloudflared tunnel --url http://localhost:8765
# Note WSS URL (e.g., wss://random.trycloudflare.com/mcp if you map /mcp path)
3. Create the MCP Connector in ChatGPT (Team/Enterprise/Edu)
- In ChatGPT: Settings → Connectors → Add custom connector
- Type: Remote MCP (WebSocket)
- URL: wss://<your-tunnel-host>/mcp
- Name: PC Reviewer
- Save.
4. Create your Custom GPT
- ChatGPT: Explore → Create a GPT → Configure
- Connectors: enable PC Reviewer.
- Instructions (paste):
You are a cautious PC Reviewer. Use the MCP tools:
- fs.du(path, depth)
- fs.bigfiles(path, min_size, limit)
- pkg.caches()
- docker.df()
- proc.top(limit)
Workflow:
1) Gather facts using tools.
2) Output a YAML CLEANUP PLAN sorted by estimated GB freed (descending). Each action includes:
- title, rationale, est_space_gb, risk (Low/Medium/High)
- commands.macos / linux / windows (prefer dry-run/report-first)
- verification steps and rollback notes
3) Never propose deleting Documents/Pictures/Desktop unless user explicitly asks.
4) If an execution tool exists, require user to type APPROVE "<command>" before any destructive action.
- Save.
Using it
Ask your GPT:
- "Scan my home folder to depth 2 and show the top 25 heaviest directories."
- "List files over 500MB in ~/Downloads (limit 150)."
- "Estimate cache cleanup (brew/npm/pip) and Docker savings."
- "Draft a cleanup plan with only dry-run commands."
You'll get a ranked YAML plan. Copy/paste commands you agree with into your terminal, or enable the optional exec.run tool later.
Optional: opt-in execution tool
Disabled by default. When you're ready:
- Add/enable exec.run in src/tools/exec_tool.py with:
  - explicit user APPROVE requirement,
  - path allowlists (e.g., Xcode DerivedData, simulator devices),
  - server-side logging,
  - dry-run flags where possible.
Example policy (conceptual):
import os

SAFE_PREFIXES = [
    os.path.expanduser("~/Library/Developer/Xcode/DerivedData"),
    os.path.expanduser("~/Library/Caches"),
]
# Only allow rm within SAFE_PREFIXES, and require params.approved == True
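Building on SAFE_PREFIXES above, the gate itself could be as small as this (is_safe_delete is a made-up helper name, not something shipped in the repo):
def is_safe_delete(target: str, approved: bool) -> bool:
    """Allow a destructive command only on an allowlisted prefix, and only after explicit approval."""
    real = os.path.realpath(os.path.expanduser(target))
    return approved and any(real.startswith(prefix) for prefix in SAFE_PREFIXES)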
Security posture
- Default read-only. No deletions without explicit opt-in.
- Denylist & allowlist. Refuse core system paths and user content by default; only allow known cache/artifact dirs.
- Dry-run first. Prefer brew cleanup -n -s, docker system df, and du -sh comparisons.
- Audit log. Log tool calls and approved commands with timestamp and arguments (redact paths if requested).
Cross-platform notes
- macOS & Linux: du, find, ps, docker should work out of the box.
- Windows (tracked in Issues): replace with PowerShell equivalents (Get-ChildItem, Measure-Object, Get-Process), WSL detection, and Docker Desktop data paths.
Development
make run # uvicorn dev server
make lint # ruff/flake8 (configure as you like)
make test # pytest
Env vars (.env.example)
PORT=8765
LOG_LEVEL=info
ALLOWLIST=/Users/you/Library/Developer/Xcode/DerivedData:/Users/you/Library/Caches
DENYLIST=/Users/you/Documents:/Users/you/Desktop:/Users/you/Pictures
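One plausible way to turn these colon-separated lists into a path policy (illustrative only; the server's actual parsing may differ):
import os

ALLOWLIST = [p for p in os.environ.get("ALLOWLIST", "").split(":") if p]
DENYLIST = [p for p in os.environ.get("DENYLIST", "").split(":") if p]

def path_policy(path: str) -> str:
    """Deny wins over allow; anything else stays read-only by default."""
    real = os.path.realpath(os.path.expanduser(path))
    if any(real.startswith(p) for p in DENYLIST):
        return "deny"
    if any(real.startswith(p) for p in ALLOWLIST):
        return "allow"
    return "read-only"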
Example tool semantics (JSON-RPC over WS)
# Request: list tools
{"id":"1","method":"tools.list","params":{}}
# Response
{"id":"1","result":{"tools":["fs.du","fs.bigfiles","pkg.caches","docker.df","proc.top"]}}
# Call a tool
{"id":"2","method":"tools.call","params":{"name":"fs.du","arguments":{"path":"~","depth":2}}}
# Result
{"id":"2","result":{"name":"fs.du","data":[{"path":"/Users/you","kb":1234567}, ...]}}
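To exercise this protocol without ChatGPT, a small local client works. A sketch assuming the websockets package and the default port from Quick start:
import asyncio, json
import websockets  # pip install websockets

async def main():
    async with websockets.connect("ws://localhost:8765/mcp") as ws:
        await ws.send(json.dumps({"id": "1", "method": "tools.list", "params": {}}))
        print(await ws.recv())  # expect the list of tool names
        await ws.send(json.dumps({
            "id": "2", "method": "tools.call",
            "params": {"name": "pkg.caches", "arguments": {}},
        }))
        print(await ws.recv())  # expect cache size data

asyncio.run(main())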
Sample prompts for your GPT
- "Use fs.du (depth 2) on ~ and pkg.caches, then return a single YAML cleanup plan with dry-run commands only."
- "Find duplicates is out of scope; instead, report the top 50 files >500MB with fs.bigfiles."
- "Do not propose removing any user docs/photos; stay in caches/artifacts only."
Roadmap (issues to open on day 1)
- Windows support: PowerShell tools (Get-ChildItem, Measure-Object, Get-Process), WSL path mapping.
- Exec tool (opt-in):
  - Approval handshake (APPROVE "<command>")
  - Path allowlist policy
  - Logging + dry-run support
- Docker metrics: parse docker system df -v into structured totals per image/volume.
- Browser caches (opt-in): profile directory detection and reporting (no deletes by default).
- Configurable policies: YAML policy file to define allow/deny and risk levels.
- Telemetry toggle: local JSON logs only; no network egress by default.
- Tests: fixtures for du/find outputs; contract tests for the WS protocol.
- CI: lint/typecheck/tests on PR; pre-commit hooks.
- Docs: "Enable connector in Custom GPT" screenshots; troubleshooting guide.
- Release: Dockerfile & Compose for self-hosting; Caddy/Nginx reverse proxy snippet with WSS.
Contributing
- Open an issue first for major changes.
- Write tests for new tools or policy changes.
- Keep default behavior read-only and safe.
License
MIT (or your preference). Add a NOTICE if you bundle third-party examples.