Token Saver MCP: AI as a Full-Stack Developer
Transform AI from a code suggester into a true full-stack developer, with instant access to code intelligence and real browser control.
What is Token Saver MCP?
Modern AI coding assistants waste enormous context (and your money) by stuffing full grep/search results into the model window. That leads to:
- Slow lookups (seconds instead of milliseconds)
- Thousands of wasted tokens per query
- AI "losing its train of thought" in cluttered context
Token Saver MCP fixes this.
It gives AI assistants direct access to VSCode's Language Server Protocol (LSP) and the Chrome DevTools Protocol (CDP), so they can work like real developers:
- Instantly navigate & refactor code
- Run code in a real browser (Edge/Chrome)
- Test, debug, and verify changes themselves
Result: 90-99% fewer tokens, 100-1000× faster responses, and $200+ in monthly savings, while enabling AI to truly act as a full-stack engineer.
Why Token Saver?
Think of your AI's context window like a workbench. If it's cluttered with logs, search dumps, and irrelevant snippets, the AI can't focus.
Token Saver MCP keeps the workbench clean.
Without Token Saver
grep -r "renderProfileImage" .
# 5000+ tokens, 10-30 seconds, bloated context
With Token Saver
get_definition('src/components/UserCard.js', 25)
# 50 tokens, <100ms, exact location + type info
Cleaner context = a sharper, more persistent AI assistant.
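To make the comparison concrete, here is a sketch of what a single tool call might look like on the wire. MCP uses JSON-RPC 2.0 framing with a `tools/call` method; the argument names used here (`uri`, `line`) are illustrative assumptions, not the tool's documented schema.

```javascript
// Sketch: an MCP tool call as a JSON-RPC 2.0 request.
// Argument names ("uri", "line") are assumptions for illustration.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name, arguments: args },
  }
}

const req = buildToolCall(1, 'get_definition', {
  uri: 'src/components/UserCard.js',
  line: 25,
})

// A client POSTs this to the MCP endpoint and gets back a small,
// precise answer instead of thousands of grep-dump tokens.
console.log(JSON.stringify(req))
```

The entire request and its response fit in tens of tokens, which is where the savings come from.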
Revolutionary Dual Architecture
Token Saver MCP uses a split architecture designed for speed and stability:
AI Assistant <-> MCP Server <-> VSCode Gateway <-> VSCode Internals
                (hot reload)    (stable interface)
VSCode Gateway Extension
- Installed once, rarely updated
- Exposes VSCode's LSP via HTTP (port 9600)

Standalone MCP Server
- Hot reloadable, so no VSCode restarts
- Language-agnostic (JS/TS, Python, Go, Rust, ...)
- Bridges the MCP protocol to the VSCode Gateway and CDP (port 9700 by default)
Why it matters: You can iterate on MCP tools instantly without rebuilding or restarting VSCode. Development is 60× faster and much more reliable.
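The bridge role the MCP server plays can be sketched as translating each tool call into an HTTP request against the VSCode Gateway. Only the gateway port (9600) comes from this README; the route and payload shape below are assumptions for illustration.

```javascript
// Sketch of the MCP-server-to-gateway hop in the dual architecture.
// The route pattern and body shape are illustrative assumptions;
// only the port (9600) is documented above.
function gatewayRequest(tool, args) {
  return {
    url: `http://127.0.0.1:9600/${tool}`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(args),
  }
}

const fwd = gatewayRequest('get_definition', { uri: 'src/app.ts', line: 10 })
console.log(fwd.url)
```

Because the MCP server owns only this thin translation layer, it can hot reload freely while the gateway extension stays untouched.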
What You Get
Token Saver MCP currently provides 40 production-ready tools across five categories:
- LSP Tools (14): get_definition, get_references, rename_symbol, get_hover, find_implementations, ...
- Memory Tools (9): smart_resume (86-99% token savings vs /resume), write_memory, read_memory, search_memories (full-text search), export_memories, import_memories, ...
- Browser Tools (8): navigate_browser, execute_in_browser, take_screenshot, get_browser_console, ...
- Testing Helpers (5): test_react_component, test_api_endpoint, check_page_performance, ...
- System Tools (4): get_instructions, retrieve_buffer, get_supported_languages, ...
Proven Results
| Operation | Traditional Method | With Token Saver MCP | Improvement |
|---|---|---|---|
| Find function definition | 5-10s, 5k tokens | 10ms, 50 tokens | 100× faster |
| Find all usages | 10-30s | 50ms | 200× faster |
| Rename symbol project-wide | Minutes | 100ms | 1000× faster |
| Resume context (/resume) | 5000+ tokens | 200-500 tokens | 86-99% savings |
Token & Cost Savings (GPT-4 pricing):
- Tokens per search: 5,000 → 50
- Cost per search: $0.15 → $0.0015
- Typical dev workflow: $200+ saved per month
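The savings above are straightforward arithmetic. A quick sketch, assuming the $0.03-per-1K-input-tokens GPT-4 rate the per-search figures imply; the searches-per-day and workdays-per-month numbers are assumptions, not measurements:

```javascript
// Rough cost model behind the numbers above.
// Assumption: $0.03 per 1K input tokens (classic GPT-4 pricing).
const PRICE_PER_1K_TOKENS = 0.03

const costFor = tokens => (tokens / 1000) * PRICE_PER_1K_TOKENS

const grepCost = costFor(5000) // traditional search-result dump
const lspCost = costFor(50)    // targeted LSP answer

console.log(grepCost.toFixed(4)) // 0.1500
console.log(lspCost.toFixed(4))  // 0.0015

// Assumed workload: ~100 searches/day over ~22 working days.
const monthlySavings = (grepCost - lspCost) * 100 * 22
console.log(monthlySavings.toFixed(2))
```

Even at half that query volume, the savings clear the $200/month figure quoted above only with heavier usage, so treat the workload constants as the knob to adjust for your own workflow.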
Browser Control (Edge-Optimized)
Beyond backend code, Token Saver MCP empowers AI to control a real browser through CDP:
- Launch Edge/Chrome automatically
- Click, type, navigate, capture screenshots
- Run frontend tests & debug JS errors in real-time
- Analyze performance metrics
Example workflow:
- AI writes backend API (LSP tools)
- AI launches browser & tests API (CDP tools)
- AI sees error logs instantly
- AI fixes backend code (LSP tools)
- AI verifies fix in browser
No more "please test this manually": the AI tests itself.
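The write-test-fix loop above can be sketched as a sequence of tool invocations. The tool names come from this README's tool list; the argument shapes are assumptions for illustration.

```javascript
// The verify-fix loop, sketched as an ordered sequence of tool calls.
// Tool names are from the README; argument shapes are assumed.
const workflow = [
  { tool: 'navigate_browser', args: { url: 'http://localhost:3000' } },     // open the app
  { tool: 'execute_in_browser', args: { expression: "fetch('/api/users')" } }, // exercise the API
  { tool: 'get_browser_console', args: {} },  // read any errors the page logged
  { tool: 'rename_symbol', args: {} },        // apply the fix via LSP tools
  { tool: 'take_screenshot', args: {} },      // visual proof the fix landed
]

for (const step of workflow) {
  console.log(step.tool)
}
```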
Smart Memory System (NEW!)
Replace wasteful /resume commands with intelligent context restoration.
The Problem with /resume
- Dumps entire conversation history (5000+ tokens)
- Includes irrelevant tangents and discussions
- Costs $0.15+ per resume
- AI gets lost in the noise
The Solution: Smart Resume
smart_resume() // 200-500 tokens, focused context only
Features:
- 86-99% token savings compared to /resume
- Progressive disclosure: Start minimal, expand as needed
- Full-text search: Find memories by content, not just keys
- Importance levels (1-5): Critical info persists, trivia can be dropped
- Verbosity levels (1-4): Control detail granularity
- Time-based filtering: Resume work from specific periods
- Export/Import: Backup and share memory contexts between sessions
Example:
// Standard resume - just the essentials
smart_resume()
// Include everything from last 3 days
smart_resume({ daysAgo: 3, verbosity: 3 })
// Critical items only for quick check-in
smart_resume({ minImportance: 4, verbosity: 1 })
Memory is stored locally in SQLite (~/.token-saver-mcp/memory.db) with automatic initialization.
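The importance and time filters described above amount to a simple recall policy. A minimal sketch, assuming a memory record shape of key/content/importance/timestamp (the real schema lives in SQLite and may differ):

```javascript
// Minimal sketch of smart_resume-style filtering.
// The record shape (key, content, importance, timestamp) is an
// assumption for illustration; the real store is SQLite.
const DAY_MS = 24 * 60 * 60 * 1000

function smartResume(memories, { daysAgo = 1, minImportance = 1 } = {}, now = Date.now()) {
  const cutoff = now - daysAgo * DAY_MS
  return memories
    .filter(m => m.timestamp >= cutoff && m.importance >= minImportance)
    .sort((a, b) => b.importance - a.importance) // critical info first
}

const now = Date.now()
const memories = [
  { key: 'bug-fix', content: 'Auth token refresh fixed', importance: 5, timestamp: now - 1 * DAY_MS },
  { key: 'tangent', content: 'Chat about tabs vs spaces', importance: 1, timestamp: now - 2 * DAY_MS },
  { key: 'old-note', content: 'Initial project setup', importance: 3, timestamp: now - 10 * DAY_MS },
]

// Critical items only, from the last 3 days:
console.log(smartResume(memories, { daysAgo: 3, minImportance: 4 }, now).map(m => m.key))
// → [ 'bug-fix' ]
```

Returning only the few records that pass both filters, rather than the whole history, is what keeps a resume at 200-500 tokens.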
Real-Time Dashboard
Visit http://127.0.0.1:9700/dashboard to monitor:
- Server status & connection health
- Request metrics & response times
- Token & cost savings accumulating live
- Tool usage statistics
Perfect for seeing your AI's efficiency gains in action.
Quickstart (30 Seconds)
# Clone repo
git clone https://github.com/jerry426/token-saver-mcp
cd token-saver-mcp
# One-step setup
./mcp setup /path/to/your/project
That's it! The installer:
- Finds open ports
- Creates config files
- Tests connection
- Provides the Claude/Gemini command
Full installation and build steps are covered in the project documentation.
Supported AI Assistants
- Claude Code: works out of the box with the MCP endpoint
- Gemini CLI: use the /mcp-gemini endpoint
- Other AI tools: MCP JSON-RPC, streaming, or simple REST endpoints available
Endpoints include:
- http://127.0.0.1:9700/mcp (standard MCP)
- http://127.0.0.1:9700/mcp-gemini (Gemini)
- http://127.0.0.1:9700/mcp/simple (REST testing)
- http://127.0.0.1:9700/dashboard (metrics UI)
Verify It Yourself
Think the claims are too good to be true? Run the built-in test suite:
python3 test/test_mcp_tools.py
Expected output covers hover, completions, definitions, references, diagnostics, semantic tokens, buffer management, and more, all passing.
Development
pnpm install
pnpm run dev # hot reload
pnpm run build
pnpm run test
The MCP server lives in /mcp-server/, with modular tools organized by category (lsp/, cdp/, helper/, system/).
See the project documentation for architecture diagrams, tool JSON schemas, buffer system details, and the contributing guide.
Roadmap / Vision
Token Saver MCP already unlocks full-stack AI workflows. Next up:
- More browser automation tools (multi-tab, network control)
- Plugin ecosystem for custom toolpacks
- Multi-assistant coordination (Claude + Gemini + others)
- Expanded context management strategies
License
MIT: free for personal and commercial use.
Start today:
- Run ./mcp setup
- Tell your AI: "Use the get_instructions tool to understand Token Saver MCP."
- Watch your AI become a focused, cost-efficient, full-stack developer.