hitoshura25/gemini-workflow-bridge-mcp
If you are the rightful owner of gemini-workflow-bridge-mcp and would like to certify it and/or have it hosted online, please leave a comment on the right or send an email to dayong@mcphub.com.
The Gemini-Workflow-Bridge-MCP is a server that integrates Claude Code with Gemini CLI to optimize workflow tasks by utilizing Gemini's extensive context capabilities.
Gemini Workflow Bridge MCP
Gemini as Context Compression Engine + Claude as Reasoning Engine = A-Grade Results
Overview
This MCP is a context compression engine that optimally leverages both Claude Code and Gemini's strengths.
Key Features
- ✅ Quality: A-grade specifications (Gemini provides facts, Claude does reasoning)
- ✅ Cost: 47-61% reduction in Claude tokens (expensive operations move to free Gemini tier)
- ✅ Compression: 174:1 token compression ratio (50K tokens → 300 token summaries)
- ✅ DX: Auto-generated workflows and slash commands for common tasks
Architecture
┌─────────────────────────────────────────┐
│ Claude Code (Reasoning Engine) │
│ - Superior planning & specifications │
│ - Precise code editing │
│ - A-grade output quality │
└──────────────┬──────────────────────────┘
│ MCP Protocol
↓
┌─────────────────────────────────────────┐
│ MCP Server (Compression Layer) │
│ - 50K tokens → 300 token summaries │
│ - Fact extraction only │
│ - Validation & consistency checks │
└──────────────┬──────────────────────────┘
│ Gemini CLI
↓
┌─────────────────────────────────────────┐
│ Gemini (Context Engine) │
│ - 2M token window (free tier) │
│ - Factual extraction only │
│ - No opinions or planning │
└─────────────────────────────────────────┘
Installation
Prerequisites
-
Gemini CLI - Install and authenticate:
npm install -g @google/gemini-cli gemini # Follow authentication prompts -
Python 3.11+ with pip
Install the MCP Server
# Clone the repository
git clone https://github.com/hitoshura25/gemini-workflow-bridge-mcp
cd gemini-workflow-bridge-mcp
# Install dependencies
pip install -e .
Configure Claude Code
Add to your Claude Code MCP settings (typically claude_desktop_config.json):
{
"mcpServers": {
"gemini-workflow-bridge": {
"command": "python",
"args": ["-m", "hitoshura25_gemini_workflow_bridge"],
"env": {
"CONTEXT_CACHE_TTL_MINUTES": "30",
"MAX_TOKENS_PER_ANSWER": "300",
"TARGET_COMPRESSION_RATIO": "100"
}
}
}
}
Quick Start
1. Set Up Workflows (Recommended First Step)
After installing the MCP server, set up recommended workflows:
In Claude Code:
"Set up the spec-only workflow for me"
Claude will use setup_workflows_tool to create:
- .claude/workflows/spec-only.md
- .claude/commands/spec-only.md
Now you can use the /spec-only slash command:
/spec-only Add user authentication with OAuth2 support
To set up all workflows:
"Set up all workflows"
Creates: spec-only, feature, refactor, and review workflows
2. Use the Tools Directly
# 1. Extract facts about your codebase
query_codebase_tool(
questions=["How is authentication implemented?"],
scope="src/"
)
# Returns: Compressed facts with file:line references
# 2. Create specification using those facts (Claude does this)
# [Your reasoning creates A-grade spec here]
# 3. Validate specification
validate_against_codebase_tool(
spec_content="...",
validation_checks=["missing_files", "undefined_dependencies"]
)
# Returns: Completeness score, issues, suggestions
Documentation
- Full Documentation - You're reading it!
- - Complete workflow documentation
- Implementation Plan - Architecture details
- Configuration Guide - All configuration options
Tools Overview
🔍 Tier 1: Fact Extraction
| Tool | Purpose | Key Feature |
|---|---|---|
query_codebase_tool() | Multi-question analysis | 174:1 compression ratio |
find_code_by_intent_tool() | Semantic search | Returns summaries, not full code |
trace_feature_tool() | Follow execution flow | Step-by-step with data flow |
list_error_patterns_tool() | Extract patterns | Filtering at the edge |
✅ Tier 2: Validation
| Tool | Purpose |
|---|---|
validate_against_codebase_tool() | Validate specs for completeness |
check_consistency_tool() | Verify pattern alignment |
🚀 Tier 3: Workflow Automation
| Tool | Purpose | Quick Example |
|---|---|---|
setup_workflows_tool() | Set up recommended workflows | workflows=["all"] |
generate_feature_workflow_tool() | Generate executable workflows | Progressive disclosure |
generate_slash_command_tool() | Create custom slash commands | Automate common tasks |
🔍 Tier 4: Tool Discovery
| Tool | Purpose | Quick Example |
|---|---|---|
discover_tools() | Find tools by keyword, category, or use case | query="validation" or category="fact_extraction" |
get_tool_schema() | Get detailed schema for a specific tool | tool_name="query_codebase_tool" |
Example: Complete Feature Implementation
User: "Add Redis caching to product API"
# Step 1: Extract facts
→ query_codebase_tool(questions=[...])
← 52K tokens → 387 tokens (134:1 compression)
# Step 2: Claude creates A-grade spec using facts
→ [Your superior reasoning]
← High-quality specification
# Step 3: Validate spec
→ validate_against_codebase_tool(spec=...)
← Completeness: 92%, 1 minor issue
# Step 4: Implement
→ [Your precise code editing]
Result: ✅ A-grade spec, 61% token savings, 3.5 minutes
How It Works
| Aspect | Description |
|---|---|
| Spec Creation | Claude generates A-grade specifications |
| Token Usage | 3,100 Claude tokens (61% reduction vs traditional approaches) |
| Gemini Role | Provides facts only |
| Claude Role | Creates from scratch with facts |
| Quality | A-grade |
| Workflows | Auto-generated |
Configuration
Key environment variables (see .env.example for all):
# Context & Compression
CONTEXT_CACHE_TTL_MINUTES=30 # Cache duration
MAX_TOKENS_PER_ANSWER=300 # Compression target
TARGET_COMPRESSION_RATIO=100 # Aim for 100:1
GEMINI_MODEL=auto # or specific model
# Retry Mechanism (Exponential Backoff)
GEMINI_RETRY_MAX_ATTEMPTS=3 # Total attempts (1 initial + 2 retries)
GEMINI_RETRY_INITIAL_DELAY=1.0 # Starting delay in seconds
GEMINI_RETRY_MAX_DELAY=60.0 # Maximum delay in seconds
GEMINI_RETRY_BASE=2.0 # Exponential backoff multiplier (delay = initial * base^attempt)
GEMINI_RETRY_ENABLED=true # Enable/disable retry mechanism
# Command/Workflow Prefixes (Namespace Management)
GEMINI_COMMAND_PREFIX=gemini- # Prefix for generated commands
GEMINI_WORKFLOW_PREFIX=gemini- # Prefix for generated workflows
# Set to empty string to disable
Usage Example
# 1. Get facts
facts = query_codebase_tool(questions=[...])
# 2. Create spec (Claude does this with superior reasoning)
spec = create_your_a_grade_spec(facts)
# 3. Validate
validate_against_codebase_tool(spec=spec)
Troubleshooting
"Gemini CLI not found"
npm install -g @google/gemini-cli
"Empty response from Gemini"
gemini --version # Check installation
gemini # Re-authenticate if needed
Development
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run with debug logging
DEBUG_MODE=true python -m hitoshura25_gemini_workflow_bridge
Project Structure
hitoshura25_gemini_workflow_bridge/
├── tools/ # 8 tools (Tier 1, 2, 3)
├── prompts/ # Strict fact extraction prompts
├── workflows/ # Workflow templates
├── utils/ # Token counting, prompt loading
├── server.py # MCP server
└── generator.py # Legacy implementations
Success Metrics
- ✅ 61% cost reduction in Claude tokens
- ✅ 174:1 compression ratio (50K → 300 tokens)
- ✅ A-grade quality specifications
- ✅ Progressive disclosure with workflows
Contributing
Contributions welcome! Please read:
- Implementation Plan
- Architecture Overview
- Submit PR with tests
License
Apache 2.0 License - see LICENSE
Credits
- Architecture inspired by Gemini's analysis
- Based on Anthropic's MCP best practices
- Built with FastMCP
Status: ✅ Production Ready Last Updated: November 15, 2025
🌟 Star us on GitHub if you find this useful!