Gemini MCP Server

A Model Context Protocol (MCP) server that provides Google Gemini AI capabilities to MCP-compatible clients like Claude Desktop and Claude Code.

Overview

This MCP server acts as a bridge between MCP clients and Google Gemini models, enabling:

  • Multi-turn conversations with session management
  • File and image analysis with glob pattern support
  • Automatic model selection based on content length
  • Deep thinking mode with reasoning output
  • Google Search integration for up-to-date information

Prerequisites

1. AIStudioProxyAPI Backend

This MCP server requires AIStudioProxyAPI as the backend service.

# Clone and setup AIStudioProxyAPI
git clone https://github.com/CJackHwang/AIstudioProxyAPI.git
cd AIstudioProxyAPI
poetry install
poetry run python launch_camoufox.py --headless

The API will be available at http://127.0.0.1:2048 by default.
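
Before wiring the server into an MCP client, you can confirm the backend is reachable. A minimal probe, assuming the proxy exposes an OpenAI-compatible /v1/models route (adjust the path if your deployment differs):

# Reachability probe (illustrative helper, not part of this repo)
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:2048/v1/models", timeout=5) as resp:
    payload = json.loads(resp.read())
    print(f"Backend reachable; {len(payload.get('data', []))} models reported")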

2. uv Package Manager

# Install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh

Installation

# Clone this repository
git clone https://github.com/xumingjun5208/aistudio-gemini-mcp.git
cd aistudio-gemini-mcp

# Install dependencies
uv sync

Configuration

Environment Variables

Variable              Default                Description
GEMINI_API_BASE_URL   http://127.0.0.1:2048  AIStudioProxyAPI endpoint
GEMINI_API_KEY        (empty)                Optional API key
GEMINI_PROJECT_ROOT   $PWD                   Root directory for file resolution
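
These are ordinary environment lookups with the defaults above. A minimal sketch of how they might be read (illustrative; not the server's actual code):

import os

base_url = os.environ.get("GEMINI_API_BASE_URL", "http://127.0.0.1:2048")
api_key = os.environ.get("GEMINI_API_KEY", "")                     # optional
project_root = os.environ.get("GEMINI_PROJECT_ROOT", os.getcwd())  # $PWD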

Claude Desktop / Claude Code

Add to ~/.claude/mcp.json:

{
  "mcpServers": {
    "gemini": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/aistudio-gemini-mcp", "python", "server.py"],
      "env": {
        "GEMINI_API_BASE_URL": "http://127.0.0.1:2048"
      }
    }
  }
}
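
Most MCP clients read this file only at startup, so restart Claude Desktop or Claude Code after editing it. Replace /path/to/aistudio-gemini-mcp with the location of your clone.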

Tools

gemini_chat

Send a message to Google Gemini with optional file attachments.

Parameter        Type          Required  Description
prompt           string        Yes       Message to send (1-100,000 chars)
file             list[string]  No        File paths or glob patterns
session_id       string        No        Session ID ("last" for recent)
model            string        No        Override model selection
system_prompt    string        No        System context
temperature      float         No        Sampling temperature (0.0-2.0)
max_tokens       int           No        Max response tokens
response_format  enum          No        "markdown" or "json"

Examples:

# Simple query
gemini_chat(prompt="Explain quantum computing")

# With file
gemini_chat(prompt="Review this code", file=["main.py"])

# With image
gemini_chat(prompt="Describe this", file=["photo.png"])

# Continue conversation
gemini_chat(prompt="Tell me more", session_id="last")

# Multiple files
gemini_chat(prompt="Analyze", file=["src/**/*.py"])

gemini_list_models

List available Gemini models.

Parameter        Type    Required  Description
filter_text      string  No        Filter models by name
response_format  enum    No        "markdown" or "json"
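
Example:

# List models matching "flash"
gemini_list_models(filter_text="flash")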

Model Selection

Auto-selects model based on content length:

Content Size     Model
≤ 8,000 chars    gemini-3-pro-preview
> 8,000 chars    gemini-2.5-pro
Fallback         gemini-2.5-flash
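
A sketch of this rule in code (the function name is illustrative, and gemini-2.5-flash is the failure fallback rather than a length tier):

def select_model(content: str) -> str:
    # Auto-selection per the table above; gemini-2.5-flash is used on retry
    if len(content) <= 8_000:
        return "gemini-3-pro-preview"
    return "gemini-2.5-pro"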

Features

Session Management

  • Automatic session creation
  • Use "last" to continue recent conversation
  • LRU eviction (max 50 sessions; sketched below)
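
A minimal sketch of that eviction policy, assuming sessions are held in memory (class and method names are illustrative, not the server's own):

from collections import OrderedDict

MAX_SESSIONS = 50  # cap from the list above

class SessionStore:
    def __init__(self):
        self._sessions = OrderedDict()

    def touch(self, session_id: str, history: list) -> None:
        # Insert or refresh a session, then evict the least-recently used
        self._sessions[session_id] = history
        self._sessions.move_to_end(session_id)
        while len(self._sessions) > MAX_SESSIONS:
            self._sessions.popitem(last=False)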

File Support

  • Images: PNG, JPG, JPEG, GIF, WebP, BMP
  • Text: Any text-based file with auto-encoding detection
  • Glob patterns: *.py, src/**/*.ts, etc. (resolution sketched below)
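
Glob patterns are presumably resolved relative to GEMINI_PROJECT_ROOT. A rough sketch of that expansion (hypothetical helper, not the server's code):

from pathlib import Path

def resolve_files(patterns: list[str], root: str = ".") -> list[Path]:
    # Expand glob patterns under the project root; plain paths pass through
    resolved = []
    for pattern in patterns:
        if any(ch in pattern for ch in "*?["):
            resolved.extend(sorted(p for p in Path(root).glob(pattern) if p.is_file()))
        else:
            resolved.append(Path(root) / pattern)
    return resolved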

Built-in Capabilities

  • reasoning_effort: high - Deep thinking mode
  • google_search - Web search integration
  • Automatic retry with model fallback (sketched below)
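
The fallback chain presumably walks the same models as the selection table. A hedged sketch of the retry loop (send is a stand-in for the actual request call):

FALLBACK_CHAIN = ["gemini-3-pro-preview", "gemini-2.5-pro", "gemini-2.5-flash"]

def chat_with_fallback(send, prompt: str):
    # Try each model in turn, keeping the last error if all fail
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return send(model=model, prompt=prompt)
        except Exception as err:  # e.g. model unavailable or rate-limited
            last_error = err
    raise last_error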

Running Standalone

# Start the MCP server
uv run python server.py

Project Structure

aistudio-gemini-mcp/
├── server.py           # MCP server implementation
├── pyproject.toml      # Project configuration
├── uv.lock             # Dependency lock file
├── README.md           # This file
├── LICENSE             # MIT License
└── mcp_config_example.json

Related Projects

  • AIStudioProxyAPI - the backend service this server talks to (https://github.com/CJackHwang/AIstudioProxyAPI)

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.