Gemini MCP Server
A Model Context Protocol (MCP) server that provides access to Google's Gemini AI models, including the latest Gemini 2.5 Pro Preview.
Features

- gemini_query: Send single queries to Gemini models
- gemini_chat: Have conversations with chat history
- Support for multiple Gemini models
- Configurable generation parameters (temperature, max tokens, etc.)
- Usage metrics and response metadata
Installation

1. Install dependencies:

   ```bash
   cd /Users/matthewkirchoff/gemini-mcp-server
   npm install
   ```

2. Set up your API key. Your API key is already configured in the `.env` file, but you can update it if needed:

   ```bash
   echo "GEMINI_API_KEY=your_api_key_here" > .env
   ```

3. Test the server:

   ```bash
   npm start
   ```
Claude Desktop Configuration
To use this MCP server with Claude Desktop, add the following configuration to your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "gemini": {
      "command": "node",
      "args": ["/Users/matthewkirchoff/gemini-mcp-server/src/index.js"],
      "env": {
        "GEMINI_API_KEY": "your_gemini_api_key_here"
      }
    }
  }
}
```
Note: Replace "your_gemini_api_key_here" with your actual Gemini API key. Since the key appears in plain text in this config file, keep the file private and out of version control.
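Because the key reaches the server only through its environment, it helps to fail fast at startup when the key is missing. A minimal sketch (the `requireApiKey` helper is hypothetical, not part of this server's code):

```javascript
// Fail fast if GEMINI_API_KEY is absent or still the placeholder, instead of
// letting the first Gemini request fail with an opaque authentication error.
function requireApiKey(env) {
  const key = env.GEMINI_API_KEY;
  if (!key || key === "your_gemini_api_key_here") {
    throw new Error(
      "GEMINI_API_KEY is not set. Add it to .env or the Claude Desktop config."
    );
  }
  return key;
}

// Example: const apiKey = requireApiKey(process.env);
```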
Available Tools
1. gemini_query
Send a single query to Gemini and get a response.
Parameters:

- `prompt` (required): The query to send to Gemini
- `model` (optional): Gemini model to use (default: "gemini-2.0-flash-exp")
- `temperature` (optional): Response creativity (0.0-2.0, default: 1.0)
- `maxOutputTokens` (optional): Max response length (1-8192, default: 2048)
- `topP` (optional): Nucleus sampling (0.0-1.0, default: 0.95)
- `topK` (optional): Top-k sampling (1-40, default: 40)
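The ranges and defaults above can be enforced client-side before a request is sent. This is a sketch, not the server's actual validation logic; `clampGenerationConfig` is a hypothetical helper:

```javascript
// Documented ranges and defaults for gemini_query's generation parameters.
const LIMITS = {
  temperature:     { min: 0.0, max: 2.0,  fallback: 1.0 },
  maxOutputTokens: { min: 1,   max: 8192, fallback: 2048 },
  topP:            { min: 0.0, max: 1.0,  fallback: 0.95 },
  topK:            { min: 1,   max: 40,   fallback: 40 },
};

// Clamp each supplied parameter into its documented range; fall back to the
// documented default when the caller omits it or passes a non-number.
function clampGenerationConfig(params = {}) {
  const out = {};
  for (const [name, { min, max, fallback }] of Object.entries(LIMITS)) {
    const value = params[name];
    out[name] =
      typeof value === "number" && !Number.isNaN(value)
        ? Math.min(max, Math.max(min, value))
        : fallback;
  }
  return out;
}
```

For example, `clampGenerationConfig({ temperature: 5 })` yields a temperature of 2.0, the documented maximum.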
2. gemini_chat
Have a conversation with Gemini using chat history.
Parameters:

- `messages` (required): Array of conversation messages
- `model` (optional): Gemini model to use (default: "gemini-2.0-flash-exp")
- `temperature` (optional): Response creativity (0.0-2.0, default: 1.0)
Message format:

```json
{
  "role": "user|model",
  "parts": [{ "text": "message content" }]
}
```
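A small sketch of building a `messages` array in this format, with user and model turns alternating (the `toMessage` and `buildHistory` helpers are hypothetical, for illustration only):

```javascript
// Wrap one turn of text in the role/parts shape gemini_chat expects.
function toMessage(role, text) {
  if (role !== "user" && role !== "model") {
    throw new Error(`role must be "user" or "model", got "${role}"`);
  }
  return { role, parts: [{ text }] };
}

// Build a history from plain strings: even indices become user turns,
// odd indices become model replies.
function buildHistory(turns) {
  return turns.map((text, i) =>
    toMessage(i % 2 === 0 ? "user" : "model", text)
  );
}
```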
Supported Models

- `gemini-1.5-pro-latest` - Latest Pro model (recommended)
- `gemini-1.5-pro` - Standard Pro model
- `gemini-1.5-flash-latest` - Latest Flash model
- `gemini-1.5-flash` - Fast, efficient model
- `gemini-1.5-flash-8b` - Lightweight version
- `gemini-pro` - Basic model
- `gemini-pro-vision` - Vision-capable model
Example Usage
Once configured in Claude Desktop, you can use the tools like this:
Please use the gemini_query tool to ask Gemini: "What are the latest developments in quantum computing?"
or
Use gemini_chat to have a conversation about machine learning, starting with asking about neural networks.
Troubleshooting
- API Key Issues: Make sure your GEMINI_API_KEY is valid and has the necessary permissions
- Model Not Found: Check if the model name is correct - use one of the supported models listed above
- Rate Limits: Gemini API has rate limits; wait a moment before retrying
- Large Responses: If responses are cut off, increase `maxOutputTokens`
- Model Availability: Some newer experimental models may not be available in all regions
Development
To run in development mode with auto-restart:
npm run dev
Security Notes
- Keep your API key secure and never share it
- The API key is stored in the `.env` file and passed through environment variables
- Consider using API key rotation for production use
License
MIT License