Gemini MCP Server
A production-ready Model Context Protocol (MCP) server providing Gemini AI capabilities with Google Search grounding and direct model inference.
🌟 Features
- 🔍 Google Search Grounding: Real-time web search using Gemini with Google Search grounding for factual, up-to-date information
- 🤖 Direct Model Inference: Call Gemini directly for tasks that don't require real-time information
- 📊 JSON Schema Validation: Strict input/output validation for reliable tool responses
- 🧪 Comprehensive Testing: Full test suite with schema validation and error handling
- ⚡ High Performance: Optimized for fast response times
🚀 Quick Start
Installation
1. Clone the repository:

```bash
git clone https://github.com/yourusername/gemini-mcp-server.git
cd gemini-mcp-server
```

2. Install dependencies:

```bash
uv sync
```

3. Set up environment variables:

```bash
# Copy the example environment file
cp .env.example .env
# Edit .env and add your Gemini API key
GEMINI_API_KEY=your_api_key_here
```

4. Test the server:

```bash
uv run python test_server.py
```
Getting a Gemini API Key
- Visit Google AI Studio
- Sign in with your Google account
- Create a new API key
- Copy the key and add it to your `.env` file
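Once the key is in your environment (e.g. loaded from `.env`), your own scripts can read it before starting the server. A minimal sketch — the helper name is hypothetical, not part of this package:

```python
import os

def load_gemini_api_key() -> str:
    """Read the Gemini API key from the environment (e.g. populated from .env)."""
    key = os.environ.get("GEMINI_API_KEY", "")
    if not key:
        raise RuntimeError("GEMINI_API_KEY not configured")
    return key
```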
🛠️ Available Tools
1. gemini_websearch
Search the web using Gemini with Google Search grounding.
Features:
- ✅ Real-time search results with Google Search grounding
- 🌍 Language support - Automatically translates results to requested language
- 📊 Structured output with proper source attribution
- 🎯 Custom fields - Request additional metadata per result
Parameters:
- `query` (required): Search query string
- `language` (optional): Language code for result translation (e.g., "es", "fr", "ja")
- `extraFieldsProperties` (optional): Additional fields to include in results
Example:
```json
{
  "query": "latest AI developments 2024",
  "language": "es"
}
```
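On the client side, assembling arguments for this tool is just a matter of building a dict that satisfies the schema above. A small sketch — the helper function is hypothetical, illustrating the required/optional split described in the parameter list:

```python
def build_websearch_request(query, language=None, extra_fields=None):
    """Assemble arguments for the gemini_websearch tool (hypothetical helper).

    Only `query` is required; `language` and `extraFieldsProperties` are optional.
    """
    if not query or not isinstance(query, str):
        raise ValueError("query is required and must be a non-empty string")
    req = {"query": query}
    if language is not None:
        req["language"] = language
    if extra_fields is not None:
        req["extraFieldsProperties"] = extra_fields
    return req
```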
2. gemini_call
Call Gemini model directly without grounding for structured responses.
Features:
- 🎯 JSON Schema constraints - Get structured responses that match your schema
- 📝 Flexible input - Pass arbitrary data and context
- 🚀 Fast responses - Direct model inference without web search
- 💪 Complex schemas - Support for nested objects and arrays
Parameters:
- `prompt` (required): Instruction describing the task
- `args` (optional): Structured data to include in the prompt
- `outputSchema` (required): JSON Schema defining expected response structure
Example:
```json
{
  "prompt": "Analyze this financial data and extract key metrics",
  "args": {
    "revenue": "$1.2B",
    "quarter": "Q3 2024"
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "revenue_usd": {"type": "number"},
      "growth_rate": {"type": "number"}
    }
  }
}
```
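To see what checking a response against `outputSchema` involves, here is a toy validator covering just the schema features used in the example above (objects, numbers, strings, arrays). This is an illustration only — the server presumably uses a full JSON Schema library:

```python
def matches_schema(value, schema):
    """Minimal JSON Schema check for the subset used in the example above.

    Toy illustration; a real validator handles far more (required, enums, etc.).
    """
    t = schema.get("type")
    if t == "object":
        if not isinstance(value, dict):
            return False
        props = schema.get("properties", {})
        # Properties are optional here; present ones must match their sub-schema.
        return all(k not in value or matches_schema(value[k], s)
                   for k, s in props.items())
    if t == "number":
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if t == "string":
        return isinstance(value, str)
    if t == "array":
        item = schema.get("items", {})
        return isinstance(value, list) and all(matches_schema(v, item) for v in value)
    return True  # unknown/absent type: accept
```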
🔧 Usage with MCP Clients
Claude Desktop Configuration
Add to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "gemini": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/yourusername/gemini-mcp-server",
        "gemini-server"
      ],
      "env": {
        "GEMINI_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
Other MCP Clients
The server implements the standard MCP protocol and works with any compatible client:
- Tool Discovery: Clients call `list_tools` to see available tools
- Tool Execution: Clients call `call_tool` with tool name and arguments
- Response: Server returns structured JSON responses with validation
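On the wire, MCP uses JSON-RPC 2.0, with these operations exposed as the `tools/list` and `tools/call` methods. The messages below are illustrative — the ids and arguments are examples, not fixed values:

```python
import json

# Illustrative MCP JSON-RPC 2.0 messages; ids and arguments are examples.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "gemini_websearch",
        "arguments": {"query": "latest AI developments 2024", "language": "es"},
    },
}

print(json.dumps(call_tool_request, indent=2))
```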
🧪 Testing
Run the comprehensive test suite:
```bash
uv run python test_server.py
```
The test suite includes:
- ✅ Schema validation tests
- ✅ Success case testing for both tools
- ✅ Error handling validation
- ✅ Input parameter validation
- ✅ Output structure verification
- ✅ Performance monitoring
Expected Results:
- All 10 test cases should pass
- Web search tests will show "AFC remote call" logs (indicating real grounding)
- Response times should be under 10 seconds for search operations
📊 Performance
- Search Operations: ~3-6 seconds (with Google Search grounding)
- Direct Model Calls: ~0.5-1 second
- Schema Validation: <1ms per request
- Memory Usage: ~50MB base + request processing
🏗️ Architecture
```
gemini-mcp-server/
├── gemini_mcp_server/     # Main package
│   ├── handlers.py        # Tool implementations
│   ├── server.py          # MCP server core
│   ├── tools.json         # Tool schemas
│   └── __main__.py        # Entry point
├── test_server.py         # Test framework
├── test_cases.json        # Test definitions
└── pyproject.toml         # Dependencies
```
🔍 Key Technologies
- Google Gemini API: Latest Gemini 2.0 Flash model
- Google Search Grounding: Real-time web search integration
- MCP Protocol: Standard for AI tool integration
- UV Package Manager: Fast Python dependency management
🚦 Error Handling
The server includes comprehensive error handling:
- Invalid API Key: Clear error messages for authentication issues
- Network Errors: Graceful handling of API timeouts and connectivity
- Schema Validation: Detailed error messages for malformed requests
- Rate Limiting: Automatic retry logic for API rate limits
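Retry logic for rate limits is commonly implemented with exponential backoff. A sketch of one such approach — this is an assumption about the pattern, not the server's actual code:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.5):
    """Retry fn with exponential backoff when it raises a rate-limit error.

    Sketch only; the server's actual retry logic may differ.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError as exc:
            # Re-raise immediately for non-rate-limit errors or the final attempt.
            if "rate limit" not in str(exc).lower() or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```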
🤝 Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
📄 License
This project is open source and available under the MIT License.
🆘 Troubleshooting
Common Issues
1. "GEMINI_API_KEY not configured"
   - Ensure your `.env` file exists and contains a valid API key
   - Check that the key has proper permissions
2. Tests failing with network errors
   - Verify internet connectivity
   - Check if API key has rate limiting issues
3. Schema validation errors
   - Ensure your requests match the expected input schemas
   - Check the `tools.json` file for parameter requirements
Getting Help
- Check the test output for detailed error messages
- Review the Gemini API documentation
- Open an issue on GitHub with error logs
Ready to integrate powerful AI capabilities into your applications with real-time web search and structured model inference! 🚀