# 🤖 Gemini AI MCP Server with Web Frontend
A powerful Model Context Protocol (MCP) server powered by Google Gemini AI with a beautiful, modern web interface. This project provides both an MCP-compatible server for integration with MCP clients and a standalone web application for direct interaction with Gemini AI.
## ✨ Features
- 🤖 Gemini AI Integration - Powered by Google's latest Gemini 2.5 models
- 🌐 Beautiful Web Frontend - Modern, responsive chat interface
- 🔌 MCP Server Support - Compatible with Model Context Protocol clients
- ⚙️ Customizable Settings - Adjust model and system instructions
- 📱 Responsive Design - Works seamlessly on desktop and mobile devices
- 🚀 Fast & Efficient - Built with Express.js for optimal performance
## 🎯 What is MCP?
Model Context Protocol (MCP) is a standardized protocol for AI assistants to securely access up-to-date information and take actions across tools and data sources. This project implements an MCP server that exposes Gemini AI capabilities.
## 🚀 Quick Start
### Prerequisites
- Node.js 18+ installed on your system
- A Google Gemini API key (Get yours from Google AI Studio)
### Installation

1. **Clone the repository**

   ```bash
   git clone https://github.com/Sharan-G-S/MCP-Project-1.git
   cd MCP-Project-1
   ```

2. **Install dependencies**

   ```bash
   npm install
   ```

3. **Configure environment variables**

   Create a `.env` file in the project root:

   ```env
   GEMINI_API_KEY=your_gemini_api_key_here
   GEMINI_MODEL=gemini-2.5-flash
   ```

   💡 **Note:** Replace `your_gemini_api_key_here` with your actual Gemini API key.
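Once the `.env` file exists, both servers can pick these values up at startup through `dotenv` (listed under Dependencies below). A minimal sketch of that bootstrap, assuming ES modules; the exact code in `src/` may differ:

```javascript
// Startup sketch: load .env into process.env and validate the required key.
import "dotenv/config"; // reads .env from the project root

const apiKey = process.env.GEMINI_API_KEY;
const model = process.env.GEMINI_MODEL ?? "gemini-2.5-flash"; // documented default

if (!apiKey) {
  throw new Error("GEMINI_API_KEY is missing - add it to your .env file.");
}
console.log(`Using model: ${model}`);
```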
## 🎮 Usage
### Option 1: Web Interface (Recommended for most users)

Start the web server to access the beautiful chat interface:

```bash
npm run web
```

Then open your browser and navigate to `http://localhost:3000`.
**Features:**
- 💬 Real-time chat with Gemini AI
- ⚙️ Customizable system instructions
- 🔄 Model selection (Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini 2.0 Flash, and more)
### Option 2: MCP Server Mode

For integration with MCP-compatible clients:

```bash
npm start
```

This starts the MCP server in stdio mode. Configure your MCP client to use:

- **Command:** `node src/mcp-server.js`
- **Working Directory:** Project root
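A client that speaks MCP over stdio can also spawn and query the server programmatically. Below is a minimal sketch assuming the `@modelcontextprotocol/sdk` client API and ES modules; the client name is illustrative:

```javascript
// Minimal MCP client sketch: spawn the server over stdio and call ask_ai.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch src/mcp-server.js as a child process (run this from the project root).
const transport = new StdioClientTransport({
  command: "node",
  args: ["src/mcp-server.js"],
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the ask_ai tool documented later in this README.
const result = await client.callTool({
  name: "ask_ai",
  arguments: { prompt: "What is the Model Context Protocol?" },
});
console.log(result.content);

await client.close();
```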
### Option 3: Demo Script

Test your API key and configuration:

```bash
npm run demo
```
This runs a quick demo that sends a sample prompt to Gemini AI.
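Such a demo boils down to a few calls against the `@google/generative-ai` SDK. The following is a rough sketch, not the exact contents of `src/demo.js`:

```javascript
// Rough demo sketch (not the exact src/demo.js): send one prompt to Gemini
// and print the reply. Requires GEMINI_API_KEY in your .env.
import "dotenv/config";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({
  model: process.env.GEMINI_MODEL ?? "gemini-2.5-flash",
});

const result = await model.generateContent("Say hello in one sentence.");
console.log(result.response.text());
```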
## 🛠️ Available Scripts

| Command | Description |
|---|---|
| `npm run web` | Start the web server (port 3000) |
| `npm start` | Start the MCP server (stdio mode) |
| `npm run demo` | Run a quick demo test |
📋 Project Structure
MCP-Project-1/
├── src/
│ ├── mcp-server.js # MCP server implementation
│ ├── web-server.js # Express web server
│ └── demo.js # Demo script
├── public/
│ ├── index.html # Frontend HTML
│ ├── styles.css # Styling
│ └── app.js # Frontend JavaScript
├── package.json
├── .env # Environment variables (create this)
└── README.md
## 🔧 API Endpoint

The web server exposes a REST API endpoint:

### POST `/api/chat`

Send a chat request to Gemini AI.

**Request Body:**

```json
{
  "prompt": "Your question here",
  "system": "Optional system instruction",
  "model": "gemini-2.5-flash"
}
```

**Response:**

```json
{
  "response": "AI generated response"
}
```
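Because the prerequisites call for Node.js 18+, the built-in `fetch` is enough to exercise the endpoint; a minimal sketch against a locally running web server:

```javascript
// Minimal sketch: POST to the local /api/chat endpoint with Node 18+'s
// built-in fetch and print the "response" field.
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Your question here",
    system: "Optional system instruction",
    model: "gemini-2.5-flash",
  }),
});

const data = await res.json();
console.log(data.response);
```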
## 🎨 MCP Tool: `ask_ai`

The MCP server exposes an `ask_ai` tool with the following schema:

- **Name:** `ask_ai`
- **Description:** Get a helpful answer from Gemini AI
- **Parameters:**
  - `prompt` (string, required) - User prompt/question
  - `system` (string, optional) - System instruction
  - `model` (string, optional) - Gemini model name (default: `gemini-2.5-flash`)

**Available Models:**

- `gemini-2.5-flash` - Fast and efficient (default)
- `gemini-2.5-pro` - Most capable model
- `gemini-2.0-flash` - Stable version
- `gemini-flash-latest` - Latest Flash release
- `gemini-pro-latest` - Latest Pro release
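Registering a tool with this shape via `@modelcontextprotocol/sdk` can look roughly like the sketch below. This is not the project's actual `src/mcp-server.js`: `zod` is assumed for the parameter schema, and the Gemini helper is inlined here only for completeness.

```javascript
// Rough sketch of how an ask_ai tool with the schema above can be registered.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { GoogleGenerativeAI } from "@google/generative-ai";
import { z } from "zod";

// Inlined Gemini call; the real project likely factors this differently.
async function askGemini(prompt, system, model) {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
  const m = genAI.getGenerativeModel({
    model,
    systemInstruction: system,
    generationConfig: { temperature: 0.2 }, // fixed at 0.2, per Recent Updates
  });
  const result = await m.generateContent(prompt);
  return result.response.text();
}

const server = new McpServer({ name: "gemini-ai-mcp", version: "1.0.0" });

server.tool(
  "ask_ai",
  "Get a helpful answer from Gemini AI",
  {
    prompt: z.string().describe("User prompt/question"),
    system: z.string().optional().describe("System instruction"),
    model: z.string().optional().describe("Gemini model name"),
  },
  async ({ prompt, system, model }) => ({
    content: [
      {
        type: "text",
        text: await askGemini(prompt, system, model ?? "gemini-2.5-flash"),
      },
    ],
  })
);

await server.connect(new StdioServerTransport());
```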
## 🌐 Environment Variables

| Variable | Description | Default |
|---|---|---|
| `GEMINI_API_KEY` | Your Google Gemini API key | Required |
| `GEMINI_MODEL` | Gemini model to use | `gemini-2.5-flash` |
| `PORT` | Web server port | `3000` |
## 🎯 Use Cases
- 🤖 AI Chatbot - Deploy your own Gemini-powered chat interface
- 🔌 MCP Integration - Connect to MCP-compatible AI assistants
- 🧪 AI Experimentation - Test and experiment with Gemini models
- 📚 Learning Tool - Understand MCP protocol and Gemini API integration
## 🔄 Recent Updates

- ✅ Updated to the latest Gemini 2.5 models (deprecated `gemini-pro` removed)
- ✅ Improved error handling with helpful quota and API error messages
- ✅ Simplified UI by removing temperature control (fixed at 0.2)
- ✅ Added support for multiple Gemini model variants
## 🛡️ Security Notes

- ⚠️ **Never commit your `.env` file** - it contains sensitive API keys
- 🔒 Keep your Gemini API key secure and private
- 🌐 Consider using environment variables in production
- 🔐 Add authentication if deploying publicly
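For that last point, a minimal Express sketch of token-based protection is shown below; `API_TOKEN` is a hypothetical extra environment variable, not part of this project's documented configuration:

```javascript
// Minimal auth sketch (API_TOKEN is a hypothetical env var): require a
// shared bearer token on /api/chat before the chat handler runs.
import express from "express";

const app = express();
app.use(express.json());

app.use("/api/chat", (req, res, next) => {
  const token = (req.headers.authorization ?? "").replace("Bearer ", "");
  if (!process.env.API_TOKEN || token !== process.env.API_TOKEN) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  next();
});
```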
## 📦 Dependencies

- `@google/generative-ai` - Google Gemini AI SDK
- `@modelcontextprotocol/sdk` - MCP protocol implementation
- `express` - Web server framework
- `dotenv` - Environment variable management
## 🤝 Contributing
Contributions are welcome! Feel free to:
- 🐛 Report bugs
- 💡 Suggest new features
- 🔧 Submit pull requests
- 📖 Improve documentation
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
- Google Gemini AI for the powerful AI model
- Model Context Protocol team for the MCP specification
- All contributors and users of this project
## 📧 Contact
For questions, suggestions, or support, please open an issue on GitHub.
Made with 💚 by Sharan G S
🚀 "Cultivating Intelligence to Build the Future of Autonomous Systems."