# MCP Server: Multi-LLM Web Search Bridge
A lightweight, extensible backend that leverages Google Gemini and Anthropic Claude to perform intelligent web-assisted queries through DuckDuckGo. The MCP Server extracts meaningful search topics from natural language input and fetches relevant information from the internet.
## Features
- **Multi-LLM Support**: Compatible with Google Gemini and Anthropic Claude
- **Intelligent Query Processing**: Extracts search topics from natural language
- **Web Search Integration**: Uses DuckDuckGo for reliable web results
- **Multiple Interfaces**: Flask API, CLI tool, and Streamlit frontend
- **Easy Configuration**: Environment-based setup with provider switching
## Project Structure
```
MCP-server/
├── mcp_server.py        # Flask API server
├── mcp_integration.py   # Core logic (LLM handling + search)
├── ask_llm.py           # Command-line interface
├── streamlit_app.py     # Interactive web frontend
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (not committed)
├── .gitignore           # Git ignore rules
└── README.md            # This file
```
## Quick Start
### Prerequisites
- Python 3.8+
- Google Gemini API key or Anthropic Claude API key
- Internet connection for web searches
### Installation

1. **Clone the repository**

   ```bash
   git clone https://github.com/KanishkJagya1/MCP-server.git
   cd MCP-server
   ```

2. **Create a virtual environment**

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

4. **Configure environment variables**

   Create a `.env` file in the root directory:

   ```env
   GEMINI_API_KEY=your_google_gemini_api_key_here
   CLAUDE_API_KEY=your_anthropic_api_key_here
   LLM_PROVIDER=gemini  # Options: "gemini" or "claude"
   PORT=5001
   ```
## Usage
### Flask API Server
Start the backend server:

```bash
python mcp_server.py
```
Available endpoints:

- `GET /health` - Health check
- `GET /` - Server info
- `POST /tool_call` - Web search endpoint
Example API call:

```bash
curl -X POST http://localhost:5001/tool_call \
  -H "Content-Type: application/json" \
  -d '{
    "name": "fetch_web_content",
    "parameters": {
      "query": "latest Mars discoveries"
    }
  }'
```
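The same call from Python, as a minimal sketch using the third-party `requests` package (the endpoint and payload shape are taken from the curl example above; `requests` may need to be installed separately if it is not already in `requirements.txt`):

```python
import requests

# Same request as the curl example above.
payload = {
    "name": "fetch_web_content",
    "parameters": {"query": "latest Mars discoveries"},
}

resp = requests.post("http://localhost:5001/tool_call", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```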
### Command-Line Interface
Ask questions directly from the terminal:

```bash
python ask_llm.py "What are the latest developments in AI?"
```
### Streamlit Frontend
Launch the interactive web interface:

```bash
streamlit run streamlit_app.py
```
Then open http://localhost:8501 in your browser.
Example queries:
- "What has NASA discovered on Mars recently?"
- "Tell me about the latest AI breakthroughs"
- "What's happening in renewable energy?"
## Supported LLM Providers
| Provider | Model | Notes |
|---|---|---|
| Google Gemini | `gemini-pro` | Fast and efficient |
| Anthropic Claude | `claude-3-sonnet` | Strong structured responses |

Switch between providers by updating `LLM_PROVIDER` in your `.env` file.
## API Response Format
```json
{
  "results": [
    {
      "title": "Example Search Result",
      "url": "https://example.com",
      "description": "Description of the search result..."
    }
  ]
}
```
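A caller can iterate over `results` like this (a short sketch built on the literal example above; the field names shown there are the only contract assumed):

```python
# A response in the documented format.
data = {
    "results": [
        {
            "title": "Example Search Result",
            "url": "https://example.com",
            "description": "Description of the search result...",
        }
    ]
}

# Print each result as a one-line summary.
for result in data["results"]:
    print(f"- {result['title']} ({result['url']}): {result['description']}")
```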
## Deployment
### Local Development

The server runs on `localhost:5001` by default; configure the port in your `.env` file.
### Production Deployment
Deploy on platforms like:
- Streamlit Cloud (for frontend)
- Render / Railway / Replit (for backend)
- Docker (containerized deployment)
For external access, update the Streamlit app to point to your public Flask URL.
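One way to do that is to read the backend URL from an environment variable instead of hard-coding `localhost`. A minimal sketch, assuming a variable named `MCP_SERVER_URL` (the name is illustrative, not part of the current code):

```python
import os

import requests
import streamlit as st

# Hypothetical: backend URL comes from the environment, with the
# local default from this README as a fallback.
MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:5001")

query = st.text_input("Ask a question")
if query:
    resp = requests.post(
        f"{MCP_SERVER_URL}/tool_call",
        json={"name": "fetch_web_content", "parameters": {"query": query}},
        timeout=30,
    )
    st.json(resp.json())
```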
## Use Cases
- **Research Assistance**: Automated information gathering
- **Academic Fact-Checking**: Verify claims and sources
- **Content Exploration**: Discover related topics and trends
- **News Analysis**: Stay updated with current events
- **Smart Search Bots**: Build intelligent search applications
## Configuration
### Environment Variables
| Variable | Description | Default |
|---|---|---|
| `GEMINI_API_KEY` | Google Gemini API key | Required when `LLM_PROVIDER=gemini` |
| `CLAUDE_API_KEY` | Anthropic Claude API key | Required when `LLM_PROVIDER=claude` |
| `LLM_PROVIDER` | LLM provider to use (`gemini` or `claude`) | `gemini` |
| `PORT` | Flask server port | `5001` |
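At startup these can be loaded with `python-dotenv`, for example (a sketch; whether the project already does exactly this depends on the code in `mcp_integration.py` and the contents of `requirements.txt`):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

LLM_PROVIDER = os.getenv("LLM_PROVIDER", "gemini")
PORT = int(os.getenv("PORT", "5001"))

# Only the key for the active provider is strictly required.
key_name = "GEMINI_API_KEY" if LLM_PROVIDER == "gemini" else "CLAUDE_API_KEY"
API_KEY = os.getenv(key_name)
if not API_KEY:
    raise RuntimeError(f"{key_name} is not set")
```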
### Git Configuration

The project includes a `.gitignore` file to exclude sensitive files:
```
__pycache__/
*.pyc
.env
.venv/
*.log
.vscode/
.idea/
```
If you accidentally committed files before adding `.gitignore`:

```bash
git rm -r --cached .
git add .
git commit -m "Apply .gitignore changes"
```
## Development
### Testing the API
Test endpoints manually or create automated tests:

```bash
# Health check
curl http://localhost:5001/health

# Search query
curl -X POST http://localhost:5001/tool_call \
  -H "Content-Type: application/json" \
  -d '{"name": "fetch_web_content", "parameters": {"query": "test query"}}'
```
### Adding New Features

The modular structure makes it easy to:

- Add new LLM providers in `mcp_integration.py` (see the sketch below)
- Extend API endpoints in `mcp_server.py`
- Enhance the frontend in `streamlit_app.py`
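For example, provider selection could be a simple dispatch table keyed on `LLM_PROVIDER`. This is a hypothetical sketch; the handler names (`ask_gemini`, `ask_claude`) are invented here and may not match the actual code in `mcp_integration.py`:

```python
import os

# Hypothetical handlers -- stand-ins for the real provider calls.
def ask_gemini(prompt: str) -> str:
    raise NotImplementedError("call the Gemini API here")

def ask_claude(prompt: str) -> str:
    raise NotImplementedError("call the Claude API here")

# Registering a new provider is one extra entry here plus its handler.
PROVIDERS = {
    "gemini": ask_gemini,
    "claude": ask_claude,
}

def ask_llm(prompt: str) -> str:
    provider = os.getenv("LLM_PROVIDER", "gemini")
    if provider not in PROVIDERS:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider}")
    return PROVIDERS[provider](prompt)
```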
## Roadmap
- Add result summarization using LLMs
- Implement logging and error tracking
- Add caching for repeated queries
- UI enhancements (dark mode, result cards)
- Support for additional search engines
- Batch query processing
- Result export functionality
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License; see the `LICENSE` file for details.
## Author

**Kanishk Jagya**
Thapar Institute of Engineering and Technology
GitHub: [KanishkJagya1](https://github.com/KanishkJagya1)
## Acknowledgments
- Google Gemini and Anthropic Claude for LLM capabilities
- DuckDuckGo for search functionality
- Streamlit for the web interface framework
- Flask for the API backend
Need help? Open an issue or reach out via GitHub!