# Weather & News MCP with LLM Integration
An intelligent chatbot powered by a Groq-hosted LLM, implementing the Model Context Protocol (MCP) to access weather and news data through tools.
## Architecture

```
User → Streamlit UI → LLM Client → Groq LLM
                          ↓
             MCP Server (JSON-RPC over stdio)
                          ↓
                 Weather & News APIs
```
## Components

- `app.py` - Streamlit chat interface with streaming responses
- `llm_client.py` - Orchestrates LLM calls and MCP communication
- `mcp_server.py` - MCP server implementing the official JSON-RPC 2.0 protocol
## Features
- 🌤️ Real-time weather data from OpenWeatherMap
- 📰 Latest news from NewsAPI
- 🤖 Groq LLM with intelligent tool selection
- 💬 ChatGPT-style streaming responses
- 📝 Automatic request/response logging
- 🔧 Official MCP protocol implementation
## Setup

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```
### 2. Configure API Keys

Copy `.env.example` to `.env` and add your API keys:

```env
OPENWEATHER_API_KEY=your_key_here
NEWSAPI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GROQ_MODEL=openai/gpt-oss-120b
```
Get API Keys:
- OpenWeatherMap: https://openweathermap.org/api (free tier available)
- NewsAPI: https://newsapi.org/ (free tier: 100 requests/day)
- Groq: https://console.groq.com/ (free tier available)
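
For reference, a minimal sketch of how these keys might be loaded at startup, assuming the project uses `python-dotenv` (the variable names come from `.env.example` above; everything else here is illustrative, not taken from the repo):

```python
# Illustrative startup config, assuming python-dotenv; variable names match .env.example.
import os
from dotenv import load_dotenv

load_dotenv()  # reads KEY=value pairs from .env in the working directory

OPENWEATHER_API_KEY = os.environ["OPENWEATHER_API_KEY"]
NEWSAPI_API_KEY = os.environ["NEWSAPI_API_KEY"]
GROQ_API_KEY = os.environ["GROQ_API_KEY"]
GROQ_MODEL = os.getenv("GROQ_MODEL", "openai/gpt-oss-120b")  # default matches the example above
```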
### 3. Run the Application

```bash
streamlit run app.py
```

The chat interface will open in your browser at http://localhost:8501.
## Usage Examples
Ask natural language questions like:
- "What's the weather in London?"
- "How's the weather in Tokyo right now?"
- "What's the latest news about Bitcoin?"
- "Show me news about artificial intelligence"
The LLM will:
- Analyze your question
- Determine which tool to call
- Execute the tool via the MCP server
- Generate a natural, helpful response
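
The exact prompt and output format live in `llm_client.py`, but the tool-selection step can be pictured like this. A hedged sketch, assuming the `groq` SDK's OpenAI-style chat API; the tool names `get_weather` and `get_news` and the prompt are hypothetical, not taken from the repo:

```python
# Illustrative tool-decision step using the groq SDK's chat completions API.
# Tool names (get_weather, get_news) and the system prompt are hypothetical.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

SYSTEM_PROMPT = (
    "You can call these tools: get_weather(city), get_news(topic). "
    'Reply ONLY with JSON, e.g. {"tool": "get_weather", "arguments": {"city": "London"}}.'
)

response = client.chat.completions.create(
    model=os.getenv("GROQ_MODEL", "openai/gpt-oss-120b"),
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What's the weather in London?"},
    ],
)
print(response.choices[0].message.content)
# e.g. {"tool": "get_weather", "arguments": {"city": "London"}}
```

Constraining the model to reply with a single JSON object makes the tool call trivial to parse with `json.loads`.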
## How It Works

### LLM → MCP Flow

1. User Query → Streamlit UI captures input
2. Tool Selection → LLM analyzes the query and generates a JSON tool call
3. MCP Communication → `llm_client` spawns an `mcp_server` subprocess
4. JSON-RPC Request → Sent via stdin to the MCP server
5. Tool Execution → MCP server calls the Weather or News API
6. Response → Tool result returned via stdout
7. Final Answer → LLM generates a user-friendly response (streaming)
8. Logging → All interactions logged to `logs/llm_requests.txt`
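
A minimal sketch of the client side of steps 3–6, assuming one JSON-RPC 2.0 message per line over the subprocess's stdin/stdout (the actual framing and tool names in `llm_client.py` may differ):

```python
# Illustrative MCP client: spawn the server and exchange JSON-RPC over stdio.
# Assumes one JSON message per line; the tool name get_weather is hypothetical.
import json
import subprocess

proc = subprocess.Popen(
    ["python", "mcp_server.py"],  # the server from this repo
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def rpc(req_id, method, params):
    """Write one JSON-RPC 2.0 request to the server's stdin, read one response line."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

rpc(1, "initialize", {})
tools = rpc(2, "tools/list", {})
result = rpc(3, "tools/call", {"name": "get_weather", "arguments": {"city": "London"}})
print(result)
```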
### MCP Protocol

The server implements the official MCP methods:

- `initialize` - Handshake and capability exchange
- `tools/list` - Returns available tools
- `tools/call` - Executes a tool with arguments
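
A stripped-down sketch of how such a stdio server loop might dispatch these three methods. The tool metadata and protocol version string below are placeholders, and the real `mcp_server.py` also implements the tools themselves:

```python
# Illustrative stdio loop for an MCP-style server: one JSON-RPC message per line.
import json
import sys

TOOLS = [  # placeholder metadata; the real list comes from mcp_server.py
    {"name": "get_weather", "description": "Current weather for a city"},
    {"name": "get_news", "description": "Latest headlines for a topic"},
]

def handle(request):
    method = request.get("method")
    if method == "initialize":
        result = {"protocolVersion": "2024-11-05", "capabilities": {"tools": {}}}
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        params = request.get("params", {})
        # A real implementation would call OpenWeatherMap / NewsAPI here
        result = {"content": [{"type": "text", "text": f"stub result for {params.get('name')}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

for line in sys.stdin:
    if line.strip():
        sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        sys.stdout.flush()
```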
## Logs

All LLM and tool interactions are logged in JSON format to `logs/llm_requests.txt`.
Each log entry includes:
- Timestamp
- Stage (tool_decision, tool_execution, final_response)
- Prompts and responses
- Tool calls and results
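
A plausible shape for this logging, as a sketch; the helper name and any fields beyond those listed above are assumptions:

```python
# Illustrative JSON-lines logger; the helper name and extra fields are assumptions.
import json
import os
from datetime import datetime, timezone

def log_interaction(stage, payload, path="logs/llm_requests.txt"):
    """Append one timestamped entry; stage is tool_decision, tool_execution, or final_response."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(), "stage": stage, **payload}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction("tool_execution", {
    "tool_call": {"name": "get_weather", "arguments": {"city": "London"}},
    "result": {"temp_c": 14, "conditions": "light rain"},
})
```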
## Development

### Project Structure

```
D:\Projects\MCP\
├── app.py            # Streamlit UI
├── llm_client.py     # LLM orchestration & MCP client
├── mcp_server.py     # MCP server (tools implementation)
├── requirements.txt  # Python dependencies
├── .env              # API keys
├── logs/             # Request logs
└── README.md         # This file
```