Basic-MCP-Server

pranav-582/Basic-MCP-Server

This MCP server lets a chatbot's language model query external data sources, in this case weather and news APIs, through the Model Context Protocol (MCP).

Tools: 2 | Resources: 0 | Prompts: 0

Weather & News MCP with LLM Integration

An intelligent chatbot powered by Groq LLM, implementing the Model Context Protocol (MCP) to access weather and news data through tools.

Architecture

User → Streamlit UI → LLM Client → Groq LLM
                           ↓
                      MCP Server (JSON-RPC over stdio)
                           ↓
                   Weather & News APIs

Components

  1. app.py - Streamlit chat interface with streaming responses
  2. llm_client.py - Orchestrates LLM calls and MCP communication
  3. mcp_server.py - MCP server implementing the official MCP protocol over JSON-RPC 2.0
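
Conceptually, mcp_server.py boils down to a read-dispatch-respond loop over stdin/stdout. The sketch below only illustrates that pattern and is not the project's actual source; the tool names and result shapes are assumptions.

# Illustrative sketch of an MCP stdio loop (not the actual mcp_server.py).
import json
import sys

def handle_request(request):
    method = request.get("method")
    if method == "initialize":
        result = {"protocolVersion": "2024-11-05", "capabilities": {"tools": {}}}
    elif method == "tools/list":
        result = {"tools": [{"name": "get_weather"}, {"name": "get_news"}]}  # assumed names
    elif method == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": f"{name} called with {args}"}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

for line in sys.stdin:  # one JSON-RPC message per line
    print(json.dumps(handle_request(json.loads(line))), flush=True)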

Features

  • 🌤️ Real-time weather data from OpenWeatherMap
  • 📰 Latest news from NewsAPI
  • 🤖 Groq LLM with intelligent tool selection
  • 💬 ChatGPT-style streaming responses
  • 📝 Automatic request/response logging
  • 🔧 Official MCP protocol implementation

Setup

1. Install Dependencies

pip install -r requirements.txt

2. Configure API Keys

Copy .env.example to .env and add your API keys:

OPENWEATHER_API_KEY=your_key_here
NEWSAPI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GROQ_MODEL=openai/gpt-oss-120b
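
At runtime these values are read from the environment. The snippet below is a sketch of that pattern, assuming python-dotenv is among the installed dependencies (variable names match the .env example above):

# Sketch: load the keys defined in .env (assumes python-dotenv is installed).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

OPENWEATHER_API_KEY = os.getenv("OPENWEATHER_API_KEY")
NEWSAPI_API_KEY = os.getenv("NEWSAPI_API_KEY")
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
GROQ_MODEL = os.getenv("GROQ_MODEL", "openai/gpt-oss-120b")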

Get API Keys:

  • OpenWeatherMap (weather data)
  • NewsAPI (news headlines)
  • Groq (LLM access)

3. Run the Application

streamlit run app.py

The chat interface will open in your browser at http://localhost:8501

Usage Examples

Ask natural language questions like:

  • "What's the weather in London?"
  • "How's the weather in Tokyo right now?"
  • "What's the latest news about Bitcoin?"
  • "Show me news about artificial intelligence"

The LLM will:

  1. Analyze your question
  2. Determine which tool to call
  3. Execute the tool via MCP server
  4. Generate a natural, helpful response
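
For step 2, a weather question might lead the model to emit a structured tool call roughly like the following (the tool and argument names are illustrative, not the project's exact schema):

{
  "tool": "get_weather",
  "arguments": {"city": "London"}
}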

How It Works

LLM → MCP Flow

  1. User Query → Streamlit UI captures input
  2. Tool Selection → LLM analyzes query and generates JSON tool call
  3. MCP Communication → llm_client spawns mcp_server subprocess
  4. JSON-RPC Request → Sent via stdin to MCP server
  5. Tool Execution → MCP calls Weather or News API
  6. Response → Tool result returned via stdout
  7. Final Answer → LLM generates user-friendly response (streaming)
  8. Logging → All interactions logged to logs/llm_requests.txt
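
Steps 3-6 can be sketched in a few lines, assuming the client exchanges one JSON line per message with the server over stdin/stdout (the tool name and arguments are placeholders, not the project's actual identifiers):

# Illustrative client side of the stdio transport (placeholders, not llm_client.py).
import json
import subprocess

proc = subprocess.Popen(
    ["python", "mcp_server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "London"}},
}
proc.stdin.write(json.dumps(request) + "\n")  # JSON-RPC request via stdin
proc.stdin.flush()

response = json.loads(proc.stdout.readline())  # tool result returned via stdout
print(response["result"])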

MCP Protocol

The server implements official MCP methods:

  • initialize - Handshake and capability exchange
  • tools/list - Returns available tools
  • tools/call - Executes a tool with arguments
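
Each of these is an ordinary JSON-RPC 2.0 exchange. The initialize handshake, for example, has roughly this shape (field values are illustrative):

Request:
{"jsonrpc": "2.0", "id": 1, "method": "initialize",
 "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "llm_client"}}}

Response:
{"jsonrpc": "2.0", "id": 1,
 "result": {"protocolVersion": "2024-11-05", "capabilities": {"tools": {}}, "serverInfo": {"name": "mcp_server"}}}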

Logs

All LLM and tool interactions are logged in JSON format:

logs/llm_requests.txt

Each log entry includes:

  • Timestamp
  • Stage (tool_decision, tool_execution, final_response)
  • Prompts and responses
  • Tool calls and results
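
A single entry might therefore look roughly like this (field names beyond the listed stages are illustrative, not the project's exact schema):

{"timestamp": "2024-01-01T12:00:00", "stage": "tool_execution",
 "tool_call": {"name": "get_weather", "arguments": {"city": "London"}},
 "result": "15°C, light rain"}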

Development

Project Structure

D:\Projects\MCP\
├── app.py              # Streamlit UI
├── llm_client.py       # LLM orchestration & MCP client
├── mcp_server.py       # MCP server (tools implementation)
├── requirements.txt    # Python dependencies
├── .env                # API keys
├── logs/               # Request logs
└── README.md          # This file