AI Tutor MCP Toolkit
Overview
The AI Tutor MCP Toolkit is a compact, multi‑tool server built with Gradio and powered by OpenAI's GPT‑4o‑mini model. It exposes four learning‑oriented tools via the Model Context Protocol (MCP) and provides both a web UI and an MCP‑enabled agent for interactive educational tasks.
This project includes:
- A Gradio UI with four learning tools
- An MCP‑compatible server exposing the tools over SSE
- A Python agent that connects to the MCP server and interacts via streaming
- Fully asynchronous execution with persistent conversation history
Features
1. Explain Concept
Explains a concept at different depth levels:
- Like I'm 5
- Like I'm 10
- High school
- College
- Expert
2. Summarize Text
Summarizes text to a configurable compression ratio (10%–80%).
3. Generate Flashcards
Creates study flashcards in JSON‑Lines format.
4. Quiz Me
Generates quizzes with numbered questions and answer keys.
Each tool is exposed through MCP, enabling programmatic integration and streaming responses.
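As a sketch of what one of these tools looks like from the caller's side, here is a plain‑function version of Explain Concept. The argument names (`question`, `level`) match the agent example later in this README; the prompt wording and level mapping are illustrative assumptions, and the real server presumably forwards the prompt to GPT‑4o‑mini.

```python
# Hypothetical sketch of the explain_concept tool as a plain function.
# Only the prompt construction is shown; the model call is omitted.

LEVELS = {
    1: "like I'm 5",
    2: "like I'm 10",
    3: "at a high-school level",
    4: "at a college level",
    5: "at an expert level",
}

def explain_concept(question: str, level: int) -> str:
    """Build the prompt that would be sent to the model (names assumed)."""
    audience = LEVELS.get(level, "at a college level")
    return f"Explain {question} {audience}."
```

For example, `explain_concept("gravity", 2)` produces the prompt `"Explain gravity like I'm 10."`.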
Directory Structure
project/
│
├── app_server.py # Gradio MCP server (tools exposed)
├── agent_client.py # Agent that connects to MCP via SSE
├── agents/ # Custom agent classes
│ ├── mcp/ # MCP wrapper utilities
│ ├── __init__.py
│ └── Runner.py
├── requirements.txt
└── README.md            # this file
Installation
1. Clone the Project
git clone <repo-url>
cd ai-tutor-mcp
2. Create a Virtual Environment
python3 -m venv venv
source venv/bin/activate
3. Install Requirements
pip install -r requirements.txt
4. Set Environment Variables
Add your OpenAI key:
export OPENAI_API_KEY="your-key-here"
(or create a .env file)
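If you go the `.env` route, the project most likely reads it with a package such as python‑dotenv; the stdlib‑only loader below is just a sketch of what that step does, not the toolkit's actual code.

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader (sketch; python-dotenv does this more robustly)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```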
Running the MCP Server
Start the Gradio + MCP server:
python app_server.py
This launches:
- Web UI: http://0.0.0.0:7860
- MCP SSE endpoint: http://0.0.0.0:7860/gradio_api/mcp/sse
- MCP schema endpoint: http://0.0.0.0:7860/gradio_api/mcp/schema
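All three endpoints hang off one base URL, so a small helper (illustrative, not part of the project) keeps them in sync if you change the host or port:

```python
# Build the toolkit's endpoint URLs from a host/port pair.
# Paths are taken from the list above; defaults are Gradio's defaults.

def endpoints(host: str = "0.0.0.0", port: int = 7860) -> dict:
    base = f"http://{host}:{port}"
    return {
        "ui": base,
        "sse": f"{base}/gradio_api/mcp/sse",
        "schema": f"{base}/gradio_api/mcp/schema",
    }
```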
Running the MCP Agent
In a separate terminal:
python agent_client.py
This will:
- Connect to the MCP server via SSE
- Fetch the schema
- Start an interactive console
- Stream results from whichever tool the agent selects
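The persistent conversation history mentioned in the Overview can be pictured as an append‑only message list that the agent carries between turns. The class and field names below are illustrative assumptions; the real structures live in `agent_client.py` and `agents/`.

```python
# Illustrative sketch of the agent's persistent conversation history.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        """Append one turn; the full list is re-sent to the router each turn."""
        self.messages.append({"role": role, "content": content})

convo = Conversation()
convo.add("user", "Explain gravity like I'm 10")
convo.add("assistant", "Gravity is the pull that keeps you on the ground...")
```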
Example:
User: Explain gravity like I'm 10
The agent will call explain_concept with arguments:
{"question": "gravity", "level": 2}
And stream the output.
How MCP Integration Works
The Gradio server exposes four functions as MCP tools. The agent:
- Reads the full conversation
- Decides which tool fits (or none)
- Returns only:
  { "tool": ..., "arguments": ... }
- Streams tool output back to the user
This allows:
- Ultra‑fast streaming UX
- Tool‑routed intelligent behavior
- Persistent conversation logic
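The routing contract above can be sketched as a small dispatcher: the agent emits only the `{"tool": ..., "arguments": ...}` JSON, and a lookup table maps it onto the exposed tools. The registry entries below are stand‑ins, not the real MCP calls.

```python
import json

# Illustrative tool registry; the real agent invokes MCP tools over SSE.
TOOLS = {
    "explain_concept": lambda question, level: f"explain({question}, {level})",
    "summarize_text": lambda text, ratio: f"summarize(ratio={ratio})",
}

def dispatch(routing_json: str) -> str:
    """Parse the agent's routing decision and call the selected tool."""
    decision = json.loads(routing_json)
    tool = TOOLS[decision["tool"]]          # KeyError -> unknown tool
    return tool(**decision["arguments"])    # arguments must match the tool schema

result = dispatch(
    '{"tool": "explain_concept", "arguments": {"question": "gravity", "level": 2}}'
)
```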
Customization
Add New Tools
To expose more MCP tools:
- Add a Python function
- Bind it inside Gradio Blocks
- Ensure MCP auto-detects it via Gradio's mcp_server=True flag
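With `mcp_server=True`, Gradio derives each tool's MCP schema from the bound function's type hints and docstring, so a new tool is just another typed function. The sketch below shows the shape of a hypothetical fifth tool (`define_term` is not part of the project); the Gradio binding lines are left as comments so the sketch stays self‑contained.

```python
def define_term(term: str, sentence_limit: int = 2) -> str:
    """Return a short definition of `term` (hypothetical new tool).

    The type hints and this docstring would become the MCP tool schema.
    """
    # A real implementation would call the model; this stub shows the shape.
    return f"Definition of {term} in at most {sentence_limit} sentences."

# Binding sketch (inside app_server.py):
# with gr.Blocks() as demo:
#     gr.Interface(fn=define_term, inputs=["text", "number"], outputs="text")
# demo.launch(mcp_server=True)
```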
Change Models
Update:
MODEL_NAME = "gpt-4o-mini"
To any OpenAI model.
Modify Agent Behavior
Edit:
agents/Agent.py
agent = Agent(...)
Adjust the system instructions or logging there as needed.
Troubleshooting
MCP Schema not loading
Ensure the server is running and accessible:
curl http://localhost:7860/gradio_api/mcp/schema
Streaming fails
Check CORS or timeout settings in MCPServerSse configuration.
OpenAI errors
Ensure your API key is valid and your account has access to the model.
Roadmap
- Add vector‑memory for concept chains
- Add spaced‑repetition scheduling
- Add diagram generation tool
- Add multilingual support
License
MIT License
Author
Alhad. Built with ❤️ for fast agentic experimentation.