# AI Intern Assignment: EduChain MCP Server
This repository contains the full implementation for the AI Intern Assignment. The project is an MCP (Model Context Protocol) server that uses a local Large Language Model (LLM), following the `educhain` library's principles, to generate educational content.
## Project Overview
The primary objective of this assignment was to build a server that could provide educational content on demand. The server exposes several tools that can be accessed by a host application like Claude Desktop. All AI content generation is performed locally, without relying on external APIs, ensuring privacy and offline functionality.
## Features
The server provides three distinct educational tools, including the bonus requirement; a sketch of how such tools might be exposed over MCP follows the list:
- **MCQ Generator** (`/tools/generate-mcqs`): Generates a specified number of multiple-choice questions on any given topic. Each question includes the question text, four options, the correct answer, and a detailed explanation.
- **Lesson Plan Generator** (`/tools/generate-lesson-plan`): Creates a structured lesson plan for a specified topic. The output includes a title, learning objectives, and a list of activities.
- **Flashcard Generator** (`/tools/generate-flashcards`, bonus): Generates a set of flashcards for a given topic. Each flashcard has a "front" (a term or question) and a "back" (the definition or answer).
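As an illustration, here is a minimal sketch of how tools like these are typically exposed over MCP, using the official MCP Python SDK's `FastMCP` helper. The framework choice, function names, and placeholder bodies are assumptions for illustration, not code taken from this repository.

```python
# Minimal sketch (assumes the `mcp` Python SDK); the tool names mirror the
# endpoints above, but the bodies are placeholders, not the real engines.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("EduChain MCP Server")

@mcp.tool()
def generate_mcqs(topic: str, num_questions: int = 5) -> str:
    """Generate multiple-choice questions on a given topic."""
    return f"(placeholder) {num_questions} MCQs about {topic}"

@mcp.tool()
def generate_lesson_plan(topic: str) -> str:
    """Create a structured lesson plan for a given topic."""
    return f"(placeholder) lesson plan for {topic}"

@mcp.tool()
def generate_flashcards(topic: str, num_cards: int = 10) -> str:
    """Generate a set of flashcards for a given topic."""
    return f"(placeholder) {num_cards} flashcards about {topic}"

if __name__ == "__main__":
    mcp.run()
```

A host application such as Claude Desktop discovers tools registered this way automatically once the server is added to its configuration.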
## Technical Implementation and Justification
The development process involved a significant architectural pivot to ensure the project was robust and reliable.
### Initial Approach: Direct Library Usage
The initial plan was to use the `educhain` library directly, as installed from its GitHub repository. This involved importing the `QnAEngine` and `ContentEngine` and using them to generate content.
### Problems Encountered
During development, we encountered several critical bugs within the `educhain` library that prevented the application from running correctly, especially when using a local LLM:
- **Installation Errors:** The library's `setup.py` raised a `UnicodeDecodeError` on Windows systems, which required a manual patch to fix.
- **Circular Import Bugs:** The library contained several circular import errors (e.g., `qna_models.py` attempting to import from itself), which caused the application to crash on startup.
- **Incompatible Model Output:** The library's code was primarily designed for OpenAI API outputs. When used with a `LlamaCpp` model, it failed because the local model returns a plain string, whereas the library expected an object with a `.content` attribute (a compatibility shim for this is sketched after the list).
- **Unreliable JSON Formatting:** The local model often failed to produce a perfectly structured JSON object, mixing in conversational text that caused the Pydantic parsers to fail.
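A small compatibility shim is one way to handle the string-versus-`.content` mismatch described above. This is a minimal sketch; the helper name `llm_text` is ours, not part of `educhain` or LangChain.

```python
from typing import Any

def llm_text(response: Any) -> str:
    """Normalize an LLM response: LlamaCpp returns a plain string, while
    chat-style models return an object carrying a .content attribute."""
    return response.content if hasattr(response, "content") else str(response)
```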
### Final Architecture: A Self-Contained Engine
To overcome these issues, we shifted to a more stable, self-contained architecture. Instead of patching the installed library in place (a fragile approach), we extracted the core logic and placed it within our own project structure.
**Why this approach was chosen:**
- **Reliability:** It eliminates all dependency on the buggy parts of the external library. Our server now controls its own engines.
- **Control:** It gave us full control to simplify and adapt the code. We rewrote the `generate_questions` function with a more direct prompt and a robust line-by-line parser, which is much more effective for local models (a parser in this style is sketched after the list).
- **Efficiency:** We implemented a service layer (`src/services.py`) to ensure the multi-gigabyte AI model is loaded into memory only once when the server starts, making subsequent API calls extremely fast (see the service-layer sketch below).
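To make the line-by-line parsing idea concrete, here is a minimal sketch of one such parser. The field labels (`Question:`, `Answer:`, and so on) are assumptions for illustration; the repository's actual prompt and parser may differ.

```python
def parse_mcqs(raw: str) -> list[dict]:
    """Parse MCQs from labeled lines (Question:/A)-D)/Answer:/Explanation:),
    skipping any conversational chatter the local model emits in between."""
    questions: list[dict] = []
    current: dict = {}
    for line in raw.splitlines():
        line = line.strip()
        if line.lower().startswith("question:"):
            if current:
                questions.append(current)
            current = {"question": line[len("question:"):].strip(), "options": []}
        elif current and line[:2] in ("A)", "B)", "C)", "D)"):
            current["options"].append(line[2:].strip())
        elif current and line.lower().startswith("answer:"):
            current["answer"] = line[len("answer:"):].strip()
        elif current and line.lower().startswith("explanation:"):
            current["explanation"] = line[len("explanation:"):].strip()
    if current:
        questions.append(current)
    return questions
```

A parser like this tolerates imperfect output far better than a strict JSON parse, since stray text between labeled lines is simply ignored.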
This final architecture is much more robust and demonstrates a practical approach to solving real-world development challenges when dealing with external or unstable libraries.
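To make the Efficiency point concrete, below is a minimal sketch of a load-once service layer. It loads lazily on first call rather than at startup, but with the same load-once effect; the `langchain_community` import and the model path and parameters are illustrative assumptions, not the project's actual configuration.

```python
# src/services.py (sketch): the model is built on first use and cached,
# so every subsequent tool call reuses the same in-memory instance.
from functools import lru_cache
from langchain_community.llms import LlamaCpp

@lru_cache(maxsize=1)
def get_llm() -> LlamaCpp:
    """Load the multi-gigabyte local model exactly once per process."""
    return LlamaCpp(
        model_path="models/llama-3-8b.Q4_K_M.gguf",  # assumed path
        n_ctx=4096,        # illustrative context window
        temperature=0.7,   # illustrative sampling setting
    )
```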
## Setup and Usage Instructions
Follow these steps to run the server locally.
### 1. Clone the Repository
```bash
git clone https://github.com/YOGASairam/claude_educhain_mcp_server.git
cd claude_educhain_mcp_server
```