Rohan7530/AI-Medical-Triage-System
The Medical Triage MCP Server integrates MCP server capabilities with LangGraph workflows to provide intelligent patient assessment and appointment booking.
---
title: Medical Triage System
emoji: 🩺
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
---
Medical Triage MCP Server + LangGraph Integration
Summary
The Medical Triage System is an intelligent platform designed to streamline patient assessment and healthcare resource allocation. By integrating a Model Context Protocol (MCP) server with advanced LangGraph workflows, the application automates the triage process from symptom intake to appointment booking. Patients can input their symptoms, which are then analyzed using a medical knowledge base and AI-powered reasoning. The system determines the urgency of each case, matches patients with suitable doctors or specialists, and manages appointment scheduling and notifications. Its modular architecture ensures scalability, maintainability, and easy integration with external large language models (LLMs) for enhanced natural language understanding. The platform is suitable for clinics, hospitals, and telemedicine providers seeking to optimize patient flow, reduce manual workload, and improve healthcare outcomes through intelligent automation.
Features
- MCP Server: Exposes medical resources and tools for AI agents (see the sketch after this list)
- LangGraph Workflow: Orchestrates the complete triage process
- Medical Knowledge Base: Symptoms, conditions, and triage protocols
- Doctor Database: Available doctors and specialists
- Appointment System: Booking and availability management
- Notification System: Patient communication
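As a concrete illustration of the MCP Server feature above, here is a minimal, hypothetical sketch using the official `mcp` Python SDK (FastMCP). The tool name, parameters, and scoring rule are illustrative and are not the repo's actual `src/mcp_server/` code:

```python
# Hypothetical sketch: exposing a triage tool over MCP with FastMCP.
# The symptom set and urgency rule below are placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("medical-triage")

@mcp.tool()
def assess_urgency(symptoms: list[str]) -> dict:
    """Return a coarse urgency level for a list of reported symptoms."""
    emergency = {"chest pain", "shortness of breath", "severe bleeding"}
    level = "emergency" if emergency & {s.lower() for s in symptoms} else "routine"
    return {"urgency": level, "symptoms": symptoms}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

In the real server, a tool like this would draw on the medical knowledge base rather than a hard-coded symptom set.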
Quick start (local)
```bash
python3.12 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Set env for LLM
export USE_LLM=true
export LLM_MODEL=qwen2.5:7b
export LLM_BASE_URL=https://your-public-llm.example.com

# Run frontend
streamlit run frontend.py
```
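Streamlit serves the UI at its default address, http://localhost:8501.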
Deploy on Hugging Face Spaces (Docker SDK)
- Create a Space (SDK: Docker, Public).
- Push this repo (it must include `Dockerfile`, `requirements.txt`, and `frontend.py`).
- In Space → Settings → Variables and secrets, set:
  - `USE_LLM=true`
  - `LLM_MODEL=qwen2.5:7b`
  - `LLM_BASE_URL=https://your-public-llm.example.com`
- Spaces will build and run the container. Open the Space URL.
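Note that the Docker SDK on Spaces expects the container to listen on port 7860 by default (this can be overridden with `app_port` in the Space metadata), so the shipped `Dockerfile` should start Streamlit on that port.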
Environment variables
- `USE_LLM` (default `true`): set to `false` to force fallback parsing
- `LLM_MODEL` (default `qwen2.5:7b`)
- `LLM_BASE_URL` (required when `USE_LLM=true`)
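For reference, flags like these would typically be read at startup along the following lines. This is a sketch using standard `os.environ` access; the defaults mirror the list above, while the error handling is illustrative:

```python
import os

# Illustrative sketch: reading the environment flags documented above.
USE_LLM = os.getenv("USE_LLM", "true").lower() == "true"
LLM_MODEL = os.getenv("LLM_MODEL", "qwen2.5:7b")
LLM_BASE_URL = os.getenv("LLM_BASE_URL")  # required when USE_LLM is true

if USE_LLM and not LLM_BASE_URL:
    raise RuntimeError("LLM_BASE_URL is required when USE_LLM=true")
```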
Project Structure
```
Medical_mcp/
├── src/
│   ├── mcp_server/
│   │   ├── __init__.py
│   │   ├── server.py
│   │   ├── resources.py
│   │   └── tools.py
│   ├── langgraph_workflow/
│   │   ├── __init__.py
│   │   ├── workflow.py
│   │   ├── nodes.py
│   │   └── state.py
│   ├── models/
│   │   ├── __init__.py
│   │   └── data_models.py
│   └── utils/
│       ├── __init__.py
│       └── helpers.py
├── tests/
├── examples/
├── frontend.py
├── main.py
├── requirements.txt
├── Dockerfile
└── README.md
```
Installation
- Create a Python 3.12 virtual environment:

```bash
python3.12 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Make sure Ollama is running and has pulled the model you set in `LLM_MODEL` (the defaults above use `qwen2.5:7b`):

```bash
ollama pull qwen2.5:7b
ollama serve
```
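When serving the model with a local Ollama instance, `LLM_BASE_URL` would typically point at `http://localhost:11434` (Ollama's default port); for a Spaces deployment it must be a publicly reachable URL, as in the examples above.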
Usage
Run the main application:

```bash
python main.py
```
Architecture
The system follows a modular architecture:
- MCP Server (`src/mcp_server/`): Handles medical resources and tools
- LangGraph Workflow (`src/langgraph_workflow/`): Orchestrates the triage process
- Data Models (`src/models/`): Defines data structures
- Utilities (`src/utils/`): Helper functions
Workflow
The LangGraph workflow runs the following steps in order (see the sketch after the list):
1. Parse patient symptoms
2. Assess triage urgency using medical knowledge
3. Find appropriate doctors
4. Book appointments
5. Send notifications
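The five steps above map naturally onto a LangGraph state graph. Below is a minimal, hypothetical sketch under that assumption: the node names mirror the steps, but the state fields and node bodies are placeholders, not the actual nodes in `src/langgraph_workflow/`:

```python
# Hypothetical sketch: the five triage steps as a linear LangGraph graph.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class TriageState(TypedDict, total=False):
    raw_input: str
    symptoms: list[str]
    urgency: str
    doctor: str
    appointment: str

def parse_symptoms(state: TriageState) -> dict:
    # Step 1: split free-text input into individual symptoms.
    return {"symptoms": [s.strip() for s in state["raw_input"].split(",")]}

def assess_urgency(state: TriageState) -> dict:
    # Step 2: placeholder rule standing in for the medical knowledge base.
    return {"urgency": "emergency" if "chest pain" in state["symptoms"] else "routine"}

def find_doctor(state: TriageState) -> dict:
    # Step 3: placeholder lookup standing in for the doctor database.
    return {"doctor": "Dr. Smith (cardiology)" if state["urgency"] == "emergency"
            else "Dr. Lee (general practice)"}

def book_appointment(state: TriageState) -> dict:
    # Step 4: placeholder booking.
    return {"appointment": f"Booked with {state['doctor']}"}

def notify_patient(state: TriageState) -> dict:
    # Step 5: placeholder notification.
    print(f"Notification: {state['appointment']}")
    return {}

graph = StateGraph(TriageState)
for name, fn in [("parse", parse_symptoms), ("triage", assess_urgency),
                 ("find_doctor", find_doctor), ("book", book_appointment),
                 ("notify", notify_patient)]:
    graph.add_node(name, fn)
for a, b in [(START, "parse"), ("parse", "triage"), ("triage", "find_doctor"),
             ("find_doctor", "book"), ("book", "notify"), ("notify", END)]:
    graph.add_edge(a, b)

app = graph.compile()
print(app.invoke({"raw_input": "chest pain, dizziness"}))
```

Running the sketch prints the final state after all five nodes have executed; the real workflow would additionally consult the MCP tools and, when `USE_LLM=true`, the configured LLM for symptom parsing.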