pkbythebay29/mcp_modem_api_server
MCP AI Modem Server (Local, Compute-Poor)
This project provides a compute-poor, locally executable MCP server that enables protocol-based communication (MQTT, OPC UA, Modbus) and a locally running LLM using Hugging Face transformers.
✨ Features
- 📡 Protocol Gateway: OPC UA, MQTT, Modbus (for industrial data); see the example request after this list
- 🧠 Local LLM Query Support (no internet)
- 🔒 Airgapped: No external calls once models are downloaded
- 🪟 Windows-compatible: Works in Python environments with Conda or venv
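
To make the gateway concrete, here is a minimal sketch of what a protocol query might look like over HTTP. The endpoint path (`/query`) and the Modbus field names are assumptions for illustration, not the documented API; check the routes defined in `main.py` for the real request shape.

```python
# Hypothetical sketch of a protocol-gateway request. The endpoint path
# ("/query") and the Modbus fields below are assumptions, not the
# documented API; adjust to match the routes defined in main.py.
import requests

payload = {
    "protocol": "modbus",    # assumed values: "opcua", "mqtt", "modbus", "llm"
    "host": "192.168.1.50",  # hypothetical Modbus device address
    "register": 40001,       # hypothetical holding register to read
}

response = requests.post("http://127.0.0.1:8000/query", json=payload, timeout=10)
print(response.json())
```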
📦 Requirements
- Python 3.8+
- transformers
- uvicorn
- Client libraries for OPC UA, Modbus, and MQTT (for example, asyncua, pymodbus, and paho-mqtt)
🧪 Installation (Windows)
```
git clone https://github.com/YOUR_USERNAME/mcp-modem-ai-server.git
cd mcp-modem-ai-server
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
🚀 Run the Server
```
uvicorn main:app --reload
```
By default, uvicorn serves the app at http://127.0.0.1:8000. The `--reload` flag restarts the server on code changes and is intended for development.
🧠 Example LLM Query
```json
{
  "protocol": "llm",
  "query": "How to optimize coolant temperature?",
  "context": "Reactor 7, summer operation mode"
}
```
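
For context, below is a minimal sketch of how such a query could be answered locally with Hugging Face transformers. The model name (`distilgpt2`) and the prompt format are illustrative assumptions; the model and prompting the server actually uses may differ.

```python
# Minimal sketch of answering an "llm" query locally with Hugging Face
# transformers. The model name and prompt format are assumptions for
# illustration; the project may use different ones.
from transformers import pipeline

# Downloads the model on first use; afterwards it is served from the local cache.
generator = pipeline("text-generation", model="distilgpt2")

query = "How to optimize coolant temperature?"
context = "Reactor 7, summer operation mode"
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"

result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```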
🔐 Airgapped Use
Once the model has been downloaded and cached locally, you can disconnect from the internet; inference will continue to work with no external calls.
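
As a sketch of an airgapped run: transformers and huggingface_hub honor offline environment variables (`HF_HUB_OFFLINE`, `TRANSFORMERS_OFFLINE`) that block all network access and force loading from the local cache. The model name below is an illustrative assumption.

```python
# Sketch of airgapped inference (model name "distilgpt2" is an illustrative
# assumption). Run this AFTER the model has been cached by a prior online run.
# The offline flags must be set before transformers/huggingface_hub are
# imported, e.g. here or exported in the shell before starting the server.
import os
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: no network calls
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: local files only

from transformers import pipeline

# Loads weights from the local cache (~/.cache/huggingface by default);
# raises an error instead of downloading if the model is not cached.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("Offline smoke test:", max_new_tokens=16)[0]["generated_text"])
```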
🧾 License
MIT License - see the LICENSE file.