
🧠 GoMind


A lightweight, modular MCP (Multi-Component Protocol) server written in Go, focused on Private RAG (Retrieval-Augmented Generation) powered by local LLMs and your personal data.


✨ What is GoMind?

GoMind is a brain orchestrator that sits between a user interface (like LibreChat) and multiple intelligent agents or plugins. It:

  • Receives natural language questions
  • Orchestrates calls to various data sources (Obsidian notes, local vector DBs, etc.)
  • Uses Hermes as a message bus to communicate with agents
  • Assembles the full context and sends it to a local LLM (via Ollama)
  • Returns the response to the user

Think of it as a LangChain killer: 100% private, minimal dependencies, fast, and fully owned by you.


🧩 Architecture Overview

```
[LibreChat or CLI]
        │
        ▼
     [ GoMind ]  ←←←←←←←←←←←←←←←←←←←←←←←←
        │                               │
        │                               │
        ├──→ Receives natural language question (e.g. "What is GoMind?")
        │                               │
        ├──→ Parses question and identifies required data sources
        │                               │
        │                               ├──→ [Obsidian Reader Agent] (MCP Core)
        │                               │
        │                               ├──→ [Vector Search Agent] (MCP Core)
        │                               │
        │                               └──→ [Other Agents]
     (MCP Core)                         │
        │                               │
        ├──→ Publishes requests via Hermes ("query.obsidian", "query.search", ...)
        │                               │
        ├──← Receives responses via Hermes (with correlation ID)
        │                               │
        └──→ Assembles context + sends prompt to LLM (Ollama)
        │
        ▼
    [ Local Response ]
```

🔧 Tech Stack

  • Go 1.22+
  • Hermes (lightweight pub/sub message bus)
  • Ollama (local LLM runner, e.g. Mistral)
  • Meilisearch or Chroma (for vector search)
  • Markdown file support (e.g. Obsidian vaults)
  • JSON-based message protocol with correlation ID

📦 Features

  • 🧠 Private RAG from your local knowledge base
  • 🪝 Plugin-based architecture using Hermes
  • ⚡ Fast and lightweight (no LangChain or Python overhead)
  • 🧰 Extensible with hermes-go-sdk for writing new agents
  • ⏱️ Timeouts, fan-out/fan-in orchestration, modular pipeline

🚧 Status

MVP in progress — building core functionality first:
✅ Ollama connector
✅ Obsidian reader agent
✅ Basic message bus with Hermes
🔜 Plugin protocol schema + timeout management
🔜 Agent discovery + CLI mode


🚀 Getting Started

```bash
git clone https://github.com/osesantos/gomind
cd gomind
go run src/main.go

# Test with curl
curl -X POST http://localhost:4433/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "What is GoMind?"}'
```

Make sure you have:

  • Hermes running (or embedded mode)
  • Ollama installed and serving a model (e.g. `ollama run mistral`)
  • Some markdown notes to test against

📜 License

MIT — made with ❤️ and caffeine.


🙌 Credits

Inspired by:

  • LangChain (but better)
  • Personal RAG workflows
  • Modular AI design

Built by osesantos