🚀 MCP-Ollama Server

Connect the power of Model Context Protocol with local LLMs

Getting Started • Features • Architecture • Documentation • Contributing • FAQ

📋 Overview

MCP-Ollama Server bridges the gap between Anthropic's Model Context Protocol (MCP) and local LLMs via Ollama. This integration empowers your on-premise AI models with Claude-like tool capabilities, including file system access, calendar integration, web browsing, email communication, GitHub interactions, and AI image generation, all while maintaining complete data privacy.

Unlike cloud-based AI solutions, MCP-Ollama Server:

  • Keeps all data processing on your local infrastructure
  • Eliminates the need to share sensitive information with third parties
  • Provides a modular approach that allows you to use only the components you need
  • Enables enterprise-grade AI capabilities in air-gapped or high-security environments

✨ Key Features

  • 🔒 Complete Data Privacy: All computations happen locally through Ollama
  • 🔧 Tool Use for Local LLMs: Extends Ollama models with file, calendar, and other capabilities
  • 🧩 Modular Architecture: Independent Python service modules that can be deployed selectively
  • 🔌 Easy Integration: Simple APIs to connect with existing applications
  • 🚀 Performance Optimized: Minimal overhead to maintain responsive AI interactions
  • 📦 Containerized Deployment: Docker support for each module (coming soon)
  • 🧪 Extensive Testing: Comprehensive test coverage for reliability

🚀 Quick Start

Prerequisites

  • Python 3.8+ installed
  • Ollama set up on your system
  • Git for cloning the repository

🧩 Component Overview

MCP-Ollama Server is organized into specialized modules, each providing specific functionality:

📅 Calendar Module

calendar/
├── README.md          # Module-specific documentation
├── google_calendar.py # Google Calendar API integration
├── pyproject.toml     # Dependencies and package info
└── uv.lock            # Dependency lock file

The Calendar module enables your local LLM to:

  • Create, modify, and delete calendar events
  • Check availability and scheduling conflicts
  • Send meeting invitations
  • Set reminders and notifications
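
To give a flavor of how a module exposes such capabilities over MCP, here is a minimal sketch of a calendar tool, assuming the official mcp Python SDK's FastMCP helper. The tool name, parameters, and return value are illustrative only and do not mirror google_calendar.py:

# Hypothetical calendar tool served over MCP (illustrative; not the module's real code).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar")

@mcp.tool()
def create_event(title: str, start: str, end: str) -> str:
    """Create a calendar event and return a confirmation message."""
    # The real module would call the Google Calendar API here.
    return f"Created '{title}' from {start} to {end}"

if __name__ == "__main__":
    # Serve the tool over stdio so an MCP client can launch and talk to it.
    mcp.run(transport="stdio")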

🔄 Client MCP Module

client_mcp/
├── README.md      # Module-specific documentation
├── client.py      # Main client implementation
├── pyproject.toml # Dependencies and package info
├── testing.txt    # Test data
└── uv.lock        # Dependency lock file

The Client module provides:

  • A unified interface to interact with all MCP-enabled services
  • Conversation history management
  • Context handling for improved responses
  • Tool selection and routing logic
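
The sketch below shows the general shape of such a client, assuming the official mcp Python SDK: it launches a server module over stdio, initializes a session, and lists the tools the module advertises. It is a simplified stand-in for client.py, not its actual code:

# Simplified MCP client sketch: start a server module as a subprocess over stdio
# and print the tools it exposes.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_module_tools(server_script: str) -> None:
    params = StdioServerParameters(command="python", args=[server_script])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

if __name__ == "__main__":
    asyncio.run(list_module_tools("../file_system/file_system.py"))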

๐Ÿ“ File System Module

file_system/
├── README.md          # Module-specific documentation
├── file_system.py     # File system operations implementation
├── pyproject.toml     # Dependencies and package info
└── uv.lock            # Dependency lock file

The File System module allows your local LLM to:

  • Read and write files securely
  • List directory contents
  • Search for files matching specific patterns
  • Parse different file formats (text, CSV, JSON, etc.)
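
As a hedged illustration, the sketch below shows what a sandboxed read tool could look like; the allowed-root check is an assumption about how access might be confined, not a description of file_system.py itself:

# Hypothetical sandboxed file-reading tool (illustrative; not file_system.py).
from pathlib import Path

from mcp.server.fastmcp import FastMCP

ALLOWED_ROOT = Path.home() / "workspace"  # assumed sandbox root for this sketch
mcp = FastMCP("file_system")

@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a text file located under the allowed root."""
    target = (ALLOWED_ROOT / path).resolve()
    try:
        target.relative_to(ALLOWED_ROOT.resolve())
    except ValueError:
        raise ValueError("Access outside the allowed root is blocked")
    return target.read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run(transport="stdio")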

Installation

# 1. First install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Clone the repository
git clone https://github.com/sethuram2003/mcp-ollama_server.git
cd mcp-ollama_server

# 3. Pull the Ollama model you want to use (replace 'llama3' with your preferred model)
ollama pull llama3

Module Configuration

  1. 📅 Calendar Module:
cd calendar
uv sync  # Install calendar-specific dependencies
  2. 🔄 Client MCP Module:
cd ../client_mcp
uv sync  # Install client-specific dependencies
  3. 📁 File System Module:
cd ../file_system
uv sync  # Install file-system dependencies

Usage

cd client_mcp
uv run client.py ../file_system/file_system.py

Interactions with the agent:

(Screenshot: example conversation with the AI agent)

๐Ÿ—๏ธ Architecture

MCP-Ollama Server follows a microservices architecture pattern, where each capability is implemented as an independent service.

Key Components:

  1. Ollama Integration Layer: Connects to your local Ollama instance and routes appropriate requests
  2. MCP Protocol Handlers: Translate between standard MCP format and Ollama's requirements
  3. Service Modules: Independent modules that implement specific capabilities
  4. Client Library: Provides a unified interface for applications to interact with the system
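
To make the translation step in the protocol handlers concrete, the sketch below shows one way an MCP tool definition could be mapped to the function schema Ollama's chat API accepts, and how a resulting tool call could be routed back to an MCP session. It assumes a recent ollama Python client with tool-calling support and an initialized ClientSession like the one in the client sketch above; it is not the project's actual handler code:

# Conceptual protocol-handler sketch (illustrative only).
import ollama

def mcp_tool_to_ollama(tool) -> dict:
    """Map an MCP tool (name, description, inputSchema) to Ollama's function format."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description or "",
            "parameters": tool.inputSchema,
        },
    }

async def route_query(session, tools, query: str, model: str = "llama3"):
    """Send a user query to the local model and execute any tool calls via MCP."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": query}],
        tools=[mcp_tool_to_ollama(t) for t in tools],
    )
    for call in response.message.tool_calls or []:
        # Forward the model's tool call to the MCP server and print the result.
        result = await session.call_tool(call.function.name, call.function.arguments)
        print(result.content)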

This architecture provides several benefits:

  • Scalability: Add new modules without affecting existing ones
  • Resilience: System continues functioning even if individual modules fail
  • Flexibility: Deploy only the components you need
  • Security: Granular control over data access for each module

📚 Documentation

Module-Specific Documentation

Each module contains its own README with detailed implementation notes; see the calendar/, client_mcp/, and file_system/ directories.

🛠️ Use Cases

Enterprise Security & Compliance

Ideal for organizations that need AI capabilities but face strict data sovereignty requirements:

  • Legal firms processing confidential case files
  • Healthcare providers analyzing patient data
  • Financial institutions handling sensitive transactions

Developer Productivity

Transform your local development environment:

  • Code generation with access to your project files
  • Automated documentation based on codebase analysis
  • Integration with local git repositories

Personal Knowledge Management

Create a powerful second brain that respects your privacy:

  • Process personal documents and notes
  • Manage calendar and schedule optimization
  • Generate content based on your personal knowledge base

🤝 Contributing

We welcome contributions from the community! Here's how you can help:

  1. Fork the Repository: Create your own fork of the project
  2. Create a Feature Branch: git checkout -b feature/amazing-feature
  3. Make Your Changes: Implement your feature or bug fix
  4. Run Tests: Ensure your changes pass all tests
  5. Commit Changes: git commit -m 'Add some amazing feature'
  6. Push to Branch: git push origin feature/amazing-feature
  7. Open a Pull Request: Submit your changes for review

Please read our contributing guidelines for more details.

โ“ FAQ

Q: How does this differ from using cloud-based AI assistants?
A: MCP-Ollama Server runs entirely on your local infrastructure, ensuring complete data privacy and eliminating dependence on external APIs.

Q: What models are supported?
A: Any model compatible with Ollama can be used. For best results, we recommend Llama 3, Mistral, or other recent open models with at least 7B parameters.

Q: How can I extend the system with new capabilities?
A: Follow the modular architecture pattern to create new service modules. See the module-specific documentation for details.

Q: What are the system requirements?
A: Requirements depend on the Ollama model you choose. For basic functionality, we recommend at least 16GB RAM and a modern multi-core CPU.

📄 License

This project is licensed under the terms included in the LICENSE file.

🙏 Acknowledgements

  • Anthropic for the Model Context Protocol specification
  • Ollama for their excellent local LLM server

MCP-Ollama Server - Bringing cloud-level AI capabilities to your local environment

โญ Star us on GitHub โ€ข ๐Ÿ› Report Bug โ€ข โœจ Request Feature