MCP-Ollama Server
Connect the power of Model Context Protocol with local LLMs
Getting Started • Features • Architecture • Documentation • Contributing • FAQ
Overview
MCP-Ollama Server bridges the gap between Anthropic's Model Context Protocol (MCP) and local LLMs via Ollama. This integration empowers your on-premise AI models with Claude-like tool capabilities, including file system access, calendar integration, web browsing, email communication, GitHub interactions, and AI image generation, all while maintaining complete data privacy.
Unlike cloud-based AI solutions, MCP-Ollama Server:
- Keeps all data processing on your local infrastructure
- Eliminates the need to share sensitive information with third parties
- Provides a modular approach that allows you to use only the components you need
- Enables enterprise-grade AI capabilities in air-gapped or high-security environments
Key Features
- Complete Data Privacy: All computations happen locally through Ollama
- Tool Use for Local LLMs: Extends Ollama models with file, calendar, and other capabilities
- Modular Architecture: Independent Python service modules that can be deployed selectively
- Easy Integration: Simple APIs to connect with existing applications
- Performance Optimized: Minimal overhead to maintain responsive AI interactions
- Containerized Deployment: Docker support for each module (coming soon)
- Extensive Testing: Comprehensive test coverage for reliability
Quick Start
Prerequisites
- Python 3.8+ installed
- Ollama set up on your system
- Git for cloning the repository
Component Overview
MCP-Ollama Server is organized into specialized modules, each providing specific functionality:
Calendar Module
calendar/
├── README.md          # Module-specific documentation
├── google_calendar.py # Google Calendar API integration
├── pyproject.toml     # Dependencies and package info
└── uv.lock            # Dependency lock file
The Calendar module enables your local LLM to (see the sketch after this list):
- Create, modify, and delete calendar events
- Check availability and scheduling conflicts
- Send meeting invitations
- Set reminders and notifications
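For illustration, here is a minimal, hedged sketch of how a calendar action could be exposed as an MCP tool using the FastMCP helper from the official mcp Python SDK. The tool name, parameters, and stubbed body are assumptions for this example; google_calendar.py may be structured differently, with the real implementation calling the Google Calendar API inside the tool.
# Hypothetical calendar tool served over MCP (sketch only; not the
# repository's actual google_calendar.py).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar")

@mcp.tool()
def create_event(title: str, start_iso: str, end_iso: str) -> str:
    """Create a calendar event and return a confirmation message."""
    # The real module would call the Google Calendar API here using
    # stored credentials instead of returning a stub.
    return f"Created '{title}' from {start_iso} to {end_iso}"

if __name__ == "__main__":
    # Serve over stdio so the MCP client can launch this module as a subprocess.
    mcp.run(transport="stdio")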
Client MCP Module
client_mcp/
├── README.md      # Module-specific documentation
├── client.py      # Main client implementation
├── pyproject.toml # Dependencies and package info
├── testing.txt    # Test data
└── uv.lock        # Dependency lock file
The Client module provides (see the routing sketch after this list):
- A unified interface to interact with all MCP-enabled services
- Conversation history management
- Context handling for improved responses
- Tool selection and routing logic
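Conceptually, the tool selection and routing step can be pictured with the ollama Python package's tool-calling interface, as in the hedged sketch below (assumes ollama >= 0.4 and a tool-capable model). The read_file tool schema and the direct file read are placeholders; the actual client.py forwards such calls to the MCP service modules and manages the full conversation history.
# Conceptual sketch of tool routing with Ollama's chat API; illustrative
# only, the bundled client.py may differ.
import ollama

READ_FILE_TOOL = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a text file from the local file system",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

messages = [{"role": "user", "content": "Summarize notes.txt for me"}]
response = ollama.chat(model="llama3", messages=messages, tools=[READ_FILE_TOOL])

# If the model requested a tool, execute it (here: a local stand-in for the
# MCP file-system call), append the result, and let the model finish.
for call in response.message.tool_calls or []:
    if call.function.name == "read_file":
        with open(call.function.arguments["path"]) as fh:
            messages.append({"role": "tool", "content": fh.read()})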
File System Module
file_system/
├── README.md      # Module-specific documentation
├── file_system.py # File system operations implementation
├── pyproject.toml # Dependencies and package info
└── uv.lock        # Dependency lock file
The File System module allows your local LLM to (see the sketch after this list):
- Read and write files securely
- List directory contents
- Search for files matching specific patterns
- Parse different file formats (text, CSV, JSON, etc.)
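As with the calendar module, the shape of these capabilities can be sketched as MCP tools. The sketch again assumes FastMCP from the mcp SDK; the tool names and the base-directory confinement policy are illustrative, and the actual file_system.py may differ.
# Hypothetical file-system tools over MCP (sketch only).
from pathlib import Path
from mcp.server.fastmcp import FastMCP

BASE_DIR = Path.home().resolve()  # illustrative confinement policy
mcp = FastMCP("file_system")

def _safe(path: str) -> Path:
    """Resolve a path and reject anything that escapes BASE_DIR."""
    resolved = (BASE_DIR / path).resolve()
    resolved.relative_to(BASE_DIR)  # raises ValueError if outside BASE_DIR
    return resolved

@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a text file under the base directory."""
    return _safe(path).read_text()

@mcp.tool()
def list_directory(path: str = ".") -> list[str]:
    """List the entries of a directory under the base directory."""
    return sorted(entry.name for entry in _safe(path).iterdir())

if __name__ == "__main__":
    mcp.run(transport="stdio")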
Installation
# 1. First install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# 2. Clone the repository
git clone https://github.com/sethuram2003/mcp-ollama_server.git
cd mcp-ollama_server
# 3. Pull the Ollama model you want to use (replace 'llama3' with your preferred model)
ollama pull llama3
Module Configuration
- Calendar Module:
cd calendar
uv sync # Install calendar-specific dependencies
- Client MCP Module:
cd client_mcp
uv sync # Install client-specific dependencies
- File System Module:
cd file_system
uv sync # Install file system dependencies
Usage
cd client_mcp
uv run client.py ../file_system/file_system.py
Interactions with the agent:
[Screenshot: example conversation with the AI agent]
Architecture
MCP-Ollama Server follows a microservices architecture pattern, where each capability is implemented as an independent service (a client-side sketch closes this section):
Key Components:
- Ollama Integration Layer: Connects to your local Ollama instance and routes appropriate requests
- MCP Protocol Handlers: Translate between standard MCP format and Ollama's requirements
- Service Modules: Independent modules that implement specific capabilities
- Client Library: Provides a unified interface for applications to interact with the system
This architecture provides several benefits:
- Scalability: Add new modules without affecting existing ones
- Resilience: System continues functioning even if individual modules fail
- Flexibility: Deploy only the components you need
- Security: Granular control over data access for each module
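To make the flow concrete, the hedged sketch below shows the client side of this design: launching one service module over MCP's stdio transport, discovering its tools, and invoking one. It assumes the official mcp Python SDK; the tool name and arguments are placeholders, and the real client.py additionally wires the discovered tools into the Ollama integration layer.
# Hedged sketch of a client talking to one service module over MCP stdio.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the file-system module as a subprocess speaking MCP over stdio.
    params = StdioServerParameters(command="python", args=["../file_system/file_system.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover the tools the module exposes, then invoke one of them.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # "read_file" and its arguments are illustrative placeholders.
            result = await session.call_tool("read_file", {"path": "README.md"})
            print(result.content)

asyncio.run(main())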
Documentation
Module-Specific Documentation
Each module contains its own README with detailed implementation notes.
Use Cases
Enterprise Security & Compliance
Ideal for organizations that need AI capabilities but face strict data sovereignty requirements:
- Legal firms processing confidential case files
- Healthcare providers analyzing patient data
- Financial institutions handling sensitive transactions
Developer Productivity
Transform your local development environment:
- Code generation with access to your project files
- Automated documentation based on codebase analysis
- Integration with local git repositories
Personal Knowledge Management
Create a powerful second brain that respects your privacy:
- Process personal documents and notes
- Manage calendar and schedule optimization
- Generate content based on your personal knowledge base
Contributing
We welcome contributions from the community! Here's how you can help:
- Fork the Repository: Create your own fork of the project
- Create a Feature Branch:
git checkout -b feature/amazing-feature
- Make Your Changes: Implement your feature or bug fix
- Run Tests: Ensure your changes pass all tests
- Commit Changes:
git commit -m 'Add some amazing feature'
- Push to Branch:
git push origin feature/amazing-feature
- Open a Pull Request: Submit your changes for review
Please read our contributing guidelines for more details.
FAQ
Q: How does this differ from using cloud-based AI assistants?
A: MCP-Ollama Server runs entirely on your local infrastructure, ensuring complete data privacy and eliminating dependence on external APIs.
Q: What models are supported?
A: Any model compatible with Ollama can be used. For best results, we recommend Llama 3, Mistral, or other recent open models with at least 7B parameters.
Q: How can I extend the system with new capabilities?
A: Follow the modular architecture pattern to create a new service module; the existing modules and their READMEs are the best reference. A minimal sketch follows.
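As a hedged illustration of that pattern, a new module is just a directory with its own pyproject.toml and a script that serves MCP tools over stdio. Everything below (the weather module name, tool, and stubbed body) is hypothetical.
# weather/weather.py -- hypothetical new service module (sketch only).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def current_weather(city: str) -> str:
    """Return a short weather summary for a city (stubbed in this sketch)."""
    return f"Weather lookup for {city} would happen here."

if __name__ == "__main__":
    mcp.run(transport="stdio")
You would then run the client against it the same way as the bundled modules, e.g. uv run client.py ../weather/weather.py, assuming the client accepts any MCP server script path.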
Q: What are the system requirements?
A: Requirements depend on the Ollama model you choose. For basic functionality, we recommend at least 16GB RAM and a modern multi-core CPU.
License
This project is licensed under the terms included in the LICENSE file.
Acknowledgements
MCP-Ollama Server - Bringing cloud-level AI capabilities to your local environment
Star us on GitHub • Report Bug • Request Feature