Ollama-mcp

A powerful bridge between Ollama and the Model Context Protocol (MCP), enabling seamless integration of Ollama's local LLM capabilities into your MCP-powered applications.

The Ollama MCP Server exposes Ollama's local LLM capabilities through a clean MCP interface, letting developers integrate locally run models into their MCP-powered applications. It provides full API coverage of Ollama's functionality and can serve as a drop-in replacement for OpenAI's chat completion API, while keeping inference on your own machine for full control and privacy. The server covers model management (pulling, pushing, listing, creating, copying, and removing models), model execution with customizable prompts and configurable parameters such as temperature and timeout, and server control features for starting and managing the Ollama server, viewing detailed model information, and handling errors and timeouts.
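To make the integration concrete, here is a minimal sketch of how such a server is typically registered with an MCP host that uses an mcpServers configuration (for example, Claude Desktop). The package name, command, and environment variable below are illustrative assumptions, not taken from this listing:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["ollama-mcp"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434"
      }
    }
  }
}
```

With an entry like this in place, the host launches the server over stdio, and the tools it advertises become available to the model.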

Features

  • Complete Ollama Integration: Access all essential Ollama functionality through a clean MCP interface, serving as a drop-in replacement for OpenAI's chat completion API.
  • Local LLM Power: Run AI models locally with full control and privacy.
  • Model Management: Pull, push, list, create, copy, and remove models.
  • Model Execution: Run models with customizable prompts and configurable parameters such as temperature and timeout (see the sketch after this list).
  • Server Control: Start and manage the Ollama server, view detailed model information, and handle errors and timeouts.
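
Since this listing doesn't document the server's exact tool names, the sketch below shows a hypothetical client session in TypeScript using the official @modelcontextprotocol/sdk: it connects over stdio, lists whatever tools the server actually exposes, and then issues an assumed `run` call with the prompt, temperature, and timeout parameters described above. The tool name and argument names are illustrative assumptions; only the SDK calls themselves are standard.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the server over stdio; the command and args are placeholders --
  // point them at however ollama-mcp is installed on your machine.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["ollama-mcp"],
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Discover the tools the server actually exposes; the tool and
  // parameter names used below are assumptions, so check this list first.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Hypothetical model-execution call with the configurable parameters
  // the listing mentions (prompt, temperature, timeout).
  const result = await client.callTool({
    name: "run",
    arguments: {
      name: "llama3.2",
      prompt: "Explain the Model Context Protocol in one sentence.",
      temperature: 0.7,
      timeout: 60000,
    },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```

Calling listTools first is the reliable way to discover the real tool and parameter names before hard-coding anything against this server.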