Ollama MCP Demo
This demo shows how to create a custom MCP server that exposes custom Python functions as tools. It also demonstrates how a custom MCP client class can be used to integrate the MCP server with Ollama. Both the MCP server and Ollama are independent and can run on different machines.
Add Tools to your MCP Server
To add new tools to the MCP server, create Python functions in mcp_server/tools. Once created, add them to the TOOLS tuple in mcp_server/__main__.py, which is used to register them with the MCP server.
# mcp_server/__main__.py
from mcp_server.tools import echo
...
SERVER = FastMCP(name="custom-mcp-server", **SERVER_CONFIG)
TOOLS = (echo,) # add functions here
...
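Tools can be plain Python functions; FastMCP typically derives each tool's schema from the function signature, type hints, and docstring. The echo tool in the repository is the reference implementation; a minimal sketch might look like this (the file path, parameter name, and body are assumptions):

# mcp_server/tools/echo.py (assumed layout)
def echo(message: str) -> str:
    """Echo the given message back to the caller."""
    return message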
Running the MCP Server
- (Optional) Configure the environment in the docker-compose.yml file (ports, Ollama configs, ...).
- Start the services. This spins up an Ollama instance and the MCP server.
  docker compose up -d
- (Optional) To download Ollama models once the containers are running, use
  docker compose exec ollama ollama pull <your model>
The Ollama server can then be accessed at http://localhost:11434 and the MCP server at http://localhost:7777/mcp (adjust the ports to match your configuration).
MCP Client Usage
Install the dependencies used for the MCP client:
uv sync
You can then use the mcp_client.client.MCPClient class to communicate with the MCP server like this:
from mcp_client.client import MCPClient
mcp_client = MCPClient(host="localhost", port=7777)
# list available tools
tools = await mcp_client.list_tools()
...
# call a tool
result = await mcp_client.call_tool(tool_name="some_tool", arguments={"some_arg": "value"})
...
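Note that list_tools and call_tool are coroutines, so the snippet above has to run inside an async function. A minimal runnable wrapper could look like this (the tool name and argument are placeholders):

import asyncio

from mcp_client.client import MCPClient

async def main() -> None:
    mcp_client = MCPClient(host="localhost", port=7777)

    # list the tools exposed by the MCP server
    tools = await mcp_client.list_tools()
    print(tools)

    # call one of them (replace the name and arguments with a real tool)
    result = await mcp_client.call_tool(tool_name="echo", arguments={"message": "Hi, Alice!"})
    print(result)

asyncio.run(main())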
Integrating it with Ollama can be done like so:
from mcp_client.client import MCPClient
from ollama import Client as OllamaClient
mcp_client = MCPClient(host="localhost", port=7777)
ollama_client = OllamaClient("http://localhost:11434")
# invoke llm
response = ollama_client.chat(
    model="qwen3:4b",
    messages=[{"role": "user", "content": "Echo this message 'Hi, Alice!'"}],
    tools=await mcp_client.list_tools(),
)
print(response.message.content)
# handle tool calls
if tool_calls := response.message.tool_calls:
    for tool_call in tool_calls:
        tool_name = tool_call.function.name
        arguments = tool_call.function.arguments
        print("Calling", tool_name, "with arguments", arguments)
        tool_result = await mcp_client.call_tool(tool_name, arguments)
        print("Result: ", tool_result)