Ollama MCP Server

A Model Context Protocol (MCP) server that provides tools for interacting with Ollama models. This server enables AI assistants to list, chat with, generate responses from, and manage Ollama models through a standardized protocol.

🚀 Features

  • Model Management: List, pull, and delete Ollama models
  • Chat Interface: Multi-turn conversations with models
  • Text Generation: Single-prompt text generation
  • Dual Transport: Stdio (local) and HTTP (remote) support
  • Railway Ready: Pre-configured for Railway deployment
  • Type Safe: Full TypeScript implementation with strict typing

📋 Prerequisites

  • Node.js 18+
  • Ollama installed and running locally
  • For Railway deployment: Railway CLI

🛠️ Installation

Local Development

  1. Clone and install dependencies:

    git clone <repository-url>
    cd ollama-mcp
    npm install
    
  2. Build the project:

    npm run build
    
  3. Start the server:

    npm start
    

Using with Cursor

Add this to your Cursor MCP configuration (~/.cursor/mcp/config.json):

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/main.js"]
    }
  }
}
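
If your Ollama instance is not at the default URL, most MCP clients (Cursor included) let you pass environment variables to a stdio server. A sketch, assuming your client supports the standard "env" field:

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/main.js"],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434"
      }
    }
  }
}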

Quick setup (note that this overwrites any existing config at that path):

curl -sSL https://raw.githubusercontent.com/your-repo/ollama-mcp/main/config/mcp.config.json -o ~/.cursor/mcp/config.json

🏗️ Architecture

The project is organized for readability and maintainability:

src/
├── main.ts                 # Main entry point
├── config/                 # Configuration management
├── server/                 # Core MCP server
├── tools/                  # MCP tool implementations
├── transports/             # Communication transports
└── ollama-client.ts        # Ollama API client

docs/                       # Comprehensive documentation
config/                     # Configuration files
scripts/                    # Deployment scripts

See the docs/ directory for detailed architecture documentation.
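
To illustrate the Ollama side of the design, here is a minimal, hypothetical sketch of a chat call against Ollama's REST API; the actual src/ollama-client.ts may be structured differently:

// Hypothetical sketch of an Ollama chat call; the real src/ollama-client.ts may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function chat(model: string, messages: ChatMessage[]): Promise<string> {
  const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";
  // Ollama's chat endpoint; stream: false returns a single JSON body instead of a stream
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as { message: ChatMessage };
  return data.message.content;
}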

🔧 Configuration

Environment Variables

Variable                    Description                         Default
MCP_TRANSPORT               Transport type (stdio or http)      stdio
OLLAMA_BASE_URL             Ollama API base URL                 http://localhost:11434
MCP_HTTP_HOST               HTTP server host (HTTP mode)        0.0.0.0
MCP_HTTP_PORT               HTTP server port (HTTP mode)        8080
MCP_HTTP_ALLOWED_ORIGINS    CORS allowed origins (HTTP mode)    None
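
For example, to expose the HTTP transport on a different port and point it at a remote Ollama instance (the host and port below are placeholders):

MCP_TRANSPORT=http MCP_HTTP_PORT=3000 OLLAMA_BASE_URL=http://192.168.1.50:11434 npm start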

Transport Modes

Stdio Transport (Default)

Perfect for local development and direct integration:

npm start

HTTP Transport

Ideal for remote deployment and web-based clients:

MCP_TRANSPORT=http npm start

🚀 Deployment

Railway Deployment

  1. Install Railway CLI:

    npm install -g @railway/cli
    railway login
    
  2. Deploy:

    railway up
    
  3. Add models (optional):

    railway shell
    # Follow instructions in docs/RAILWAY_MODELS_SETUP.md
    

The Railway deployment automatically uses HTTP transport and exposes:

  • MCP Endpoint: https://your-app.railway.app/mcp
  • Health Check: https://your-app.railway.app/healthz

Docker Deployment

# Build the image
npm run docker:build

# Run locally
npm run docker:run

# Deploy to Railway
railway up
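
The npm scripts above wrap plain Docker commands. If you prefer to run the container directly, something along these lines should work; the image tag and port mapping are assumptions based on the defaults in this README:

# Build and run the image directly (ollama-mcp is an assumed tag)
docker build -t ollama-mcp .
docker run --rm -p 8080:8080 \
  -e MCP_TRANSPORT=http \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  ollama-mcp

host.docker.internal lets the container reach an Ollama instance on the host; on Linux you may need to add --add-host=host.docker.internal:host-gateway.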

📚 Available Tools

The server provides 5 MCP tools for Ollama interaction:

  1. ollama_list_models - List available models
  2. ollama_chat - Multi-turn conversations
  3. ollama_generate - Single-prompt generation
  4. ollama_pull_model - Download models
  5. ollama_delete_model - Remove models

See the docs/ directory for detailed API documentation.
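
Over the wire these are ordinary MCP tool calls. As a rough illustration, a JSON-RPC tools/call request for ollama_chat might look like the following; the exact argument schema is defined by the server, so treat the field names below as assumptions:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ollama_chat",
    "arguments": {
      "model": "llama2",
      "messages": [
        { "role": "user", "content": "Hello, how are you?" }
      ]
    }
  }
}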

🧪 Testing

Local Testing

# Test stdio transport
npm start

# Test HTTP transport
MCP_TRANSPORT=http npm start

# Test health check (HTTP mode)
curl http://localhost:8080/healthz

Model Testing

# List available models
ollama list

# Test a model
ollama run llama2 "Hello, how are you?"
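
If a model is not installed locally yet, pull it first (llama2 is just an example name):

# Download the model before running it
ollama pull llama2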

📖 Documentation

  • Detailed system architecture - see the docs/ directory
  • Complete API documentation - see the docs/ directory
  • Model deployment guide - see docs/RAILWAY_MODELS_SETUP.md

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

MIT License - see the repository's license file for details.

🆘 Troubleshooting

Common Issues

"Cannot find module" errors:

npm install
npm run build

Ollama connection issues:

# Check if Ollama is running
ollama list

# Check Ollama service
ollama serve
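
If Ollama is running but the server still cannot connect, check that the URL configured in OLLAMA_BASE_URL (default http://localhost:11434) actually responds:

# List models over the HTTP API to confirm connectivity
curl http://localhost:11434/api/tags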

Railway deployment issues:

# Check Railway logs
railway logs

# Verify environment variables
railway variables

Getting Help

  • Check the documentation in the docs/ directory
  • Review the Common Issues section above
  • Open an issue on GitHub

Built with ❤️ for the AI community