etnlbck/ollama-mcp
Ollama MCP Server
A Model Context Protocol (MCP) server that provides tools for interacting with Ollama models. This server enables AI assistants to list, chat with, generate responses from, and manage Ollama models through a standardized protocol.
🚀 Features
- Model Management: List, pull, and delete Ollama models
- Chat Interface: Multi-turn conversations with models
- Text Generation: Single-prompt text generation
- Dual Transport: Stdio (local) and HTTP (remote) support
- Railway Ready: Pre-configured for Railway deployment
- Type Safe: Full TypeScript implementation with strict typing
📋 Prerequisites
- Node.js 18+
- Ollama installed and running locally
- For Railway deployment: Railway CLI
🛠️ Installation
Local Development
1. Clone and install dependencies:
   git clone <repository-url>
   cd ollama-mcp
   npm install
2. Build the project:
   npm run build
3. Start the server:
   npm start
Using with Cursor
Add this to your Cursor MCP configuration (~/.cursor/mcp/config.json):
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/main.js"]
    }
  }
}
Quick setup:
curl -sSL https://raw.githubusercontent.com/your-repo/ollama-mcp/main/config/mcp.config.json -o ~/.cursor/mcp/config.json
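Before pointing Cursor at the server, it can help to confirm the built entry point starts on its own (the path below is the same example path used in the config above):
# Run the built server directly; a stdio MCP server typically waits for messages on stdin (Ctrl+C to exit)
node /path/to/ollama-mcp/dist/main.js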
🏗️ Architecture
The project is structured for maximum readability and maintainability:
src/
├── main.ts # Main entry point
├── config/ # Configuration management
├── server/ # Core MCP server
├── tools/ # MCP tool implementations
├── transports/ # Communication transports
└── ollama-client.ts # Ollama API client
docs/ # Comprehensive documentation
config/ # Configuration files
scripts/ # Deployment scripts
See the docs/ directory for detailed architecture documentation.
🔧 Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| MCP_TRANSPORT | Transport type (stdio or http) | stdio |
| OLLAMA_BASE_URL | Ollama API base URL | http://localhost:11434 |
| MCP_HTTP_HOST | HTTP server host (HTTP mode) | 0.0.0.0 |
| MCP_HTTP_PORT | HTTP server port (HTTP mode) | 8080 |
| MCP_HTTP_ALLOWED_ORIGINS | CORS allowed origins (HTTP mode) | None |
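For example, to point the server at a different Ollama instance and expose it over HTTP on a non-default port, the variables from the table can be combined on the command line (the URL and port below are illustrative):
# Illustrative values; substitute your own Ollama host and port
OLLAMA_BASE_URL=http://ollama.internal:11434 \
MCP_TRANSPORT=http \
MCP_HTTP_PORT=9090 \
npm start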
Transport Modes
Stdio Transport (Default)
Perfect for local development and direct integration:
npm start
HTTP Transport
Ideal for remote deployment and web-based clients:
MCP_TRANSPORT=http npm start
🚀 Deployment
Railway Deployment
1. Install Railway CLI:
   npm install -g @railway/cli
   railway login
2. Deploy:
   railway up
3. Add models (optional):
   railway shell
   # Follow instructions in docs/RAILWAY_MODELS_SETUP.md
The Railway deployment automatically uses HTTP transport and exposes:
- MCP Endpoint: https://your-app.railway.app/mcp
- Health Check: https://your-app.railway.app/healthz
Docker Deployment
# Build the image
npm run docker:build
# Run locally
npm run docker:run
# Deploy to Railway
railway up
📚 Available Tools
The server provides 5 MCP tools for Ollama interaction:
- ollama_list_models - List available models
- ollama_chat - Multi-turn conversations
- ollama_generate - Single-prompt generation
- ollama_pull_model - Download models
- ollama_delete_model - Remove models
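As a rough illustration of what a tool invocation looks like on the wire in HTTP mode, the sketch below sends a JSON-RPC tools/call request for ollama_chat to the local MCP endpoint. The argument names (model, messages) are assumptions based on typical Ollama chat APIs rather than this server's documented schema, and depending on the transport implementation an MCP initialize handshake may be required first.
# Hypothetical tools/call request; the argument schema is an assumption, not the documented API
curl -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "ollama_chat",
      "arguments": {
        "model": "llama2",
        "messages": [{ "role": "user", "content": "Hello!" }]
      }
    }
  }'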
See the docs/ directory for detailed API documentation.
🧪 Testing
Local Testing
# Test stdio transport
npm start
# Test HTTP transport
MCP_TRANSPORT=http npm start
# Test health check (HTTP mode)
curl http://localhost:8080/healthz
Model Testing
# List available models
ollama list
# Test a model
ollama run llama2 "Hello, how are you?"
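If llama2 (the example model used above) is not installed yet, pull it before running the test:
# Download the example model used above
ollama pull llama2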
📖 Documentation
- Detailed system architecture (in the docs/ directory)
- Complete API documentation (in the docs/ directory)
- docs/RAILWAY_MODELS_SETUP.md - Model deployment guide
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
📄 License
MIT License - see the LICENSE file for details.
🆘 Troubleshooting
Common Issues
"Cannot find module" errors:
npm install
npm run build
Ollama connection issues:
# Check if Ollama is running
ollama list
# Check Ollama service
ollama serve
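If the CLI responds but the MCP server still cannot connect, querying the Ollama HTTP API directly at the configured OLLAMA_BASE_URL can help isolate the problem (the URL below is the default from the configuration table):
# Ollama's REST API lists installed models at /api/tags
curl http://localhost:11434/api/tags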
Railway deployment issues:
# Check Railway logs
railway logs
# Verify environment variables
railway variables
Getting Help
- Check the documentation in the docs/ directory
- Review the common issues above
- Open an issue on GitHub
Built with ❤️ for the AI community