Ollama Hive MCP Server
A TypeScript-based Model Context Protocol (MCP) server that integrates with Ollama models using LangChain. This server provides tools and resources for querying local language models through a standardized MCP interface, with model-specific MCP server configurations.
Features
- Model Management: Configure and manage multiple Ollama models
- Model-Specific MCPs: Each model can have its own set of MCP servers
- Session Management: Conversation sessions for context continuity across multiple queries
- Environment Variable Support: Override endpoints and configuration via environment variables
- LangChain Integration: Leverage LangChain for model interactions
- Environment-based Configuration: Load configuration from MCP_CONFIG_PATH
- MCP Tools: Query models, test connectivity, and list available models
- MCP Resources: Access model configurations and metadata
- Stdio Transport: Standard input/output communication for seamless integration
- Pre-configured Models: Define models with their specific endpoints and settings
- Model Pre-loading: All models are loaded at startup for quick responses
- Performance Monitoring: Built-in response time tracking and status monitoring
Quick Start with NPX
Run the server directly with npx (no installation required):
# Basic usage
npx ollama-hive-mcp-server
# With custom configuration
MCP_CONFIG_PATH=./my-config.json npx ollama-hive-mcp-server
# With environment overrides
OLLAMA_ENDPOINT=http://localhost:11434 MCP_CONFIG_PATH=./config.json npx ollama-hive-mcp-server
Simplified Interface
The server provides a clean, simplified interface:
Tools Available
- query_model - Query pre-configured models with optional session context
- test_model - Test model connectivity
- get_model_info - Get model details and MCP info
- list_models - List all configured models
- get_config_summary - Configuration overview
- get_loading_status - Model loading status
- create_session - Create conversation sessions for context continuity
- list_sessions - List all active conversation sessions
- get_session - Get detailed session information and message history
- delete_session - Delete specific conversation sessions
- clear_sessions - Clear all conversation sessions
Example Usage
Query the default model:
{
  "tool": "query_model",
  "arguments": {
    "prompt": "Hello, world!"
  }
}
Query a specific model:
{
  "tool": "query_model",
  "arguments": {
    "model": "llama3.2",
    "prompt": "Write a Python function to calculate fibonacci"
  }
}
Start a new session automatically while querying:
{
  "tool": "query_model",
  "arguments": {
    "prompt": "Explain quantum computing",
    "createSession": true
  }
}
Session-Based Conversations
Create a session:
{
  "tool": "create_session",
  "arguments": {
    "modelName": "llama3.2",
    "title": "Coding Session"
  }
}
Query within the session:
{
  "tool": "query_model",
  "arguments": {
    "sessionId": "your-session-id",
    "prompt": "Write a Python function to calculate fibonacci"
  }
}
Continue the conversation in the same session:
{
  "tool": "query_model",
  "arguments": {
    "sessionId": "your-session-id",
    "prompt": "Now add error handling to that function"
  }
}
Models use their pre-configured settings (temperature, endpoint, MCP servers) automatically. Sessions maintain conversation context for natural, multi-turn interactions.
Documentation
- Complete guide to using conversation sessions
- Comprehensive testing and validation instructions
- MCP-compatible logging system documentation
Prerequisites
- Node.js 18+
- Ollama running locally (default: http://localhost:11434)
- TypeScript knowledge for customization
Installation
- Clone the repository:
git clone <repository-url>
cd ollama-hive
- Install dependencies:
npm install
- Build the project:
npm run build
Configuration
Environment Variables
Set the following environment variables to configure the server:
# Required: Path to your MCP configuration file
export MCP_CONFIG_PATH=/path/to/your/mcp-config.json
# Optional: Override Ollama endpoint for all models
export OLLAMA_ENDPOINT=http://localhost:11434
# Optional: Enable debug logging
export DEBUG=true
# Optional: Set Node environment
export NODE_ENV=production
Configuration File Format
Create a JSON configuration file with the following structure:
{
  "globalEndpoint": "http://localhost:11434",
  "models": [
    {
      "name": "llama3.2",
      "endpoint": "http://custom-endpoint:11434",
      "model": "llama3.2:latest",
      "temperature": 0.7,
      "maxTokens": 2048,
      "description": "Llama 3.2 model for general purpose tasks",
      "mcps": [
        {
          "name": "filesystem-server",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
          "description": "Filesystem access MCP server",
          "env": {
            "NODE_ENV": "production"
          }
        }
      ]
    }
  ],
  "defaultModel": "llama3.2"
}
Configuration Fields
Global Configuration:
- globalEndpoint: Default endpoint for all models (optional)
- defaultModel: Name of the default model to use (optional)
Model Configuration:
- name: Unique identifier for the model (required)
- endpoint: Model-specific endpoint override (optional)
- model: Ollama model name (required, e.g., llama3.2:latest)
- temperature: Response generation temperature 0.0-2.0 (optional, default: 0.7)
- maxTokens: Maximum tokens to generate (optional)
- description: Human-readable description (optional)
- mcps: Array of MCP servers for this model (optional, default: [])
MCP Server Configuration:
- name: Unique identifier for the MCP server (required)
- command: Command to execute the server (required)
- args: Command line arguments (optional)
- env: Environment variables for the server (optional)
- description: Human-readable description (optional)
Endpoint Resolution Priority
The server resolves endpoints in the following priority order:
1. OLLAMA_ENDPOINT environment variable (highest priority)
2. Model-specific endpoint configuration
3. Global globalEndpoint configuration
4. Default: http://localhost:11434 (lowest priority)
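As a rough sketch of this rule (illustrative code only, not the server's actual implementation; the helper name and parameter shapes are made up):
// Illustrative sketch only: the precedence mirrors the list above.
function resolveEndpoint(
  model: { endpoint?: string },
  config: { globalEndpoint?: string }
): string {
  return (
    process.env.OLLAMA_ENDPOINT ??   // 1. environment variable (highest priority)
    model.endpoint ??                // 2. model-specific endpoint
    config.globalEndpoint ??         // 3. global endpoint
    "http://localhost:11434"         // 4. built-in default (lowest priority)
  );
}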
Example Configuration
See config/example-mcp-config.json for a complete example with multiple models and their associated MCP servers.
Usage
Starting the Server
# Set configuration path
export MCP_CONFIG_PATH=./config/example-mcp-config.json
# Optional: Override Ollama endpoint
export OLLAMA_ENDPOINT=http://localhost:11434
# Start the server
npm start
Development Mode
npm run dev
Using MCP Inspector
Test your server with the MCP Inspector:
npm run inspector
Available Tools
query_model
Query a configured Ollama model with a prompt.
Parameters:
- prompt (required): The prompt to send to the model
- model (optional): Model name (uses default if not specified)
- sessionId (optional): Session ID for conversation context
- createSession (optional): Auto-create a new session if no sessionId provided (enables automatic conversation continuity)
Example:
{
  "model": "llama3.2",
  "prompt": "Explain the concept of Model Context Protocol",
  "createSession": true
}
Session Examples:
// Auto-create session for conversation continuity
{
  "prompt": "Write a Python function",
  "createSession": true
}
// Use existing session
{
  "prompt": "Now add error handling to that function",
  "sessionId": "existing-session-id"
}
test_model
Test connectivity to a specific model and its MCP connections.
Parameters:
- model (required): Name of the model to test
Example:
{
  "model": "llama3.2"
}
get_model_info
Get detailed information about a model and its MCP servers.
Parameters:
- model (required): Name of the model to get info for
Example:
{
  "model": "codellama"
}
list_models
List all available configured models with their MCP counts.
Parameters: None
get_config_summary
Get configuration summary including environment overrides.
Parameters: None
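Both list_models and get_config_summary take no parameters, so a call is just the tool name with an empty arguments object, for example:
{
  "tool": "list_models",
  "arguments": {}
}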
Available Resources
model://list
Returns a list of all configured models with resolved endpoints and availability status.
model://default
Returns the default model configuration.
model://{name}
Returns configuration for a specific model by name.
config://summary
Returns configuration summary with environment overrides.
config://environment
Returns current environment configuration.
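If you want to script against the server rather than use a desktop client, the tools and resources above can be exercised with the MCP TypeScript SDK. The snippet below is only a sketch assuming the current @modelcontextprotocol/sdk client API and a config file at ./config/example-mcp-config.json; adjust the command, paths, and environment to your setup.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server over stdio; PATH is forwarded so npx can be resolved,
  // and MCP_CONFIG_PATH points it at a local configuration file.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["ollama-hive-mcp-server"],
    env: {
      MCP_CONFIG_PATH: "./config/example-mcp-config.json",
      PATH: process.env.PATH ?? "",
    },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Read the model list resource, then run a simple query through a tool call.
  const models = await client.readResource({ uri: "model://list" });
  const answer = await client.callTool({
    name: "query_model",
    arguments: { prompt: "Hello from an MCP client" },
  });

  console.log(models, answer);
  await client.close();
}

main().catch(console.error);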
Integration Examples
With Claude Desktop
Add to your Claude Desktop configuration:
{
  "mcpServers": {
    "ollama-hive": {
      "command": "node",
      "args": ["/path/to/ollama-hive/dist/index.js"],
      "env": {
        "MCP_CONFIG_PATH": "/path/to/your/config.json",
        "OLLAMA_ENDPOINT": "http://localhost:11434"
      }
    }
  }
}
With Cursor IDE
Configure in Cursor's MCP settings:
{
  "servers": {
    "ollama-hive": {
      "command": "node",
      "args": ["/path/to/ollama-hive/dist/index.js"],
      "env": {
        "MCP_CONFIG_PATH": "/path/to/your/config.json",
        "OLLAMA_ENDPOINT": "http://localhost:11434"
      }
    }
  }
}
Environment File Setup
Create a .env file in your project root:
MCP_CONFIG_PATH=./config/example-mcp-config.json
OLLAMA_ENDPOINT=http://localhost:11434
DEBUG=false
NODE_ENV=development
Model-Specific MCPs
Each model can have its own set of MCP servers, allowing for specialized toolsets:
- General Purpose Model: Filesystem, web search, memory tools
- Code Model: GitHub integration, filesystem for code, documentation tools
- Research Model: Web search, academic databases, reference tools
- Creative Model: Image generation, creative writing tools
This allows you to tailor the available tools to each model's intended use case.
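As a rough illustration, a configuration might pair a general-purpose model with a filesystem server and a code-focused model with a GitHub server. The MCP server packages and the token below are placeholders; substitute the servers you actually use:
{
  "models": [
    {
      "name": "llama3.2",
      "model": "llama3.2:latest",
      "description": "General purpose assistant",
      "mcps": [
        {
          "name": "filesystem-server",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"]
        }
      ]
    },
    {
      "name": "codellama",
      "model": "codellama:latest",
      "description": "Code-focused assistant",
      "mcps": [
        {
          "name": "github-server",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
        }
      ]
    }
  ],
  "defaultModel": "llama3.2"
}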
Error Handling
The server includes comprehensive error handling:
- Configuration Errors: Invalid JSON or missing required fields
- Environment Errors: Missing or invalid environment variables
- Model Errors: Connection failures or invalid model names
- MCP Errors: MCP server connection or tool execution failures
- Tool Errors: Invalid parameters or execution failures
All errors are returned in a standardized format with detailed error messages.
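As a rough idea of the shape, MCP tool results generally flag failures with an isError field and a human-readable text block; the message text below is illustrative, not taken from the server:
{
  "isError": true,
  "content": [
    {
      "type": "text",
      "text": "Model 'unknown-model' not found in configuration"
    }
  ]
}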
Development
Project Structure
src/
├── index.ts              # Main server implementation
├── config.ts             # Configuration loader with environment support
├── langchain-manager.ts  # LangChain model management with MCP integration
└── types.ts              # TypeScript type definitions
config/
└── example-mcp-config.json  # Example configuration
Building
npm run build
Watching for Changes
npm run watch
Type Checking
The project uses strict TypeScript settings with comprehensive type safety.
Security Considerations
- Model Access: Only configured models are accessible
- MCP Isolation: Each model's MCPs are isolated from others
- Configuration Validation: All configuration is validated using Zod schemas (see the sketch after this list)
- Environment Variable Security: Sensitive data can be passed via environment variables
- Error Sanitization: Error messages are sanitized to prevent information leakage
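As a rough idea of what that validation could look like (illustrative only; the project's actual Zod schemas may differ in names and constraints):
import { z } from "zod";

// Illustrative only: mirrors the configuration fields documented above.
const McpServerSchema = z.object({
  name: z.string(),
  command: z.string(),
  args: z.array(z.string()).optional(),
  env: z.record(z.string(), z.string()).optional(),
  description: z.string().optional(),
});

const ModelSchema = z.object({
  name: z.string(),
  endpoint: z.string().optional(),
  model: z.string(),
  temperature: z.number().min(0).max(2).optional(),
  maxTokens: z.number().int().positive().optional(),
  description: z.string().optional(),
  mcps: z.array(McpServerSchema).default([]),
});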
Performance
- Connection Reuse: Model connections are reused across requests
- Lazy Loading: Models and MCPs are initialized only when first accessed
- Environment Caching: Environment configuration is cached at startup
- Memory Management: Efficient memory usage with proper cleanup
Troubleshooting
Common Issues
- "Configuration not loaded" Error
  - Ensure MCP_CONFIG_PATH is set correctly
  - Verify the configuration file exists and is valid JSON
- "Model not found" Error
  - Check that Ollama is running on the configured endpoint
  - Verify the model name matches the Ollama model exactly
  - Check endpoint resolution order
- Connection Timeout
  - Ensure the Ollama server is accessible
  - Check firewall settings if using remote endpoints
  - Verify the OLLAMA_ENDPOINT environment variable if set
- MCP Server Connection Issues
  - Check MCP server command and arguments
  - Verify environment variables for MCP servers
  - Check that MCP packages are installed
Debug Mode
Enable debug logging:
export DEBUG=true
npm start
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
License
MIT License - see LICENSE file for details.
Support
For questions and support:
- Create an issue in the repository
- Check the MCP documentation
- Review the LangChain documentation