MCP Ollama Server
A Model Context Protocol (MCP) server that provides integration with local Ollama LLM instances.
Features
- Text Generation: Generate text using any Ollama model
- Chat Interface: Have conversations with LLMs
- Model Management: List and switch between available models
- Server Status: Monitor Ollama server health
- Resource Access: Access server status and model information as resources
Prerequisites
- Ollama: Install Ollama on your system
  # On macOS
  brew install ollama
  # On Linux
  curl -fsSL https://ollama.ai/install.sh | sh
- Node.js: Version 18 or higher
- Pull a model: Download at least one model in Ollama
  ollama pull llama2
  # or: ollama pull codellama
  # or: ollama pull mistral
Installation
- Clone or create the project directory
- Install dependencies:
  npm install
- Build the TypeScript:
  npm run build
Usage
Starting the Server
npm start
The server communicates via stdio and is designed to be used with MCP-compatible clients.
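For example, an MCP client such as Claude Desktop can launch the server by registering it in the client's MCP configuration; the entry below is illustrative, and the path is a placeholder for wherever you built the project:

{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/mcp-ollama-server/dist/index.js"]
    }
  }
}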
Available Tools
- generate_text
  - Generate text from a prompt
  - Parameters: prompt (required), model (optional), temperature (optional), max_tokens (optional)
- chat
  - Have a conversation with the LLM
  - Parameters: messages (array of {role, content}), model (optional)
- list_models
  - List all available Ollama models
  - No parameters required
Available Resources
- ollama://status - Current server status and configuration
- ollama://models - Detailed list of available models
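Resources are read with the standard MCP resources/read request; for example, to fetch the status resource (same request format as the tool examples below):

{
  "method": "resources/read",
  "params": {
    "uri": "ollama://status"
  }
}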
Configuration
You can customize the server by modifying the constructor parameters in src/index.ts:
// Change Ollama URL (default: http://localhost:11434)
// Change default model (default: llama2)
const server = new OllamaMCPServer('http://localhost:11434', 'your-preferred-model');
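For instance, pointing the server at a remote Ollama host or a different default model only requires changing those two arguments (the values below are illustrative):

const server = new OllamaMCPServer('http://192.168.1.50:11434', 'mistral');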
Example Usage with MCP Client
// Generate text
{
"method": "tools/call",
"params": {
"name": "generate_text",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"model": "llama2",
"temperature": 0.7
}
}
}
// Chat conversation
{
"method": "tools/call",
"params": {
"name": "chat",
"arguments": {
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"model": "llama2"
}
}
}
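A list_models call follows the same pattern and takes no arguments:

// List available models
{
  "method": "tools/call",
  "params": {
    "name": "list_models",
    "arguments": {}
  }
}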
Development
- Build: npm run build
- Watch mode: npm run watch
- Development: npm run dev
Troubleshooting
- "Connection refused" errors:
  - Make sure Ollama is running: ollama serve
  - Check that Ollama is accessible at http://localhost:11434
- "Model not found" errors:
  - List available models: ollama list
  - Pull a model if needed: ollama pull llama2
- Permission errors:
  - Make sure the built file is executable: chmod +x dist/index.js
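If you are unsure whether the problem is on the Ollama side or the MCP side, you can query Ollama's HTTP API directly; a JSON response listing your models means Ollama itself is reachable:

curl http://localhost:11434/api/tags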
Architecture
The server implements the MCP protocol and acts as a bridge between MCP clients and your local Ollama instance. It provides:
- Tools: Interactive functions that can be called by MCP clients
- Resources: Static or dynamic content that can be read by clients
- Stdio Transport: Communication via standard input/output for integration with MCP clients
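The sketch below illustrates this bridge pattern. It is not the project's actual src/index.ts, just a minimal example that assumes the official @modelcontextprotocol/sdk package and Ollama's /api/generate endpoint; it registers a single generate_text tool and forwards calls to the local Ollama instance.

// Minimal illustrative sketch, not the project's real implementation.
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'mcp-ollama-server', version: '1.0.0' },
  { capabilities: { tools: {}, resources: {} } }
);

// Tools: advertise generate_text to MCP clients.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'generate_text',
    description: 'Generate text with a local Ollama model',
    inputSchema: {
      type: 'object',
      properties: { prompt: { type: 'string' }, model: { type: 'string' } },
      required: ['prompt'],
    },
  }],
}));

// Bridge: forward tool calls to Ollama's HTTP API.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { prompt, model = 'llama2' } = request.params.arguments as { prompt: string; model?: string };
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();
  return { content: [{ type: 'text', text: data.response }] };
});

// Stdio transport: communicate with the MCP client over stdin/stdout.
const transport = new StdioServerTransport();
await server.connect(transport);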
License
MIT