MCP Demo
A demonstration of the Model Context Protocol (MCP) featuring an academic retrieval and text analytics server with a Python client. This project showcases how to build and deploy MCP tools for document search, question answering, and text analysis.
Features
- Corpus Answer Tool: Query a local corpus of academic documents and get synthesized answers with citations
- Text Profile Tool: Analyze text documents for readability, sentiment, and linguistic features
- Docker-based Deployment: Containerized server and client for easy setup and testing
- FastMCP Integration: Built on the FastMCP framework for rapid MCP tool development
- Ollama Integration: Connect to local LLMs via Ollama for enhanced AI capabilities
Architecture
The project consists of three main components:
- MCP Server (server/): A FastMCP-based server that exposes two tools and serves academic documents
- MCP Client (client/): A Python client that demonstrates how to interact with the MCP server
- Ollama Integration (ollmcp/): Containerized Ollama service with the ollmcp client for LLM interactions
Project Structure
mcp-demo/
├── client/                   # MCP client implementation
│   ├── Dockerfile            # Client container configuration
│   ├── mcp_client_smoke.py   # Demo client script
│   ├── requirements.txt      # Python dependencies
│   └── scripts/              # Utility scripts
├── server/                   # MCP server implementation
│   ├── app.py                # Main server application
│   ├── schemas.py            # Pydantic data models
│   ├── tools/                # MCP tool implementations
│   │   ├── corpus_answer.py  # Document search and Q&A
│   │   └── text_profile.py   # Text analytics
│   ├── data/corpus/          # Sample academic documents
│   └── requirements.txt      # Python dependencies
├── ollmcp/                   # Ollama + ollmcp integration
│   └── Dockerfile            # ollmcp container with Ollama CLI
├── docker-compose.yml        # Multi-container orchestration
└── Makefile                  # Development commands
Quick Start
1. Set Up Environment Files
make setup
2. Start the Services
# Build and run both server and client
make up
# Or for development with localhost access
make dev
3. View Logs
make logs
4. Stop Services
make down
Ollama Integration
This project includes integration with Ollama for local LLM capabilities. You can use ollmcp to interact with your MCP server using local language models.
Prerequisites
- Docker and Docker Compose
- Sufficient disk space for model downloads (models are cached and persist between restarts)
Running with Ollama
Option 1: Start services and pull model automatically
make ollama
This command will:
- Start the MCP server, Ollama service, and ollmcp container
- Wait for Ollama to be ready
- Download the llama3.2:1b model (if not already present)
- Provide instructions for starting ollmcp
Option 2: Start services and go straight to shell
make ollama-shell
This drops you directly into the ollmcp container shell.
Using ollmcp
After starting the services, connect to the ollmcp container:
docker compose exec ollmcp bash
Then start ollmcp with your MCP server:
ollmcp --mcp-server-url http://mcp-server:8765/mcp --host http://ollama:11434 --model llama3.2:1b
Available Models
The default model is llama3.2:1b, but you can use any model available in Ollama:
# List available models
ollama list
# Pull a different model
ollama pull qwen2.5:7b
# Use a different model with ollmcp
ollmcp --mcp-server-url http://mcp-server:8765/mcp --host http://ollama:11434 --model qwen2.5:7b
Model Persistence
Models are automatically cached and persist between container restarts. The first run will download the model, but subsequent runs will use the cached version.
Development Commands
# Rebuild containers from scratch
make rebuild
# Run only the client
make client
# Access server shell
make server-sh
# Access client shell
make client-sh
# Clean up containers and volumes
make clean
# Set up environment files
make setup
# Start Ollama services
make ollama
# Start Ollama services and drop into shell
make ollama-shell
Available Tools
1. Corpus Answer Tool
Answers questions using a local corpus of academic documents:
# Example query
result = await client.call_tool(
    "corpus_answer_tool",
    {"query": "How do urban transport policies affect emissions and health?"}
)
Returns: AnswerWithCitations with:
- Synthesized answer (≤120 words)
- 1-5 supporting sources with snippets and similarity scores
2. Text Profile Tool
Analyzes text documents for various linguistic features:
# Analyze a document by ID or raw text
result = await client.call_tool(
    "text_profile_tool",
    {"text_or_doc_id": "air_quality_health.txt"}
)
Returns: TextProfile with:
- Character and token counts
- Type-token ratio (lexical diversity)
- Top n-grams and keywords
- Flesch reading ease score
- VADER sentiment analysis
Sample Corpus
The server includes three sample academic documents:
- ai_labor_markets.txt - AI's impact on employment
- air_quality_health.txt - Air pollution and public health
- urban_transport_emissions.txt - Urban transportation and emissions
API Endpoints
- MCP Tools: Available at /mcp (MCP protocol)
- Health Check: GET /health - Returns "OK" for container health checks
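To verify the server from the host, a quick probe of the health endpoint is enough. The sketch below assumes the development setup (make dev) binds the server to localhost:8765; inside the Docker network the host would be mcp-server instead.
# Minimal health probe (assumes localhost:8765 is exposed via "make dev")
import urllib.request

with urllib.request.urlopen("http://localhost:8765/health", timeout=5) as resp:
    print(resp.status, resp.read().decode())  # expect: 200 OK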
How It Works
- Server Startup: The MCP server loads the corpus and builds a TF-IDF index for semantic search
- Client Connection: The client waits for the server to be healthy, then connects via HTTP transport
- Tool Execution: Tools process requests using scikit-learn for text analysis and similarity search
- Response Format: All responses use Pydantic models for type safety and validation
- Ollama Integration: ollmcp connects to both the MCP server and Ollama service for LLM-powered interactions
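For intuition about the retrieval step, the sketch below approximates it with scikit-learn. It is illustrative only; the document texts, function names, and parameters are made up, and the actual logic lives in server/tools/corpus_answer.py.
# Illustrative TF-IDF index and similarity lookup (not the repo's actual code)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "ai_labor_markets.txt": "AI is reshaping employment across sectors ...",
    "air_quality_health.txt": "Air pollution has measurable public health effects ...",
    "urban_transport_emissions.txt": "Urban transport policy shapes emissions ...",
}

# Built once at server startup
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents.values())

def top_matches(query: str, k: int = 3):
    """Return the k most similar documents as (doc_id, score) pairs."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sorted(zip(documents.keys(), scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

print(top_matches("How do transport policies affect emissions?"))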
Docker Configuration
- Server: Exposes port 8765 internally, with optional localhost binding for development
- Client: Waits for server health check before running the demo
- Ollama: Runs on port 11434 with model persistence
- Volumes: Corpus data and model data are mounted from the host for easy updates
- Health Checks: Built-in health monitoring for reliable orchestration
Testing
The client automatically runs a smoke test that:
- Pings the server
- Lists available tools
- Tests both corpus answer and text profile tools
- Displays results for verification
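A stripped-down version of that flow, sketched with the FastMCP Python client, might look like the following. The real script is client/mcp_client_smoke.py; the URL assumes make dev exposes the server on localhost, and error handling is omitted.
# Illustrative smoke test (the actual script is client/mcp_client_smoke.py)
import asyncio
from fastmcp import Client

async def main():
    # Inside Docker the URL would be http://mcp-server:8765/mcp
    async with Client("http://localhost:8765/mcp") as client:
        await client.ping()                    # 1. ping the server
        tools = await client.list_tools()      # 2. list available tools
        print([tool.name for tool in tools])
        result = await client.call_tool(       # 3. exercise one tool
            "corpus_answer_tool",
            {"query": "How does air quality affect health?"}
        )
        print(result)                          # 4. display results for verification

asyncio.run(main())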
Customization
Adding New Documents
Place .txt files in server/data/corpus/ - they'll be automatically indexed and searchable.
Extending Tools
Add new tools in server/tools/ and register them in server/app.py using the @mcp.tool decorator.
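As a rough sketch, assuming a hypothetical word_count tool (the module, function, and instance names below are illustrative; mcp refers to the FastMCP instance created in server/app.py):
# server/tools/word_count.py - hypothetical new tool
def word_count(text: str) -> int:
    """Count whitespace-separated words in a document."""
    return len(text.split())

# server/app.py - register the function on the existing FastMCP instance, e.g.:
#   from tools.word_count import word_count
#   word_count = mcp.tool(word_count)  # same effect as writing @mcp.tool above the function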
Modifying Schemas
Update server/schemas.py to change response formats or add new data models.
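Response models are plain Pydantic classes, so new ones can sit alongside the existing AnswerWithCitations and TextProfile models. The field names in this sketch are illustrative, not part of the current schema.
# Hypothetical addition to server/schemas.py (illustrative fields)
from pydantic import BaseModel, Field

class KeywordSummary(BaseModel):
    """Example response model a new tool could return."""
    doc_id: str
    keywords: list[str] = Field(default_factory=list)
    score: float = Field(ge=0.0, le=1.0, description="Relevance score in [0, 1]")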
Using Different Models
Change the default model in the Makefile or specify a different model when starting ollmcp.