Todo List MCP Server
A production-ready Model Context Protocol (MCP) server for managing todo lists. This server can be accessed via HTTP (SSE transport) for web integrations or stdio transport for Claude Desktop, enabling AI-powered task management through natural language.
What is MCP?
The Model Context Protocol (MCP) is an open protocol that enables seamless integration between AI models and external data sources or tools. It provides a standardised way for AI assistants to interact with your applications, databases, and services.
SSE vs Stdio Transport
This server supports two transport methods:
- SSE (Server-Sent Events)
  - Used for HTTP-based communication
  - Perfect for web applications, APIs, and cloud deployments
  - Allows remote access over the network
  - Ideal for integrating with services like OpenAI
- Stdio (Standard Input/Output)
  - Used for local inter-process communication
  - Required for Claude Desktop integration
  - More secure (local only)
  - Lower latency for local applications
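The tool set is identical across both transports; only the connection setup differs. Below is a minimal sketch of each connection, assuming the Python `mcp` SDK used elsewhere in this README and the `server.py` / `server_stdio.py` entry points described later:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.sse import sse_client
from mcp.client.stdio import stdio_client


async def via_sse() -> None:
    # SSE: connect to the HTTP endpoint exposed by server.py
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print("Connected over SSE")


async def via_stdio() -> None:
    # Stdio: spawn the local server process and talk over stdin/stdout
    params = StdioServerParameters(command="python", args=["server_stdio.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print("Connected over stdio")


asyncio.run(via_sse())   # or: asyncio.run(via_stdio())
```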
Features
- Full CRUD Operations: Create, Read, Update, Delete todos
- Natural Language Processing: Interact using conversational commands
- Priority Management: Organize tasks by priority (high, medium, low)
- Smart Filtering: View all, completed, or pending todos
- Statistics Dashboard: Get insights about your productivity
- Persistent Storage: Todos are saved locally in JSON format
- Multi-Transport Support: Both SSE (HTTP) and stdio modes
- AI Integration Ready: Works with Claude Desktop
- Docker Support: Ready for containerized deployment
- Cloud Native: Designed for Google Cloud Run and similar platforms
Available MCP Tools
Tool | Description | Example Usage |
---|---|---|
create_todo | Create a new todo with title, description, and priority | "Add a high priority task to review the budget" |
list_todos | List todos with filtering options | "Show me all pending tasks" |
get_todo | Get detailed information about a specific todo | "Get details of todo_20240115_143022_0" |
update_todo | Update todo title, description, or priority | "Change the budget review priority to medium" |
complete_todo | Mark a todo as completed | "Complete todo_20240115_143022_0" |
complete_todo_by_number | Complete a todo by its position | "Complete the 2nd task" |
uncomplete_todo | Mark a todo as pending | "Reopen the budget review task" |
delete_todo | Delete a specific todo | "Delete todo_20240115_143022_0" |
clear_completed_todos | Delete all completed todos | "Clear all completed tasks" |
get_todo_stats | Get statistics about your todos | "Show me my productivity stats" |
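Because MCP tools are self-describing, a connected client can discover this table at runtime (names, descriptions, and JSON input schemas) instead of hard-coding it. A minimal sketch, assuming the Python `mcp` SDK and the local SSE endpoint:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def show_tools() -> None:
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                # Each tool advertises a name, description, and input schema
                print(f"{tool.name}: {tool.description}")


asyncio.run(show_tools())
```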
Quick Start
Prerequisites
- Python 3.8 or higher
- pip (Python package manager)
- Docker (optional, for containerized deployment)
- Claude Desktop (optional, for Claude integration)
- OpenAI API key (optional, for GPT-4.1 integration)
Installation
- Clone the repository:
git clone https://github.com/enesbasbug/ToDo-MCP-Server.git
cd ToDo-MCP-Server
- Create a virtual environment (recommended):
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
Running Locally
Option 1: SSE Mode (HTTP Server)
Use this mode for web applications, API access, or AI integrations:
python server.py
The server will start on http://localhost:8050/sse
Test the SSE server:
python client.py
Output:

Todo List MCP Client
1. Run automated tests
2. Interactive mode
Select mode (1 or 2): X

Choose mode 1 for automated tests or mode 2 for interactive mode.
Option 2: Stdio Mode (Claude Desktop)
For Claude Desktop integration, see our detailed guide.
Quick overview:
- Uses stdio transport for local communication
- Enables natural language todo management through Claude
- Requires configuration in Claude Desktop settings
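For reference, the Claude Desktop entry boils down to a small JSON object in `claude_desktop_config.json` that points at `server_stdio.py` with an absolute path. The sketch below only prints a hypothetical entry (the `todo` label and the install path are placeholders); merge it into your existing config rather than overwriting it:

```python
import json
from pathlib import Path

# Typical config locations (verify for your installation):
#   macOS:   ~/Library/Application Support/Claude/claude_desktop_config.json
#   Windows: %APPDATA%\Claude\claude_desktop_config.json
server_script = Path("/absolute/path/to/ToDo-MCP-Server/server_stdio.py")  # placeholder path

entry = {
    "mcpServers": {
        "todo": {                 # placeholder server label
            "command": "python",  # or the full interpreter path from `which python`
            "args": [str(server_script)],
        }
    }
}

print(json.dumps(entry, indent=2))
```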
See it in action: creating todos naturally and managing your tasks.
Option 3: Docker
# Build the image
docker build -t todo-mcp-server .
# Run with SSE mode
docker run -p 8050:8050 todo-mcp-server
# Run with persistent storage
docker run -p 8050:8050 -v $(pwd)/todos_data:/app/data todo-mcp-server
AI Integrations
OpenAI GPT-4.1 Integration
This server can be integrated with OpenAI's GPT-4.1 to enable natural language todo management:
- Set up your OpenAI API key:

# Create .env file
echo "OPENAI_API_KEY=your-api-key-here" > .env

- Run the OpenAI client:

python openai_client.py

- Interact naturally:

You: I need to prepare for tomorrow's team meeting and review the Q3 reports
Assistant: I'll help you organize these tasks. Let me create them for you...
[Creates two todos with appropriate priorities]

You: What's on my plate for today?
Assistant: Here are your pending tasks for today:
1. [HIGH] Review Q3 reports
2. [HIGH] Prepare for team meeting
3. [MEDIUM] Update project documentation

You: I finished reviewing the reports
Assistant: Great! I'll mark the Q3 reports review as completed.
How It Works with LLMs
- Natural Language Understanding: The LLM interprets your intent
- Tool Selection: It chooses the appropriate MCP tool
- Parameter Extraction: Extracts necessary information from your message
- Action Execution: Calls the MCP server to perform the action
- Response Generation: Provides a human-friendly response
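A condensed sketch of that loop, assuming the OpenAI chat-completions tool-calling API and the `mcp` client session shown elsewhere in this README (the repository's `openai_client.py` may differ in its details):

```python
import asyncio
import json

from dotenv import load_dotenv
from mcp import ClientSession
from mcp.client.sse import sse_client
from openai import OpenAI

load_dotenv()        # pulls OPENAI_API_KEY from .env
client = OpenAI()


async def chat_once(user_message: str) -> None:
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Expose the MCP tools to the model so it can select one
            tools = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description,
                        "parameters": t.inputSchema,
                    },
                }
                for t in (await session.list_tools()).tools
            ]

            response = client.chat.completions.create(
                model="gpt-4.1",
                messages=[{"role": "user", "content": user_message}],
                tools=tools,
            )
            message = response.choices[0].message

            # Run whatever tool calls the model chose against the MCP server
            for call in message.tool_calls or []:
                result = await session.call_tool(
                    call.function.name,
                    arguments=json.loads(call.function.arguments),
                )
                # A full loop would feed this back to the model for a final reply
                print(result.content[0].text)


asyncio.run(chat_once("Add a high priority task to review the budget"))
```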
Benefits of LLM Integration
- No Commands to Remember: Just speak naturally
- Context Awareness: The AI remembers your conversation (currently only in the local Claude Desktop setup)
- Smart Suggestions: Get intelligent task prioritization
- Bulk Operations: Handle multiple tasks in one request
Cloud Deployment
Google Cloud Run
- Prepare for deployment:

# Update deploy.sh with your project ID
chmod +x deploy.sh
./deploy.sh

- Update your client URLs

After deployment, you'll receive a URL like:
https://todo-mcp-server-xxxxx-uc.a.run.app

You MUST update all client code to use this URL:

# OLD (localhost)
client = TodoMCPClient("http://localhost:8050/sse")
# NEW (Cloud Run URL)
client = TodoMCPClient("https://todo-mcp-server-xxxxx-uc.a.run.app/sse")
Configuration
Environment Variables
Variable | Description | Default | Required For |
---|---|---|---|
PORT | Server port for SSE mode | 8050 | Cloud Run |
TODOS_FILE | Path to store todos | todos.json | All modes |
OPENAI_API_KEY | OpenAI API key | None | OpenAI integration |
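A small sketch of how a server entry point might read these variables (the names come from the table above; the surrounding code is assumed):

```python
import os

# Defaults mirror the table; override them at deploy time as needed.
port = int(os.environ.get("PORT", "8050"))
todos_file = os.environ.get("TODOS_FILE", "todos.json")
openai_api_key = os.environ.get("OPENAI_API_KEY")  # only needed for the OpenAI client
```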
Post-Deployment Configuration (if you deployed)
Updating Client URLs
When you move from local development to cloud deployment, remember to update ALL client connections:
File | Local URL | Cloud URL |
---|---|---|
openai_client.py | http://localhost:8050/sse | https://your-service.run.app/sse |
client.py | http://localhost:8050/sse | https://your-service.run.app/sse |
Custom scripts | http://localhost:8050/sse | https://your-service.run.app/sse |
Pro tip: Use environment variables for URLs:
import os
MCP_URL = os.environ.get('MCP_SERVER_URL', 'http://localhost:8050/sse')
client = TodoMCPClient(MCP_URL)
Data Storage
Current Implementation
This server uses JSON file storage for simplicity and portability. While not suitable for production use with multiple users or high-volume operations, it's perfect for:
- Learning and experimenting with MCP
- Personal todo management
- Small team deployments
- Proof of concept implementations
Storage Locations
- SSE Mode: todos.json in the current directory
- Stdio Mode: ~/todo_mcp_data.json in the home directory
- Docker: configurable via volume mounts
Production Considerations
For production deployments, consider:
- SQLite: For single-user applications
- PostgreSQL/MySQL: For multi-user applications
- Cloud Firestore: For serverless deployments
- Redis: For high-performance caching
The modular design makes it easy to swap the storage backend without changing the MCP interface.
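As a purely illustrative sketch of that swap (the class and method names below are hypothetical, not the repository's actual code), the JSON file store could sit behind a small interface that a SQLite-backed class implements the same way:

```python
import json
import sqlite3
from pathlib import Path
from typing import Dict, List, Protocol


class TodoStorage(Protocol):
    """Minimal storage contract the MCP tools would depend on."""

    def load(self) -> List[Dict]: ...
    def save(self, todos: List[Dict]) -> None: ...


class JsonFileStorage:
    def __init__(self, path: str = "todos.json") -> None:
        self.path = Path(path)

    def load(self) -> List[Dict]:
        return json.loads(self.path.read_text()) if self.path.exists() else []

    def save(self, todos: List[Dict]) -> None:
        self.path.write_text(json.dumps(todos, indent=2))


class SqliteStorage:
    def __init__(self, path: str = "todos.db") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS todos (data TEXT)")

    def load(self) -> List[Dict]:
        rows = self.conn.execute("SELECT data FROM todos").fetchall()
        return [json.loads(row[0]) for row in rows]

    def save(self, todos: List[Dict]) -> None:
        with self.conn:  # commits on success
            self.conn.execute("DELETE FROM todos")
            self.conn.executemany(
                "INSERT INTO todos (data) VALUES (?)",
                [(json.dumps(t),) for t in todos],
            )
```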
Usage Examples
Direct API Usage (Python)
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def manage_todos():
    async with sse_client("http://localhost:8050/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Create a todo
            result = await session.call_tool(
                "create_todo",
                arguments={
                    "title": "Review project proposal",
                    "description": "Review and provide feedback on Q1 project proposal",
                    "priority": "high"
                }
            )
            print(result.content[0].text)

            # List all todos
            result = await session.call_tool("list_todos")
            print(result.content[0].text)

asyncio.run(manage_todos())
Natural Language Examples
With Claude Desktop or other apps that support MCP:
"I need to finish three things today: email the client, update the budget spreadsheet, and call the vendor"
ā Creates 3 todos with appropriate details
"Show me what I haven't finished yet"
ā Lists all pending todos
"The client email is done"
ā Marks the email task as completed
"What's my highest priority right now?"
ā Shows high priority pending tasks
Architecture
Two transport modes are available:
- SSE Mode (server.py): for HTTP/API access, Docker, and cloud deployment
- Stdio Mode (server_stdio.py): for Claude Desktop
┌─────────────────┐      ┌─────────────────┐
│ Claude Desktop  │      │   Web Client    │
│ (Natural Lang)  │      │  (Direct API)   │
└────────┬────────┘      └────────┬────────┘
         │                        │
         │ Stdio                  │ HTTP/SSE
         │                        │
┌────────▼────────┐      ┌────────▼────────┐
│ server_stdio.py │      │    server.py    │
│  (Local Only)   │      │ (Network Ready) │
└────────┬────────┘      └────────┬────────┘
         │                        │
         └───────────┬────────────┘
                     │
                     ▼
              ┌─────────────┐
              │ todos.json  │
              │   Storage   │
              └─────────────┘
Monitoring & Logging
- Local Development
  - Logs are output to the console with timestamps and severity levels.
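A minimal sketch of that kind of console logging using Python's standard library (the server's actual format string may differ):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)

logger = logging.getLogger("todo-mcp-server")
logger.info("Server listening on port %s", 8050)
```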
Troubleshooting
SSE Server Issues
- Port already in use:

lsof -i :8050   # Find process using port
kill -9 <PID>   # Kill the process

- Connection refused:
  - Check if server is running: ps aux | grep server.py
  - Verify firewall settings
  - Ensure correct URL format: http://localhost:8050/sse
Claude Desktop Issues
- Server not appearing:
  - Verify JSON syntax in config file
  - Use absolute paths
  - Restart Claude Desktop

- Tools not working:
  - Check Python path: which python
  - Verify MCP installation: pip show mcp
OpenAI Integration Issues
- API key errors:
  - Verify .env file exists
  - Check API key validity
  - Ensure sufficient API credits

- Tool execution failures:
  - Check server is running
  - Verify network connectivity
  - Review error logs
Contributing
We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built with Model Context Protocol
- Powered by FastMCP
- AI integrations via OpenAI and Anthropic Claude