Amplify360/mcp-sse-server
MCP SSE Server - Reference Implementation
A minimal MCP (Model Context Protocol) server implementation using Server-Sent Events (SSE) transport, demonstrating email sending capability. Unlike many MCP server examples that use stdio transport, this is a remote SSE server that can be deployed via Docker to any hosting platform. It serves as a clean, easy-to-understand reference for building deployable MCP servers.
Note: This implementation prioritizes clarity and ease of understanding over production-ready features. It does not include comprehensive production aspects such as robust error handling, monitoring, security hardening, rate limiting, or scalability considerations that would be required for enterprise deployments.
Features
- SSE Transport: Server-Sent Events for remote MCP server deployment
- Docker Ready: Containerized for easy deployment to any cloud platform
- Simple Email Sending: Send emails via Postmark SMTP
- MCP Integration: Exposes email functionality as MCP tools
- Clean Architecture: Minimal dependencies and clear separation of concerns
- Configurable Logging: Console logging with optional log level override
- Environment-based Configuration: Uses .env files for setup
Quick Start
1. Environment Setup
Copy .env.example to .env and update it with your configuration:
cp .env.example .env
Then edit .env with your values:
# Required
MCP_SERVER_AUTH_KEY=your-mcp-auth-key
POSTMARK_API_KEY=your-postmark-api-key
SENDER_EMAIL=your-sender@example.com
# Optional
LOG_LEVEL=INFO
ENVIRONMENT=development
# Docker/Deployment Specific (optional)
FILE_LOGGING=true
2. Installation
First, install uv (see Astral's installation guide):
# Create virtual environment
uv venv
# Activate the virtual environment
# On macOS/Linux:
source .venv/bin/activate
# On Windows:
# .venv\Scripts\activate
uv sync # Install dependencies into virtual environment
3. Run the Server
uv run python mcp_server.py
Deployment
Docker Deployment
# Build and run
docker build -f deployment/Dockerfile -t mcp-sse-server .
docker run -d --name mcp-sse-server -p 8080:8080 --env-file .env mcp-sse-server
Azure Container Apps Deployment
Quick Deploy
cd deployment/bicep
chmod +x deploy.sh
./deploy.sh
Custom Deployment Options
The deployment script now reads BASE_NAME and REGION_CODE from your .env file by default. For one-off deployments, you can override these:
# Use custom names from environment variables
export BASE_NAME=myclient-mcp REGION_CODE=eastus
./deploy.sh
# Or override via command line (takes precedence over .env)
./deploy.sh --base-name myclient-mcp --environment prod --region-code eastus
# Update code only (no infrastructure changes)
./deploy.sh --update
Azure Resources Created
The deployment creates these Azure resources following standard naming conventions:
| Resource | Name Pattern | Example |
|---|---|---|
| Resource Group | rg-{service}-{env}-{region} | rg-mcp-sse-dev-weu |
| Container App | ca-{service}-{env}-{region} | ca-mcp-sse-dev-weu |
| Container App Environment | cae-{service}-{env}-{region} | cae-mcp-sse-dev-weu |
| Log Analytics Workspace | log-{service}-{env}-{region} | log-mcp-sse-dev-weu |
| Container Registry | cr{service}{env}{region} | crmcpssedevweu |
Configuration:
- CPU: 0.5 vCPU, Memory: 1GB
- Fixed scaling: 1 replica (min=1, max=1)
- HTTPS ingress enabled
Prerequisites
- Azure CLI installed and configured
- Docker installed and running
- .env file with required variables
Live Deployment
The service is currently deployed at:
- URL: https://ca-mcp-sse-development-weu.mangosea-a4cea9ef.westeurope.azurecontainerapps.io
- Environment: Development (West Europe)
- Resource Group: rg-mcp-sse-dev-weu
Azure Management
Portal Access:
- Navigate to Azure Portal
- Search for resource group: rg-mcp-sse-dev-weu
- Find container app: ca-mcp-sse-development-weu
CLI Commands:
# Get container app details
az containerapp show --name ca-mcp-sse-development-weu --resource-group rg-mcp-sse-dev-weu
# View logs
az containerapp logs show --name ca-mcp-sse-development-weu --resource-group rg-mcp-sse-dev-weu
# Restart app
az containerapp restart --name ca-mcp-sse-development-weu --resource-group rg-mcp-sse-dev-weu
Troubleshooting
Common Issues:
- Missing Environment Variables: Ensure the .env file exists with all required variables
- Azure CLI Issues: Verify login with az account show
- Container Failures: Check that the Docker daemon is running
- Runtime Issues: Review container logs in Azure Log Analytics
Health Check:
curl https://your-container-app-url.azurecontainerapps.io/health
Local Development with ngrok
For local development with web clients, you can use ngrok to expose your local server:
- Install ngrok: https://ngrok.com/download
- Start your local MCP server: uv run python mcp_server.py
- In another terminal, expose the server: ngrok http 8080
- Use the provided HTTPS URL (e.g., https://abc123.ngrok.io) in your web client
- Remember to include your X-API-Key header when making requests
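Attaching the header from Python can be sketched like this (the ngrok URL and key are placeholders; this only constructs the request, and any HTTP client such as httpx works equally well):

```python
import urllib.request

# Hypothetical ngrok URL from the step above; substitute your own tunnel URL.
url = "https://abc123.ngrok.io/sse"

# The header value must match MCP_SERVER_AUTH_KEY from your .env file.
req = urllib.request.Request(url, headers={"X-API-Key": "your-mcp-auth-key"})

# urllib stores header names in capitalized form:
print(req.get_header("X-api-key"))  # your-mcp-auth-key
```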
Testing with AI Buddy
You can test your MCP SSE server in AI Buddy using ngrok to create a secure tunnel:
1. Start your local server:
   uv run python mcp_server.py
2. Create an ngrok tunnel:
   ngrok http 8080
3. Configure the AI Buddy MCP connector:
   - Open AI Buddy and create a new MCP connector
   - Set the server URL to your ngrok public URL with the /sse endpoint, e.g. https://abc123.ngrok.io/sse
   - Set the X-API-Key header value to match your MCP_SERVER_AUTH_KEY from .env
4. Test the connection:
   - AI Buddy should now be able to connect to your local MCP server
   - Ask the expert to reveal which tool calls it has access to
   - You should see a request come through in the ngrok and app server console output
   - The expert's list of tools should include the email sending tool
   - You can then ask it to send a test email
Project Structure
mcp-sse-server/
├── mcp_server.py            # Main entry point and core application logic
├── src/                     # Main source code
│   ├── __init__.py          # Package marker
│   ├── config.py            # Configuration management
│   ├── mcp_tools.py         # MCP server and tools registration
│   ├── utils/               # Utility modules
│   │   ├── __init__.py      # Package marker
│   │   └── email.py         # Email utilities (moved from email_utils.py)
│   └── actions/             # MCP action implementations
│       ├── __init__.py      # Package marker
│       └── send_email.py    # Email sending action
├── tests/                   # Test files
│   ├── test_config.py       # Configuration tests
│   ├── test_email_utils.py  # Email utility tests
│   ├── test_mcp_tools.py    # MCP tools tests
│   └── test_*.py            # Other test files
├── deployment/              # Deployment files
│   ├── Dockerfile           # Container configuration
│   └── bicep/               # Azure Bicep templates and scripts
│       ├── deploy.sh        # Automated deployment script
│       └── main.bicep       # Azure resource definitions
├── logs/                    # Runtime logs (created automatically)
├── pyproject.toml           # Dependencies and project config
└── README.md                # This file
Configuration
Environment Variables
Required:
- MCP_SERVER_AUTH_KEY: Authentication key for MCP requests
- POSTMARK_API_KEY: Your Postmark API key for sending emails
- SENDER_EMAIL: The email address to send from
Optional:
- LOG_LEVEL: Logging level (default: INFO)
- ENVIRONMENT: Environment name (default: development)
- FILE_LOGGING: Enable file logging (used in Docker containers)
Development
Actions System - Adding New MCP Tools
The server uses a transparent actions-based architecture where each MCP tool is implemented as a separate action module. Dependencies are explicitly declared in function signatures, making the system easy to understand and extend.
Directory Structure
src/actions/
├── __init__.py      # Package marker
├── send_email.py    # Email sending functionality
└── status.py        # Server status functionality (no dependencies)
How It Works
The system uses a dependency registry approach:
- Central Registry: All server dependencies are declared in src/mcp_tools.py:

  DEPENDENCIES: dict[str, object] = {
      "postmark_api_key": api_key,
      "sender_email": from_email,
      # Add new dependencies here ↓
      # "weather_api_key": os.getenv("WEATHER_API_KEY"),
  }

- Signature-Based Injection: Only dependencies that appear in the function signature are injected - no hidden behavior.
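The injection mechanism can be illustrated with a few lines of Python (a simplified sketch with placeholder values, not the project's actual code):

```python
import asyncio
import inspect

# Simplified stand-in for the registry in src/mcp_tools.py
DEPENDENCIES: dict[str, object] = {
    "postmark_api_key": "pm-test-key",
    "sender_email": "noreply@example.com",
}

async def send_email_action(to: str, subject: str,
                            postmark_api_key: str, sender_email: str) -> str:
    """Example action: requests two dependencies via its signature."""
    return f"{sender_email} -> {to}: {subject} (auth: {postmark_api_key})"

def inject(action, user_args: dict) -> dict:
    """Merge user arguments with only the dependencies the action declares."""
    declared = inspect.signature(action).parameters
    return {**user_args, **{k: v for k, v in DEPENDENCIES.items() if k in declared}}

kwargs = inject(send_email_action, {"to": "a@example.com", "subject": "Hi"})
result = asyncio.run(send_email_action(**kwargs))
print(result)  # noreply@example.com -> a@example.com: Hi (auth: pm-test-key)
```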
Adding a New Action (< 60 seconds)
Step 1: Write the Action
Create src/actions/my_feature.py:
"""
My feature action implementation.
"""
import logging
from typing import Any
logger = logging.getLogger(__name__)
async def my_feature_action(
user_param1: str,
user_param2: int,
postmark_api_key: str, # Only injected if you need it
sender_email: str, # Only injected if you need it
) -> Any:
"""
Description of what this action does.
Args:
user_param1: User-provided parameter
user_param2: Another user-provided parameter
postmark_api_key: Postmark API key (injected)
sender_email: Sender email (injected)
Returns:
Result of the action
"""
logger.info("My feature action called")
# Your implementation here
result = f"Processed {user_param1} with value {user_param2}"
logger.info("My feature action completed")
return result
Step 2: Add New Dependencies (if needed)
If your action needs additional services (like a weather API key), add them to the DEPENDENCIES
registry in src/mcp_tools.py
:
DEPENDENCIES: dict[str, object] = {
    "postmark_api_key": api_key,
    "sender_email": from_email,
    "weather_api_key": os.getenv("WEATHER_API_KEY"),  # ← Add this
}
Step 3: Restart the Server
That's it! The action is automatically registered as my_feature_tool.
Action Examples
Simple Action (No Dependencies):
async def status_action() -> dict:
    """Get server status - needs no external dependencies."""
    return {"status": "ok", "version": "1.0.0"}
Action with User Parameters Only:
async def greet_user_action(name: str, greeting: str = "Hello") -> str:
    """Greet a user - no server dependencies needed."""
    return f"{greeting}, {name}!"
Action Using Server Dependencies:
async def send_notification_action(
    message: str,
    recipient: str,
    postmark_api_key: str,  # Injected because it's in DEPENDENCIES
    sender_email: str,      # Injected because it's in DEPENDENCIES
) -> str:
    """Send notification email using server dependencies."""
    # Use postmark_api_key and sender_email here
    return f"Sent '{message}' to {recipient}"
Action with Custom Dependencies:
async def fetch_weather_action(
    city: str,
    weather_api_key: str,  # Must be added to DEPENDENCIES first
) -> dict:
    """Fetch weather data using external API."""
    # Use weather_api_key to call external service
    return {"city": city, "temperature": "22°C"}
Function Requirements
Naming Convention:
- Function name must end with _action (e.g., send_email_action)
- The registered MCP tool is named by replacing _action with _tool

Parameters:
- User parameters: Exposed to MCP clients, must be documented
- Dependency parameters: Must match names in the DEPENDENCIES registry
- Type hints: Required for all parameters
- No **kwargs: Dependencies are passed as explicit named parameters

Return Value:
- Can return any serializable type (str, dict, list, etc.)
- The return value is sent back to the MCP client

Async Function:
- Must be defined with async def
- Can use await for I/O operations
Auto-Discovery Process
When the server starts:
- The register_tools() function populates the DEPENDENCIES registry
- It scans the src/actions/ package for Python modules
- It looks for async functions ending with _action
- For each action, it inspects the function signature
- It creates a wrapper that injects only the dependencies the action requests
- It registers the wrapper as an MCP tool
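The scanning and naming steps above can be sketched as follows (an illustrative version that operates on in-memory modules rather than the real src/actions/ package):

```python
import inspect
import types

def discover_actions(modules) -> dict:
    """Map async `*_action` functions found in modules to their `*_tool` names."""
    tools = {}
    for module in modules:
        for name, func in inspect.getmembers(module, inspect.iscoroutinefunction):
            if name.endswith("_action"):
                tools[name[: -len("_action")] + "_tool"] = func
    return tools

# Throwaway module standing in for src/actions/status.py
fake_module = types.ModuleType("status")

async def status_action() -> dict:
    return {"status": "ok"}

fake_module.status_action = status_action

print(list(discover_actions([fake_module])))  # ['status_tool']
```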
Testing
# Run all tests
uv run python -m pytest tests/ -v
# Test action registration specifically
uv run python -m pytest tests/test_mcp_tools.py::TestRegisterTools -v
# Test individual actions
uv run python -m pytest tests/test_send_email_action.py -v
Dependencies
- httpx: HTTP client
- mcp[cli]: Model Context Protocol implementation
- starlette: ASGI framework for the web server
- uvicorn: ASGI server
- python-dotenv: Environment variable loading
- pydantic-settings: Configuration management
License
This project is licensed under the MIT License.