Simple Python MCP-Server
A Python implementation of the Model Context Protocol (MCP) server with FastMCP and FastAPI.
Table of Contents
- Overview
- Transport
- Authorization (OAuth)
- Configuration
- Deployment
- Use Case
- Bibliography
- License
Overview
This repository is based on the official MCP Python SDK, with the objective of creating an MCP server in Python using FastMCP. The project pursues the following goals:
- To make the Model Context Protocol (MCP) easy to understand and work with, starting from the fundamentals
- To provide a testing platform for MCP clients
- To integrate the server with FastAPI and offer it as a streamable HTTP service, maintaining a clear separation between the service and the client
The project focuses on implementing a simple MCP server served through FastAPI with streamable HTTP, which is the recommended approach for creating MCP servers. To explore other transports and deployment options, consult the official documentation.
Transport
Streamable HTTP Transport
Note: Streamable HTTP transport is superseding SSE transport for production deployments.
```python
from mcp.server.fastmcp import FastMCP

# Stateless server (no session persistence)
mcp = FastMCP("StatelessServer", stateless_http=True)
```
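Under streamable HTTP, clients and servers exchange JSON-RPC 2.0 messages over HTTP POST. As a rough sketch, an initialize request body looks like the following (the protocol version string and client name here are illustrative assumptions, not taken from this repository):

```python
import json

# Sketch of an MCP initialize request as sent over streamable HTTP.
# protocolVersion and clientInfo values are illustrative assumptions.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize_request)
print(body)
```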
You can mount multiple FastMCP servers in a single FastAPI application:
```python
# echo.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="EchoServer", stateless_http=True)


@mcp.tool(description="A simple echo tool")
def echo(message: str) -> str:
    return f"Echo: {message}"
```

```python
# math.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="MathServer", stateless_http=True)


@mcp.tool(description="A simple add tool")
def add_two(n: int) -> int:
    return n + 2
```
```python
# fast_api.py
import contextlib

from fastapi import FastAPI

from mcp.echo import echo
from mcp.math import math


# Create a combined lifespan to manage both session managers
@contextlib.asynccontextmanager
async def lifespan(app: FastAPI):
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(echo.mcp.session_manager.run())
        await stack.enter_async_context(math.mcp.session_manager.run())
        yield


app = FastAPI(lifespan=lifespan)
app.mount("/echo", echo.mcp.streamable_http_app())
app.mount("/math", math.mcp.streamable_http_app())
```
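The combined lifespan relies on `contextlib.AsyncExitStack` to enter both session managers on startup and close them in reverse order on shutdown. The pattern can be sketched in isolation with stdlib-only stand-ins for the session managers:

```python
import asyncio
import contextlib

events: list[str] = []


@contextlib.asynccontextmanager
async def fake_session_manager(name: str):
    # Stand-in for mcp.session_manager.run(): start on enter, stop on exit.
    events.append(f"{name}: start")
    try:
        yield
    finally:
        events.append(f"{name}: stop")


async def lifespan_demo():
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(fake_session_manager("echo"))
        await stack.enter_async_context(fake_session_manager("math"))
        # The application would serve requests here.


asyncio.run(lifespan_demo())
print(events)
# Managers are closed in reverse order of entry:
# ['echo: start', 'math: start', 'math: stop', 'echo: stop']
```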
Authorization (OAuth)
For the authorization system, the project uses a package called mcp-oauth, which offers a simple client-credentials authorization method. This package allows running an OAuth server in parallel with the MCP server. The source code can be found in .
```python
# oauth_server.py
import os

from dotenv import load_dotenv
from mcp_oauth import (
    AuthServerSettings,
    OAuthServer,
    SimpleAuthSettings,
)

load_dotenv()

OAUTH_HOST = "127.0.0.1"
OAUTH_PORT = 9000
OAUTH_SERVER_URL = f"http://{OAUTH_HOST}:{OAUTH_PORT}"


def run_oauth_server():
    server_settings: AuthServerSettings = AuthServerSettings(
        host=OAUTH_HOST,
        port=OAUTH_PORT,
        server_url=OAUTH_SERVER_URL,
        auth_callback_path=f"{OAUTH_SERVER_URL}/login",
    )
    auth_settings: SimpleAuthSettings = SimpleAuthSettings(
        superusername=os.getenv("SUPERUSERNAME"),
        superuserpassword=os.getenv("SUPERUSERPASSWORD"),
        mcp_scope="user",
    )
    oauth_server: OAuthServer = OAuthServer(
        server_settings=server_settings, auth_settings=auth_settings
    )
    oauth_server.run_starlette_server()


if __name__ == "__main__":
    run_oauth_server()
```
To start this server, you can open a terminal in the root directory of the project and execute:
```shell
python3 src/services/fast_mcp/private_server/oauth_server.py
```
MCP Integration
Once the OAuth server is running, it must be integrated with the MCP server by providing the address where the OAuth server is running to the MCP server:
```python
def create_private_server(settings: ServerSettings = ServerSettings()) -> FastMCP:
    token_verifier = IntrospectionTokenVerifier(
        introspection_endpoint=settings.auth_server_introspection_endpoint,
        server_url=str(settings.server_url),
        validate_resource=settings.oauth_strict,  # Only validate when --oauth-strict is set
    )
    mcp: FastMCP = FastMCP(
        name="private-example-server",
        instructions="This server specializes in private operations of user profiles data",
        debug=True,
        # Auth configuration for RS mode
        token_verifier=token_verifier,
        auth=AuthSettings(
            issuer_url=settings.auth_server_url,
            required_scopes=[settings.mcp_scope],
            resource_server_url=settings.server_url,
        ),
    )
    return mcp
```
The MCP server requires a TokenVerifier; here, a simple one provided by the mcp_oauth package is used. In this case, settings.auth_server_url must be the address where the OAuth server is running, for example "http://127.0.0.1:9000". For further configuration details, please refer to the code in .
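Conceptually, an introspection-based verifier asks the OAuth server whether a token is active and carries the required scope, in the style of RFC 7662 token introspection. A simplified, stdlib-only sketch of that decision follows; the field names mirror the introspection response format and are not this package's exact API:

```python
def is_token_acceptable(introspection: dict, required_scope: str) -> bool:
    """Sketch of an introspection-based check: the token must be active
    and its space-separated scope list must include the required scope."""
    if not introspection.get("active", False):
        return False
    scopes = introspection.get("scope", "").split()
    return required_scope in scopes


# A token the OAuth server reports as active with the "user" scope passes:
assert is_token_acceptable({"active": True, "scope": "user profile"}, "user")
# Inactive or under-scoped tokens are rejected:
assert not is_token_acceptable({"active": False, "scope": "user"}, "user")
assert not is_token_acceptable({"active": True, "scope": "profile"}, "user")
```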
Configuration
This OAuth server uses a credential-based system for authentication (the initial acquisition of an authorization token). You must fill the .env file with the following variables:
```
SUPERUSERNAME=user
SUPERUSERPASSWORD=password
```
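Because SimpleAuthSettings reads these values via os.getenv, a missing .env silently yields None. A small fail-fast helper (hypothetical, not part of the project) makes that error explicit:

```python
import os


def load_credentials() -> tuple[str, str]:
    """Read the OAuth superuser credentials, failing fast when unset.
    This helper is illustrative and not part of the project."""
    username = os.getenv("SUPERUSERNAME")
    password = os.getenv("SUPERUSERPASSWORD")
    if not username or not password:
        raise RuntimeError("SUPERUSERNAME and SUPERUSERPASSWORD must be set in .env")
    return username, password


# With the example values shown above already in the environment:
os.environ["SUPERUSERNAME"] = "user"
os.environ["SUPERUSERPASSWORD"] = "password"
print(load_credentials())  # ('user', 'password')
```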
Deployment
Local Deployment
To set up the development environment, execute the following commands:
1. Install project dependencies

```shell
pip install -r requirements.txt
```

2.1 Start the server in development mode

```shell
uvicorn src.app:app --host 127.0.0.1 --port 8000 --reload
```

2.2 Start the OAuth server

```shell
python3 src/services/fast_mcp/private_server/oauth_server.py
```

3. Verify proper server startup

To confirm that the server is operating correctly, open a web browser and navigate to http://127.0.0.1:8000. This should redirect to a user help page that provides guidance on how to use the server.

4. Run tests

```shell
python tests/run.py
```
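The startup check in step 3 can also be scripted. A minimal sketch using only the standard library (the URL shown is the local development address from step 2.1):

```python
import urllib.request


def check_server(base_url: str, timeout: float = 5.0) -> int:
    """Return the HTTP status of a GET to the server root
    (redirects to the help page are followed automatically)."""
    with urllib.request.urlopen(base_url, timeout=timeout) as response:
        return response.status


# Against a locally running server this would be:
# check_server("http://127.0.0.1:8000/")  # 200 once the help page loads
```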
Docker Deployment
The project can be run using Docker Compose:
```shell
docker compose -f docker-compose.yml up -d --build
```
Use Case
To verify the correct operation of this server, it is recommended to install the mcp-llm-client package and create a project based on it, following the steps outlined below:
⚠️ Configuration Note: To use this chat with an LLM, an OpenAI API key is required. If you do not have one, you can create it by following the instructions on the official OpenAI page.
1. Server Deployment
Deploy this server according to the instructions provided in the Deployment section. This step is essential, as the server must be running either locally or on a cloud server. Once the server is deployed, it can be used through the MCP client.
2. Clone a template from GitHub
Clone a template from GitHub that provides a simple base to use the MCP client:
```shell
# clone repo
git clone https://github.com/rb58853/template_mcp_llm_client.git
# change to project dir
cd template_mcp_llm_client
# install dependencies
pip install -r requirements.txt
```
3. Add Server to Configuration
In the cloned project, locate the config.json file in the root directory and add the following configuration inside the mcp_servers object:
```json
{
    "mcp_servers": {
        "example_mcp_server": {
            "http": "your_http_path (e.g., http://127.0.0.1:8000/server_name/mcp)",
            "name": "server_name (optional)",
            "description": "server_description (optional)"
        }
    }
}
```
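A quick way to sanity-check the edited config.json before starting the chat: of the fields shown above, only "http" is structurally required for an entry to be usable. This validation helper itself is hypothetical, not part of the client:

```python
import json


def validate_mcp_config(raw: str) -> dict:
    """Parse config.json text and check that each entry under
    mcp_servers carries the required "http" endpoint field."""
    config = json.loads(raw)
    servers = config.get("mcp_servers", {})
    if not servers:
        raise ValueError("mcp_servers is empty")
    for key, entry in servers.items():
        if "http" not in entry:
            raise ValueError(f"server {key!r} is missing its 'http' endpoint")
    return servers


example = """
{
    "mcp_servers": {
        "example_mcp_server": {
            "http": "http://127.0.0.1:8000/server_name/mcp",
            "name": "server_name",
            "description": "server_description"
        }
    }
}
"""
servers = validate_mcp_config(example)
print(sorted(servers))  # ['example_mcp_server']
```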
💡 Hint: Once the server is deployed, you can access its root URL to obtain help; that page provides the exact configuration needed to add the server to the MCP client. For example, opening http://127.0.0.1:8000 in a browser will redirect to the help page.
4. Execution
Follow the instructions in the readme.md file of the cloned project to run a local chat using this MCP server. Typically, this is done by running the following command in the console:

```shell
# Run app (after setting OPENAI-API-KEY and adding servers to config)
python3 main.py
```
Bibliography
For more detailed information on using this MCP client, please refer to its official repository.
License
MIT License. See .