# 🫀 FastMCP Risk Scoring Platform

This document provides a comprehensive guide to setting up and running a Model Context Protocol (MCP) server for risk score computation using various language models.
## 🚀 Installation
### Prerequisites

Create and activate a virtual environment:

```bash
python3 -m venv .mcp-env
```

macOS/Linux:

```bash
source .mcp-env/bin/activate
```

Windows:

```bash
.mcp-env\Scripts\activate
# or: .mcp-env\Scripts\Activate.ps1
```
Install the package (user mode):

```bash
pip install -e .
```

Optionally, install with development dependencies (dev mode):

```bash
pip install -e ".[dev]"
```
## 📝 Data Preparation
Create a directory, e.g. named `data`, and place your patient reports as individual text files inside it.

- File naming: each file should correspond to a single case; the case ID is extracted from the file name by removing the extension. If an underscore (`_`) is present, only the part before the first underscore is used as the case ID.
- File format: plain text with UTF-8 encoding is recommended.
```
data
├── Pa30df485.txt
├── P6d9b89a8.txt
⋮
└── P0b1d9044.txt
```
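Under this rule, deriving a case ID from a file name amounts to the following (a minimal sketch; the project's actual helper may differ):

```python
from pathlib import Path

def case_id(path: str) -> str:
    """File stem, truncated at the first underscore."""
    return Path(path).stem.split("_", 1)[0]

assert case_id("data/Pa30df485.txt") == "Pa30df485"
assert case_id("data/Pa30df485_note2.txt") == "Pa30df485"
```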
## ⚙️ Configuration
Populate your configuration file, e.g. by duplicating `config.example.yaml` and renaming it, say to `my_config.yaml`:

```bash
cp config.example.yaml my_config.yaml
```

Then edit the fields in `my_config.yaml` to select the risk score to compute on the case files, choose the language model, and provide provider-specific details:
```yaml
risk_score: HAS-BLED  # or CHA2DS2-VASc, EuroSCORE II
provider: openai      # supported: openai, deepseek, perplexity, qwen
model: gpt-4-0613     # or any other model name the provider gives you access to
```
### Provisioning of API Keys

Depending on the LLM provider you select, set the following keys either as environment variables (option 1):
| Provider | Environment Variable | Config Variable |
|---|---|---|
| Alibaba | `API_KEY` | `api:api_key` |
| DeepSeek | `API_KEY` | `api:api_key` |
| OpenAI | `API_KEY` (`sk-...`) | `api:api_key` |
| OpenAI | `ORG_KEY` (`org-...`) | `api:org_key` |
| OpenAI | `PROJECT_ID` (`proj_...`) | `api:project_id` |
| Perplexity | `API_KEY` | `api:api_key` |
or directly in the config file (option 2):
```yaml
api:
  api_key:     # Secret API key (e.g., sk-...)
  org_key:     # Organization key, if required (e.g., org-...)
  project_id:  # Project ID, if required (e.g., proj_...)
```
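How the two options interact is not specified here; as a rough illustration, a key resolver could look like this (a minimal sketch — the function name and the env-over-config precedence are assumptions, not documented behavior):

```python
import os
import yaml  # PyYAML

def resolve_api_key(cfg_path: str):
    """Prefer the API_KEY environment variable, then fall back to
    api:api_key from the YAML config (precedence is an assumption)."""
    with open(cfg_path, encoding="utf-8") as fh:
        cfg = yaml.safe_load(fh) or {}
    return os.environ.get("API_KEY") or (cfg.get("api") or {}).get("api_key")

print(resolve_api_key("my_config.yaml"))
```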
Advanced: you can set the keys permanently in a separate file `~/.bash_keys` with tight permissions. First create the file:

```bash
vi ~/.bash_keys
```

and write the needed keys into it, e.g., `export API_KEY=<secret_api_key>` (the `export` makes the variable visible to processes started from the shell). Then adjust the permissions so the file can only be read by the current user:

```bash
chmod 400 ~/.bash_keys
```

To source the keys automatically whenever `~/.bash_profile` is loaded, add these three lines to your `~/.bash_profile`:

```bash
if [ -f ~/.bash_keys ]; then
    source ~/.bash_keys
fi
```
## 🖥️ Running the Server
After configuring the configuration file and preparing the text data, run the scoring server locally as a module from the repository root:

```bash
python -m src.server
```

This starts the server with the HTTP transport layer on port 8000. In the terminal you should see output indicating that the FastMCP server is running and open for MCP communication on http://127.0.0.1:8000/mcp/. You can quit the server at any time with Ctrl + C.
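To confirm the server is listening, a plain HTTP reachability check is enough (this is not a full MCP handshake; any HTTP response means the process is up):

```python
import urllib.error
import urllib.request

# Any HTTP response, even an error status, means the server is listening.
try:
    urllib.request.urlopen("http://127.0.0.1:8000/mcp/", timeout=5)
    print("server reachable")
except urllib.error.HTTPError:
    print("server reachable (non-2xx response)")
except OSError:
    print("server not reachable")
```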
## 🤖 Client Usage
Assuming your server is up and running, invoke the client by passing your config file, data directory, and server URL:

```bash
python -m src.client --cfg <config_path> --data <data_path> --url <mcp_server_url>
```

Called without arguments, this defaults to:

```bash
python -m src.client --cfg ./my_config.yaml --data ./data --url http://127.0.0.1:8000/mcp
```
## 📊 Output
Two basic output directories are created:

- Log dir: `./output/logs/<run_name>`
- Results dir: `./output/<run_name>`

The default base folder is `output`, but it can be changed under `output_dir` in your config.
For logging purposes, LLM responses are stored in the log dir, separated by case ID and item, in the format `<item>_<timestamp>.log`. `run_name` is taken from your config, and case IDs are extracted from the text file prefixes as described in the Data Preparation section above.

Most importantly, the items extracted from the LLM-returned JSON strings are aggregated over all texts and stored in the table `./output/<run_name>/stage1/<score>/<score>_llm.csv`. The final risk scores calculated from `<score>_llm.csv` are placed in the table `./output/<run_name>/stage2/<score>/<score>_calc.csv`.
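Both tables are plain CSV and can be inspected with pandas, for example (the run name and score below are assumed placeholder values; column names depend on the score):

```python
import pandas as pd

run_name, score = "my_run", "hasbled"  # assumed example values

llm_items = pd.read_csv(f"./output/{run_name}/stage1/{score}/{score}_llm.csv")
final_scores = pd.read_csv(f"./output/{run_name}/stage2/{score}/{score}_calc.csv")
print(final_scores.head())
```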
## 🧩 Customization: Adding a New Risk Score
To extend the MCP server app with a new risk score, the following steps are needed:

- Define `NewRiskScore` as a subclass of the base class `RiskScore` and provide a `calculate` function (see the sketch after this list)
- Provide prompt instructions for information extraction under `prompts/newriskscore_template.yaml`
- Optionally, add unit tests under `tests/scoring`

Finally, modify the configuration file to name the new risk score under `payload -> risk_score` and run the pipeline as described above.
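As a rough illustration, a new score class might look like this (a minimal sketch: the import path, item names, and scoring rule are assumptions; only `RiskScore` and `calculate` come from the steps above):

```python
from src.scoring import RiskScore  # assumed import path

class NewRiskScore(RiskScore):
    """Hypothetical score awarding one point per positive item."""

    def calculate(self, items: dict) -> int:
        # items holds the indicators extracted by the LLM,
        # e.g. {"hypertension": 1, "diabetes": 0, ...}
        return sum(int(bool(v)) for v in items.values())
```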
## 🧪 Unit Testing
Most unit tests do not require an LLM provider connection, or they use a mock.

To execute these tests, run from the top level:

```bash
pytest tests          # Runs the complete test suite
pytest tests/scoring  # Runs all scoring tests
pytest tests/pipeline # Runs end-to-end pipeline tests
```

To run integration tests with mocked LLM API calls (no API key required):

```bash
pytest -m mock_llm    # Runs only tests marked as "mock_llm"
```
Only tests decorated with `@pytest.mark.real_api` require a valid API key set as an environment variable and consume tokens from your account. In addition, the provider and model name must be set. These tests run on the dummy data located under `./tests/data/<score>`. By default, the CHA2DS2-VASc score is calculated from the reports under `./tests/data/<score>` and compared to the expected values of the fictitious patients, whose true score is the value after the underscore in their report names.

```bash
export TEST_PROVIDER=perplexity
export TEST_MODEL=sonar-pro
export TEST_API_KEY=sk-...

# Runs the dedicated HAS-BLED score calculation test with real
# API calls on two test reports located at tests/data/hasbled
pytest -m real_api -k test_real_api_hasbled
```
## 🛠️ Troubleshooting

### Server Does Not Terminate with Ctrl + C

If the server cannot be stopped with Ctrl + C, identify the associated process ID and kill it manually:

```bash
ps aux | grep server.py
kill <pid>
```
### Repeated Warning That Items Could Not Be Extracted

Check the log output under `cfg['log_dir']/<score>/<case_id>/<item>_<timestamp>.log` and verify that the captured LLM output is matched by the regex defined in the `Extractor` class in `provider_tools.py`.
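For a quick interactive check, test the captured output against the pattern directly (the pattern below is a placeholder assumption; copy the real one from the `Extractor` class):

```python
import re

# Placeholder pattern assumed for illustration; the actual regex is
# defined in the Extractor class in provider_tools.py.
pattern = re.compile(r"\{.*\}", re.DOTALL)

# Paste the captured LLM output from the log file here.
sample = 'The extracted items are: {"hypertension": 1, "age": 0}'
print("matched" if pattern.search(sample) else "no match")
```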