vsn411/llm_forensics
LLM Trace Server (MCP Server)
This is a lightweight, fast Model Context Protocol (MCP) server for storing, searching, retrieving, and exporting LLM traces. It is ideal for audit logging and forensic analysis of AI interactions.
🛠️ Installation Steps
Step 1: Clone the GitHub Repository
git clone https://github.com/vsn411/llm_forensics.git
cd llm_forensics
Step 2: Create and Activate a Virtual Environment
python -m venv venvmcp
source venvmcp/bin/activate # On Windows use: venvmcp\Scripts\activate
Step 3: Install Python Dependencies
pip install -r requirements.txt
🧑‍💻 Claude Desktop Client Setup (Optional for GUI Testing)
- Download the Claude Desktop Client or any LLM desktop interface that supports tool plugins.
- Ensure the client is allowed to use local tool endpoints.
🚀 Run the MCP Server
python mcp_server.py
or
mcp install mcp_server.py
The server will initialize a SQLite database named llm_traces.db and expose its tools via MCP.
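To make the storage side concrete, here is a minimal standard-library sketch of what a trace store like this could look like. The function names (init_db, store_trace) and the traces table schema (timestamp, prompt, response columns) are illustrative assumptions, not the server's actual internals:

```python
# Hypothetical sketch of a SQLite-backed trace store (stdlib only).
# Schema and names are assumptions for illustration.
import datetime
import sqlite3


def init_db(path="llm_traces.db"):
    """Open (or create) the trace database and ensure the table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS traces (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               timestamp TEXT NOT NULL,
               prompt TEXT NOT NULL,
               response TEXT NOT NULL)"""
    )
    conn.commit()
    return conn


def store_trace(conn, prompt, response):
    """Insert one prompt/response pair with a UTC timestamp; return its row id."""
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO traces (timestamp, prompt, response) VALUES (?, ?, ?)",
        (ts, prompt, response),
    )
    conn.commit()
    return cur.lastrowid
```

Timestamping each row at insert time is what makes the store useful for forensics: the log records when an interaction happened, not just what was said.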
🧪 Example Usage with the Claude Desktop Client
📥 Example 1: Store a Trace
Store a trace with the prompt "how to make pasta sauce" and the response "Tomatoes, olive oil, garlic, basil, salt, pepper"
Tool call (automatically handled if integrated via Claude Desktop):
{
"tool": "store_trace",
"args": {
"prompt": "how to make pasta sauce",
"response": "Tomatoes, olive oil, garlic, basil, salt, pepper"
}
}
🔍 Example 2: Search Traces
{
"tool": "search_traces",
"args": {
"query": "pasta sauce"
}
}
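One plausible way a search_traces tool could answer this query is a substring match over both the prompt and response columns via SQL LIKE (case-insensitive for ASCII in SQLite by default). The schema and sample rows below are assumed for illustration:

```python
# Hypothetical sketch of trace search via SQL LIKE (stdlib only).
import sqlite3


def search_traces(conn, query):
    """Return traces whose prompt or response contains the query substring."""
    like = f"%{query}%"
    rows = conn.execute(
        "SELECT id, prompt, response FROM traces "
        "WHERE prompt LIKE ? OR response LIKE ?",
        (like, like),
    ).fetchall()
    return [dict(zip(("id", "prompt", "response"), r)) for r in rows]


# Demo against an in-memory database with the assumed schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE traces (id INTEGER PRIMARY KEY, prompt TEXT, response TEXT)"
)
conn.executemany(
    "INSERT INTO traces (prompt, response) VALUES (?, ?)",
    [
        ("how to make pasta sauce", "Tomatoes, olive oil, garlic"),
        ("capital of France", "Paris"),
    ],
)
hits = search_traces(conn, "pasta sauce")
```

Parameterized queries (the ? placeholders) matter here: search input comes from the model or the user, so it must never be interpolated into the SQL string directly.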
📤 Example 3: Export Traces to CSV
{
"tool": "export_traces_to_csv",
"args": {}
}
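An export_traces_to_csv tool could be as simple as streaming every row through the stdlib csv writer, which handles quoting of commas and newlines inside prompts and responses. The schema and function name below are illustrative assumptions:

```python
# Hypothetical sketch of CSV export for stored traces (stdlib only).
import csv
import io
import sqlite3


def export_traces_to_csv(conn, out):
    """Write all stored traces to `out` (a text stream) as CSV with a header row."""
    writer = csv.writer(out)
    writer.writerow(["id", "prompt", "response"])
    for row in conn.execute("SELECT id, prompt, response FROM traces ORDER BY id"):
        writer.writerow(row)


# Demo: export one trace from an in-memory database with the assumed schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE traces (id INTEGER PRIMARY KEY, prompt TEXT, response TEXT)"
)
conn.execute(
    "INSERT INTO traces (prompt, response) VALUES (?, ?)",
    ("how to make pasta sauce", "Tomatoes, olive oil, garlic"),
)
buf = io.StringIO()
export_traces_to_csv(conn, buf)
```

Writing to a file instead is a one-line change: pass `open("traces.csv", "w", newline="")` as `out` (the `newline=""` argument is the csv module's documented requirement for file output).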