GoomeGum/mcp-server
# MCP Server - Context Enhancement Tool

A FastAPI-based Model Context Protocol (MCP) server that enhances simple text contexts into elaborate, detailed prompts, with caching functionality. Developed for Google Colab with GPU support.
## Features
- Context Elaboration Tool: Transform simple text contexts into detailed, elaborate prompts
- Cached Prompts: Access pre-generated elaborate prompts by concept ID
- Prompt Cache Management: List all cached elaborate prompts
- CSV Data Loading: Load pre-trained elaborate prompts from CSV files
## Project Structure

```
mcp_server/
├── main.py                       # FastAPI application with MCP endpoints
├── models.py                     # Pydantic models for request/response schemas
├── cache.py                      # Caching functionality and prompt elaboration
├── requirements.txt              # Python dependencies
├── similarity_results_train.csv  # Training data for cached elaborate prompts
├── MCPServer.ipynb               # Google Colab notebook implementation
└── README.md                     # This file
```
## API Endpoints

### Tools

- `POST /tool/get_elaborate_description_prompt` - Transform a simple context into an elaborate prompt

### Resources

- `GET /resource/cached_description/{concept_id}` - Get a cached elaborate prompt by concept ID
- `GET /resource/cached_description_list` - List all cached elaborate prompts
## Quick Start

### For Google Colab (Recommended)

1. Open the `MCPServer.ipynb` notebook in Google Colab.
2. Run all cells to install dependencies and start the server.
3. The server creates a public ngrok tunnel; access the API at `{ngrok_url}/docs`.
### For Local Development (GPU required)

1. Install dependencies: `pip install -r requirements.txt`
2. Start the server.
3. Access the API documentation at `http://localhost:8000/docs`.
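Since `main.py` holds the FastAPI application (per the project structure above), a typical way to start it locally is with uvicorn. The app variable name `app` is an assumption:

```shell
# Install dependencies, then serve the FastAPI app (assumes `app` is defined in main.py)
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 8000
```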
## Usage Examples

### Generate an Elaborate Prompt

```bash
curl -X POST "http://localhost:8000/tool/get_elaborate_description_prompt" \
  -H "Content-Type: application/json" \
  -d '{"context": "a sunset over mountains"}'
```

### Get a Cached Elaborate Prompt

```bash
curl "http://localhost:8000/resource/cached_description/concept123"
```

### List All Cached Elaborate Prompts

```bash
curl "http://localhost:8000/resource/cached_description_list"
```
## Data Format

The server loads cached elaborate prompts from `similarity_results_train.csv`, which has the following columns:

- `input`: the simple concept/context identifier
- `finetuned_model_answer`: the elaborate, enhanced prompt
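A CSV with those two columns maps naturally onto a concept-to-prompt dictionary. The loader below is a hypothetical sketch of how the cache might be built (the sample rows are invented; only the column names come from this README):

```python
import csv
import io

# Invented sample data matching the documented column layout.
SAMPLE_CSV = """input,finetuned_model_answer
sunset,"A breathtaking sunset over jagged mountain peaks, bathed in golden light"
ocean,"A vast turquoise ocean stretching to the horizon under a clear sky"
"""

def load_prompt_cache(fileobj):
    """Map each `input` concept to its pre-generated elaborate prompt."""
    return {
        row["input"]: row["finetuned_model_answer"]
        for row in csv.DictReader(fileobj)
    }

cache = load_prompt_cache(io.StringIO(SAMPLE_CSV))
print(cache["sunset"])
```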
## Development
This server is designed to work with Phi-4 or similar language models for transforming simple text contexts into elaborate, detailed prompts. The project is optimized for Google Colab with GPU support for running the fine-tuned model efficiently.
### Key Features for Colab

- ngrok integration: creates public tunnels for accessing the server externally
- GPU support: leverages Colab's free GPU for model inference
- Pre-configured setup: all dependencies and model loading are handled in the notebook
- Easy access: server documentation is available at `{ngrok_url}/docs` after startup
### Local Development Notes

The current implementation includes placeholder logic in `cache.py` that can be replaced with actual model calls for context enhancement and prompt elaboration when running locally without GPU access.
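Such placeholder logic might look like the sketch below: a cached function that expands a context via a fixed template instead of calling the fine-tuned model. The function name, template, and use of `lru_cache` are assumptions about `cache.py`, not its actual contents:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def elaborate_context(context: str) -> str:
    """Stand-in for a model call: expand a short context into a richer prompt.

    Replace the template below with an actual call to Phi-4 (or a similar
    model) when GPU access is available.
    """
    return (
        f"Create a vivid, highly detailed description of {context}, "
        "covering lighting, colors, textures, mood, and composition."
    )

print(elaborate_context("a sunset over mountains"))
```

Because the function is memoized, repeated requests for the same context return the cached prompt without recomputation, which mirrors the server's caching behavior described above.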