Axyor/family-serve-delicious
🥗🤖 Family Serve Delicious MCP Server
Model Context Protocol (MCP) server powering AI‑driven, constraint‑aware meal planning for families & groups with local LLM models.
family-serve-delicious bridges local LLM models with structured nutritional & preference data. It fetches groups, applies constraints (allergies, restrictions, health goals) and exposes normalized MCP resources, tools, and prompts so the model can reason safely and generate reliable meal recommendations.
Data layer:
@axyor/family-serve-database – Complete database abstraction with services, validation, and domain entities. Provides TypeScript interfaces, enums, and business logic for family dietary management.
Disclaimer: Menu suggestions generated by this server (or any connected LLM model) are provided for informational purposes only. The project author cannot be held responsible for any errors, omissions, or consequences resulting from model suggestions. It is the user's responsibility to verify that proposed menus do not contain allergens or ingredients dangerous to any group member.
📑 Table of Contents
- Core Capabilities
- MCP Architecture
- Resources, Tools & Prompts
- LLM Integration Workflow
- Privacy & Anonymization
- Quick Start
- Development Scripts
- AI Client Integration
- Configuration
- Prompt Selection Strategy
- Testing
- License
✨ Core Capabilities
🎯 MCP Primitives
- 📦 Resources: Full group data access via URI templates
- 🛠️ Tools: 4 specialized tools for group discovery and context retrieval
- 💬 Prompts: 4 multilingual prompt templates for common meal planning scenarios
🍽️ Meal Planning Features
- 🍽️ Multi‑profile contextual meal recommendations
- 🛡️ Strict enforcement of allergies, restrictions, dislikes
- 🧠 Lightweight RAG: targeted group context injection into the LLM
- 🦊 Smart group lookup (avoids loading unnecessary data)
- 🌍 Multilingual support (English, French, Spanish)
💾 Data & Architecture
- 🏘️ Structured data model (Group / MemberProfile)
- 🔐 Privacy-first anonymization and aggregation
- 🐳 Docker-ready with MongoDB persistence
- ⚡ Optimized for local LLMs (token-efficient prompts)
🧩 MCP Architecture
The server implements the complete Model Context Protocol specification:
- Data Source: MongoDB via the @axyor/family-serve-database package
- MCP Resource: groups://{groupId} – Full group serialization
- MCP Tools (4):
  - find-group-by-name – Fast ID resolution
  - groups-summary – Paginated lightweight list
  - group-recipe-context – Aggregated anonymized context
  - find-members-by-restriction – Targeted filtering
- MCP Prompts (4):
  - meal-planning-system – Base system prompt
  - plan-family-meals – Constraint-aware meal suggestions
  - quick-meal-suggestions – Fast meal ideas
  - weekly-meal-plan – Multi-day planning with shopping list
- Transport: stdio (standard input/output) for universal MCP client compatibility
- LLM Integration: Combination of injected context + tool results + prompt templates
🗂️ Resources, Tools & Prompts
The Family Serve Delicious MCP server exposes three types of MCP primitives for AI-powered meal planning:
📦 Resources
| Name | URI Pattern | Description | Use Case |
|---|---|---|---|
| group | groups://{groupId} | Full group details with member profiles | When you need names or detailed personal information |
🛠️ Tools
| Name | Description | Token Cost | Recommended Use |
|---|---|---|---|
| find-group-by-name | Resolve group name → ID | Very low | First step: find target group |
| groups-summary | List all groups (no members) | Low | Browse/explore available groups |
| group-recipe-context | Aggregated anonymized constraints | Low→Medium | Primary meal planning data source |
| find-members-by-restriction | Filter members by dietary restriction | Low→Medium | Targeted constraint investigation |
💬 MCP Prompts
Built-in prompt templates for common meal planning scenarios:
| Name | Description | Key Parameters |
|---|---|---|
| meal-planning-system | Base system prompt for meal planning | language, format, groupId? |
| plan-family-meals | Generate meal suggestions with constraints | groupId, mealType?, servings?, budget? |
| quick-meal-suggestions | Get 3-5 quick meal ideas (≤30min) | groupId, language? |
| weekly-meal-plan | Create weekly meal plan + shopping list | groupId, days?, includeBreakfast?, includeLunch?, includeDinner? |
Multilingual Support: All prompts available in English (en), French (fr), and Spanish (es)
Client Compatibility: Works with any MCP client supporting prompts (Claude Desktop, VS Code Copilot, etc.)
Example Usage:
// Using the weekly-meal-plan prompt
{
name: "weekly-meal-plan",
arguments: {
groupId: "family-alpha",
days: 7,
includeBreakfast: true,
includeLunch: true,
includeDinner: true,
language: "fr"
}
}
🔌 LLM Integration Workflow
Suggested strategy:
1. Resolve the group (via find-group-by-name, or browse groups-summary).
2. Fetch group-recipe-context for anonymized, aggregated constraints.
3. (Optional) Use find-members-by-restriction for focused constraint clarification.
This minimizes token usage and keeps reasoning focused.
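The suggested strategy can be sketched as a short client-side routine. This is an illustrative stub, not the server's code: `callTool` is a hypothetical stand-in for your MCP client's tool invocation, and the hard-coded responses only mimic the envelope shapes documented below.

```typescript
// Illustrative workflow sketch; `callTool` stands in for a real MCP client
// call over stdio, and its stubbed responses are for demonstration only.
type ToolResult = Record<string, unknown>;

async function callTool(name: string, args: ToolResult): Promise<ToolResult> {
  if (name === "find-group-by-name")
    return { type: "find-group-by-name", schemaVersion: 1, id: "g1" };
  if (name === "group-recipe-context")
    return { type: "group-recipe-context", schemaVersion: 1, allergies: ["peanut"] };
  throw new Error(`unknown tool: ${name}`);
}

async function planningContext(groupName: string): Promise<ToolResult> {
  // Step 1: resolve the group name to an ID (very low token cost).
  const found = await callTool("find-group-by-name", { name: groupName });
  // Step 2: fetch anonymized aggregated constraints for that group.
  return callTool("group-recipe-context", { groupId: found.id as string });
}
```

The point of the two-step shape is that the cheap lookup runs first, so the model never loads full group data it does not need.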
Returned formats:
- Every tool wraps its data with type and schemaVersion fields (e.g. { "type": "groups-summary", "schemaVersion": 1, ... }).
- Empty / null fields are pruned server-side to reduce tokens.
- Pagination: limit (≤100) & offset for groups-summary.
Example minimal groups-summary response:
{
"type": "groups-summary",
"schemaVersion": 1,
"total": 3,
"limit": 20,
"offset": 0,
"count": 3,
"groups": [
{ "id": "g1", "name": "Alpha", "membersCount": 2 },
{ "id": "g2", "name": "Beta", "membersCount": 1 },
{ "id": "g3", "name": "Gamma", "membersCount": 0 }
]
}
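A client can walk the limit/offset pagination using only the fields in this envelope. The following is a minimal sketch under the assumption that the envelope always carries total, offset, and count as shown; the interface names are illustrative.

```typescript
// Client-side pagination helper over the groups-summary envelope.
// Types mirror the example response; they are illustrative, not official.
interface GroupSummary { id: string; name: string; membersCount: number; }
interface GroupsSummaryPage {
  type: "groups-summary";
  schemaVersion: number;
  total: number;
  limit: number;
  offset: number;
  count: number;
  groups: GroupSummary[];
}

function nextOffset(page: GroupsSummaryPage): number | null {
  // Offset for the following page, or null once every group is consumed.
  const consumed = page.offset + page.count;
  return consumed < page.total ? consumed : null;
}
```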
Example group-recipe-context response (simplified):
{
"type": "group-recipe-context",
"schemaVersion": 1,
"group": { "id": "g1", "name": "Alpha", "size": 2 },
"members": [ { "id": "m-1", "alias": "M1", "ageGroup": "adult" } ],
"segments": { "ageGroups": { "adult": 2 } },
"allergies": [],
"hardRestrictions": [],
"stats": { "cookingSkillSpread": {} },
"hash": "sha256:abcd1234ef567890"
}
🔒 Privacy & Anonymization
The group-recipe-context tool implements a privacy-first design while preserving every nutritional constraint needed for meal planning.
Key Features:
- Data minimization: Only essential constraint data (allergies, restrictions, age groups)
- Pseudonymization: Members identified as M1, M2, etc. instead of real names
- Aggregation: Focus on patterns, not individuals (allergy counts, age group distribution)
- Hash-based caching: sha256: prefix enables efficient context reuse without re-injection
Two-layer architecture:
- groups://{id} resource: Full personal data (use only for personalization)
- group-recipe-context tool: Anonymized constraints (default for meal planning)
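The pseudonymization and aggregation steps can be sketched in a few lines. This is a simplified illustration of the idea, not the server's implementation; the Member shape and function name are assumptions.

```typescript
// Sketch: replace real names with M1, M2 aliases and aggregate allergies
// into group-level counts (the pattern group-recipe-context relies on).
interface Member { firstName: string; ageGroup: string; allergies: string[]; }

function anonymize(members: Member[]) {
  // Aliases are positional, so no personal identifier leaves this function.
  const aliased = members.map((m, i) => ({ alias: `M${i + 1}`, ageGroup: m.ageGroup }));
  // Count allergies across the group instead of listing them per person.
  const allergyCounts: Record<string, number> = {};
  for (const m of members)
    for (const a of m.allergies) allergyCounts[a] = (allergyCounts[a] ?? 0) + 1;
  return { members: aliased, allergyCounts };
}
```

The counts keep the constraint ("2 members are allergic to peanut") while dropping the identity of who is.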
🚀 Quick Start
🔒 Security Configuration (Required)
CRITICAL: This project requires authentication credentials for MongoDB and Mongo Express.
1. Generate Strong Credentials
# Generate MongoDB admin password (32 characters)
openssl rand -base64 32 | head -c 32
# Generate Mongo Express password (32 characters)
openssl rand -base64 32 | head -c 32
2. Configure Environment Variables
Create a .env file from the template:
cp .env.example .env
Edit .env and set the following variables:
# MongoDB Authentication (REQUIRED)
MONGODB_USERNAME=family_serve_admin
MONGODB_PASSWORD=<paste_generated_password_here>
# Mongo Express Authentication (REQUIRED)
ME_USERNAME=admin_custom
ME_PASSWORD=<paste_generated_password_here>
# GitHub Token (for private packages)
GITHUB_TOKEN=<your_github_token>
# MongoDB URI (will use credentials automatically)
MONGODB_URI=mongodb://${MONGODB_USERNAME}:${MONGODB_PASSWORD}@mongodb:27017/family_serve?authSource=admin
NODE_ENV=production
🔑 GitHub Token Configuration (Required)
This project uses a private npm package @axyor/family-serve-database. Configure your GitHub Personal Access Token:
./manage.sh setup
⚡ Instant Development
git clone <repository-url>
cd your-project-name
nvm use || echo "(Optional) Use Node 22 LTS: nvm install 22"
npm install
# 1. Configure authentication (see above)
cp .env.example .env
# Edit .env with your credentials
# 2. Setup GitHub token + build
./manage.sh setup
# 3. Start development environment
npm run dev
🛠️ Development Scripts
The project follows standard Node.js conventions for everyday operations:
🚀 Development Scripts
npm run dev # Start development environment (Docker)
npm run build # Build TypeScript application
npm run test # Run all tests
npm run start # Start built application locally
🐳 Docker Operations
npm run prod # Start production environment
npm run stop # Stop all Docker services
npm run status # Show container status
npm run logs # Show all container logs
npm run logs:app # Show application logs only
npm run clean # Clean all containers and volumes
🗄️ Database Operations
npm run db:start # Start MongoDB only
npm run db:stop # Stop MongoDB only
npm run db:gui # Start MongoDB web interface
⚙️ Configuration
npm run lm-studio # Generate LM Studio config
npm run claude # Generate Claude Desktop config
🛠️ Essential Management
For complex operations, use the simplified manage.sh:
./manage.sh setup # Complete project initialization
./manage.sh lmstudio config # Generate LM Studio configuration
./manage.sh lmstudio help # Open LM Studio setup guide
./manage.sh claude config # Generate Claude Desktop configuration
./manage.sh claude help # Show Claude Desktop setup guide
./manage.sh reset # Complete system reset (destructive)
🤖 AI Client Integration
The Family Serve Delicious MCP server integrates seamlessly with popular AI clients:
🎯 Claude Desktop Integration
Claude Desktop provides native MCP support with an intuitive interface:
Quick Setup:
# 1. Generate configuration
npm run claude
# 2. Start MongoDB
npm run db:start
# 3. Build the application
npm run build
# 4. Copy generated config to Claude Desktop
# Configuration file: config/claude_desktop_mcp_config.json
Configuration Location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
🧠 LM Studio Integration
LM Studio offers powerful local LLM capabilities with MCP protocol support:
Quick Setup:
# 1. Generate configuration
npm run lm-studio
# 2. Start MongoDB
npm run db:start
# 3. Build the application
npm run build
# 4. Add server to LM Studio
# Configuration file: config/lm_studio_mcp_config.json
Configuration Location:
- Windows: %APPDATA%\LMStudio\mcp_servers.json
- macOS: ~/Library/Application Support/LMStudio/mcp_servers.json
- Linux: ~/.config/LMStudio/mcp_servers.json
Verify Connection:
- Open LM Studio → My Projects
- Check that "family-serve-delicious" appears in available servers
- Test with: "Use the groups-summary tool to show available groups"
🔧 Other MCP Clients
The server uses standard stdio transport and works with any MCP-compatible client:
- VS Code Copilot - Configure via MCP settings
- Continue.dev - Add to MCP servers configuration
- Custom Clients - Use the MCP SDK with stdio transport
Generic Configuration (with authentication):
{
"command": "node",
"args": ["/absolute/path/to/family-serve-delicious/dist/index.js"],
"env": {
"MONGODB_URI": "mongodb://your_username:your_password@localhost:27017/family_serve?authSource=admin",
"NODE_ENV": "production",
"OUTPUT_VALIDATION_MODE": "warn",
"OUTPUT_VALIDATION_MAX_LENGTH": "50000",
"OUTPUT_VALIDATION_LOG_PATH": "logs/output-validation.log"
}
}
⚙️ Configuration
Environment Variables
Required Variables:
| Name | Description | Example |
|---|---|---|
| MONGODB_USERNAME | MongoDB admin username | family_serve_admin |
| MONGODB_PASSWORD | MongoDB admin password (32+ chars) | <generated_strong_password> |
| ME_USERNAME | Mongo Express username | admin_custom |
| ME_PASSWORD | Mongo Express password (32+ chars) | <generated_strong_password> |
| MONGODB_URI | MongoDB connection string with auth | mongodb://user:pass@mongodb:27017/family_serve?authSource=admin |
| GITHUB_TOKEN | GitHub PAT for private packages | ghp_xxxxxxxxxxxxx |
| NODE_ENV | Environment mode | development or production |
Generating Strong Passwords:
# Generate a 32-character random password
openssl rand -base64 32 | head -c 32
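If openssl is unavailable, Node's built-in crypto module produces an equivalent password. This is a suggested alternative, not part of the project's scripts; note that base64 output may contain `+` and `/` characters.

```typescript
// Node alternative to the openssl one-liner: 24 random bytes encode to
// exactly 32 base64 characters.
import { randomBytes } from "node:crypto";

function generatePassword(chars = 32): string {
  // base64 expands 3 bytes into 4 characters, so request ceil(chars * 3/4)
  // bytes and trim any overshoot.
  return randomBytes(Math.ceil((chars * 3) / 4)).toString("base64").slice(0, chars);
}
```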
Optional Safety Controls:
| Name | Description | Default |
|---|---|---|
| OUTPUT_VALIDATION_MODE | Enforcement mode for tool responses (warn, mask, block) | warn |
| OUTPUT_VALIDATION_MAX_LENGTH | Character threshold before large-output warnings | 50000 |
| OUTPUT_VALIDATION_LOG_PATH | Destination for JSONL audit records | logs/output-validation.log |
| ALLOW_RAW_CONTEXT | Allow non-anonymized member data in group-recipe-context | false |
Privacy Note: By default, group-recipe-context always returns anonymized member data (aliases and age groups only). Set ALLOW_RAW_CONTEXT=true to permit the anonymize=false parameter, which exposes first/last names. This is recommended only for trusted, private deployments.
GitHub Token Setup
For private package access, configure your GitHub Personal Access Token:
1. Generate Token
   - Go to GitHub → Settings → Developer settings → Personal access tokens
   - Generate a new token with the packages:read scope
2. Configure npm
   ./manage.sh setup
   This will prompt for your GitHub username and token.
Allergen Synonyms Configuration
The server uses config/allergen-synonyms.json for multilingual allergen normalization.
Format:
{
"peanut": ["peanut", "peanuts", "arachide", "cacahuete"],
"dairy": ["dairy", "milk", "lait"]
}
Implementation:
- Lazy loading: Index built on first group-recipe-context call
- Reverse index: O(1) lookup (synonym → canonical form)
- Fallback: Unknown allergens fall back to their lowercase form
- Case-insensitive: All comparisons normalized
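The reverse-index approach can be sketched as follows, using the allergen-synonyms.json format shown above. This is an illustration of the technique described, not the server's actual code; the function names are assumptions.

```typescript
// Build a synonym → canonical reverse index from the config format above,
// giving O(1), case-insensitive normalization with a lowercase fallback.
type SynonymMap = Record<string, string[]>;

function buildReverseIndex(synonyms: SynonymMap): Map<string, string> {
  const index = new Map<string, string>();
  for (const [canonical, variants] of Object.entries(synonyms))
    for (const v of variants) index.set(v.toLowerCase(), canonical);
  return index;
}

function normalizeAllergen(index: Map<string, string>, raw: string): string {
  // Fallback: allergens missing from the config keep their lowercase form.
  return index.get(raw.toLowerCase()) ?? raw.toLowerCase();
}
```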
Usage: Restart server after editing (no dynamic reload).
Preference Pattern Configuration
The server uses config/preference-patterns.json for multilingual negative preference detection.
Format:
{
"dislikeIndicators": ["déteste", "dislike", "hate"],
"avoidIndicators": ["éviter", "avoid", "vermeide"],
"excludeIndicators": ["sans", "without", "sin", "no"],
"splitDelimitersRegex": ",|;|/| et | and | y | ou "
}
Processing logic:
- Detect indicators: Longest-first matching (case-insensitive)
- Extract tokens: Split remaining text by regex delimiters
- Store as dislikes: Added to soft preference constraints
Usage: Restart server after editing (no dynamic reload).
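The three processing steps can be sketched against the config format shown above. This is a simplified illustration of the described pipeline (it handles one indicator category), not the server's parser; the function name is an assumption.

```typescript
// Sketch of negative-preference detection: longest-first indicator match,
// then split the remaining text by the configured delimiter regex.
interface Patterns { dislikeIndicators: string[]; splitDelimitersRegex: string; }

function extractDislikes(text: string, patterns: Patterns): string[] {
  const lower = text.toLowerCase();
  // Longest-first so longer indicators win over short substrings.
  const indicators = [...patterns.dislikeIndicators].sort((a, b) => b.length - a.length);
  const hit = indicators.find((ind) => lower.includes(ind.toLowerCase()));
  if (!hit) return [];
  // Everything after the indicator is split into individual dislike tokens.
  const rest = lower.split(hit.toLowerCase())[1] ?? "";
  return rest
    .split(new RegExp(patterns.splitDelimitersRegex))
    .map((t) => t.trim())
    .filter(Boolean);
}
```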
Example Family Data
The project includes a complete example family in config/family-example-template.json to help you understand the data structure and test the system.
Usage:
# Load the example data into your database
# (Implementation depends on your database setup scripts)
# The file serves as a perfect template for creating your own families
Customization: Use this template as a starting point to create your own family profiles with appropriate dietary restrictions, preferences, and cooking skills.
Runtime Requirements:
- Node.js 22.x (LTS) – see .nvmrc
- MongoDB for data storage
🧠 Local LLM Compatibility
Key Insight: Local LLMs with limited context windows need optimized prompts to function effectively.
Prompt Compatibility Matrix
| LLM Category | Context Window | Recommended Prompt | Examples |
|---|---|---|---|
| Large (≥70B) | 32K+ tokens | system-full.md (1,600 tokens) | GPT-4, Claude 3 |
| Medium (13-70B) | 4K-16K tokens | system.short.md (190 tokens) | Llama 2 70B, Mixtral 8x7B |
| Small (7-20B) | 2K-8K tokens | system.short.md (Essential) | GPT OSS 20B, Llama 7B-13B |
| Embedded | <4K tokens | system.short.md (Critical) | Edge deployment models |
Tested Models
- ✅ GPT OSS 20B + system.short.md → Works well
- ❌ GPT OSS 20B + system-full.md → Token budget exceeded
- ✅ Llama 7B-13B → Requires system.short.md
- ✅ Mixtral 8x7B → Both prompts work; prefer short for efficiency
📋 Prompt Selection Strategy
The MCP server provides two system prompt versions optimized for different LLM capabilities:
| Prompt File | Size | Best For |
|---|---|---|
| system.short.md | ~190 tokens | Small LLMs (7B-20B), limited context (≤16K) |
| system-full.md | ~1,600 tokens | Large LLMs (≥70B), generous context (≥32K) |
Quick Selection:
- Use system.short.md for most local/smaller LLMs (GPT OSS 20B, Llama 7B-13B)
- Use system-full.md for large cloud models or 70B+ local models
Implementation: Copy the appropriate prompt file content to your MCP client configuration.
# For small LLMs
cat src/prompts/en/system.short.md
# For large LLMs
cat src/prompts/en/system-full.md
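The selection rule from the table above reduces to a one-line helper. This is a convenience sketch (the function name and 32K cutoff threshold are taken from the compatibility matrix, not from project code):

```typescript
// Pick a system prompt file from the model's context window size, per the
// compatibility matrix: ≥32K tokens comfortably fits the ~1,600-token full
// prompt; anything smaller should use the ~190-token short prompt.
function selectPromptFile(contextTokens: number): string {
  return contextTokens >= 32_000 ? "system-full.md" : "system.short.md";
}
```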
🧪 Testing
The project includes comprehensive test coverage with unit and integration tests:
npm test # Run all tests
npm test -- --watch # Run tests in watch mode
npm test -- --coverage # Generate coverage report
📜 License
Distributed under AGPL-3.0-or-later. See LICENSE.
Private server use is allowed; if you distribute a modified version as a network service, you must publish the corresponding source (network copyleft).