
🥗🤖 Family Serve Delicious MCP Server

Model Context Protocol (MCP) server powering AI‑driven, constraint‑aware meal planning for families & groups with local LLMs.


family-serve-delicious bridges local LLMs with structured nutritional & preference data. It fetches groups, applies constraints (allergies, restrictions, health goals), and exposes normalized MCP resources, tools, and prompts so the model can reason safely and generate reliable meal recommendations.

Data layer: @axyor/family-serve-database - Complete database abstraction with services, validation, and domain entities. Provides TypeScript interfaces, enums, and business logic for family dietary management.

Disclaimer: Menu suggestions generated by this server (or any connected LLM) are provided for informational purposes only. The project author cannot be held responsible for any errors, omissions, or consequences resulting from model suggestions. It is the user's responsibility to verify that proposed menus contain no allergens or ingredients dangerous to any group member.

📑 Table of Contents

  1. Core Capabilities
  2. MCP Architecture
  3. Resources, Tools & Prompts
  4. LLM Integration Workflow
  5. Privacy & Anonymization
  6. Quick Start
  7. Development Scripts
  8. AI Client Integration
  9. Configuration
  10. Prompt Selection Strategy
  11. Testing
  12. License

✨ Core Capabilities

🎯 MCP Primitives

  • 📦 Resources: Full group data access via URI templates
  • 🛠️ Tools: 4 specialized tools for group discovery and context retrieval
  • 💬 Prompts: 4 multilingual prompt templates for common meal planning scenarios

🍽️ Meal Planning Features

  • 🍽️ Multi‑profile contextual meal recommendations
  • 🛡️ Strict enforcement of allergies, restrictions, dislikes
  • 🧠 Lightweight RAG: targeted group context injection into the LLM
  • 🦊 Smart group lookup (avoid loading unnecessary data)
  • 🌍 Multilingual support (English, French, Spanish)

💾 Data & Architecture

  • 🏘️ Structured data model (Group / MemberProfile)
  • 🔐 Privacy-first anonymization and aggregation
  • 🐳 Docker-ready with MongoDB persistence
  • ⚡ Optimized for local LLMs (token-efficient prompts)

🧩 MCP Architecture

The server implements the complete Model Context Protocol specification:

  1. Data Source: MongoDB via @axyor/family-serve-database package
  2. MCP Resource: groups://{groupId} - Full group serialization
  3. MCP Tools (4):
    • find-group-by-name - Fast ID resolution
    • groups-summary - Paginated lightweight list
    • group-recipe-context - Aggregated anonymized context
    • find-members-by-restriction - Targeted filtering
  4. MCP Prompts (4):
    • meal-planning-system - Base system prompt
    • plan-family-meals - Constraint-aware meal suggestions
    • quick-meal-suggestions - Fast meal ideas
    • weekly-meal-plan - Multi-day planning with shopping list
  5. Transport: stdio (standard input/output) for universal MCP client compatibility
  6. LLM Integration: Combination of injected context + tool results + prompt templates

🗂️ Resources, Tools & Prompts

The Family Serve Delicious MCP server exposes three types of MCP primitives for AI-powered meal planning:

📦 Resources

| Name | URI Pattern | Description | Use Case |
|------|-------------|-------------|----------|
| group | groups://{groupId} | Full group details with member profiles | When you need names or detailed personal information |

🛠️ Tools

| Name | Description | Token Cost | Recommended Use |
|------|-------------|------------|-----------------|
| find-group-by-name | Resolve group name → ID | Very low | First step: find target group |
| groups-summary | List all groups (no members) | Low | Browse/explore available groups |
| group-recipe-context | Aggregated anonymized constraints | Low→Medium | Primary meal planning data source |
| find-members-by-restriction | Filter members by dietary restriction | Low→Medium | Targeted constraint investigation |

💬 MCP Prompts

Built-in prompt templates for common meal planning scenarios:

| Name | Description | Key Parameters |
|------|-------------|----------------|
| meal-planning-system | Base system prompt for meal planning | language, format, groupId? |
| plan-family-meals | Generate meal suggestions with constraints | groupId, mealType?, servings?, budget? |
| quick-meal-suggestions | Get 3-5 quick meal ideas (≤30 min) | groupId, language? |
| weekly-meal-plan | Create weekly meal plan + shopping list | groupId, days?, includeBreakfast?, includeLunch?, includeDinner? |

Multilingual Support: All prompts available in English (en), French (fr), and Spanish (es)
Client Compatibility: Works with any MCP client supporting prompts (Claude Desktop, VS Code Copilot, etc.)

Example Usage:

// Using the weekly-meal-plan prompt
{
  name: "weekly-meal-plan",
  arguments: {
    groupId: "family-alpha",
    days: 7,
    includeBreakfast: true,
    includeLunch: true,
    includeDinner: true,
    language: "fr"
  }
}

🔌 LLM Integration Workflow

Suggested strategy:

  1. Resolve group (via find-group-by-name or browse groups-summary).
  2. Fetch group-recipe-context for anonymized aggregated constraints.
  3. (Optional) Use find-members-by-restriction for focused constraint clarification.

This minimizes token usage and keeps reasoning focused.
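The three-step strategy above can be sketched as a small orchestration helper. This is an illustrative sketch, not the server's API: it assumes find-group-by-name takes a name argument and returns an object with an id field, which may differ from the real schema. The callTool parameter stands in for whatever your MCP client exposes.

```typescript
// Any MCP client call function: tool name + arguments → JSON result.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function gatherMealContext(callTool: CallTool, groupName: string): Promise<unknown> {
  // Step 1: resolve the group name to an ID (the cheapest call).
  const found = (await callTool("find-group-by-name", { name: groupName })) as { id?: string };
  if (!found?.id) throw new Error(`group not found: ${groupName}`);

  // Step 2: fetch the anonymized, aggregated constraint context for that group.
  // Step 3 (find-members-by-restriction) would only be added when a constraint
  // needs clarification, keeping the token budget small.
  return callTool("group-recipe-context", { groupId: found.id });
}
```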

Returned formats:

  • Every tool wraps data with type and schemaVersion fields (e.g. { "type": "groups-summary", "schemaVersion": 1, ... }).
  • Empty / null fields pruned server-side to reduce tokens.
  • Pagination: limit (≤100) & offset for groups-summary.

Example minimal groups-summary response:

{
	"type": "groups-summary",
	"schemaVersion": 1,
	"total": 3,
	"limit": 20,
	"offset": 0,
	"count": 3,
	"groups": [
		{ "id": "g1", "name": "Alpha", "membersCount": 2 },
		{ "id": "g2", "name": "Beta",  "membersCount": 1 },
		{ "id": "g3", "name": "Gamma", "membersCount": 0 }
	]
}

Example group-recipe-context response (simplified):

{
	"type": "group-recipe-context",
	"schemaVersion": 1,
	"group": { "id": "g1", "name": "Alpha", "size": 2 },
	"members": [ { "id": "m-1", "alias": "M1", "ageGroup": "adult" } ],
	"segments": { "ageGroups": { "adult": 2 } },
	"allergies": [],
	"hardRestrictions": [],
	"stats": { "cookingSkillSpread": {} },
	"hash": "sha256:abcd1234ef567890"
}
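On the client side, these envelopes can be modeled and checked before use. A minimal TypeScript sketch — the interface and function names are illustrative, not part of a published client API; only the envelope fields (type, schemaVersion, the pagination fields, and the limit ≤ 100 cap) come from the formats above:

```typescript
// Fields shared by every tool response envelope.
interface ToolEnvelope {
  type: string;
  schemaVersion: number;
}

interface GroupSummary {
  id: string;
  name: string;
  membersCount: number;
}

interface GroupsSummaryEnvelope extends ToolEnvelope {
  type: "groups-summary";
  total: number;
  limit: number; // server caps this at 100
  offset: number;
  count: number;
  groups: GroupSummary[];
}

// Narrow an unknown tool result to a groups-summary envelope.
function isGroupsSummary(value: unknown): value is GroupsSummaryEnvelope {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Partial<GroupsSummaryEnvelope>;
  return v.type === "groups-summary" && v.schemaVersion === 1 && Array.isArray(v.groups);
}

// Clamp pagination parameters to the documented bounds before calling the tool.
function clampPagination(limit: number, offset: number): { limit: number; offset: number } {
  return {
    limit: Math.min(Math.max(1, Math.floor(limit)), 100),
    offset: Math.max(0, Math.floor(offset)),
  };
}
```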

🔒 Privacy & Anonymization

The group-recipe-context tool implements privacy-first design while preserving 100% of nutritional constraints needed for meal planning.

Key Features:

  • Data minimization: Only essential constraint data (allergies, restrictions, age groups)
  • Pseudonymization: Members identified as M1, M2, etc. instead of real names
  • Aggregation: Focus on patterns, not individuals (allergy counts, age group distribution)
  • Hash-based caching: sha256: prefix enables efficient context reuse without re-injection

Two-layer architecture:

  • groups://{id} resource: Full personal data (use only for personalization)
  • group-recipe-context tool: Anonymized constraints (default for meal planning)
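The anonymization pass can be pictured as a pure transform over member profiles. The sketch below is a simplified stand-in — the field names are assumptions, not the actual @axyor/family-serve-database entities — but it shows the three ideas: pseudonymized aliases (M1, M2, …), aggregation into counts, and a deterministic sha256: hash that lets a client detect whether a cached context is still current without re-injecting it:

```typescript
import { createHash } from "node:crypto";

interface MemberInput {
  firstName: string; // never leaves the server in anonymized mode
  ageGroup: "child" | "teen" | "adult" | "senior";
  allergies: string[];
}

interface AnonymizedContext {
  members: { alias: string; ageGroup: string }[];
  segments: { ageGroups: Record<string, number> };
  allergies: string[];
  hash: string;
}

function anonymize(members: MemberInput[]): AnonymizedContext {
  // Pseudonymize: stable M1, M2, ... aliases in input order; names are dropped entirely.
  const aliased = members.map((m, i) => ({ alias: `M${i + 1}`, ageGroup: m.ageGroup }));

  // Aggregate: counts per age group rather than per-person detail.
  const ageGroups: Record<string, number> = {};
  for (const m of members) {
    ageGroups[m.ageGroup] = (ageGroups[m.ageGroup] ?? 0) + 1;
  }

  // Union of allergies, deduplicated and sorted so the hash is deterministic.
  const allergies = [...new Set(members.flatMap((m) => m.allergies))].sort();

  const body = { members: aliased, segments: { ageGroups }, allergies };
  const hash = "sha256:" + createHash("sha256").update(JSON.stringify(body)).digest("hex");
  return { ...body, hash };
}
```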

🚀 Quick Start

🔒 Security Configuration (Required)

CRITICAL: This project requires authentication credentials for MongoDB and Mongo Express.

1. Generate Strong Credentials

# Generate MongoDB admin password (32 characters)
openssl rand -base64 32 | head -c 32

# Generate Mongo Express password (32 characters)
openssl rand -base64 32 | head -c 32

2. Configure Environment Variables

Create a .env file from the template:

cp .env.example .env

Edit .env and set the following variables:

# MongoDB Authentication (REQUIRED)
MONGODB_USERNAME=family_serve_admin
MONGODB_PASSWORD=<paste_generated_password_here>

# Mongo Express Authentication (REQUIRED)
ME_USERNAME=admin_custom
ME_PASSWORD=<paste_generated_password_here>

# GitHub Token (for private packages)
GITHUB_TOKEN=<your_github_token>

# MongoDB URI (will use credentials automatically)
MONGODB_URI=mongodb://${MONGODB_USERNAME}:${MONGODB_PASSWORD}@mongodb:27017/family_serve?authSource=admin
NODE_ENV=production

🔑 GitHub Token Configuration (Required)

This project uses a private npm package @axyor/family-serve-database. Configure your GitHub Personal Access Token:

./manage.sh setup

⚡ Instant Development

git clone <repository-url>
cd your-project-name
nvm use || echo "(Optional) Use Node 22 LTS: nvm install 22"
npm install

# 1. Configure authentication (see above)
cp .env.example .env
# Edit .env with your credentials

# 2. Setup GitHub token + build
./manage.sh setup

# 3. Start development environment
npm run dev

🛠️ Development Scripts

The project follows standard Node.js conventions for everyday operations:

🚀 Development Scripts

npm run dev              # Start development environment (Docker)
npm run build            # Build TypeScript application
npm run test             # Run all tests
npm run start            # Start built application locally

🐳 Docker Operations

npm run prod             # Start production environment
npm run stop             # Stop all Docker services
npm run status           # Show container status
npm run logs             # Show all container logs
npm run logs:app         # Show application logs only
npm run clean            # Clean all containers and volumes

🗄️ Database Operations

npm run db:start         # Start MongoDB only
npm run db:stop          # Stop MongoDB only
npm run db:gui           # Start MongoDB web interface

⚙️ Configuration

npm run lm-studio        # Generate LM Studio config
npm run claude           # Generate Claude Desktop config

🛠️ Essential Management

For complex operations, use the simplified manage.sh:

./manage.sh setup          # Complete project initialization
./manage.sh lmstudio config # Generate LM Studio configuration
./manage.sh lmstudio help   # Open LM Studio setup guide
./manage.sh claude config   # Generate Claude Desktop configuration
./manage.sh claude help     # Show Claude Desktop setup guide
./manage.sh reset           # Complete system reset (destructive)

🤖 AI Client Integration

The Family Serve Delicious MCP server integrates seamlessly with popular AI clients:

🎯 Claude Desktop Integration

Claude Desktop provides native MCP support with an intuitive interface:

Quick Setup:

# 1. Generate configuration
npm run claude

# 2. Start MongoDB
npm run db:start

# 3. Build the application
npm run build

# 4. Copy generated config to Claude Desktop
# Configuration file: config/claude_desktop_mcp_config.json

Configuration Location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json


🧠 LM Studio Integration

LM Studio offers powerful local LLM capabilities with MCP protocol support:

Quick Setup:

# 1. Generate configuration
npm run lm-studio

# 2. Start MongoDB
npm run db:start

# 3. Build the application
npm run build

# 4. Add server to LM Studio
# Configuration file: config/lm_studio_mcp_config.json

Configuration Location:

  • Windows: %APPDATA%\LMStudio\mcp_servers.json
  • macOS: ~/Library/Application Support/LMStudio/mcp_servers.json
  • Linux: ~/.config/LMStudio/mcp_servers.json

Verify Connection:

  1. Open LM Studio → My Projects
  2. Check that "family-serve-delicious" appears in available servers
  3. Test with: "Use the groups-summary tool to show available groups"


🔧 Other MCP Clients

The server uses standard stdio transport and works with any MCP-compatible client:

  • VS Code Copilot - Configure via MCP settings
  • Continue.dev - Add to MCP servers configuration
  • Custom Clients - Use the MCP SDK with stdio transport

Generic Configuration (with authentication):

{
  "command": "node",
  "args": ["/absolute/path/to/family-serve-delicious/dist/index.js"],
  "env": {
    "MONGODB_URI": "mongodb://your_username:your_password@localhost:27017/family_serve?authSource=admin",
    "NODE_ENV": "production",
    "OUTPUT_VALIDATION_MODE": "warn",
    "OUTPUT_VALIDATION_MAX_LENGTH": "50000",
    "OUTPUT_VALIDATION_LOG_PATH": "logs/output-validation.log"
  }
}

⚙️ Configuration

Environment Variables

Required Variables:

| Name | Description | Example |
|------|-------------|---------|
| MONGODB_USERNAME | MongoDB admin username | family_serve_admin |
| MONGODB_PASSWORD | MongoDB admin password (32+ chars) | <generated_strong_password> |
| ME_USERNAME | Mongo Express username | admin_custom |
| ME_PASSWORD | Mongo Express password (32+ chars) | <generated_strong_password> |
| MONGODB_URI | MongoDB connection string with auth | mongodb://user:pass@mongodb:27017/family_serve?authSource=admin |
| GITHUB_TOKEN | GitHub PAT for private packages | ghp_xxxxxxxxxxxxx |
| NODE_ENV | Environment mode | development or production |

Generating Strong Passwords:

# Generate a 32-character random password
openssl rand -base64 32 | head -c 32

Optional Safety Controls:

| Name | Description | Default |
|------|-------------|---------|
| OUTPUT_VALIDATION_MODE | Enforcement mode for tool responses (warn, mask, block) | warn |
| OUTPUT_VALIDATION_MAX_LENGTH | Character threshold before large-output warnings | 50000 |
| OUTPUT_VALIDATION_LOG_PATH | Destination for JSONL audit records | logs/output-validation.log |
| ALLOW_RAW_CONTEXT | Allow non-anonymized member data in group-recipe-context | false |

Privacy Note: By default, group-recipe-context always returns anonymized member data (aliases and age groups only). Set ALLOW_RAW_CONTEXT=true to permit the anonymize=false parameter, which exposes first/last names. This is recommended only for trusted, private deployments.
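To make the three enforcement modes concrete, here is a hypothetical sketch of how warn/mask/block could combine with the length threshold. The server's internal logic may differ; this only illustrates the documented semantics of the two variables:

```typescript
type ValidationMode = "warn" | "mask" | "block";

interface ValidationResult {
  output: string;
  warning?: string;
}

// Apply the configured mode when a tool response exceeds OUTPUT_VALIDATION_MAX_LENGTH.
function validateOutput(output: string, mode: ValidationMode, maxLength: number): ValidationResult {
  if (output.length <= maxLength) return { output };
  switch (mode) {
    case "warn": // pass through, but record a warning for the audit log
      return { output, warning: `output length ${output.length} exceeds ${maxLength}` };
    case "mask": // truncate to the threshold and flag the truncation
      return { output: output.slice(0, maxLength) + "…[truncated]", warning: "output truncated" };
    case "block": // refuse to return the oversized payload
      return { output: "", warning: "output blocked: length limit exceeded" };
  }
}
```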

GitHub Token Setup

For private package access, configure your GitHub Personal Access Token:

  1. Generate Token

    • Go to GitHub → Settings → Developer settings → Personal access tokens
    • Generate new token with packages:read scope
  2. Configure npm

    ./manage.sh setup
    

    This will prompt for your GitHub username and token.

Allergen Synonyms Configuration

The server uses config/allergen-synonyms.json for multilingual allergen normalization.

Format:

{
  "peanut": ["peanut", "peanuts", "arachide", "cacahuete"],
  "dairy": ["dairy", "milk", "lait"]
}

Implementation:

  • Lazy loading: Built on first group-recipe-context call
  • Reverse index: O(1) lookup (synonym → canonical form)
  • Fallback: Missing allergens use lowercase form
  • Case-insensitive: All comparisons normalized

Usage: Restart server after editing (no dynamic reload).
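The reverse index described above can be sketched as a standalone pair of functions. This is a simplified illustration (the real loader reads config/allergen-synonyms.json lazily on the first group-recipe-context call; function names here are assumptions):

```typescript
// Shape of config/allergen-synonyms.json: canonical allergen → list of synonyms.
type SynonymConfig = Record<string, string[]>;

// Build the reverse index once: every lowercase synonym maps to its canonical form.
function buildReverseIndex(config: SynonymConfig): Map<string, string> {
  const index = new Map<string, string>();
  for (const [canonical, synonyms] of Object.entries(config)) {
    for (const syn of synonyms) {
      index.set(syn.toLowerCase(), canonical);
    }
  }
  return index;
}

// O(1) lookup, case-insensitive; unknown allergens fall back to their lowercase form.
function normalizeAllergen(raw: string, index: Map<string, string>): string {
  const key = raw.toLowerCase();
  return index.get(key) ?? key;
}
```

Lazy loading amounts to wrapping buildReverseIndex behind a memoized getter so the file is only parsed on first use.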

Preference Pattern Configuration

The server uses config/preference-patterns.json for multilingual negative preference detection.

Format:

{
  "dislikeIndicators": ["déteste", "dislike", "hate"],
  "avoidIndicators": ["éviter", "avoid", "vermeide"],
  "excludeIndicators": ["sans", "without", "sin", "no"],
  "splitDelimitersRegex": ",|;|/| et | and | y | ou "
}

Processing logic:

  1. Detect indicators: Longest-first matching (case-insensitive)
  2. Extract tokens: Split remaining text by regex delimiters
  3. Store as dislikes: Added to soft preference constraints

Usage: Restart server after editing (no dynamic reload).
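The three processing steps above can be sketched as a small parser. This is a simplified illustration under stated assumptions — the helper name is hypothetical, and the real implementation may differ in how it handles multiple indicators in one string:

```typescript
// Shape of config/preference-patterns.json.
interface PreferencePatterns {
  dislikeIndicators: string[];
  avoidIndicators: string[];
  excludeIndicators: string[];
  splitDelimitersRegex: string;
}

// Extract dislike tokens from a free-text preference like "déteste brocoli et chou".
function extractDislikes(text: string, patterns: PreferencePatterns): string[] {
  // Step 1: detect an indicator, longest-first so overlapping indicators
  // ("avoid" vs "avoids") resolve to the most specific match, case-insensitively.
  const indicators = [
    ...patterns.dislikeIndicators,
    ...patterns.avoidIndicators,
    ...patterns.excludeIndicators,
  ].sort((a, b) => b.length - a.length);

  const lower = text.toLowerCase();
  const hit = indicators.find((ind) => lower.includes(ind.toLowerCase()));
  if (!hit) return [];

  // Step 2: drop the indicator, then split the remainder on the configured delimiters.
  const rest = lower.replace(hit.toLowerCase(), " ");
  const splitter = new RegExp(patterns.splitDelimitersRegex, "i");

  // Step 3: the trimmed non-empty tokens become soft "dislike" constraints.
  return rest
    .split(splitter)
    .map((t) => t.trim())
    .filter((t) => t.length > 0);
}
```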

Example Family Data

The project includes a complete example family in config/family-example-template.json to help you understand the data structure and test the system.

Usage:

# Load the example data into your database
# (Implementation depends on your database setup scripts)
# The file serves as a perfect template for creating your own families

Customization: Use this template as a starting point to create your own family profiles with appropriate dietary restrictions, preferences, and cooking skills.

Runtime Requirements:

  • Node.js 22.x (LTS) – see .nvmrc
  • TypeScript target: ES2023
  • MongoDB for data storage

🧠 Local LLM Compatibility

Key Insight: Local LLMs with limited context windows need optimized prompts to function effectively.

Prompt Compatibility Matrix

| LLM Category | Context Window | Recommended Prompt | Examples |
|--------------|----------------|--------------------|----------|
| Large (≥70B) | 32K+ tokens | system-full.md (1,600 tokens) | GPT-4, Claude 3 |
| Medium (13-70B) | 4K-16K tokens | system.short.md (190 tokens) | Llama 2 70B, Mixtral 8x7B |
| Small (7-20B) | 2K-8K tokens | system.short.md (essential) | GPT OSS 20B, Llama 7B-13B |
| Embedded | <4K tokens | system.short.md (critical) | Edge deployment models |

Tested Models

  • GPT OSS 20B + system.short.md → works well
  • GPT OSS 20B + system-full.md → token budget exceeded
  • Llama 7B-13B → requires system.short.md
  • Mixtral 8x7B → both prompts work; prefer the short version for efficiency

📋 Prompt Selection Strategy

The MCP server provides two system prompt versions optimized for different LLM capabilities:

| Prompt File | Size | Best For |
|-------------|------|----------|
| system.short.md | ~190 tokens | Small LLMs (7B-20B), limited context (≤16K) |
| system-full.md | ~1,600 tokens | Large LLMs (≥70B), generous context (≥32K) |

Quick Selection:

  • Use system.short.md for most local/smaller LLMs (GPT OSS 20B, Llama 7B-13B)
  • Use system-full.md for large cloud models or 70B+ local models

Implementation: Copy the appropriate prompt file content to your MCP client configuration.

# For small LLMs
cat src/prompts/en/system.short.md

# For large LLMs  
cat src/prompts/en/system-full.md
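If you automate client setup, the selection rule reduces to a one-line helper. The 32K threshold comes from the compatibility matrix above; the function name is illustrative:

```typescript
// Pick a system prompt file based on the model's context window, per the matrix above.
function choosePromptFile(contextWindowTokens: number): string {
  return contextWindowTokens >= 32_000 ? "system-full.md" : "system.short.md";
}
```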

🧪 Testing

The project includes comprehensive test coverage with unit and integration tests:

npm test                    # Run all tests
npm test -- --watch        # Run tests in watch mode
npm test -- --coverage     # Generate coverage report

📜 License

Distributed under AGPL-3.0-or-later. See LICENSE.

Private use is allowed; if you distribute the server or run a modified version as a network service, you must publish the corresponding source (network copyleft).