
Echoes MCP Server


A Model Context Protocol (MCP) server for AI integration with the Echoes storytelling platform.

Features

  • Narrative Knowledge Graph: Automatically extracts characters, locations, events, and their relationships using Gemini AI
  • Semantic Search: Find relevant chapters using natural language queries
  • Entity Search: Search for characters, locations, and events
  • Relation Search: Explore relationships between entities
  • Arc Isolation: Each arc is a separate narrative universe - no cross-arc contamination
  • Statistics: Aggregate word counts, POV distribution, and more
  • Dynamic Prompts: Reusable prompt templates with placeholder substitution

Installation

npm install -g @echoes-io/mcp-server

Or run directly with npx:

npx @echoes-io/mcp-server --help

Requirements

  • Node.js 20+
  • Gemini API key (for entity extraction)

Usage

CLI

# Count words in a markdown file
echoes words-count ./content/arc1/ep01/ch001.md

# Index timeline content
echoes index ./content

# Index only a specific arc
echoes index ./content --arc bloom

# Get statistics
echoes stats
echoes stats --arc arc1 --pov Alice

# Search (filters by arc to avoid cross-arc contamination)
echoes search "primo incontro" --arc bloom
echoes search "Alice" --type entities --arc bloom

# Check narrative consistency
echoes check-consistency bloom
echoes check-consistency bloom --rules kink-firsts,outfit-claims

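Under the hood, `words-count` reduces a chapter to its prose before counting. A minimal illustrative sketch of that idea (not the actual implementation; the real tool likely strips markdown more thoroughly):

```typescript
// Illustrative word counter: drop YAML frontmatter, code spans, and
// common markdown punctuation, then count whitespace-separated tokens.
export function countWords(markdown: string): number {
  const body = markdown
    .replace(/^---[\s\S]*?---/, " ")     // drop YAML frontmatter
    .replace(/`{1,3}[^`]*`{1,3}/g, " ")  // drop inline/fenced code
    .replace(/[#>*_\-|]/g, " ");         // drop common markdown punctuation
  return body.split(/\s+/).filter(Boolean).length;
}
```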
MCP Server

Configure in your MCP client (e.g., Claude Desktop, Kiro):

{
  "mcpServers": {
    "echoes": {
      "command": "npx",
      "args": ["@echoes-io/mcp-server"],
      "cwd": "/path/to/timeline",
      "env": {
        "GEMINI_API_KEY": "your_api_key"
      }
    }
  }
}

Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| `GEMINI_API_KEY` | Yes | - | API key for Gemini entity extraction |
| `ECHOES_GEMINI_MODEL` | No | `gemini-2.5-flash` | Gemini model for extraction |
| `ECHOES_EMBEDDING_MODEL` | No | `Xenova/e5-small-v2` | HuggingFace embedding model |
| `ECHOES_EMBEDDING_DTYPE` | No | `fp32` | Quantization level: `fp32`, `q8`, `q4` (see Performance Notes) |
| `HF_TOKEN` | No | - | HuggingFace token for gated models |
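For local runs outside an MCP client, the same variables can be exported in the shell before launching the server (values below are placeholders):

```shell
export GEMINI_API_KEY=your_api_key            # required
export ECHOES_GEMINI_MODEL=gemini-2.5-flash   # optional
export ECHOES_EMBEDDING_DTYPE=q8              # optional, see Performance Notes
# then: npx @echoes-io/mcp-server
```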

Available Tools

| Tool | Description |
|---|---|
| `words-count` | Count words and statistics in a markdown file |
| `index` | Index timeline content into LanceDB |
| `search` | Search chapters, entities, or relations |
| `stats` | Get aggregate statistics |
| `check-consistency` | Analyze arc for narrative inconsistencies |
| `graph-export` | Export knowledge graph in various formats |
| `history` | Query character/arc history (kinks, outfits, locations, relations) |
| `review-generate` | Generate review file for pending entity/relation extractions |
| `review-status` | Show review statistics for an arc |
| `review-apply` | Apply corrections from review file to database |
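Over MCP, these tools are invoked with standard `tools/call` requests. The argument names below (`query`, `type`, `arc`) are assumptions based on the CLI flags, not a confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "primo incontro", "type": "chapters", "arc": "bloom" }
  }
}
```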

Available Prompts

| Prompt | Arguments | Description |
|---|---|---|
| `arc-resume` | `arc`, `episode?`, `lastChapters?` | Load complete context for resuming work on an arc |
| `new-chapter` | `arc`, `chapter` | Create a new chapter |
| `revise-chapter` | `arc`, `chapter` | Revise an existing chapter |
| `expand-chapter` | `arc`, `chapter`, `target` | Expand chapter to target word count |
| `new-character` | `name` | Create a new character sheet |
| `new-episode` | `arc`, `episode` | Create a new episode outline |
| `new-arc` | `name` | Create a new story arc |
| `revise-arc` | `arc` | Review and fix an entire arc |
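These prompts are reusable templates with placeholder substitution. A minimal sketch of that mechanism (the `{{name}}` template syntax is an assumption, not the documented format):

```typescript
// Illustrative: fill {{placeholder}} slots in a prompt template
// from a map of argument values; unknown placeholders become empty.
export function renderPrompt(template: string, args: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => args[key] ?? "");
}

renderPrompt("Resume work on arc {{arc}}, episode {{episode}}.", { arc: "bloom", episode: "ep01" });
```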

Architecture

Content Hierarchy

Timeline (content directory)
└── Arc (story universe)
    └── Episode (story event)
        └── Chapter (individual .md file)
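This hierarchy maps directly onto the filesystem, so a path like `content/arc1/ep01/ch001.md` identifies a chapter. An illustrative decomposition (the directory naming convention is inferred from the CLI examples above):

```typescript
interface ChapterRef { arc: string; episode: string; chapter: string; }

// Illustrative: derive arc/episode/chapter from a content path.
export function parseChapterPath(path: string): ChapterRef | null {
  const m = path.match(/content\/([^/]+)\/([^/]+)\/([^/]+)\.md$/);
  return m ? { arc: m[1], episode: m[2], chapter: m[3] } : null;
}
```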

Arc Isolation

Each arc is treated as a separate narrative universe:

  • Entities are scoped to arcs: `bloom:CHARACTER:Alice` is distinct from `work:CHARACTER:Alice`
  • Relations are internal to arcs
  • Searches can be filtered by arc to avoid cross-arc contamination
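The scoping above amounts to namespacing entity IDs by arc, so the same name in two arcs yields two distinct entities. A sketch under the assumption that keys follow the `arc:KIND:Name` format shown above:

```typescript
// Illustrative: build an arc-scoped entity ID. Two arcs that both
// mention "Alice" produce two different keys, keeping universes separate.
export function entityId(arc: string, kind: string, name: string): string {
  return `${arc}:${kind}:${name}`;
}
```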

Data Flow

┌─────────────────────────────────────────────────────────────┐
│                     INDEXING PHASE                          │
├─────────────────────────────────────────────────────────────┤
│  1. Scan content/*.md (filesystem scanner)                  │
│  2. Parse frontmatter + content (gray-matter)               │
│  3. For each chapter:                                       │
│     a. Extract entities/relations with Gemini API           │
│     b. Generate embeddings (Transformers.js ONNX)           │
│     c. Calculate word count and statistics                  │
│  4. Save everything to LanceDB                              │
└─────────────────────────────────────────────────────────────┘
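Step 2 parses YAML frontmatter with gray-matter. A chapter file might look like the following (field names such as `title` and `pov` are illustrative assumptions based on the stats examples, not a documented schema):

```markdown
---
title: "First Meeting"
pov: Alice
---

Chapter text in markdown...
```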

Development

# Install dependencies
npm install

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

# Lint
npm run lint

# Type check
npm run typecheck

# Build
npm run build

Tech Stack

| Purpose | Tool |
|---|---|
| Runtime | Node.js 20+ |
| Language | TypeScript |
| Vector DB | LanceDB |
| Embeddings | `@huggingface/transformers` (ONNX) |
| Entity Extraction | Gemini AI |
| MCP SDK | `@modelcontextprotocol/sdk` |
| Testing | Vitest |
| Linting | Biome |

Performance Notes

Embedding Quantization

The default embedding model (Xenova/e5-small-v2) supports different quantization levels via ECHOES_EMBEDDING_DTYPE:

| Level | Speed | Quality | Memory | Recommendation |
|---|---|---|---|---|
| `fp32` | Baseline | Best (100%) | High | Production with ample resources |
| `q8` | 2-3x faster | Excellent (99.6%) | 50% less | Recommended - optimal balance |
| `q4` | 3-4x faster | Good (99.1%) | 75% less | Resource-constrained environments |

Note: Some models, such as onnx-community/embeddinggemma-300m-ONNX, don't support fp16. Always check the model's documentation before changing the quantization level.

Recommended setting:

export ECHOES_EMBEDDING_DTYPE=q8

License

MIT


Part of the Echoes project - a multi-POV digital storytelling platform.