# Game MCP Server

MCP (Model Context Protocol) server that backs the genai-game-engine project with research, architecture, narrative, QA, and playtest knowledge stored in Qdrant. The server exposes the Streamable HTTP MCP transport on port 3000 by default, so Claude, any MCP-compatible client, or plain curl can connect over HTTP.
## Projects in this repo

| Directory | Purpose |
|---|---|
| `mcp/` | Streamable HTTP MCP server with a rich tool catalog for research, architecture, QA, lore, etc. |
| `graph-builder/` | Pipelines that populate the knowledge-graph collections used by `explore_graph_entity` and `search_graph_semantic`. |
| `backlog-editor/` | Visual kanban + handoff editor that talks directly to Qdrant, offering a browser UI for PBIs and session handoff notes. |
| `generate-image/` | Standalone STDIO MCP server that proxies OpenAI image generation and saves outputs to disk. |
## Project Namespaces

All persistence now lives in project-scoped Qdrant collections. A canonical project list is stored in `mcp/config/projects.json` (a hypothetical sketch of its shape follows this list):

- Each project ID is lower-case/kebab-case (e.g. `default`, `prototype-alpha`).
- MCP HTTP endpoints are available at `/<project>/mcp` and `/<project>/sse` (the legacy `/mcp` and `/sse` paths resolve to the default project).
- Create a new project and its Qdrant collections with `POST /project` on the MCP server:

  ```bash
  curl -X POST http://localhost:5356/project \
    -H 'Content-Type: application/json' \
    -d '{"id":"glass"}'
  ```

  The response lists the fully-qualified collection names (e.g. `prototype-alpha__research_findings`). To reset the project, or to clear and inspect one of its collections directly in Qdrant:

  ```bash
  curl -v -X POST http://localhost:5356/reset \
    -H 'Content-Type: application/json' \
    -d '{"id":"glass","snapshot":false}'

  curl -X POST "http://localhost:6333/collections/glass__backlog_items/points/delete?wait=true" \
    -H "Content-Type: application/json" \
    -d '{"filter": {"must": []}}'

  curl "http://localhost:6333/collections/glass__backlog_items"
  ```
- The backlog editor forwards the current project via the `project` query parameter or the `X-Project-Id` header (the UI includes a project switcher). If omitted, it falls back to the default project defined in the shared config file.
- The graph builder accepts an optional `project` field on `POST /build` to populate the corresponding `code_graph` namespace. When omitted, the default project is used.
- `list_qdrant_collections` returns project-specific collection names, while `get_server_metadata` advertises the project-aware HTTP templates.
- Use `POST /reset` with body `{ "id": "<project>" }` to snapshot the project's Qdrant collections and Neo4j entities to `/app/snapshots/<project>/<timestamp>/`, then clear them so the project starts fresh.
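The schema of `mcp/config/projects.json` is not shown in this README, so the following is only a hypothetical sketch of the shape such a file might take; both field names are assumptions:

```bash
# Hypothetical shape of mcp/config/projects.json -- "defaultProject" and
# "projects" are guessed field names, not confirmed by the repo.
cat <<'EOF'
{
  "defaultProject": "default",
  "projects": ["default", "prototype-alpha"]
}
EOF
```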
## Feature Management

- Feature intake can be paused per project. Call `set_feature_lock` with `{ "locked": true }` to reject new `create_feature` requests (the MCP server responds with "no new features at this time") and unlock with `{ "locked": false }` when planning resumes (see the curl sketch after this list).
- Backlog items support an optional `feature_id` during `create_backlog_item` and `update_backlog_item`. Use `assign_backlog_to_feature` to link existing PBIs and `list_feature_backlog_items` to retrieve the feature's work queue.
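Because `set_feature_lock` is an ordinary MCP tool, it can be exercised over the same Streamable HTTP flow shown in the smoke-test section below; a minimal sketch, assuming an already-initialized session (`<SESSION_ID>` and the port come from those examples):

```bash
# Pause feature intake via a standard tools/call request.
curl -i -X POST http://localhost:3000/default/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -H 'Mcp-Session-Id: <SESSION_ID>' \
  -H 'Mcp-Protocol-Version: 2024-11-05' \
  -d '{
    "jsonrpc": "2.0",
    "id": "lock-1",
    "method": "tools/call",
    "params": { "name": "set_feature_lock", "arguments": { "locked": true } }
  }'
```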
## Prerequisites

- Node.js 18+
- Qdrant instance (defaults to `http://qdrant:6333`)
- Neo4j instance (defaults to `bolt://localhost:7687`)
- Embedding service exposing `POST /embed` (defaults to `http://embedding-service:80`, see Embedding Service)
- OpenAI API key with access to `gpt-4o-mini` (configurable via `OPENAI_MODEL`)
## Scripts

```bash
npm run dev     # Start with ts-node + Streamable HTTP transport (default port 3000)
npm run build   # Type-check and compile TypeScript to dist/
npm start       # Build then run the compiled server from dist/
```
Set `PORT` and/or `MCP_PATH` to override the HTTP binding (defaults are `3000` and `/mcp`).
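For example, to bind the dev server elsewhere (the values below are arbitrary, and the assumption that `MCP_PATH` replaces the `/mcp` segment of the project-scoped routes is mine, not the README's):

```bash
# Serve on port 8080 with /game as the MCP base path.
# Assumed: project-scoped endpoints then live at /<project>/game.
PORT=8080 MCP_PATH=/game npm run dev
```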
## Smoke Testing with curl

The transport implements MCP's Streamable HTTP flow. Every session starts with an `initialize` request. The response returns an `Mcp-Session-Id` header that must be included on subsequent requests and SSE streams.
> ℹ️ The POST endpoints require `Accept: application/json, text/event-stream` so the transport can negotiate either JSON replies or server-sent events. Include this header on every JSON-RPC POST request.
1. **Initialize the session**

   ```bash
   curl -i -X POST http://localhost:3000/default/mcp \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/json, text/event-stream' \
     -d '{
       "jsonrpc": "2.0",
       "id": "init-1",
       "method": "initialize",
       "params": {
         "protocolVersion": "2024-11-05",
         "clientInfo": { "name": "curl", "version": "0.1" },
         "capabilities": {}
       }
     }'
   ```

   Note the `Mcp-Session-Id` header in the response (e.g. `bf6fda5a-a8d5-4ad6-b8e1-...`). Use it for all follow-up requests.
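   To script the rest of this walkthrough, the session ID can be captured straight from the response headers; a minimal sketch (only the `awk`/`tr` plumbing is mine, the request itself is the one above):

   ```bash
   # Capture the Mcp-Session-Id response header into a shell variable.
   SESSION_ID=$(curl -si -X POST http://localhost:3000/default/mcp \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/json, text/event-stream' \
     -d '{"jsonrpc":"2.0","id":"init-1","method":"initialize","params":{"protocolVersion":"2024-11-05","clientInfo":{"name":"curl","version":"0.1"},"capabilities":{}}}' \
     | awk 'tolower($1) == "mcp-session-id:" {print $2}' | tr -d '\r')
   echo "$SESSION_ID"
   ```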
2. **List available tools**

   ```bash
   curl -i -X POST http://localhost:3000/default/mcp \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/json, text/event-stream' \
     -H 'Mcp-Session-Id: <SESSION_ID>' \
     -H 'Mcp-Protocol-Version: 2024-11-05' \
     -d '{
       "jsonrpc": "2.0",
       "id": "list-1",
       "method": "tools/list",
       "params": {}
     }'
   ```
3. **Call a tool (example: get server metadata)**

   ```bash
   curl -i -X POST http://localhost:3000/default/mcp \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/json, text/event-stream' \
     -H 'Mcp-Session-Id: <SESSION_ID>' \
     -H 'Mcp-Protocol-Version: 2024-11-05' \
     -d '{
       "jsonrpc": "2.0",
       "id": "call-1",
       "method": "tools/call",
       "params": { "name": "get_server_metadata", "arguments": {} }
     }'
   ```
4. **Subscribe to the SSE stream (optional)**

   ```bash
   curl -N http://localhost:3000/default/sse \
     -H 'Accept: text/event-stream' \
     -H 'Mcp-Session-Id: <SESSION_ID>' \
     -H 'Mcp-Protocol-Version: 2024-11-05'
   ```

   Leave this curl running to receive streaming responses (e.g. logging events). Use `Ctrl+C` to exit.
5. **Terminate the session (optional cleanup)**

   ```bash
   curl -i -X DELETE http://localhost:3000/default/mcp \
     -H 'Mcp-Session-Id: <SESSION_ID>' \
     -H 'Mcp-Protocol-Version: 2024-11-05'
   ```
## Collections & Tools

- `config/collections.json` documents every Qdrant collection, its cardinality, and which Claude agents rely on it.
- `docs/mcp/usage.md` covers Claude integration and the full MCP tool catalog exposed by the server.
- Bug-fix memory lives in the `bug_fix_patterns` collection and is accessible via the `record_bug_fix`, `match_bug_fix`, and `get_bug_fix` tools. Error messages can be stored alongside fixes so agents can perform exact log-line lookups before falling back to semantic matches.
- Knowledge-graph embeddings live in project-scoped collections named `<project>__code_graph`. Use `explore_graph_entity` to pull the Neo4j node plus surrounding relationships, and `search_graph_semantic` for vector search against the graph-builder output.
- Feature definitions live in `<project>__features`; manage them with `create_feature`, `update_feature`, `list_features`, and `get_feature`, and link PBIs via `assign_backlog_to_feature` or `list_feature_backlog_items`.
- `GET /stats` returns per-boot tool usage counters (writes/reads) for MCP endpoints.
Use `list_qdrant_collections` and `get_mcp_documentation` to programmatically discover server capabilities from clients.
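For a quick look at the usage counters mentioned above, hit the stats endpoint directly; a sketch assuming `/stats` is served by the MCP server on its default dev port (the response shape isn't documented here):

```bash
# Dump per-boot read/write counters for the MCP endpoints.
curl -s http://localhost:3000/stats | jq '.'
```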
## Knowledge Graph
Set the following environment variables so the server can reach the graph-builder databases:
| Variable | Default | Purpose |
|---|---|---|
| `NEO4J_URL` | `bolt://localhost:7687` | Bolt endpoint for the Neo4j instance populated by the graph builder |
| `NEO4J_USER` | `neo4j` | Username for Neo4j auth |
| `NEO4J_PASSWORD` | `password` | Password for Neo4j auth |
| `GRAPH_COLLECTION` | `code_graph` | Base collection name for graph embeddings (`<project>__code_graph` is created per project) |
| `DEFAULT_PROJECT` | `memory` | Initial namespace used when clients omit `project` |
| `SNAPSHOT_DIR` | `./snapshots` | Directory where `POST /reset` writes archives (mounted to the host in Docker compose) |
| `GRAPH_BUILDER_PORT` | `4100` | HTTP port for the graph-builder service |
| `OPENAI_API_KEY` | (required) | Used by the graph builder to enrich entities |
| `OPENAI_MODEL` | `gpt-5` | Override the OpenAI model for semantic enrichment |
| `REPO_URL` | `https://github.com/chris-arsenault/genai-game-engine.git` | Default repository cloned by the builder |
| `REPO_BRANCH` | `main` | Default branch synced before each build |
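When running the server outside Docker, these can be supplied as plain shell exports; a minimal sketch using the defaults from the table above:

```bash
# Point the MCP server at locally running graph databases, then start it.
export NEO4J_URL=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=password
export GRAPH_COLLECTION=code_graph
npm run dev
```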
The graph builder clones `chris-arsenault/genai-game-engine` into `/mnt/apps/apps/mcp-server/game-mcp/server/source/genai-game-engine` by default and syncs data into Neo4j + Qdrant. Once the builder runs, MCP clients can:

- Call `search_graph_semantic` with natural-language or code snippets to fetch the most relevant graph entities.
- Pass an entity ID (e.g., `file:src/tools/graph.tool.ts`) to `explore_graph_entity` to inspect inbound/outbound relationships straight from Neo4j (see the sketch after this list).
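As a sketch, the second lookup over the Streamable HTTP transport (session setup as in the smoke-test section; the argument name `entityId` is a guess, check `tools/list` for the real schema):

```bash
# Inspect a graph entity's relationships via tools/call.
# NOTE: "entityId" is an assumed argument name; confirm it via tools/list.
curl -i -X POST http://localhost:3000/default/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -H 'Mcp-Session-Id: <SESSION_ID>' \
  -H 'Mcp-Protocol-Version: 2024-11-05' \
  -d '{
    "jsonrpc": "2.0",
    "id": "graph-1",
    "method": "tools/call",
    "params": {
      "name": "explore_graph_entity",
      "arguments": { "entityId": "file:src/tools/graph.tool.ts" }
    }
  }'
```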
The builder exposes a REST API on `http://<host>:${GRAPH_BUILDER_PORT}`:

- `POST /build` with body `{"mode":"full|incremental","stage":"all|parse|enrich|populate","baseCommit":"...","repoUrl":"...","branch":"...","project":"<id>"}` to start a job. `project` is optional and defaults to the configured default project; `repoUrl` and `branch` default to the service configuration (see the env vars above).
- `GET /status` to poll the current or last run summary.
- `POST /reset` to clear staging artifacts and the incremental marker.

`POST /build` returns immediately (HTTP 202) after queuing work; use `GET /status` to observe progress and obtain the final summary.
Example: trigger a full rebuild (all stages, default repo/branch) from localhost using curl:

```bash
curl -s -X POST http://localhost:4100/build \
  -H 'Content-Type: application/json' \
  -d '{
    "mode": "full",
    "stage": "all",
    "branch": "main",
    "project": "default"
  }'
```
Then poll:

```bash
curl -s http://localhost:4100/status | jq '.'
```
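An incremental run scoped to a base commit follows the same pattern; `<BASE_COMMIT>` is a placeholder for a real SHA:

```bash
# Re-run the pipeline incrementally, presumably covering changes since <BASE_COMMIT>.
curl -s -X POST http://localhost:4100/build \
  -H 'Content-Type: application/json' \
  -d '{
    "mode": "incremental",
    "stage": "all",
    "baseCommit": "<BASE_COMMIT>",
    "project": "default"
  }'
```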
## Embedding Service

The provided `docker-compose.yml` now launches Hugging Face Text Embeddings Inference on CPU with the `nomic-ai/nomic-embed-text-v1.5` model. It supports an 8192-token context window and 768-dimensional embeddings, ample room for long tool payloads while remaining practical on a dual Xeon E5 / 128 GB RAM TrueNAS box. Because the vector size changed relative to the original setup, run `./init-collections.sh` (or recreate the Qdrant collections manually) before ingesting new data.
The container mounts `/mnt/apps/apps/mcp-server/embedding-cache` and sets `MODEL_CACHE=/data`, so model weights persist across restarts. To pre-seed the cache on an offline machine:
```bash
huggingface-cli download nomic-ai/nomic-embed-text-v1.5 --local-dir /mnt/apps/apps/mcp-server/embedding-cache
```
The embedding client in `src/services/embedding.service.ts` now surfaces HTTP error bodies (e.g. token-limit warnings) directly in the logs to make diagnosing misconfiguration easier.
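To sanity-check the service end to end, Text Embeddings Inference accepts `POST /embed` with an `inputs` field; a minimal sketch, assuming the container's port 80 is published to the host as 8080 (that host mapping is an assumption, check `docker-compose.yml`):

```bash
# Request one embedding and confirm it has the expected 768 dimensions.
curl -s -X POST http://localhost:8080/embed \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "smoke test sentence"}' | jq '.[0] | length'
```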