# Tiger Memory MCP Server

A simple memory system that allows LLMs to store and retrieve information. It exposes a focused set of tools to LLMs via the Model Context Protocol (MCP).

## API

All methods are exposed as MCP tools and REST API endpoints.
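
Purely for illustration, a REST invocation might look like the sketch below; the route, port, and payload are hypothetical placeholders, not the server's actual API (see the source for the real tool names and endpoints):

```sh
# Hypothetical example: the route, port, and body are placeholders,
# not the server's documented API.
curl -X POST http://localhost:3000/api/tools/store-memory \
  -H 'Content-Type: application/json' \
  -d '{"content": "The user prefers concise answers."}'
```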

## Development

To clone and run the server locally:

```sh
git clone --recurse-submodules git@github.com:timescale/tiger-memory-mcp-server.git
```

### Submodules

This project uses git submodules to include the MCP boilerplate code. If you cloned the repo without the `--recurse-submodules` flag, run the following command to initialize and update the submodules:

```sh
git submodule update --init --recursive
```

You may also need to run this command if you pull changes that update a submodule. You can simplify this process by changing your git configuration to automatically update submodules when you pull:

```sh
git config --global submodule.recurse true
```
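
To confirm that the submodules are initialized and on the expected commits, you can check their status:

```sh
git submodule status
```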

### Building

Run `npm i` to install dependencies and build the project. Use `npm run watch` to rebuild on changes.

Create a `.env` file based on the `.env.sample` file:

```sh
cp .env.sample .env
```
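
The authoritative variable list is in `.env.sample`; based on the Postgres settings shown later in this README, the resulting `.env` will look something like the following (all values are placeholders):

```sh
# Placeholder values; copy the real variable names from .env.sample.
PGHOST=x.y.tsdb.cloud.timescale.com
PGPORT=32467
PGDATABASE=tsdb
PGUSER=tiger_memory
PGPASSWORD=secret
```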

### Testing

The MCP Inspector is very handy:

```sh
npm run inspector
```

Connect with the following settings:

| Field          | Value           |
| -------------- | --------------- |
| Transport Type | STDIO           |
| Command        | `node`          |
| Arguments      | `dist/index.js` |
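
Once connected, you can list the server's tools and invoke them interactively from the inspector UI.
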
### Testing in Claude Desktop

Create or edit `~/Library/Application Support/Claude/claude_desktop_config.json`, adding an entry like the following. Be sure to use the absolute path to your local tiger-memory-mcp-server project and real database credentials.

```json
{
  "mcpServers": {
    "tiger-memory": {
      "command": "node",
      "args": [
        "/absolute/path/to/tiger-memory-mcp-server/dist/index.js",
        "stdio"
      ],
      "env": {
        "PGHOST": "x.y.tsdb.cloud.timescale.com",
        "PGDATABASE": "tsdb",
        "PGPORT": "32467",
        "PGUSER": "readonly_mcp_user",
        "PGPASSWORD": "abc123"
      }
    }
  }
}
```
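
Restart Claude Desktop after saving the file so it picks up the new server.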

## Deployment

We use a Helm chart to deploy to Kubernetes. See the `chart/` directory for details.
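
As a sketch of the deploy step (the Helm release name here is an assumption; check the chart for the real invocation), installing into the dev environment might look like:

```sh
# Release name is an assumption; the namespace and values file match
# those referenced elsewhere in this README.
helm upgrade --install tiger-memory-mcp-server ./chart \
  --namespace savannah-system \
  -f ./chart/values/dev.yaml
```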

The service is accessible to other services in the cluster via the DNS name `tiger-memory-mcp-server.savannah-system.svc.cluster.local`.

### Database setup

Create the database user:

```sql
CREATE USER tiger_memory WITH PASSWORD 'secret';
GRANT CREATE ON DATABASE tsdb TO tiger_memory;
```
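
As a quick sanity check (optional, not part of the required setup), you can confirm the grant took effect:

```sql
-- Should return true if the GRANT above succeeded.
SELECT has_database_privilege('tiger_memory', 'tsdb', 'CREATE');
```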

### Secrets

Run the following to create the necessary sealed secrets. Be sure to fill in the correct values.

```sh
kubectl -n savannah-system create secret generic tiger-memory-mcp-server-database \
  --dry-run=client \
  --from-literal=user="tiger_memory" \
  --from-literal=password="secret" \
  --from-literal=database="tsdb" \
  --from-literal=host="x.y.tsdb.cloud.timescale.com" \
  --from-literal=port="32467" \
  -o yaml | kubeseal -o yaml

# https://logfire-us.pydantic.dev/tigerdata/tigerdata/settings/write-tokens
kubectl -n savannah-system create secret generic tiger-memory-mcp-server-logfire \
  --dry-run=client \
  --from-literal=token="pylf_v1_us_" \
  -o yaml | kubeseal -o yaml

# https://login.tailscale.com/admin/settings/keys
kubectl -n savannah-system create secret generic tiger-memory-mcp-server-tailscale \
  --dry-run=client \
  --from-literal=authkey="tskey-auth-" \
  -o yaml | kubeseal -o yaml
```

Update `./chart/values/dev.yaml` with the output.