mcp-titan

A neural memory system for LLMs that can learn and predict sequences while maintaining state through a memory vector.

The Titan Memory MCP Server is a neural memory system that lets large language models (LLMs) maintain and manage memory state across interactions. It works with Claude 3.7 Sonnet and other LLMs, providing a memory architecture that can learn and predict sequences. Within Cursor the server runs in 'yolo mode', allowing hands-free operation. It features a transformer-based memory system, efficient tensor operations, and the ability to save and load memory states, and it is compatible with various MCP clients, making it a versatile tool for developers working with LLMs.
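
As a minimal sketch of how a client might talk to the server, the example below connects over stdio, lists the available tools, and calls the help tool. It assumes the official TypeScript MCP SDK (@modelcontextprotocol/sdk); the launch command is a placeholder for however you start mcp-titan locally, not the server's documented entry point.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server as a child process and talk to it over stdio.
  // "node index.js" is a placeholder; substitute your actual launch command.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["index.js"],
  });

  const client = new Client({ name: "titan-demo", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  // Discover the tools the server exposes (help, init_model, forward_pass, ...).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Ask the server to describe its tools in more detail.
  const help = await client.callTool({ name: "help", arguments: {} });
  console.log(help.content);

  await client.close();
}

main().catch(console.error);
```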

Features

  • Perfect for Cursor: now that Cursor runs MCP tools automatically in yolo mode, you can take your hands off the wheel and let your LLM use its new memory
  • Neural Memory Architecture: Transformer-based memory system that can learn and predict sequences
  • Memory Management: Efficient tensor operations with automatic memory cleanup
  • MCP Integration: Fully compatible with Cursor and other MCP clients
  • Text Encoding: Convert text inputs to tensor representations

Tools

  1. help

    Get help about available tools.

  2. init_model

    Initialize the Titan Memory model with custom configuration.

  3. forward_pass

    Perform a forward pass through the model to get predictions.

  4. train_step

    Execute a training step to update the model.

  5. get_memory_state

    Get the current memory state and statistics.

  6. manifold_step

    Update memory along a manifold direction.

  7. prune_memory

    Remove less relevant memories to free up space.

  8. save_checkpoint

    Save memory state to a file.

  9. load_checkpoint

    Load memory state from a file.

  10. reset_gradients

    Reset accumulated gradients to recover from training issues.
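
To show how the tools above fit together, here is a rough sketch (reusing the connected client from the earlier example) that initializes the model, trains on an observation, runs a forward pass, inspects memory, and saves a checkpoint. The argument names (inputDim, memorySlots, x_t, x_next, x, path) are illustrative assumptions, not the server's documented schemas; call the help tool for the real ones.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Walk the memory workflow end to end against an already-connected client.
async function demo(client: Client) {
  // 1. Initialize the Titan Memory model.
  //    The configuration fields shown here are hypothetical placeholders.
  await client.callTool({
    name: "init_model",
    arguments: { inputDim: 768, memorySlots: 512 },
  });

  // 2. Train on an observed transition so the memory learns the sequence.
  //    Field names are assumptions; check the help tool for the real schema.
  await client.callTool({
    name: "train_step",
    arguments: { x_t: "current input", x_next: "expected next input" },
  });

  // 3. Run a forward pass to get the model's prediction for the next step.
  const prediction = await client.callTool({
    name: "forward_pass",
    arguments: { x: "current input" }, // hypothetical field name
  });
  console.log(prediction.content);

  // 4. Inspect memory statistics, then persist the state to disk.
  const state = await client.callTool({ name: "get_memory_state", arguments: {} });
  console.log(state.content);
  await client.callTool({
    name: "save_checkpoint",
    arguments: { path: "./titan-memory.json" }, // hypothetical field name
  });
}
```

The remaining tools (manifold_step, prune_memory, load_checkpoint, reset_gradients) follow the same callTool pattern with their own arguments.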