Memory-of-Agents MCP Server
A production-ready Model Context Protocol (MCP) server that enables Large Language Models to manage AI memory, workspaces, and bots through the Memory-of-Agents API.
🎯 What is This?
This MCP server acts as a bridge between LLMs (like Claude, GPT-4, etc.) and the Memory-of-Agents platform, allowing AI assistants to:
- 📁 Manage Workspaces - Create and organize AI agent environments
- 🤖 Configure Bots - Set up specialized AI agents with different capabilities
- 🧠 Store & Retrieve Memories - Enable persistent context and knowledge across conversations
- 🔍 Semantic Search - Find relevant information using natural language queries
🚀 Quick Start
Prerequisites
- Node.js 18 or higher
- A Memory-of-Agents API token (Get one here)
Installation
```bash
# Clone or download this repository
git clone https://github.com/memof-ai/memofai-mcp-server.git
cd memofai-mcp-server

# Install dependencies
npm install

# Build the server
npm run build
```
Configuration
Set your API token as an environment variable:
```bash
export MEMOFAI_API_TOKEN="moa_your_token_here"

# Optional: Set environment (default is production)
export MEMOFAI_ENVIRONMENT="production"  # or: dev, alpha, beta, sandbox
```
Testing with MCP Inspector
The easiest way to test and interact with the server:
```bash
npm run inspector
```
This opens an interactive inspector where you can:
- See all available tools
- Test tool executions
- View request/response data
- Debug issues
Integration with Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
```json
{
  "mcpServers": {
    "memofai": {
      "command": "npx",
      "args": ["memofai-mcp-server"],
      "env": {
        "MEMOFAI_API_TOKEN": "moa_your_token_here",
        "MEMOFAI_ENVIRONMENT": "production"
      }
    }
  }
}
```
Integration with Other MCP Clients
This server uses stdio transport and can integrate with any MCP-compatible client:
```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'npx',
  args: ['memofai-mcp-server'],
  env: {
    MEMOFAI_API_TOKEN: 'moa_your_token',
  },
});

const client = new Client({ name: 'my-client', version: '1.0.0' }, { capabilities: {} });
await client.connect(transport);
```
📚 Available Tools
Workspace Management
list_workspaces
List all workspaces for the authenticated user.
Use Case: Discover available workspaces before creating bots.
Example:
{}
create_workspace
Create a new workspace for organizing bots.
Parameters:
- `name` (string, required): Name of the workspace (1-200 chars)
- `description` (string, optional): Description (max 500 chars)
Example:
```json
{
  "name": "Customer Support AI",
  "description": "Workspace for customer support chatbots"
}
```
get_workspace
Retrieve detailed information about a specific workspace.
Parameters:
- `workspace_id` (string, required): UUID of the workspace
Example:
```json
{
  "workspace_id": "123e4567-e89b-12d3-a456-426614174000"
}
```
update_workspace
Update an existing workspace's name or description.
Parameters:
- `workspace_id` (string, required): UUID of the workspace
- `name` (string, optional): New name
- `description` (string, optional): New description
Example:
```json
{
  "workspace_id": "123e4567-e89b-12d3-a456-426614174000",
  "name": "Updated Workspace Name"
}
```
delete_workspace
Delete a workspace. WARNING: This also deletes all associated bots and memories.
Parameters:
- `workspace_id` (string, required): UUID of the workspace
Bot Management
list_bots
List all bots for the authenticated user.
Use Case: Discover available bots before storing or searching memories.
Example:
{}
create_bot
Create a new bot within a workspace.
Parameters:
- `name` (string, required): Name of the bot (1-200 chars)
- `workspace_id` (string, required): UUID of the workspace
- `description` (string, optional): Description (max 500 chars)
- `type` (string, optional): Bot type, one of:
  - `conversational` (default): For chat interactions
  - `knowledge_base`: For Q&A systems
  - `task_oriented`: For specific tasks
  - `analytical`: For data analysis
  - `creative`: For content generation
Example:
```json
{
  "name": "Customer Support Bot",
  "workspace_id": "123e4567-e89b-12d3-a456-426614174000",
  "description": "Handles customer inquiries",
  "type": "conversational"
}
```
get_bot
Retrieve detailed information about a specific bot.
Parameters:
- `bot_id` (string, required): UUID of the bot
update_bot
Update an existing bot's configuration.
Parameters:
- `bot_id` (string, required): UUID of the bot
- `name` (string, optional): New name
- `description` (string, optional): New description
- `type` (string, optional): New bot type
- `is_active` (boolean, optional): Active status
delete_bot
Delete a bot. WARNING: This also deletes all associated memories.
Parameters:
- `bot_id` (string, required): UUID of the bot
Memory Management
store_memory
Store a new memory for a bot.
Parameters:
- `bot_id` (string, required): UUID of the bot
- `content_text` (string, required): The memory content
- `memory_type` (string, optional): Type of memory
  - `fact`: Factual information
  - `preference`: User preferences
  - `credential`: Access credentials
  - `event`: Events or occurrences
  - `task`: Tasks or to-dos
  - `other` (default): General information
- `source_type` (string, optional): Source (e.g., "api", "mcp")
- `user_note` (string, optional): Note about this memory
- `importance_score` (number, optional): 0-1, default 0.5
- `permanence_level` (string, optional): Retention policy
  - `ephemeral`: Temporary
  - `session`: Current session
  - `permanent` (default): Long-term
- `privacy_level` (string, optional): Access control
  - `private` (default): Owner only
  - `team`: Team access
  - `public`: Public access
Example:
```json
{
  "bot_id": "123e4567-e89b-12d3-a456-426614174000",
  "content_text": "User prefers technical explanations with code examples",
  "memory_type": "preference",
  "importance_score": 0.8,
  "permanence_level": "permanent"
}
```
search_memories
Search bot memories using natural language (semantic search).
Parameters:
- `bot_id` (string, required): UUID of the bot
- `query` (string, required): Natural language search query
- `top_k` (number, optional): Number of results (1-100, default 10)
- `generate_answer` (boolean, optional): Generate answer from memories (default false)
Example:
```json
{
  "bot_id": "123e4567-e89b-12d3-a456-426614174000",
  "query": "What are the user's communication preferences?",
  "top_k": 5,
  "generate_answer": true
}
```
list_memories
List memories for a bot with filtering and pagination.
Parameters:
- `bot_id` (string, required): UUID of the bot
- `memory_type` (string, optional): Filter by type
- `limit` (number, optional): Max results (1-100, default 20)
- `offset` (number, optional): Skip count for pagination (default 0)
Example:
```json
{
  "bot_id": "123e4567-e89b-12d3-a456-426614174000",
  "memory_type": "preference",
  "limit": 50
}
```
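Because `limit` is capped at 100, retrieving a bot's full memory list means paging with `offset`. The loop below is an illustrative sketch: `ListPage` is a hypothetical synchronous stand-in for the `list_memories` tool call (the real call is asynchronous and returns MCP content blocks), and the fake backend exists only to demonstrate the paging logic.

```typescript
interface Memory { id: string; content_text: string }

// Hypothetical synchronous stand-in for the list_memories tool call.
type ListPage = (botId: string, limit: number, offset: number) => Memory[];

// Walk forward with limit/offset until a short (or empty) page signals the end.
function listAllMemories(listPage: ListPage, botId: string, pageSize = 100): Memory[] {
  const all: Memory[] = [];
  let offset = 0;
  for (;;) {
    const page = listPage(botId, pageSize, offset);
    all.push(...page);
    if (page.length < pageSize) break;
    offset += pageSize;
  }
  return all;
}

// Fake backend holding 250 memories to demonstrate the loop.
const store: Memory[] = Array.from({ length: 250 }, (_, i) => ({
  id: String(i),
  content_text: `memory ${i}`,
}));
const fakeListPage: ListPage = (_bot, limit, offset) => store.slice(offset, offset + limit);
const everything = listAllMemories(fakeListPage, "bot-1");
```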
delete_memory
Delete a specific memory by ID.
Parameters:
memory_id(string, required): UUID of the memory
reprocess_memory
Reprocess a memory to update embeddings, summary, or entities.
Parameters:
memory_id(string, required): UUID of the memory
🎓 LLM Usage Patterns & Best Practices
For AI Assistants Using This MCP Server
When you (the AI assistant) have access to this MCP server, follow these patterns for optimal results:
1. Setup Workflow
Always establish context before performing memory operations:
1. List available workspaces → Check if user has workspaces
2. If no workspace: Create one first
3. List bots in the workspace → Check if relevant bot exists
4. If no bot: Create appropriate bot type
5. Now ready for memory operations
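The five steps above can be sketched as a single find-or-create helper. Everything here is illustrative, not part of the real SDK: `ToolCaller` is a hypothetical synchronous stand-in for the workspace/bot tools, and the in-memory fake exists only to show the flow end to end.

```typescript
interface Workspace { id: string; name: string }
interface Bot { id: string; name: string; workspace_id: string }

// Hypothetical synchronous stand-in for the MCP tool-calling surface.
interface ToolCaller {
  listWorkspaces(): Workspace[];
  createWorkspace(name: string): Workspace;
  listBots(): Bot[];
  createBot(name: string, workspaceId: string): Bot;
}

// Steps 1-5: reuse existing resources, create them only when missing.
function ensureBot(client: ToolCaller, workspaceName: string, botName: string): Bot {
  const workspace =
    client.listWorkspaces().find((w) => w.name === workspaceName) ??
    client.createWorkspace(workspaceName);
  return (
    client.listBots().find((b) => b.name === botName && b.workspace_id === workspace.id) ??
    client.createBot(botName, workspace.id)
  );
}

// In-memory fake used to demonstrate the flow.
function makeFakeClient(): ToolCaller {
  let nextId = 1;
  const workspaces: Workspace[] = [];
  const bots: Bot[] = [];
  return {
    listWorkspaces: () => workspaces,
    createWorkspace: (name) => {
      const w = { id: String(nextId++), name };
      workspaces.push(w);
      return w;
    },
    listBots: () => bots,
    createBot: (name, workspace_id) => {
      const b = { id: String(nextId++), name, workspace_id };
      bots.push(b);
      return b;
    },
  };
}

const fake = makeFakeClient();
const bot1 = ensureBot(fake, "Customer Support AI", "Support Bot");
const bot2 = ensureBot(fake, "Customer Support AI", "Support Bot"); // reuses, creates nothing
```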
2. Memory Storage Strategy
DO:
- Store distinct, atomic pieces of information
- Use the appropriate `memory_type` for categorization
- Set `importance_score` based on relevance (user preferences: 0.7-0.9, casual facts: 0.3-0.5)
- Use `permanence_level: "permanent"` for user preferences and `"session"` for temporary context
DON'T:
- Store entire conversations as single memories
- Mix multiple unrelated facts in one memory
- Store trivial, transient information as permanent
Example - Good Memory Storage:
User says: "I'm a software engineer working on Python projects, and I prefer detailed explanations."
Store as 2 separate memories:
1. "User is a software engineer working primarily with Python"
- memory_type: "fact"
- importance_score: 0.8
2. "User prefers detailed, thorough explanations"
- memory_type: "preference"
- importance_score: 0.9
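Assuming the documented defaults (`memory_type: "other"`, importance 0.5, `permanent`, `private`), a `store_memory` payload builder might look like the sketch below. The helper name and the clamping of `importance_score` to [0, 1] are illustrative conveniences, not part of the SDK.

```typescript
type MemoryType = "fact" | "preference" | "credential" | "event" | "task" | "other";
type Permanence = "ephemeral" | "session" | "permanent";
type Privacy = "private" | "team" | "public";

interface StoreMemoryArgs {
  bot_id: string;
  content_text: string;
  memory_type: MemoryType;
  importance_score: number;
  permanence_level: Permanence;
  privacy_level: Privacy;
}

// Fill in the documented defaults and keep importance_score inside [0, 1].
function buildMemoryPayload(
  botId: string,
  contentText: string,
  overrides: Partial<Omit<StoreMemoryArgs, "bot_id" | "content_text">> = {}
): StoreMemoryArgs {
  const importance = overrides.importance_score ?? 0.5;
  return {
    bot_id: botId,
    content_text: contentText,
    memory_type: overrides.memory_type ?? "other",
    importance_score: Math.min(1, Math.max(0, importance)),
    permanence_level: overrides.permanence_level ?? "permanent",
    privacy_level: overrides.privacy_level ?? "private",
  };
}

const pref = buildMemoryPayload(
  "123e4567-e89b-12d3-a456-426614174000",
  "User prefers detailed, thorough explanations",
  { memory_type: "preference", importance_score: 0.9 }
);
```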
3. Search Strategy
When to search:
- At conversation start (retrieve user context)
- When user asks questions that might relate to past interactions
- Before making recommendations based on preferences
- When uncertain about user preferences
Search query tips:
- Use natural language that captures intent
- Be specific about what you're looking for
- Use `generate_answer: true` when you want synthesized information
Example:
User: "What kind of projects was I working on?"
Search query: "user's current projects and work"
With: generate_answer: true, top_k: 5
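A `search_memories` request like the one above can be built with a small helper that keeps `top_k` inside the documented 1-100 range (default 10). The helper itself is an illustrative sketch, not an SDK function.

```typescript
interface SearchArgs {
  bot_id: string;
  query: string;
  top_k: number;
  generate_answer: boolean;
}

// Clamp top_k to the documented 1-100 range; default to 10 results, no synthesis.
function buildSearchRequest(
  botId: string,
  query: string,
  topK = 10,
  generateAnswer = false
): SearchArgs {
  return {
    bot_id: botId,
    query,
    top_k: Math.min(100, Math.max(1, Math.round(topK))),
    generate_answer: generateAnswer,
  };
}

const req = buildSearchRequest("bot-1", "user's current projects and work", 5, true);
```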
4. Workspace & Bot Organization
Workspace naming conventions:
- Purpose-based: "Customer Support", "Personal Assistant"
- Project-based: "Project Apollo", "Marketing Campaign"
- User-based: "John's AI Workspace"
Bot type selection:
- `conversational`: General chat, customer support
- `knowledge_base`: Documentation Q&A, FAQ bots
- `task_oriented`: Appointment booking, task management
- `analytical`: Data analysis, report generation
- `creative`: Content creation, brainstorming
5. Error Handling
When operations fail:
- Check the error message for specifics
- Verify IDs are valid UUIDs
- Ensure workspace exists before creating bots
- Ensure bot exists before memory operations
- Inform user clearly about the issue
6. Privacy & Security
- Default to `privacy_level: "private"` unless the user specifies sharing
- Use `permanence_level: "ephemeral"` for sensitive information
- Never store credentials as plain text (use `memory_type: "credential"` and ensure encryption)
Example Conversation Flow
User: "Remember that I prefer Python over JavaScript"
AI Assistant:
1. Check if workspace exists (list_workspaces)
2. Check if bot exists (list_bots)
3. Store memory:
```json
{
  "bot_id": "...",
  "content_text": "User prefers Python programming language over JavaScript",
  "memory_type": "preference",
  "importance_score": 0.85,
  "permanence_level": "permanent",
  "privacy_level": "private"
}
```
4. Respond: "I've remembered your preference for Python over JavaScript."
Later conversation:
User: "What languages do I like?"
AI Assistant:
1. Search memories:
```json
{
  "bot_id": "...",
  "query": "programming language preferences",
  "top_k": 5,
  "generate_answer": true
}
```
2. Use results to respond with personalized answer
🛠️ Development
Project Structure
```
mcp-server/
├── src/
│   └── index.ts       # Main MCP server implementation
├── build/             # Compiled JavaScript (generated)
├── package.json       # Dependencies and scripts
├── tsconfig.json      # TypeScript configuration
├── README.md          # This file
├── LICENSE            # MIT License
└── CHANGELOG.md       # Version history
```
Building
```bash
npm run build
```
Development with Watch Mode
```bash
npm run watch
```
Testing Changes
```bash
# After making changes
npm run build
npm run inspector
```
Documentation
For detailed information, see the /docs folder:
- Complete guide to all 15 available tools
- System design and technical architecture
- Optimal prompts and usage patterns for LLMs
Security
- API tokens are passed via environment variables (not hardcoded)
- All communication uses HTTPS (when not in dev mode)
- Memories default to private access level
- UUIDs are validated before API calls
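The UUID check mentioned above can also be done client-side before issuing a tool call. The regex below accepts the canonical 8-4-4-4-12 hex form (any version); it is an illustrative sketch, not necessarily the exact validation the server performs.

```typescript
// Canonical 8-4-4-4-12 hex UUID form, case-insensitive.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidUuid(value: string): boolean {
  return UUID_RE.test(value);
}
```

Rejecting malformed IDs locally gives the user a clearer error than a failed API round trip.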
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🤝 Contributing
Contributions are welcome! This is an open-source project under the MIT license.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
📞 Support
- Documentation: https://docs.memof.ai
- Issues: GitHub Issues
- Email: dev@memof.ai
- Discord: Join our community
🔗 Related Projects
🙏 Acknowledgments
- Built on the Model Context Protocol by Anthropic
- Powered by Memory-of-Agents infrastructure
- Uses the memofai JavaScript SDK
Made with ❤️ by the Memory-of-Agents team