Secure Embedding MCP Server
A Model Context Protocol (MCP) implementation for secure text embeddings with privacy-preserving features using the Mirror SDK.
Overview
The Secure Embedding MCP Server provides a robust interface for processing text data with various security levels while generating embeddings for semantic search and analysis. It leverages the Mirror SDK to provide advanced security features including:
- Format-preserving encryption (FPE) for sensitive entities
- Vector encryption for secure embeddings
- Role-based access control (RBAC) for fine-grained security policies
- Entity detection for PII and sensitive information
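As a toy illustration of what "format-preserving" means here (deliberately not the Mirror SDK API, just a sketch), an FPE scheme encrypts a value so the ciphertext keeps the original shape, e.g. a hyphenated 9-digit SSN encrypts to another hyphenated 9-digit string:

# Toy sketch of format-preserving encryption (NOT the Mirror SDK API):
# each digit is shifted by a key-derived amount, so "123-45-6789" maps to
# another string with the same digits-and-dashes shape.
import hashlib

def toy_fpe_digits(value: str, key: bytes) -> str:
    pad = hashlib.sha256(key).digest()
    out, i = [], 0
    for ch in value:
        if ch.isdigit():
            out.append(str((int(ch) + pad[i % len(pad)]) % 10))
            i += 1
        else:
            out.append(ch)  # separators such as "-" are preserved
    return "".join(out)

print(toy_fpe_digits("123-45-6789", b"demo-key"))  # still SSN-shaped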
Features
- Unified Text Processing: Single entry point for various text operations with appropriate security measures
- Multiple Operation Modes:
  - embed: Generate text embeddings
  - secure: Apply security measures to text and embeddings
  - analyze: Detect and analyze sensitive information
  - mask: Anonymize sensitive entities
  - auto: Automatically determine the appropriate operation
- Configurable Security Levels:
  - none: No security measures
  - low: Basic vector encryption
  - medium: Entity encryption with FPE
  - high: Full encryption with RBAC
  - auto: Automatically determine the security level based on content
- Natural Language Interface: Process requests in natural language
- Batch Processing: Handle multiple texts efficiently
- Semantic Search: Search across documents using embeddings
Prerequisites
- Python 3.10+
- Mirror SDK
- LangChain with Hugging Face integration
- MCP Server framework
- Claude Desktop (for integration)
Mirror Platform Setup
1. Registration
- Visit the Mirror Platform
- Click on "Sign Up" or "Register"
- Fill in your details and create an account
- Verify your email address
2. Getting Your API Keys
- Log in to your Mirror Platform account
- Navigate to the "API Keys" section in your dashboard
- Click "Generate New Key"
- Save both the API Key and Secret securely
- These keys will be used in your environment variables or configuration file
3. Additional Capabilities
This implementation demonstrates a subset of Mirror's capabilities. For full enterprise features including:
- Advanced encryption algorithms
- Custom security policies
- Enterprise-grade RBAC
- Advanced entity detection
- Custom model integration
- Dedicated support
Please contact our team or visit https://mirrorsecurity.io/
4. Reference Documentation
Development Environment Setup
Install uv
uv is a fast Python package installer and resolver that we'll use for our environment setup.
MacOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows
# Using PowerShell
irm https://astral.sh/uv/install.ps1 | iex
Make sure to restart your terminal afterwards to ensure that the uv command gets picked up.
Installation
Automatic Installation (Recommended)
We provide two automatic installation scripts for different operating systems:
Windows
Run the following command in PowerShell or Command Prompt:
.\setup_claude_config.bat
This script will:
- Install uv if not present
- Set up the virtual environment
- Install required dependencies
- Configure Claude Desktop integration
- Create necessary configuration files
macOS/Linux
Run the following command in your terminal:
chmod +x setup_claude_config.sh
./setup_claude_config.sh
This script will:
- Check for required dependencies
- Install uv if not present
- Set up the virtual environment
- Install required dependencies
- Configure Claude Desktop integration
- Create necessary configuration files
Both scripts will create a log file (setup_log.txt) in the project directory for troubleshooting purposes.
Manual Installation
If you prefer to install manually or if you already have the project cloned:
- Clone the repository:
git clone https://github.com/yourusername/mirror-vectax-mcp-server.git
cd mirror-vectax-mcp-server
- Set up virtual environment with uv:
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
- Install the MCP CLI:
uv add "mcp[cli]" httpx
- Install dependencies:
uv add -r requirements.txt
- Install the Mirror SDK:
Download the Mirror SDK and copy the .whl files into the dist folder. Replace <version> with the downloaded version.
uv add .\dist\mirror_sdk-<version>.whl
uv add .\dist\mirror_enc-<version>.whl
- Set up environment variables:
export MIRROR_API_KEY="your-mirror-api-key"
export MIRROR_SECRET="your-mirror-secret"
export MIRROR_SERVER_URL="https://your-mirror-server-url/v1"
export EMBEDDING_MODEL="nomic-ai/nomic-embed-text-v1.5" # Optional
export EMBEDDING_DEVICE="cpu" # Or "cuda" for GPU acceleration
- Alternatively, create a configuration file secure_search_config.json with the following content:
{
"api_key": "your-mirror-api-key",
"secret": "your-mirror-secret",
"server_url": "https://your-mirror-server-url/v1",
"policy_eval_enabled": false,
"app_policy": {
"roles": ["admin", "researcher", "user", "analyst"],
"groups": ["ai_team", "ml_team", "nlp_team"],
"departments": ["research", "engineering", "IT"]
}
}
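A minimal sketch of how configuration like this might be resolved, assuming environment variables take precedence and secure_search_config.json is the fallback (the actual loading logic lives in mirror_vectax_server.py and may differ):

# Hypothetical configuration loader: environment variables first,
# then secure_search_config.json as a fallback.
import json
import os
from pathlib import Path

def load_config(path: str = "secure_search_config.json") -> dict:
    config = {}
    if Path(path).exists():
        config = json.loads(Path(path).read_text())
    # Environment variables override values from the file.
    for env_var, key in [
        ("MIRROR_API_KEY", "api_key"),
        ("MIRROR_SECRET", "secret"),
        ("MIRROR_SERVER_URL", "server_url"),
    ]:
        if os.getenv(env_var):
            config[key] = os.environ[env_var]
    return config

print(load_config())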
Claude Desktop Integration
1. Install Claude Desktop
- Download Claude Desktop from Anthropic's website
- Install and launch Claude Desktop
- Sign in with your Anthropic account
2. Configure Claude Desktop
- Create a claude_desktop_config.json file in your Claude Desktop configuration directory:
Windows:
%APPDATA%\Claude\claude_desktop_config.json
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
- Add the following configuration (adjust paths as needed):
{
"mcpServers": {
"secure-embedding": {
"command": "uv",
"args": [
"--directory",
"/ABSOLUTE/PATH/TO/YOUR/PROJECT/mirror-vectax-mcp-server",
"run",
"mirror_vectax_server.py"
]
}
}
}
- Find the full path to the uv executable:
  - macOS/Linux: which uv
  - Windows: where uv
- Update the command field in the config with the full path if needed
- Save the file and restart Claude Desktop
- Look for the hammer icon in Claude Desktop to confirm the MCP tools are available
3. Testing the Integration
- Open Claude Desktop
- Look for the hammer icon in the interface
- Try a simple test:
Can you create an embedding for this sentence: "Machine learning models can process large amounts of data efficiently."
Usage
Starting the Server
You can start the server using one of the following methods:
Method 1: Using the run_mcp_server.sh Script (Recommended)
- Make the script executable:
chmod +x run_mcp_server.sh
- Run the server:
./run_mcp_server.sh
This script sets up the environment variables and runs the server with proper Python settings:
- Adds local bin to PATH
- Sets Python to unbuffered mode
- Enables Python debug mode
- Changes to the correct directory
- Runs the server with Python 3
Method 2: Using uv
uv run mirror_vectax_server.py
Method 3: Using MCP CLI
mcp run mirror_vectax_server.py
Testing the Server
To verify the server is running correctly, you can use the MCP CLI to list available tools:
mcp list-tools --transport stdio --binary "python mirror_vectax_server.py"
This should output a list of all the tools provided by the server.
Testing with MCP Inspector
We can use the MCP Inspector to test the tools:
npx @modelcontextprotocol/inspector \
uv \
--directory /ABSOLUTE/PATH/TO/YOUR/PROJECT/mirror-vectax-mcp-server \
run \
mirror_vectax_server.py
After running the MCP Inspector, we can test each tool interactively, for example the process tool.
Customizing run_mcp_server.sh
If you need to modify the server startup configuration, you can edit the run_mcp_server.sh script. Here's what each line does:
#!/bin/bash
# Add local bin to PATH
export PATH="$PATH:/Users/Yourname/.local/bin"
# Run Python in unbuffered and debug mode
export PYTHONUNBUFFERED=1
export PYTHONDEBUG=1
# Change to project directory
cd /Users/your_downloaded_path/mirror-vectax-mcp-server-main
# Run the server with Python 3
exec python3 -u mirror_vectax_server.py
Make sure to:
- Update the paths to match your system
- Keep the script executable (chmod +x run_mcp_server.sh)
- Run it from the project directory
Architecture
The server consists of two main services:
- EmbeddingService: Creates and manages text embeddings using HuggingFace models.
- EncryptionService: Provides encryption capabilities using the Mirror SDK.
These services are initialized during server startup and made available to the MCP tools.
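A minimal sketch of how such services could be wired up with the FastMCP helper; the class, tool, and parameter names below are illustrative assumptions, not the actual mirror_vectax_server.py code, and the EncryptionService side is omitted since it depends on the Mirror SDK's API:

# Illustrative wiring only -- service and tool shapes are assumptions.
from langchain_huggingface import HuggingFaceEmbeddings
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("secure-embedding")

class EmbeddingService:
    """Creates and manages text embeddings with a Hugging Face model."""
    def __init__(self, model_name: str = "nomic-ai/nomic-embed-text-v1.5"):
        self.model = HuggingFaceEmbeddings(
            model_name=model_name,
            model_kwargs={"trust_remote_code": True},  # required by nomic models
        )

    def embed(self, text: str) -> list[float]:
        return self.model.embed_query(text)

embedding_service = EmbeddingService()  # initialized once at server startup

@mcp.tool()
def process(text: str, operation: str = "embed") -> dict:
    """Hypothetical 'process' tool exposing the embedding service."""
    if operation == "embed":
        return {"embedding": embedding_service.embed(text)}
    return {"error": f"unsupported operation: {operation}"}

if __name__ == "__main__":
    mcp.run()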
Test Prompts for Secure Embedding MCP Server
Basic Functionality Tests
1. Test Basic Embedding
Can you create an embedding for this sentence: "Machine learning models can process large amounts of data efficiently."
Expected outcome: Should use the process tool with the "embed" operation and return embedding information.
2. Test Entity Detection
Can you analyze this text for sensitive information? "My social security number is 123-45-6789 and my email is "
Expected outcome: Should use the process tool with the "analyze" operation, detect SSN and email entities, and return analysis details.
3. Test Secure Processing
Please securely process this text: "My credit card is 4111-1111-1111-1111 and my phone number is (555) 123-4567."
Expected outcome: Should use the process tool with the "secure" operation at medium/high security level, encrypt sensitive entities, and potentially encrypt the embedding.
4. Test Natural Language Processing
I need to encrypt and protect this confidential medical information: "Patient John Doe (DOB: 01/15/1980) has been diagnosed with hypertension."
Expected outcome: Should use the natural-language-process tool to determine intent (secure/mask) and apply appropriate security measures.
Advanced Functionality Tests
5. Test Batch Processing
I need embeddings for the following phrases:
- "Artificial intelligence is transforming industries."
- "Data privacy is an important concern for organizations."
- "Secure embeddings protect sensitive information during processing."
Expected outcome: Should use the batch-process tool to create embeddings for all three phrases.
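For reference, the one-text-at-a-time pattern mentioned in the troubleshooting section might look roughly like this (a sketch, not the server's actual batch-process implementation):

# Sketch: embed several texts sequentially to keep memory usage low.
from langchain_huggingface import HuggingFaceEmbeddings

texts = [
    "Artificial intelligence is transforming industries.",
    "Data privacy is an important concern for organizations.",
    "Secure embeddings protect sensitive information during processing.",
]

model = HuggingFaceEmbeddings(model_name="nomic-ai/nomic-embed-text-v1.5",
                              model_kwargs={"trust_remote_code": True})
embeddings = [model.embed_query(t) for t in texts]  # one text per call
print(len(embeddings), "embeddings of dimension", len(embeddings[0]))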
6. Test RBAC Functionality
Generate a user key for a data analyst who belongs to the research department and has user-level access.
Expected outcome: Should use the generate-user-key tool with appropriate roles, groups, and departments.
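The roles, groups, and departments would presumably be drawn from the app_policy in secure_search_config.json; a hypothetical argument set for this prompt might be:

# Hypothetical generate-user-key arguments (names taken from the sample
# app_policy above; the actual tool schema may differ).
user_key_request = {
    "roles": ["analyst", "user"],
    "departments": ["research"],
    "groups": ["ai_team"],
}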
7. Test Search Functionality
First, create embeddings for these phrases:
- "Security is a top priority for financial institutions."
- "Privacy regulations impact how companies handle data."
- "Machine learning models require careful validation." Then search these documents for information about "data privacy".
Expected outcome: Should first use the batch-process tool to create embeddings, then use the search tool to find relevant documents, with the second phrase likely scoring highest.
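Ranking like this typically boils down to cosine similarity between the query embedding and each document embedding; a minimal sketch (independent of any encrypted-vector handling the Mirror SDK may add):

# Cosine-similarity ranking sketch; the actual search tool may operate on
# encrypted vectors via the Mirror SDK instead of plaintext embeddings.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank(query_vec: np.ndarray, doc_vecs: list) -> list:
    scores = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy vectors standing in for real embeddings:
docs = [np.array([0.9, 0.1]), np.array([0.2, 0.95]), np.array([0.6, 0.4])]
print(rank(np.array([0.15, 0.99]), docs))  # document at index 1 scores highest here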
Edge Cases and Error Testing
8. Test with Empty/Short Text
Can you analyze this text: ""
Expected outcome: Should handle empty input gracefully, possibly returning an error message or minimal analysis.
9. Test with Very Long Text
Please secure this document: [Insert 1000+ word document about sensitive financial information]
Expected outcome: Should handle large input without issues, possibly detecting multiple entities and recommending high security level.
10. Test with Multilingual Text
Analyze this text for sensitive information: "Mi número de pasaporte es AB123456 y mi dirección es Calle Principal 123, Madrid, España."
Expected outcome: Should detect entities in non-English text if the underlying entity detection supports it.
Complex Integration Tests
11. End-to-End Workflow Test
- First, create secure embeddings for a set of medical records that contain patient information.
- Generate a user key for a doctor in the cardiology department.
- Search the embeddings for information about "heart conditions" using the doctor's access key.
Expected outcome: Should demonstrate a full workflow involving multiple tools - process for secure embeddings, generate-user-key for RBAC, and search with the key.
12. Auto-Detection Test
Process this text and determine the appropriate security measures automatically: "This quarterly financial report contains projections for Q3 2025 and includes account numbers for our top clients."
Expected outcome: Should use the process tool with the "auto" operation and security level, detect sensitive content, and apply appropriate security measures based on the content.
Troubleshooting
Common Issues
- Claude Desktop Not Recognizing Tools
  - Ensure the claude_desktop_config.json file is in the correct location
  - Verify the paths in the configuration are absolute and correct
  - Restart Claude Desktop after making changes
- Server Connection Issues
  - Check if the server is running (uv run mirror_vectax_server.py)
  - Verify environment variables are set correctly
  - Check the logs for any error messages
- Memory Issues with Batch Processing
  - The server processes texts one at a time to prevent memory issues
  - If you experience disconnections, try processing fewer texts at once
  - Monitor system memory usage during processing
- Model Loading Issues
  - Ensure you have sufficient disk space for model caching
  - Check internet connectivity for initial model download
  - Verify the model cache directory is writable
Getting Help
If you encounter issues:
- Check the logs in Claude Desktop
- Review the server logs
- Ensure all prerequisites are installed correctly
- Contact support if issues persist