Employee Salary MCP Server 🏢
A high-performance Model Context Protocol (MCP) server for retrieving employee salary information from Snowflake databases. Features intelligent employee identifier extraction and optimized batch processing.
✨ Features
- 🚀 Batch Processing: Single Snowflake query for multiple employee IDs (up to 85% reduction in database calls)
- 🎯 Smart ID Detection: Automatically recognizes employee IDs vs. person names
- 📁 Multi-Format Support: PDF, DOCX, CSV, TXT file processing
- ☁️ S3 Integration: Direct processing of employee files from Amazon S3
- 🔄 Connection Pooling: Shared database connections for optimal performance
- ⚡ Optimized Queries: Efficient batch lookups with flexible name matching
- 📊 Comprehensive Results: Detailed statistics and progress reporting
Quick Start
Environment Variables
Required:

```
SNOWFLAKE_ACCOUNT=your_account.region
SNOWFLAKE_USER=your_username
SNOWFLAKE_PASSWORD=your_password
```

Optional:

```
SNOWFLAKE_DATABASE=iceberg_Db
SNOWFLAKE_SCHEMA=sales
SNOWFLAKE_WAREHOUSE=COMPUTE_WH
SNOWFLAKE_ROLE=PUBLIC
MAX_FILE_SIZE_MB=50
CONFIDENCE_THRESHOLD=0.7
MAX_EMPLOYEES_PER_FILE=1000
PORT=8099
```
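How the server reads these is implementation-specific, but a minimal sketch of loading them with the defaults listed above (variable names are from this README; the fail-fast handling of required values is an assumption):

```python
import os

# Required -- raise immediately if missing (assumed behavior)
SNOWFLAKE_ACCOUNT = os.environ["SNOWFLAKE_ACCOUNT"]
SNOWFLAKE_USER = os.environ["SNOWFLAKE_USER"]
SNOWFLAKE_PASSWORD = os.environ["SNOWFLAKE_PASSWORD"]

# Optional -- defaults taken from the list above
SNOWFLAKE_DATABASE = os.getenv("SNOWFLAKE_DATABASE", "iceberg_Db")
SNOWFLAKE_SCHEMA = os.getenv("SNOWFLAKE_SCHEMA", "sales")
SNOWFLAKE_WAREHOUSE = os.getenv("SNOWFLAKE_WAREHOUSE", "COMPUTE_WH")
SNOWFLAKE_ROLE = os.getenv("SNOWFLAKE_ROLE", "PUBLIC")
MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_SIZE_MB", "50"))
CONFIDENCE_THRESHOLD = float(os.getenv("CONFIDENCE_THRESHOLD", "0.7"))
MAX_EMPLOYEES_PER_FILE = int(os.getenv("MAX_EMPLOYEES_PER_FILE", "1000"))
PORT = int(os.getenv("PORT", "8099"))
```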
Local Development
```bash
# Install dependencies
pip install -r requirements.txt

# Run the optimized server
python snow_emp_details_server_sampling.py
```
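Before launching, you can sanity-check the Snowflake credentials with a short standalone script (a sketch using `snowflake-connector-python`; the server's own connection handling may differ):

```python
import os
import snowflake.connector

# Connect with the same environment variables the server uses
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse=os.getenv("SNOWFLAKE_WAREHOUSE", "COMPUTE_WH"),
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")
    print("Connected to Snowflake:", cur.fetchone()[0])
finally:
    conn.close()
```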
Render Cloud Deployment
1. Web Service Setup
Create a new Web Service in Render with:
- Repository: Link your GitHub repository
- Branch: `main`
- Root Directory: `iceberg_mcp/snow_emp_details`
- Runtime: Python 3
- Build Command: `pip install -r requirements.txt`
- Start Command: `python snow_emp_details_server.py`
2. Environment Variables
Add these in the Render dashboard:

```
SNOWFLAKE_ACCOUNT=your_account.region
SNOWFLAKE_USER=your_username
SNOWFLAKE_PASSWORD=your_password
SNOWFLAKE_DATABASE=HR_DB
SNOWFLAKE_SCHEMA=EMPLOYEES
SNOWFLAKE_WAREHOUSE=COMPUTE_WH
OPENAI_API_KEY=your_openai_api_key
PORT=8099

# Optional: for S3 functionality
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_DEFAULT_REGION=us-east-1
```
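If you set the optional S3 variables, a quick way to confirm they can reach your bucket is a one-off `boto3` check (a sketch; the bucket and key are placeholders):

```python
import boto3

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
# AWS_DEFAULT_REGION from the environment automatically
s3 = boto3.client("s3")

# HEAD request: a cheap existence/permission check on a sample file
s3.head_object(Bucket="my-bucket", Key="employees/emp_ids.txt")
print("S3 credentials and object access look good")
```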
3. Database Schema
Ensure your Snowflake database has this table structure:
```sql
CREATE TABLE IF NOT EXISTS EMPLOYEE_SALARIES (
    EMPLOYEE_ID VARCHAR(50) PRIMARY KEY,
    FIRST_NAME VARCHAR(100) NOT NULL,
    LAST_NAME VARCHAR(100) NOT NULL,
    SALARY DECIMAL(12,2) NOT NULL,
    DEPARTMENT VARCHAR(100),
    POSITION VARCHAR(100),
    HIRE_DATE DATE,
    CREATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP(),
    UPDATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP()
);

-- Indexes for performance
-- Note: CREATE INDEX is supported only on Snowflake hybrid tables;
-- standard tables have no secondary indexes, so skip these statements
-- there and rely on micro-partition pruning or clustering keys instead.
CREATE INDEX IF NOT EXISTS idx_employee_names
    ON EMPLOYEE_SALARIES (UPPER(FIRST_NAME), UPPER(LAST_NAME));

CREATE INDEX IF NOT EXISTS idx_full_name
    ON EMPLOYEE_SALARIES (UPPER(FIRST_NAME || ' ' || LAST_NAME));
```
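To exercise the tools against real rows, you can seed a few sample employees (a sketch using the connector's `executemany`; the names and salaries are made up):

```python
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database=os.getenv("SNOWFLAKE_DATABASE", "HR_DB"),
    schema=os.getenv("SNOWFLAKE_SCHEMA", "EMPLOYEES"),
    warehouse=os.getenv("SNOWFLAKE_WAREHOUSE", "COMPUTE_WH"),
)

# Sample rows matching the schema above (HIRE_DATE left to default NULL)
rows = [
    ("EMP001", "John", "Smith", 75000.00, "Engineering", "Software Developer"),
    ("EMP002", "Jane", "Doe", 85000.00, "Engineering", "Senior Developer"),
]
conn.cursor().executemany(
    "INSERT INTO EMPLOYEE_SALARIES "
    "(EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY, DEPARTMENT, POSITION) "
    "VALUES (%s, %s, %s, %s, %s, %s)",
    rows,
)
conn.close()
```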
🛠️ Available Tools
1. get_all_employees_by_ids (Recommended)
- Batch lookup for multiple employee IDs in a single query
- Performance: 1 database query for N employees vs. N individual queries
- Returns comprehensive results with statistics
- Example input (see the client sketch after this list): `["EMP001", "EMP002", "EMP003", "EMP004", "EMP005", "EMP006"]`
2. read_local_employee_file
- Process local files directly (no base64 encoding needed)
- Supports `~/Downloads/emp_ids.txt` and similar paths
- Smart employee ID extraction from structured files
- Automatic batch processing for found IDs
3. upload_and_process_employee_file
- Upload and process employee files (PDF, DOCX, CSV, TXT)
- Base64 encoded file content
- Intelligent identifier extraction
- Batch salary lookups with statistics
4. get_employee_by_id
- Direct lookup by employee ID
- Fast primary key search
- Returns full employee details
5. get_employee_by_name
- Look up employee by name (flexible matching)
- Handles various name formats
- Returns complete employee information
6. test_snowflake_connection
- Test database connectivity
- Verify table access
- Get database statistics
7. process_employee_file_from_s3 ☁️ NEW
- S3 Integration: Direct processing of employee files from Amazon S3
- Downloads and processes files from S3 buckets
- Supports same file formats as local/upload tools (PDF, DOCX, CSV, TXT)
- Returns S3 metadata along with employee data
- Example: `process_employee_file_from_s3("my-bucket", "employees/emp_ids.txt")`
8. get_processing_stats
- Server configuration info
- Supported file formats
- Processing capabilities
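These tools are normally invoked by an MCP client such as Claude Desktop, but you can also call them directly with the official `mcp` Python SDK, as in this sketch (the argument name `employee_ids` is an assumption about the tool's input schema):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess speaking MCP over stdio
    params = StdioServerParameters(
        command="python", args=["snow_emp_details_server.py"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Batch lookup: one tool call, one Snowflake query for all IDs
            result = await session.call_tool(
                "get_all_employees_by_ids",
                {"employee_ids": ["EMP001", "EMP002", "EMP003"]},
            )
            print(result.content)

asyncio.run(main())
```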
Claude Desktop Configuration
Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "employee-salary": {
      "command": "python",
      "args": ["/path/to/snow_emp_details_server.py"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "your_account.region",
        "SNOWFLAKE_USER": "your_username",
        "SNOWFLAKE_PASSWORD": "your_password",
        "OPENAI_API_KEY": "your_openai_api_key"
      }
    }
  }
}
```
For a remote deployment, Claude Desktop needs a local bridge process to reach the hosted server; one common option is the `mcp-remote` npm package (an assumed setup, so adjust the URL and transport to your deployment):

```json
{
  "mcpServers": {
    "employee-salary-remote": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-render-app.onrender.com"]
    }
  }
}
```
📊 Usage Examples
Batch Processing (Recommended)
```python
# Process multiple employee IDs efficiently
result = await get_all_employees_by_ids([
    "EMP001", "EMP002", "EMP003", "EMP004", "EMP005", "EMP006"
])

# Result includes (found_employees truncated to one entry here):
{
    "found_employees": {
        "EMP001": {
            "employee_name": "John Smith",
            "salary": 75000.00,
            "department": "Engineering",
            "position": "Software Developer"
        }
    },
    "summary": {
        "total_found": 6,
        "success_rate_percent": 100.0,
        "total_salary_budget": 480000.00,
        "average_salary": 80000.00
    }
}
```
File Processing
```python
# Process local file (most convenient)
result = await read_local_employee_file("~/Downloads/emp_ids.txt")

# Upload and process (base64 encoded)
result = await upload_and_process_employee_file(
    file_content="base64_encoded_content",
    filename="emp_ids.txt"
)
```
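If you need to build the base64 payload for `upload_and_process_employee_file` yourself, the standard library is enough (a sketch; the path is a placeholder):

```python
import base64
from pathlib import Path

# Read the raw bytes and encode them as a base64 string for upload
path = Path.home() / "Downloads" / "emp_ids.txt"
file_content = base64.b64encode(path.read_bytes()).decode("ascii")
```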
Individual Lookups
```python
# By employee ID (exact match)
employee = await get_employee_by_id("EMP001")

# By name (flexible matching)
employee = await get_employee_by_name("John Doe")
```
File Format Support
- PDF: Text extraction with PyPDF2
- DOCX: Word documents including tables
- CSV: Various delimiters and encodings
- TXT: Multiple encoding support
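The server's extraction logic is more involved (the `CONFIDENCE_THRESHOLD` setting suggests an AI-assisted pass), but the core idea of pulling employee IDs out of extracted text can be illustrated with a simple pattern scan, assuming IDs follow an `EMP###` shape:

```python
import re

def extract_employee_ids(text: str) -> list[str]:
    """Pull EMP-style identifiers out of free-form text, de-duplicated in order."""
    seen: dict[str, None] = {}
    for match in re.findall(r"\bEMP\d{3,}\b", text, flags=re.IGNORECASE):
        seen.setdefault(match.upper())
    return list(seen)

print(extract_employee_ids("Review EMP001, emp002 and John Doe; EMP001 again."))
# -> ['EMP001', 'EMP002']
```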
Configuration Options
- `MAX_FILE_SIZE_MB`: Maximum upload size (default: 50 MB)
- `CONFIDENCE_THRESHOLD`: AI confidence threshold (default: 0.7)
- `MAX_EMPLOYEES_PER_FILE`: Processing limit (default: 1000)
- `OPENAI_TEMPERATURE`: LLM temperature (default: 0.1)
Troubleshooting
Connection Issues
- Verify Snowflake credentials
- Check network connectivity
- Ensure warehouse is running
File Processing
- Ensure files are base64 encoded
- Check file size limits
- Verify file format support
AI Processing
- Check OpenAI API key
- Adjust confidence threshold
- Review file content quality
Security Considerations
- Store credentials as environment variables
- Use Render's built-in secret management
- Enable Snowflake IP restrictions if needed
- Monitor API usage and costs
Support
For issues and questions:
- Check the logs for detailed error messages
- Verify environment variables are set correctly
- Test the database connection using `test_snowflake_connection`
- Review file format and encoding requirements