Polars MCP Server
A Model Context Protocol (MCP) server that exposes core Polars DataFrame operations as individual tools for AI assistants.
Features
This server provides comprehensive data analysis capabilities through Polars, including:
- Data I/O: Read/write CSV, Parquet, and JSON files
- Data Selection: Filter rows, select columns, drop columns
- Data Transformation: Add columns, rename, cast data types
- Aggregation: Group by operations with various aggregation functions
- Joins: Inner, left, right, outer, and cross joins
- Reshaping: Pivot and melt operations
- Statistics: Descriptive statistics, value counts, unique values
- Null Handling: Fill or drop null values with various strategies
- SQL Interface: Execute SQL queries on loaded datasets
Installation
This project uses uv for dependency management.
```bash
# Install dependencies
uv sync

# Run the server
uv run server.py
```
Requirements
- Python ≥ 3.13
- Dependencies:
  - `fastmcp` ≥ 2.10.2
  - `polars` ≥ 1.31.0
Usage
The server stores datasets in memory by name; every tool operates on these named datasets, so you can chain operations across multiple tool calls.
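A minimal sketch of how such a named-dataset registry might look with FastMCP (illustrative only, not the project's actual server code; the server name and return message are assumptions):

```python
import polars as pl
from fastmcp import FastMCP

mcp = FastMCP("polars-mcp")  # server name is an assumption

# In-memory registry: dataset name -> DataFrame. Every tool reads from
# and writes to this dict, which is what makes chaining possible.
DATASETS: dict[str, pl.DataFrame] = {}

@mcp.tool()
def read_csv(file_path: str, dataset_name: str) -> str:
    """Load a CSV file and store it under the given name."""
    DATASETS[dataset_name] = pl.read_csv(file_path)
    return f"Loaded {DATASETS[dataset_name].height} rows into '{dataset_name}'"

if __name__ == "__main__":
    mcp.run()
```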
Basic Workflow
- Load data with `read_csv`, `read_parquet`, or `read_json`
- Transform data with operations like `select`, `filter`, `with_columns`
- Analyze data with `describe`, `group_by`, `value_counts`
- Export results with `write_csv` or similar functions
Available Tools
Data I/O
- `read_csv` - Load CSV files into datasets
- `write_csv` - Export datasets to CSV files
Selection & Filtering
- `select` - Choose specific columns
- `filter` - Filter rows based on conditions
- `drop` - Remove columns
Data Transformation
- `with_columns` - Add or modify columns using expressions
- `rename` - Rename columns
- `cast` - Change column data types
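These tool names mirror the underlying Polars DataFrame methods. For orientation, the plain-Polars equivalents look like this (the MCP tools' exact parameter schema is not shown here; column names are illustrative):

```python
import polars as pl

df = pl.DataFrame({"price": ["1.5", "2.0"], "qty": [3, 4]})
df = (
    df.with_columns((pl.col("qty") * 2).alias("qty_doubled"))  # add a derived column
      .rename({"qty": "quantity"})                             # rename a column
      .with_columns(pl.col("price").cast(pl.Float64))          # change a dtype
)
```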
Aggregation
- `group_by` - Group data and apply aggregation functions
- `describe` - Get descriptive statistics
Joins & Concatenation
- `join` - Join two datasets
- `concat` - Concatenate multiple datasets
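In plain Polars, the operations these tools wrap look like the following (data and key names are illustrative):

```python
import polars as pl

left = pl.DataFrame({"id": [1, 2, 3], "amount": [10, 20, 30]})
right = pl.DataFrame({"id": [2, 3], "region": ["east", "west"]})

joined = left.join(right, on="id", how="inner")  # how also accepts "left", "right", "full"
stacked = pl.concat([left, left])                # vertical concatenation by default
```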
Utilities
- `sort` - Sort datasets by columns
- `unique` - Get unique values
- `value_counts` - Count occurrences of values
- `null_count` - Count null values
- `fill_null` - Fill null values
- `drop_nulls` - Remove rows with nulls
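The null-handling tools correspond to Polars' own strategies; for example, in plain Polars:

```python
import polars as pl

df = pl.DataFrame({"x": [1, None, 3]})
df.null_count()                   # nulls per column
df.fill_null(0)                   # fill with a literal value
df.fill_null(strategy="forward")  # or use a fill strategy
df.drop_nulls()                   # drop rows containing any null
```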
Advanced Operations
- `pivot` - Reshape from long to wide format
- `melt` - Reshape from wide to long format
- `sql` - Execute SQL queries on datasets
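For reference, the corresponding Polars APIs look like this (note that recent Polars versions call the melt operation `unpivot`, and SQL runs through a `SQLContext`; the table name `sales` is illustrative):

```python
import polars as pl

df = pl.DataFrame({
    "region": ["east", "east", "west"],
    "year": [2023, 2024, 2023],
    "amount": [10, 20, 30],
})

wide = df.pivot(on="year", index="region", values="amount")  # long -> wide
tall = wide.unpivot(index="region")                          # wide -> long

totals = pl.SQLContext(sales=df).execute(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
    eager=True,
)
```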
Resources
The server also provides resources for dataset inspection:
- `datasets://list` - List all datasets in memory
- `datasets://{name}/head` - Preview first rows of a dataset
- `datasets://{name}/schema` - Get schema information
- `datasets://{name}/info` - Get comprehensive dataset information
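A hedged sketch of reading these resources from a FastMCP client, assuming the server runs from server.py (the FastMCP `Client` infers a stdio transport from a script path; the `sales_data` dataset name is illustrative):

```python
import asyncio
from fastmcp import Client

async def main():
    async with Client("server.py") as client:
        datasets = await client.read_resource("datasets://list")
        preview = await client.read_resource("datasets://sales_data/head")
        print(datasets, preview)

asyncio.run(main())
```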
Example
```python
# Load a CSV file
await read_csv(file_path="data.csv", dataset_name="sales_data")

# Filter the data
await filter(
    dataset_name="sales_data",
    conditions=[{"column": "amount", "op": "gt", "value": 100}],
    output_name="large_sales"
)

# Group and aggregate
await group_by(
    dataset_name="large_sales",
    by=["region"],
    agg=[{"column": "amount", "func": "sum"}],
    output_name="regional_totals"
)

# Export results
await write_csv(dataset_name="regional_totals", file_path="results.csv")
```
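For comparison, the same pipeline in plain Polars, which is roughly what this tool chain amounts to on the server side:

```python
import polars as pl

(
    pl.read_csv("data.csv")
      .filter(pl.col("amount") > 100)   # keep rows where amount > 100
      .group_by("region")
      .agg(pl.col("amount").sum())      # sum amounts per region
      .write_csv("results.csv")
)
```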
License
This project is open source. Please check the license file for details.