Polars MCP Server

A Model Context Protocol (MCP) server that exposes core Polars DataFrame operations as individual tools for AI assistants.

Features

This server provides comprehensive data analysis capabilities through Polars, including:

  • Data I/O: Read/write CSV, Parquet, and JSON files
  • Data Selection: Filter rows, select columns, drop columns
  • Data Transformation: Add columns, rename, cast data types
  • Aggregation: Group by operations with various aggregation functions
  • Joins: Inner, left, right, outer, and cross joins
  • Reshaping: Pivot and melt operations
  • Statistics: Descriptive statistics, value counts, unique values
  • Null Handling: Fill or drop null values with various strategies
  • SQL Interface: Execute SQL queries on loaded datasets

Installation

This project uses uv for dependency management.

# Install dependencies
uv sync

# Run the server
uv run server.py

Requirements

  • Python ≥ 3.13
  • Dependencies:
    • fastmcp ≥ 2.10.2
    • polars ≥ 1.31.0

Usage

The server stores datasets in memory by name. Every tool reads from and writes to these named datasets, so results can be chained across multiple tool calls.

Basic Workflow

  1. Load data with read_csv, read_parquet, or read_json
  2. Transform data with operations like select, filter, with_columns
  3. Analyze data with describe, group_by, value_counts
  4. Export results with write_csv or a matching write tool (the Example section below walks through this full workflow)

Available Tools

Data I/O
  • read_csv - Load CSV files into datasets
  • read_parquet - Load Parquet files into datasets
  • read_json - Load JSON files into datasets
  • write_csv - Export datasets to CSV files
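
Parquet and JSON loading is referenced in the Basic Workflow above; a minimal sketch, assuming these tools mirror the read_csv signature shown in the Example section below:

# Assumed to take the same file_path/dataset_name parameters as read_csv
await read_parquet(file_path="data.parquet", dataset_name="sales_data")
await read_json(file_path="events.json", dataset_name="events")
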
Selection & Filtering
  • select - Choose specific columns
  • filter - Filter rows based on conditions
  • drop - Remove columns
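
A minimal sketch of the selection tools, assuming they follow the same dataset_name/output_name convention as the Example section below (the columns parameter name is an assumption):

# Keep only the columns needed downstream
await select(
    dataset_name="sales_data",
    columns=["region", "amount"],
    output_name="sales_slim"
)

# Remove a column instead of selecting around it
await drop(dataset_name="sales_data", columns=["internal_id"], output_name="sales_public")
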
Data Transformation
  • with_columns - Add or modify columns using expressions
  • rename - Rename columns
  • cast - Change column data types
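
How with_columns expressions are encoded is not documented here; a hedged sketch, assuming a dict-based expression spec in the style of the filter conditions from the Example section (exprs, mapping, and dtypes are all assumed parameter names):

# Derive a new column from an existing one (expression schema assumed)
await with_columns(
    dataset_name="sales_data",
    exprs=[{"output": "amount_eur", "column": "amount", "op": "mul", "value": 0.92}],
    output_name="sales_eur"
)

# Rename and retype columns (mapping shapes assumed)
await rename(dataset_name="sales_eur", mapping={"amount": "amount_usd"}, output_name="sales_eur")
await cast(dataset_name="sales_eur", dtypes={"amount_eur": "Float64"}, output_name="sales_eur")
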
Aggregation
  • group_by - Group data and apply aggregation functions
  • describe - Get descriptive statistics
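
group_by is demonstrated in the Example section below; describe is not. A minimal call, assuming it only needs the dataset name:

# Summary statistics (count, mean, min, max, ...) for every column
await describe(dataset_name="sales_data")
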
Joins & Concatenation
  • join - Join two datasets
  • concat - Concatenate multiple datasets
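
A hedged sketch of joining and stacking datasets; other_name, on, how, and dataset_names are assumed parameter names:

# Left join on a shared key column
await join(
    dataset_name="sales_data",
    other_name="region_lookup",
    on=["region"],
    how="left",
    output_name="sales_enriched"
)

# Stack datasets that share a schema
await concat(dataset_names=["sales_2023", "sales_2024"], output_name="sales_all")
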
Utilities
  • sort - Sort datasets by columns
  • unique - Get unique values
  • value_counts - Count occurrences of values
  • null_count - Count null values
  • fill_null - Fill null values
  • drop_nulls - Remove rows with nulls
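
The utility tools compose the same way; a sketch with assumed parameter names beyond dataset_name/output_name:

# Sort by amount, largest first (by/descending names assumed)
await sort(dataset_name="sales_data", by=["amount"], descending=True, output_name="sales_sorted")

# Fill nulls with zero, then drop any rows that still contain nulls
await fill_null(dataset_name="sales_sorted", strategy="zero", output_name="sales_filled")
await drop_nulls(dataset_name="sales_filled", output_name="sales_clean")
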
Advanced Operations
  • pivot - Reshape from long to wide format
  • melt - Reshape from wide to long format
  • sql - Execute SQL queries on datasets
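
A sketch of the reshaping and SQL tools. The pivot parameter names follow Polars' own pivot arguments, and the sql tool is assumed to expose loaded datasets as tables under their dataset names; both are assumptions:

# Long-to-wide reshape (index/columns/values parameter names assumed)
await pivot(
    dataset_name="sales_data",
    index=["region"],
    columns=["quarter"],
    values=["amount"],
    output_name="sales_wide"
)

# SQL over an in-memory dataset (query/output_name parameter names assumed)
await sql(
    query="SELECT region, SUM(amount) AS total FROM sales_data GROUP BY region",
    output_name="regional_totals_sql"
)
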

Resources

The server also provides resources for dataset inspection:

  • datasets://list - List all datasets in memory
  • datasets://{name}/head - Preview first rows of a dataset
  • datasets://{name}/schema - Get schema information
  • datasets://{name}/info - Get comprehensive dataset information
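
Resources are fetched by an MCP client rather than invoked as tools. A minimal sketch assuming standard fastmcp client usage against the server script from the Installation section (the sales_data name is carried over from the Example below):

from fastmcp import Client

async def inspect():
    # Spawn and connect to the server over stdio
    async with Client("server.py") as client:
        # Enumerate every dataset currently held in memory
        listing = await client.read_resource("datasets://list")
        # Peek at the first rows of one dataset
        head = await client.read_resource("datasets://sales_data/head")
        print(listing, head)
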

Example

# Load a CSV file
await read_csv(file_path="data.csv", dataset_name="sales_data")

# Filter the data
await filter(
    dataset_name="sales_data", 
    conditions=[{"column": "amount", "op": "gt", "value": 100}],
    output_name="large_sales"
)

# Group and aggregate
await group_by(
    dataset_name="large_sales",
    by=["region"],
    agg=[{"column": "amount", "func": "sum"}],
    output_name="regional_totals"
)

# Export results
await write_csv(dataset_name="regional_totals", file_path="results.csv")

License

This project is open source. Please check the license file for details.