mcp-pytest-runner
MCP server providing opinionated pytest execution interface for AI agents.
Overview
mcp-pytest-runner is a Model Context Protocol (MCP) server that enables AI coding assistants to execute pytest with intelligent test selection, structured result interpretation, and context-aware recommendations.
Status
This project is in active development. The MCP server implementation is complete with stdio transport, test discovery, and test execution capabilities.
Installation
Install via uvx for immediate use:
uvx mcp-pytest-runner
Or add to your Python environment:
pip install mcp-pytest-runner
MCP Integration
mcp-pytest-runner provides a Model Context Protocol server that enables AI coding assistants to execute pytest with intelligent test selection and structured result interpretation.
Claude Code Configuration
Add mcp-pytest-runner to your Claude Code MCP settings using the claude mcp add command:
claude mcp add pytest uvx mcp-pytest-runner
Or manually configure by editing your MCP settings. The configuration file location depends on your platform:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Example manual configuration:
{
  "mcpServers": {
    "pytest": {
      "command": "uvx",
      "args": ["mcp-pytest-runner"]
    }
  }
}
Why uvx? It ensures you always run the latest version without manual updates. Alternatively, use the mcp-pytest-runner command directly if the package is installed in your system Python.
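For that alternative, a minimal configuration sketch might look like the following. It assumes the pip install places an mcp-pytest-runner executable on your PATH; if it is not found, point command at the script's absolute path instead.
{
  "mcpServers": {
    "pytest": {
      "command": "mcp-pytest-runner",
      "args": []
    }
  }
}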
Available Tools
discover_tests
Discover pytest test structure without executing tests.
Parameters:
- path (optional): Directory or file path to discover tests within (default: project root)
- pattern (optional): Test file pattern (default: test_*.py or *_test.py)
Returns: Hierarchical test organization with node IDs for subsequent execution.
Why use this? Understand test suite structure before execution. Enables intelligent test selection in TDD workflows.
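As an illustration (the directory path is hypothetical), an agent scoping discovery to a unit-test folder would pass arguments such as:
{
  "path": "tests/unit",
  "pattern": "test_*.py"
}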
execute_tests
Execute pytest tests with validated parameters.
Parameters:
- node_ids (optional): Specific test node IDs to execute (e.g., ["tests/test_user.py::test_login"])
- markers (optional): Pytest marker expression (e.g., "not slow and integration")
- keywords (optional): Keyword expression for test name matching
- verbosity (optional): Output verbosity level (-2 to 2)
- failfast (optional): Stop execution on first failure
- maxfail (optional): Stop after N failures
- show_capture (optional): Include captured stdout/stderr
- timeout (optional): Execution timeout in seconds
Returns: Structured results including pass/fail status, error messages, stack traces, and test summary.
Exit Code Handling:
- Success response (exit codes 0, 1, 5): Structured test results with failure details
- Error response (exit codes 2, 3, 4): pytest configuration or execution errors
Why this design? Test failures are normal TDD workflow outcomes. The tool succeeds when pytest executes successfully, regardless of test pass/fail status.
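For example (the node ID and values are illustrative, reusing the parameter names documented above), an agent re-running a single failing test with verbose output and a short timeout might send arguments like:
{
  "node_ids": ["tests/test_user.py::test_login"],
  "verbosity": 1,
  "failfast": true,
  "timeout": 60
}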
Connection Testing
Verify the MCP server is working correctly:
- Restart Claude Code after updating configuration
- Check MCP connection: Look for pytest tools in Claude's available tools
- Test discovery: Ask Claude to "discover tests in this project"
- Test execution: Ask Claude to "run all tests"
Troubleshooting
MCP server not appearing in Claude Code:
- Verify JSON configuration syntax (no trailing commas)
- Check file path matches your platform
- Restart Claude Code completely (quit and relaunch)
Tests not discovered:
- Verify pytest is installed in your project environment
- Check path parameter points to valid test directory
- Ensure pattern matches your test file naming convention
Test execution failures:
- Review error response for pytest configuration issues
- Verify node IDs from discovery match execution parameters
- Check timeout parameter if tests run longer than default
Development
This project uses Nix for reproducible development environments. To get started:
# Enter development shell
nix develop
# Run tests
pytest
# Run type checking
mypy src
# Run linting
ruff check src tests
# Run security scanning
bandit -r src
License
MIT License - See LICENSE file for details.