Oh No MCP Server
An MCP (Model Context Protocol) server that provides code performance review capabilities. This server exposes tools and prompts to analyze code for performance issues, bottlenecks, and optimization opportunities.
Features
- Highlighted Text Review: Analyze code snippets selected in your IDE
- Single File Review: Analyze an entire file for performance issues
- Directory Review: Recursively scan and analyze all files in a directory
- Flexible Output: Text output for snippets/files, written reports for directories
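A directory review walks the tree recursively and gathers source files before analyzing them. As a rough sketch of that kind of scan (illustrative only, assuming `os.walk`; not the server's actual implementation):

```python
import os

def collect_source_files(root: str, exts: tuple[str, ...] = (".py",)) -> list[str]:
    """Recursively gather files under root matching the given extensions."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                found.append(os.path.join(dirpath, name))
    return sorted(found)
```

Each collected path could then be reviewed individually, with the results aggregated into a single written report.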
Installation
Using uvx (Recommended)
For quick installation and usage:
uvx --from git+https://github.com/eamonnfaherty/oh-no-mcp-server oh-no-mcp
From Source
- Clone this repository:
git clone https://github.com/eamonnfaherty/oh-no-mcp-server.git
cd oh-no-mcp-server
- Install dependencies using uv:
make install
For development with all extras:
make install-dev
Usage
As an MCP Server
Run the server using the Makefile:
make run
Or directly with uv using the console script:
uv run oh-no-mcp
Or with uvx (no installation needed):
uvx --from git+https://github.com/eamonnfaherty/oh-no-mcp-server oh-no-mcp
Configuration for Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
Using uvx with git repository (recommended):
{
"mcpServers": {
"oh-no": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/eamonnfaherty/oh-no-mcp-server",
"oh-no-mcp"
]
}
}
}
Or using uv from a local clone:
{
"mcpServers": {
"oh-no": {
"command": "uv",
"args": ["--directory", "/path/to/oh-no-mcp-server", "run", "oh-no-mcp"]
}
}
}
Available Tools
oh_no
Returns a prompt for reviewing code performance.
Parameters:
- `scope` (required): Type of review - "text", "file", or "directory"
- `content` (required): The code text, file path, or directory path
- `output_path` (optional): For directories, where to write the report
Examples:
# Review highlighted text
{
"scope": "text",
"content": "def slow_function():\n result = []\n for i in range(1000):\n result.append(i)"
}
# Review a single file
{
"scope": "file",
"content": "/path/to/your/file.py"
}
# Review a directory
{
"scope": "directory",
"content": "/path/to/your/project",
"output_path": "/path/to/output/performance_report.md"
}
Available Prompts
oh_no
Directly invokes a performance review with the same parameters as the tool.
Arguments:
- `scope`: "text", "file", or "directory"
- `content`: The code, file path, or directory path
- `output_path`: (For directories) Where to write the report
What the Review Includes
The performance review analyzes code for:
- Performance Bottlenecks: Identifies slow operations and inefficiencies
- Memory Usage: Highlights potential memory leaks or excessive allocations
- Algorithm Complexity: Analyzes time and space complexity
- Optimization Suggestions: Provides actionable improvements
- Best Practices: Recommends industry-standard patterns
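As an illustration of the first two categories, here is the kind of pattern such a review typically flags, alongside a faster equivalent (hypothetical example code, not output from the tool):

```python
# Quadratic-time pattern a performance review would flag: repeated string
# concatenation copies the accumulated string on every iteration.
def join_slow(items: list[str]) -> str:
    result = ""
    for item in items:
        result += item + ","   # O(n) copy each time -> O(n^2) overall
    return result

# Linear-time fix: collect the pieces, then join once.
def join_fast(items: list[str]) -> str:
    return ",".join(items) + ("," if items else "")
```

Both functions produce identical output; the review's value lies in spotting the hidden quadratic cost and suggesting the `str.join` idiom.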
Project Structure
oh-no-mcp-server/
├── pyproject.toml
├── README.md
└── src/
└── oh_no_mcp_server/
├── __init__.py
└── server.py
Development
This project uses uv for dependency management. All common development tasks are available via the Makefile.
Makefile Commands
Installation
- `make install` - Install project dependencies using `uv sync`
- `make install-dev` - Install dependencies with all extras for development
Code Quality
- `make format` - Auto-format code using `black` and fix issues with `ruff`
- `make lint` - Check code style with `ruff` and verify formatting with `black` (no changes made)
Testing
- `make test` - Run all unit tests using `pytest`
- `make test-coverage` - Run tests with coverage reporting (terminal + HTML)
- `make test-coverage-report` - Open the HTML coverage report in your browser
Evaluation
- `make eval` - Run evaluation tests with standard output
- `make eval-verbose` - Run evaluation tests with detailed output (includes print statements)
Utilities
- `make clean` - Remove all build artifacts, cache files, and coverage reports:
  - Deletes `__pycache__` directories
  - Removes `.pyc` files
  - Cleans up `.egg-info` directories
  - Removes `.pytest_cache`, `htmlcov`, and `.coverage`
  - Removes `dist` and `build` directories
- `make run` - Start the MCP server using `uv run oh-no-mcp`
Testing Framework
This project includes two types of tests:
Unit Tests (tests/)
Traditional unit tests covering individual functions and components, with 100% code coverage.
Evaluation Tests (evals/)
Advanced semantic similarity-based tests that compare MCP tool outputs using:
- BERT embeddings - Semantic understanding via sentence transformers
- Self-BLEU - N-gram overlap measurement
- Cosine similarity - Normalized embedding comparison
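The cosine-similarity comparison can be sketched in a few lines. This is a minimal pure-Python illustration assuming dense, non-zero embedding vectors; the project itself compares sentence-transformer embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1].

    Assumes both vectors are non-zero and of equal length.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical vectors score 1.0, orthogonal vectors score 0.0, so a threshold on this value gives a natural pass/fail criterion for semantic closeness.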
Run evaluation tests:
make eval # Run evaluations
make eval-verbose # Run with detailed output
Each evaluation provides detailed metrics and a pass/fail result.
License
MIT