igor-safonov-git/code-review-mcp

Code Review MCP Server

A Model Context Protocol (MCP) server that performs comprehensive code reviews using multiple AI models (O3, Gemini, Claude Opus) and consolidates the results with GPT-4.

Features

  • Multi-Model Analysis: Leverages O3, Google Gemini, and Claude Opus for diverse perspectives
  • Consolidated Reviews: Uses GPT-4 to merge and organize feedback from all models
  • Comprehensive Feedback: Focuses on bugs, security, performance, and best practices
  • MCP Integration: Works seamlessly with Claude Code and other MCP-compatible clients

Installation

  1. Clone the repository:
git clone <repository-url>
cd code_review_mcp
  2. Install dependencies:
pip install -r requirements.txt
  3. Install in development mode:
pip install -e .
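
To confirm the install worked, you can start the server module by hand with the same command the MCP settings below use (a stdio MCP server simply waits for client messages, so stop it with Ctrl+C):

python -m code_review_mcp.server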

API Keys Required

You'll need API keys from:

  • OpenAI: For O3-mini (reviews) and GPT-4o (consolidation)
  • Google: For Gemini 2.5 Pro
  • Anthropic: For Claude 3 Opus Latest
  • Hugging Face: For the Qwen2.5-Coder model

Configuration

Choose one of these two methods to provide your API keys:

Method 1: Environment File (Recommended)

  1. Set up API keys:
cp .env.template .env
# Edit .env and add your API keys
  2. Add to your Claude Code MCP settings:
{
  "mcpServers": {
    "code-review": {
      "command": "python",
      "args": ["-m", "code_review_mcp.server"]
    }
  }
}
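
For reference, the .env created in step 1 might look like the following, assuming the template uses the same variable names as the MCP settings example in Method 2 (values are placeholders):

OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-key
ANTHROPIC_API_KEY=your-anthropic-key
HUGGINGFACE_API_KEY=your-huggingface-key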

Method 2: MCP Settings

Add API keys directly to your Claude Code MCP settings:

{
  "mcpServers": {
    "code-review": {
      "command": "python",
      "args": ["-m", "code_review_mcp.server"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "GOOGLE_API_KEY": "your-google-key", 
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "HUGGINGFACE_API_KEY": "your-huggingface-key"
      }
    }
  }
}

Usage

The server provides one tool:

multi_model_code_review

Performs a comprehensive code review using multiple AI models.

Parameters:

  • code (required): The source code to review
  • description (required): Author's description of the code's purpose
  • language (optional): Programming language (default: auto-detect)

Example usage in Claude:

Please review this Python function using the multi_model_code_review tool:

Code:
def calculate_average(numbers):
    total = 0
    for num in numbers:
        total += num
    return total / len(numbers)

Description: This function calculates the average of a list of numbers.
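
Behind the scenes, Claude Code forwards this as an MCP tools/call request to the server. A rough sketch of that message for the example above (values are illustrative):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "multi_model_code_review",
    "arguments": {
      "code": "def calculate_average(numbers): ...",
      "description": "This function calculates the average of a list of numbers.",
      "language": "python"
    }
  }
}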

Performance Note

This tool can be slow (30-60 seconds) due to:

  • Multiple API calls to different models
  • High reasoning effort for the O3 model
  • Consolidation step with GPT-4

API Requirements

  • OpenAI API key (for O3 and GPT-4)
  • Google API key (for Gemini)
  • Anthropic API key (for Claude Opus)

Error Handling

The server includes robust error handling:

  • Individual model failures won't break the entire review
  • Timeout protection (3 minutes total)
  • Fallback consolidation if GPT-4 fails
  • Clear error messages for missing API keys
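
As an illustration of this pattern (a sketch, not the server's actual source; review_with_model and consolidate_with_gpt4 are hypothetical placeholders), the parallel fan-out, timeout, and fallback could be written with asyncio like this:

import asyncio

REVIEW_TIMEOUT = 180  # seconds; matches the documented 3-minute cap

async def review_with_model(model: str, code: str, description: str) -> str:
    """Placeholder for a per-model review call (O3, Gemini, Claude Opus)."""
    await asyncio.sleep(0)  # a real implementation would call the provider's SDK here
    return f"[{model}] review of: {description}"

async def consolidate_with_gpt4(reviews: list[str]) -> str:
    """Placeholder for the GPT-4 consolidation call."""
    await asyncio.sleep(0)
    return "\n\n".join(reviews)

async def run_review(code: str, description: str) -> str:
    tasks = [
        review_with_model(m, code, description)
        for m in ("o3", "gemini", "claude-opus")
    ]
    # return_exceptions=True keeps one model's failure from aborting the others.
    results = await asyncio.wait_for(
        asyncio.gather(*tasks, return_exceptions=True),
        timeout=REVIEW_TIMEOUT,
    )
    reviews = [r for r in results if isinstance(r, str)]
    if not reviews:
        raise RuntimeError("All review models failed; check your API keys.")
    try:
        return await consolidate_with_gpt4(reviews)
    except Exception:
        # Fallback consolidation: return the raw reviews joined together.
        return "\n\n---\n\n".join(reviews)

if __name__ == "__main__":
    print(asyncio.run(run_review("print('hi')", "demo snippet")))

The key design choice here is gather(..., return_exceptions=True): a single failing provider is dropped from the result set instead of aborting the whole review.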