mcp-code-crosscheck
This document provides a structured summary of mcp-code-crosscheck, a Model Context Protocol (MCP) server designed to mitigate bias in AI code review.
The MCP Code Crosscheck server is an experimental proof of concept that addresses known biases in AI code review. It employs two main strategies: cross-model evaluation, which uses different models for code generation and review to reduce self-preference bias, and bias-aware prompting, which instructs the reviewing model to ignore common bias triggers identified in recent research. These mitigations are partial: because all major language models share similar vulnerabilities, the server cannot eliminate bias entirely.

The server offers two review modes: a default 'bias_aware' mode that reduces false positives from style and comment biases, and an opt-in 'adversarial' mode that performs a more thorough review but may introduce its own biases. The tool is intended as one part of a comprehensive review process that also includes static analysis, testing, and human judgment.
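To make the workflow concrete, here is a minimal sketch of invoking the review tool from a client built on the official MCP TypeScript SDK. The argument names (code, language, mode) and the server launch command are assumptions inferred from the description above, not the server's published schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the crosscheck server over stdio (command and args are assumptions).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });

  const client = new Client(
    { name: "example-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Request a bias-aware review; passing "adversarial" would opt in to the
  // stricter mode. The argument shape is an illustrative guess.
  const result = await client.callTool({
    name: "review_code",
    arguments: {
      code: "def add(a, b):\n    return a - b  # deliberate bug",
      language: "python",
      mode: "bias_aware",
    },
  });

  console.log(JSON.stringify(result, null, 2));
  await client.close();
}

main().catch(console.error);
```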
Features
- Cross-model evaluation to reduce self-preference bias.
- Bias-aware prompting to ignore common bias triggers.
- Two review modes: 'bias_aware' and 'adversarial'.
- Structured review output with severity levels and metrics (a hypothetical shape is sketched after this list).
- Integration with GitHub MCP servers, with a GitHub CLI fallback.
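As a rough illustration of what the structured review output could look like, the types below sketch one plausible shape. Every field name here is hypothetical, inferred from the feature list rather than taken from the server's actual schema.

```typescript
// Hypothetical output shape for review_code; all names are illustrative.
type Severity = "info" | "minor" | "major" | "critical";

interface ReviewFinding {
  severity: Severity;
  line?: number;       // location in the submitted snippet, if applicable
  message: string;     // what the reviewing model flagged
  suggestion?: string; // optional proposed fix
}

interface ReviewResult {
  mode: "bias_aware" | "adversarial";
  reviewModel: string; // the reviewing model, distinct from the generator
  findings: ReviewFinding[];
  metrics: {
    findingCount: number;
    highestSeverity: Severity;
  };
}
```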
Tools
- review_code: Review code with bias mitigation strategies.
- detect_model_from_authors: Detect the AI model from commit author information.
- fetch_commit: Fetch commit details using the GitHub CLI.
- fetch_pr_commits: Fetch PR commits using the GitHub CLI.
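The two fetch tools fall back to the GitHub CLI when no GitHub MCP server is available. The sketch below shows how such a fallback might shell out to gh; the REST paths are GitHub's real API endpoints, but the wrapper functions are illustrative rather than the server's actual implementation.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Fetch a single commit as JSON via `gh api` (real GitHub REST endpoint).
async function fetchCommit(owner: string, repo: string, sha: string) {
  const { stdout } = await execFileAsync("gh", [
    "api",
    `repos/${owner}/${repo}/commits/${sha}`,
  ]);
  return JSON.parse(stdout);
}

// Fetch the commits on a pull request the same way.
async function fetchPrCommits(owner: string, repo: string, prNumber: number) {
  const { stdout } = await execFileAsync("gh", [
    "api",
    `repos/${owner}/${repo}/pulls/${prNumber}/commits`,
  ]);
  return JSON.parse(stdout);
}
```

A detect_model_from_authors step could then scan the returned commit objects for bot authors or Co-authored-by trailers that name a known model, though the exact heuristics the server uses are not documented here.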