jaggederest/mcp_reviewer
Reviewer MCP
An MCP (Model Context Protocol) service that provides AI-powered development workflow tools. It supports multiple AI providers (OpenAI and Ollama) and offers standardized tools for specification generation, code review, and project management.
Features
- Specification Generation: Create detailed technical specifications from prompts
- Specification Review: Review specifications for completeness and provide critical feedback
- Code Review: Analyze code changes with focus on security, performance, style, or logic
- Test Runner: Execute tests with LLM-friendly formatted output
- Linter: Run linters with structured output formatting
- Pluggable AI Providers: Support for both OpenAI and Ollama (local models)
Installation
npm install
npm run build
Configuration
Environment Variables
Create a .env file based on .env.example:
# AI Provider Configuration
AI_PROVIDER=openai # Options: openai, ollama
# OpenAI Configuration
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=o1-preview
# Ollama Configuration (for local models)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2
Project Configuration
Create a .reviewer.json file in your project root to customize commands:
{
  "testCommand": "npm test",
  "lintCommand": "npm run lint",
  "buildCommand": "npm run build",
  "aiProvider": "ollama",
  "ollamaModel": "codellama"
}
Using with Claude Desktop
Add the following to your Claude Desktop configuration:
{
  "mcpServers": {
    "reviewer": {
      "command": "node",
      "args": ["/path/to/reviewer-mcp/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}
Using with Ollama
- Install Ollama: https://ollama.ai
- Pull a model: ollama pull llama2 or ollama pull codellama
- Set AI_PROVIDER=ollama in your .env file
- The service will use your local Ollama instance
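If you launch the server from Claude Desktop while using Ollama, the same variables can be supplied through the mcpServers env block instead of a .env file. A minimal sketch, assuming the variable names from the .env example above are read from the process environment:
{
  "mcpServers": {
    "reviewer": {
      "command": "node",
      "args": ["/path/to/reviewer-mcp/dist/index.js"],
      "env": {
        "AI_PROVIDER": "ollama",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "codellama"
      }
    }
  }
}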
Available Tools
generate_spec
Generate a technical specification document.
Parameters:
- prompt (required): Description of what specification to generate
- context (optional): Additional context or requirements
- format (optional): Output format - "markdown" or "structured"
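For illustration, the tool's arguments might look like this (values are hypothetical; the exact call syntax depends on your MCP client):
{
  "prompt": "A REST API for managing user accounts",
  "context": "Node.js + TypeScript, PostgreSQL storage",
  "format": "markdown"
}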
review_spec
Review a specification for completeness and provide critical feedback.
Parameters:
- spec (required): The specification document to review
- focusAreas (optional): Array of specific areas to focus the review on
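Example arguments (hypothetical values):
{
  "spec": "# User Accounts API\n...",
  "focusAreas": ["error handling", "authentication"]
}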
review_code
Review code changes and provide feedback.
Parameters:
- diff (required): Git diff or code changes to review
- context (optional): Context about the changes
- reviewType (optional): Type of review - "security", "performance", "style", "logic", or "all"
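Example arguments (the diff shown is hypothetical):
{
  "diff": "--- a/src/auth.ts\n+++ b/src/auth.ts\n@@ ...",
  "context": "Adds token refresh to the auth module",
  "reviewType": "security"
}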
run_tests
Run standardized tests for the project.
Parameters:
- testCommand (optional): Test command to run (defaults to configured command)
- pattern (optional): Test file pattern to match
- watch (optional): Run tests in watch mode
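Example arguments (hypothetical values):
{
  "testCommand": "npm test",
  "pattern": "auth",
  "watch": false
}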
run_linter
Run standardized linter for the project.
Parameters:
- lintCommand (optional): Lint command to run (defaults to configured command)
- fix (optional): Attempt to fix issues automatically
- files (optional): Array of specific files to lint
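Example arguments (the file paths are hypothetical):
{
  "lintCommand": "npm run lint",
  "fix": true,
  "files": ["src/index.ts", "src/providers/openai.ts"]
}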
Development
# Run in development mode
npm run dev
# Run tests
npm test
# Run unit tests only
npm run test:unit
# Run integration tests (requires Ollama)
npm run test:integration
# Type checking
npm run typecheck
# Linting
npm run lint
End-to-End Testing
The project includes a comprehensive e2e test that validates the full workflow using a real Ollama instance:
- Install and start Ollama: https://ollama.ai
- Pull a model: ollama pull llama2
- Run the test: npm run test:e2e
The e2e test demonstrates:
- Specification generation
- Specification review
- Code creation
- Code review
- Linting
- Test execution
All steps use real AI responses from your local Ollama instance.
License
MIT