# mcp-sage
An MCP server that selects between OpenAI's O3 and Google's Gemini 2.5 Pro based on the token count of the input, providing detailed code reviews and second opinions.
The `mcp-sage` server facilitates complex code reviews and implementation planning by leveraging the large context windows of OpenAI's O3 and Google's Gemini 2.5 Pro. It automatically selects the appropriate model for the token count of the input, making efficient use of each model's context window. The server provides three tools: `sage-opinion` for general feedback, `sage-review` for code change suggestions, and `sage-plan` for detailed implementation plans. The `sage-plan` tool is the most advanced, orchestrating a multi-model debate in which the models critique and refine competing plans before the best one is selected. This combination of broad context and cross-model review makes the server well suited to large, complex codebases.
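The model-selection step described above can be sketched as a simple token-count check. The character-per-token heuristic, the threshold, and the model identifiers below are illustrative assumptions, not the server's actual configuration:

```python
# Hypothetical sketch of token-count-based model selection.
# The threshold and model names are assumptions for illustration.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def select_model(prompt: str, files: list[str], o3_limit: int = 200_000) -> str:
    """Pick O3 when the combined context fits its window, else Gemini 2.5 Pro."""
    total = estimate_tokens(prompt) + sum(estimate_tokens(f) for f in files)
    return "o3" if total <= o3_limit else "gemini-2.5-pro"

print(select_model("Review this function", ["def add(a, b):\n    return a + b"]))
# → o3
```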
## Features
- Automatic model selection based on token count
- Multi-model debate for implementation planning
- Fallback mechanisms for API key and network issues
- Detailed logging and monitoring capabilities
- Structured XML format for file context inclusion
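The structured XML file context mentioned in the last bullet can be illustrated with a small helper that wraps each file's contents in a tagged element. The tag and attribute names here are assumptions; the server's actual schema may differ:

```python
# Illustrative sketch of packaging files into an XML context block.
# The <files>/<file path="..."> schema is an assumption for illustration.
from xml.sax.saxutils import escape

def build_file_context(files: dict[str, str]) -> str:
    parts = ["<files>"]
    for path, content in files.items():
        parts.append(f'  <file path="{escape(path)}">{escape(content)}</file>')
    parts.append("</files>")
    return "\n".join(parts)

print(build_file_context({"src/add.py": "def add(a, b):\n    return a + b"}))
```

Escaping the path and contents keeps the block well-formed even when files contain `<`, `>`, or `&`.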
## Tools
### `sage-opinion`
Provides feedback on a given prompt and file context using the selected model.
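Like any MCP tool, `sage-opinion` is invoked through a JSON-RPC 2.0 `tools/call` request. The request envelope below follows the MCP specification, but the argument names (`prompt`, `paths`) are assumptions about this server's tool schema:

```python
import json

# A JSON-RPC 2.0 "tools/call" request as defined by the MCP spec.
# The argument names ("prompt", "paths") are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sage-opinion",
        "arguments": {
            "prompt": "Is this error handling idiomatic?",
            "paths": ["src/server.ts"],
        },
    },
}
print(json.dumps(request, indent=2))
```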
### `sage-review`
Offers code change suggestions formatted as SEARCH/REPLACE blocks.
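A SEARCH/REPLACE block pairs a verbatim snippet of the existing code with its replacement, so a client can apply the edit mechanically. The marker syntax below mirrors the common conflict-style format; the server's exact output format may differ:

```python
import re

# Sketch of applying one SEARCH/REPLACE block to a source string.
# The marker syntax is an assumption modeled on the common format.
BLOCK_RE = re.compile(
    r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE", re.S
)

def apply_block(source: str, block: str) -> str:
    m = BLOCK_RE.search(block)
    if not m:
        raise ValueError("malformed SEARCH/REPLACE block")
    search, replace = m.group(1), m.group(2)
    if search not in source:
        raise ValueError("search text not found in source")
    return source.replace(search, replace, 1)

block = """<<<<<<< SEARCH
    return a - b
=======
    return a + b
>>>>>>> REPLACE"""
print(apply_block("def add(a, b):\n    return a - b\n", block))
```

Replacing only the first occurrence and failing loudly when the search text is absent keeps the edit deterministic.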
### `sage-plan`
Generates a detailed implementation plan through a multi-model debate process.
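At a high level, a multi-model debate has each model draft a plan, read the other models' plans, and revise its own over a fixed number of rounds before a final selection. The round count, prompt wording, and judging step below are illustrative assumptions; the real `sage-plan` orchestration is more elaborate:

```python
# High-level sketch of a multi-model debate loop. The round count,
# prompt wording, and judge step are assumptions for illustration.
from typing import Callable

def debate(task: str, models: dict[str, Callable[[str], str]], rounds: int = 2) -> str:
    # Round 0: each model drafts an initial plan independently.
    plans = {name: ask(f"Draft a plan for: {task}") for name, ask in models.items()}
    for _ in range(rounds):
        # Each model critiques the others' plans and revises its own.
        for name, ask in models.items():
            others = "\n\n".join(p for n, p in plans.items() if n != name)
            plans[name] = ask(
                f"Task: {task}\nOther plans:\n{others}\nRevise your plan:\n{plans[name]}"
            )
    # A judge call selects (or merges) the best final plan; here the
    # first model plays judge.
    judge = next(iter(models.values()))
    return judge("Pick the best plan:\n" + "\n\n".join(plans.values()))

# Stub "models" standing in for real O3 / Gemini API calls:
stub = {"o3": lambda p: "plan-A", "gemini": lambda p: "plan-B"}
print(debate("add caching", stub))
# → plan-A
```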