mcp-server-gh-models-helper

michaelwybraniec/mcp-server-gh-models-helper

GitHub Models Helper MCP Server

This MCP server lets you interact with and compare different language models available via GitHub Models and AzureML, including models from OpenAI, Microsoft, Meta, and Mistral.

Features

  • List available language models with metadata
  • Compare responses from different models for the same prompt
  • Filter and sort models by various criteria
  • Comprehensive error handling and fallbacks
  • Visualize model comparisons (see below)
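To illustrate the filter-and-sort feature, here is a minimal sketch in TypeScript. The `ModelInfo` shape and field names are assumptions for illustration, not the server's actual schema:

```typescript
// Hypothetical model metadata shape; the server's real schema may differ.
interface ModelInfo {
  id: string;
  displayName: string;
  contextWindow: number;
}

// Filter models by a case-insensitive ID substring, then sort by
// context window, largest first.
function filterAndSort(models: ModelInfo[], idFilter: string): ModelInfo[] {
  return models
    .filter((m) => m.id.toLowerCase().includes(idFilter.toLowerCase()))
    .sort((a, b) => b.contextWindow - a.contextWindow);
}
```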

Getting Started

  1. Install dependencies:
    npm install
    
  2. Set up environment variables: Copy .env.template to .env and add your GitHub token:
    GITHUB_TOKEN=your_github_personal_access_token
    
  3. Build the project:
    npm run build
    
  4. Run the MCP server in development mode:
    npx @modelcontextprotocol/inspector dist/index.js
    
  5. Add the MCP server to Claude Desktop: In claude_desktop_config.json:
    {
       "mcpServers": {
          "GitHub Models Helper": {
             "command": "node",
             "args": [
                "/absolute/path/to/gh-models-helper/dist/index.js"
             ],
             "env": {
                "GITHUB_TOKEN": "your_github_personal_access_token"
             }
          }
       }
    }
    

Available Phi-3 Models

| Model ID | Display Name | Context Window | Summary |
| --- | --- | --- | --- |
| Phi-3-medium-128k-instruct | Phi-3-medium instruct (128k) | 131,072 | Same Phi-3-medium model, but with a larger context size |
| Phi-3-medium-4k-instruct | Phi-3-medium instruct (4k) | 4,096 | 14B parameters, better quality than Phi-3-mini |

Note: There is currently no model named "Phi-4" or "Phi-3-mini-4k-instruct" in the available list. Use the above IDs for comparisons.
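Since a request for a nonexistent ID such as "Phi-4" should fail cleanly, a client can validate IDs against the available list up front. A minimal sketch, assuming nothing about the server's API (the function name and error wording are illustrative):

```typescript
// Resolve a requested model ID against the available IDs,
// case-insensitively, returning the canonical ID or throwing.
function resolveModelId(requested: string, available: string[]): string {
  const match = available.find(
    (id) => id.toLowerCase() === requested.toLowerCase()
  );
  if (!match) {
    throw new Error(
      `Unknown model "${requested}". Available: ${available.join(", ")}`
    );
  }
  return match;
}
```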

Visualizing Model Comparisons

You can compare how different models respond to the same prompt and visualize the results. For example, to compare three models:

  • Phi-3-medium-128k-instruct
  • gpt-4o-mini
  • Mistral-small

Example prompt:

Explain the difference between AI and machine learning.

Sample output visualization:

| Model | Response |
| --- | --- |
| Phi-3-medium-128k-instruct | ... |
| gpt-4o-mini | ... |
| Mistral-small | ... |

You can use your own prompt and models. The server will return a JSON object with the responses, which you can render as a table or chart in your application.
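Rendering the returned JSON as a table might look like the following sketch. The response shape here (a map of model ID to response text) is an assumption; the server's actual JSON may differ:

```typescript
// Assumed response shape: model ID mapped to that model's response text.
type ComparisonResult = Record<string, string>;

// Build a two-column markdown table from a comparison result.
function toMarkdownTable(result: ComparisonResult): string {
  const rows = Object.entries(result).map(
    ([model, response]) => `| ${model} | ${response} |`
  );
  return ["| Model | Response |", "| --- | --- |", ...rows].join("\n");
}
```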

Example Prompts

  • "list all available phi-3 models"
  • "compare Phi-3-medium-4k-instruct and Mistral-small on this prompt: how many ns in bananasss??"
  • "Do a comparison between the Phi-3-medium-128k-instruct, gpt-4o-mini, and Mistral-small models"

For more details, see the code and documentation in project.md.