llm-mcp-bridge

ramgeart/llm-mcp-bridge

The LLM MCP Bridge is an MCP server that connects to any OpenAI-compatible API, enabling the analysis, benchmarking, and evaluation of LLM models.
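Conceptually, an MCP client invokes each of the tools below via a JSON-RPC `tools/call` request. A minimal sketch of such a request, assuming illustrative argument names ("model", "prompt") that are not documented by this server:

```python
import json

# Hypothetical tools/call request for this server's llm_chat tool.
# The envelope follows JSON-RPC 2.0 as used by MCP; the argument
# names are assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "llm_chat",
        "arguments": {"model": "llama3", "prompt": "Hello"},
    },
}

payload = json.dumps(request)  # serialized form sent to the server
```

The server answers with a matching JSON-RPC response whose result carries the tool's output.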

Tools

Functions exposed to the LLM to take actions

llm_get_models

Obtains a list of models in JSON format.
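An OpenAI-compatible backend exposes its model list at `GET /v1/models`. A sketch of extracting model ids from that response shape (the sample payload is illustrative, not from a real server):

```python
import json

# Sample response in the OpenAI-compatible /v1/models format (made-up data).
sample = (
    '{"object": "list", "data": ['
    '{"id": "llama3", "object": "model"}, '
    '{"id": "mistral", "object": "model"}]}'
)

def model_ids(models_json: str) -> list[str]:
    """Extract the model ids from an OpenAI-style model list response."""
    return [m["id"] for m in json.loads(models_json)["data"]]

print(model_ids(sample))  # → ['llama3', 'mistral']
```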

llm_status

Checks the connection with the server.

llm_list_models

Lists models in a human-readable format.

llm_chat

Engages in chat with performance metrics.
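The performance metrics for a chat call can be derived from elapsed wall-clock time and the token counts an OpenAI-compatible API reports in its `usage` block. A sketch of one plausible calculation (the exact metrics llm_chat reports are not documented here):

```python
# Derive throughput from an OpenAI-style "usage" block and elapsed time.
# "tokens_per_second" is an assumed metric name, shown for illustration.
def chat_metrics(usage: dict, elapsed_s: float) -> dict:
    completion = usage["completion_tokens"]
    return {
        "latency_s": elapsed_s,
        "tokens_per_second": completion / elapsed_s if elapsed_s > 0 else 0.0,
    }

metrics = chat_metrics(
    {"prompt_tokens": 12, "completion_tokens": 48}, elapsed_s=2.0
)
print(metrics["tokens_per_second"])  # → 24.0
```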

llm_benchmark

Conducts benchmarks with multiple prompts.
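A multi-prompt benchmark reduces to timing each prompt and aggregating the latencies. A sketch with a stubbed model call standing in for the real backend request:

```python
import time
from statistics import mean

def fake_llm(prompt: str) -> str:
    """Stand-in for a real API call; the actual tool hits the backend."""
    return prompt.upper()

def benchmark(prompts: list[str]) -> dict:
    """Time each prompt and report run count and mean latency."""
    latencies = []
    for p in prompts:
        start = time.perf_counter()
        fake_llm(p)
        latencies.append(time.perf_counter() - start)
    return {"runs": len(latencies), "mean_latency_s": mean(latencies)}

result = benchmark(["hi", "summarize this", "2+2?"])
```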

llm_evaluate_coherence

Evaluates model consistency.

llm_test_capabilities

Tests model performance in different areas.

llm_compare_models

Compares multiple models.
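Comparing models amounts to running the same evaluation per model and ranking by the resulting score. A sketch over hypothetical per-model scores (the scoring scheme is an assumption, not this server's documented output):

```python
def compare_models(scores: dict[str, float]) -> list[tuple[str, float]]:
    """Rank models by score, best first. Input scores are illustrative."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = compare_models({"llama3": 0.82, "mistral": 0.78, "phi3": 0.80})
print(ranking[0][0])  # → 'llama3'
```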

llm_quality_report

Generates a comprehensive quality report.

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources