StanleyChanH/vllm-mcp
A Model Context Protocol (MCP) server that enables text-only models to call multimodal models, with support for both OpenAI and DashScope providers.
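To connect a client to a server like this, an MCP host is typically configured to launch it over stdio. The sketch below shows what such a configuration entry might look like; the command name, arguments, and environment variable names are assumptions for illustration, not taken from the project's documentation.

```python
import json

# Hypothetical host configuration for launching vllm-mcp over stdio.
# "vllm-mcp" as a command and the env var names are assumptions.
config = {
    "mcpServers": {
        "vllm-mcp": {
            "command": "vllm-mcp",          # assumed entry point
            "args": [],
            "env": {
                "OPENAI_API_KEY": "sk-...",      # placeholder credential
                "DASHSCOPE_API_KEY": "sk-...",   # placeholder credential
            },
        }
    }
}

print(json.dumps(config, indent=2))
```

The host reads this entry, spawns the server process, and speaks the MCP protocol to it over the process's stdin/stdout.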
Tools
Functions exposed to the LLM so it can take actions
generate_multimodal_response
Generate responses from multimodal models.
list_available_providers
List available model providers and their supported models.
validate_multimodal_request
Validate whether a multimodal request is supported by the specified provider.
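Under MCP, a client invokes tools like these through a JSON-RPC 2.0 `tools/call` request. The sketch below builds such a request for `generate_multimodal_response`; the argument names (`provider`, `model`, `messages`) and the message structure are assumptions, since the actual input schema is defined by the server.

```python
import json

# Sketch of a JSON-RPC 2.0 "tools/call" request invoking the
# generate_multimodal_response tool. Argument names are assumed,
# not taken from the server's published schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_multimodal_response",
        "arguments": {
            "provider": "openai",   # assumed parameter
            "model": "gpt-4o",      # assumed parameter
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "Describe this image."},
                        {
                            "type": "image_url",
                            "image_url": {"url": "https://example.com/cat.png"},
                        },
                    ],
                }
            ],
        },
    },
}

print(json.dumps(request, indent=2))
```

`list_available_providers` and `validate_multimodal_request` would be invoked the same way, differing only in the `name` and `arguments` fields.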
Prompts
Interactive templates invoked by user choice
No prompts
Resources
Contextual data attached and managed by the client
No resources