mcp-server-demo

jacksonon/mcp-server-demo


This document provides a comprehensive guide to setting up and using a Model Context Protocol (MCP) server with Python and various tools for image processing and API integration.


Getting Started

Requires Python; make sure Python 3.10+ is installed.

Install uv and mcp[cli]

brew install uv
uv add "mcp[cli]"
uv add "Pillow"
uv add "httpx"
uv add chardet
uv add openpyxl

Debugging the project

Start the server from the project root:

source .venv/bin/activate 
mcp dev server.py

Debugging in the browser

After starting the server, open the URL below in your browser. In the web UI, configure the variables the page needs under Add Environment Variables, and set the request Timeout.

http://127.0.0.1:6274/#tools

Configuring an MCP IDE

{
    "mcpServers": {
        "sdk-mcp-server": {
            "command": "uv",
            "args": [
                "--directory",
                "/Users/os/Desktop/mcp-server-demo",
                "run",
                "server.py"
            ],
            "env": {
                "API_KEY": "[YOUR_API_KEY]",
                "API_URL": "https://open.bigmodel.cn/api/paas/v4/chat/completions",
                "MODEL_NAME": "glm-4-flash-250414",
                "Unity_Version": "2022.3.50f1c1",
                "Unity_Path": "/Applications/Unity/Hub/Editor/",
                "Unity_Project_Path": "/Users/os/Desktop/GameProject"
            }
        }
    }
}
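On the server side, the values from the `env` block above would typically be read with `os.getenv`. A minimal sketch (`load_config` is a hypothetical helper name, not part of the demo's actual code):

```python
import os

# Hypothetical helper mirroring the "env" block above: read each variable,
# falling back to an empty string so missing keys show up as obvious blanks.
def load_config() -> dict:
    keys = [
        "API_KEY",
        "API_URL",
        "MODEL_NAME",
        "Unity_Version",
        "Unity_Path",
        "Unity_Project_Path",
    ]
    return {key: os.getenv(key, "") for key in keys}

config = load_config()
print(config["API_URL"] or "<API_URL not set>")
```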

Install rumps to launch a macOS menu-bar app

Purpose: creates a macOS menu-bar application (a GUI) that replaces the command-line steps above.

pip install rumps
python app.py

Resolving port conflicts

Find the PID occupying the port:

lsof -i :[PORT]

Kill the process:

kill -9 [PID]

Temporary environment variables for console debugging

export API_KEY="[YOUR_API_KEY]"
export API_URL="https://open.bigmodel.cn/api/paas/v4/chat/completions"
export MODEL_NAME="glm-4-flash-250414"
export Unity_Version="2022.3.50f1c1"
export Unity_Path="/Applications/Unity/Hub/Editor/"
export Unity_Project_Path="/Users/os/Desktop/GameProject"
export Unity_Resources="WMOnePlugin_5.116.0.0_f06b905f_202504251800.zip;WMWebActivity_v1.42.1.0_e753ef9c.zip;WMPatcherPlugin_2.15.0.0_7cb0683_202410151802.zip;OverSeaUnitySDK_v3.30.0.0.zip"

Sample code

  • Image understanding example
# Example: adding an image understanding tool
# Assumes server.py already defines `mcp` (e.g. an mcp.server.fastmcp.FastMCP instance)
import base64
import io
import os

import httpx
from PIL import Image as PILImage

@mcp.tool()
async def extract_algorithm_from_image(image_path: str) -> dict:
    """
    Extract the text of an algorithm problem from an uploaded image.

    Args:
        image_path: local path or URL of the image

    Returns:
        dict containing the extracted text
    """
    try:
        # Check whether image_path is a URL
        if image_path.startswith(('http://', 'https://')):
            # Download the image from the URL
            response = httpx.get(image_path)
            response.raise_for_status()
            img = PILImage.open(io.BytesIO(response.content))
        else:
            # Open a local file
            img = PILImage.open(image_path)

        # Convert to base64 (PNG)
        buffer = io.BytesIO()
        img.save(buffer, format="PNG")
        image_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

        # Log only the first 30 characters to avoid flooding the output
        print(f"Image base64: {image_base64[:30]}...")

        # Zhipu (BigModel) API configuration
        api_url = "https://open.bigmodel.cn/api/paas/v4/chat/completions"
        # Read the API key from an environment variable
        api_key = os.getenv("API_KEY", "")  # Note: in production, use environment variables or a config file
        model_type = "glm-4v-flash"

        # Build the request body (the data URL MIME type matches the PNG encoding above)
        request_body = {
            "model": model_type,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": f"data:image/png;base64,{image_base64}"
                            }
                        },
                        {
                            "type": "text",
                            "text": "Extract the algorithm problem text from the image; return only the recognized text, nothing else"
                        }
                    ]
                }
            ],
            "stream": False
        }
        # Send the request
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}"
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(api_url, headers=headers, json=request_body)
            response_json = response.json()

        # Parse the response
        if "choices" in response_json and len(response_json["choices"]) > 0:
            content = response_json["choices"][0]["message"]["content"]
            return {"status": "success", "algorithm_text": content}
        elif "error" in response_json:
            return {"status": "error", "message": response_json["error"].get("message", "Unknown API error")}
        else:
            return {"status": "error", "message": "Invalid response format"}

    except Exception as e:
        return {"status": "error", "message": str(e)}