genie-mcp-server

bruriah1999/genie-mcp-server


MCP Greeting Server with SmolAgents and Gradio UI

Overview

This project implements a Model Context Protocol (MCP) server with a simple greet tool that returns personalized greetings based on a provided name. The server is built using the FastMCP library and is integrated with a Gradio web interface for user-friendly interaction and a SmolAgents CodeAgent for AI-driven tool invocation. The entire application is containerized using Docker for easy deployment and portability.

Features

  • MCP Server: Hosts a greet tool that generates a greeting (e.g., "Hello, Alice! Welcome to the MCP server.") for a given name.
  • Gradio UI: A clean, browser-based interface for users to input names and receive greetings.
  • SmolAgents Integration: A CodeAgent powered by the HuggingFaceTB/SmolLM2-1.7B-Instruct model, which programmatically calls the greet tool.
  • Dockerized Setup: Ensures consistent deployment across environments.

Project Structure

  • Dockerfile: Defines the Docker image for the project, based on python:3.11-slim, and runs the MCP server, Gradio UI, and SmolAgents agent.
  • server.py: Implements the MCP server with the greet tool, served over HTTP transport on port 6275 (a minimal sketch follows this list).
  • app.py: Runs the Gradio web interface on port 7860, allowing users to interact with the greet tool via a browser.
  • agent.py: Configures a SmolAgents CodeAgent to call the greet tool programmatically, using the SmolLM2-1.7B-Instruct model.
  • requirements.txt: Lists Python dependencies (mcp[cli], gradio==4.44.0, smolagents[transformers]).
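
For reference, here is a minimal sketch of what server.py might look like, assuming the official MCP Python SDK's FastMCP class and its streamable HTTP transport; the exact transport name and constructor options are assumptions, not details confirmed by this project:

  from mcp.server.fastmcp import FastMCP

  # Create the MCP server, binding to all interfaces so the Docker port mapping works.
  # Passing host/port as constructor settings is an assumption based on the FastMCP Settings API.
  mcp = FastMCP("Greeting Server", host="0.0.0.0", port=6275)

  @mcp.tool()
  def greet(name: str) -> str:
      """Return a personalized greeting for the given name."""
      return f"Hello, {name}! Welcome to the MCP server."

  if __name__ == "__main__":
      # Serve over HTTP on port 6275 (stdio and SSE are the other supported transports).
      mcp.run(transport="streamable-http")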

Prerequisites

  • Docker: Ensure Docker is installed on your system. Download it from docker.com if needed.
  • Web Browser: A modern web browser to access the Gradio UI and the MCP Inspector.

Setup Instructions

  1. Clone or Create Project Directory

    • Create a directory for the project and save the following files in it: Dockerfile, server.py, app.py, agent.py, and requirements.txt. Obtain them from the project source or create them from the contents described in this README.
  2. Build the Docker Image

    • Open a terminal in the project directory and run:
      docker build -t mcp-greeting-server .
      
    • This builds a Docker image named mcp-greeting-server with all dependencies installed.
  3. Run the Docker Container

    • Start the container, publishing the necessary ports (6274 for the MCP Inspector, 7860 for the Gradio UI, 6275 for the MCP server):
      docker run -i --rm -p 6274:6274 -p 7860:7860 -p 6275:6275 mcp-greeting-server
      
    • The -i flag keeps STDIN open (interactive mode), --rm removes the container when it stops, and -p publishes the container ports on the host.

Usage

Gradio UI

  • Open a web browser and navigate to http://localhost:7860.
  • Enter a name in the text box (e.g., "Alice") and submit to receive a personalized greeting (e.g., "Hello, Alice! Welcome to the MCP server.").
  • The interface is styled with Gradio's Soft theme for a clean user experience; a minimal app.py sketch follows below.
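
The sketch below shows one plausible shape for app.py: a gr.Interface with the Soft theme, listening on port 7860. For simplicity it implements the greeting locally; the actual app.py may instead forward the call to the MCP server running on port 6275.

  import gradio as gr

  def greet(name: str) -> str:
      # Simplified: the real app.py may call the MCP server's greet tool over HTTP instead.
      return f"Hello, {name}! Welcome to the MCP server."

  demo = gr.Interface(
      fn=greet,
      inputs=gr.Textbox(label="Name", placeholder="e.g., Alice"),
      outputs=gr.Textbox(label="Greeting"),
      title="MCP Greeting Server",
      theme=gr.themes.Soft(),
  )

  if __name__ == "__main__":
      # Bind to all interfaces so the mapped Docker port is reachable from the host.
      demo.launch(server_name="0.0.0.0", server_port=7860)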

MCP Inspector

  • Access the MCP Inspector at http://localhost:6274 to explore and test the greet tool directly.
  • Use the web-based UI to input a name and view the tool's output.

SmolAgents

  • The smolagents CodeAgent runs automatically in the container and outputs a sample greeting for "Alice" to the console (e.g., "Hello, Alice! Welcome to the MCP server.").
  • To interact with the agent programmatically, modify agent.py to accept different inputs or integrate it with another interface (e.g., extend the Gradio UI); see the sketch below for its basic structure.
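
A simplified sketch of how agent.py could wire the greet tool into a SmolAgents CodeAgent. For brevity it defines the tool locally with the @tool decorator; the project's actual agent.py calls the tool on the MCP server over HTTP, which requires an MCP client connection instead.

  from smolagents import CodeAgent, TransformersModel, tool

  @tool
  def greet(name: str) -> str:
      """Return a personalized greeting.

      Args:
          name: The name of the person to greet.
      """
      return f"Hello, {name}! Welcome to the MCP server."

  # Local inference with SmolLM2-1.7B-Instruct (requires the smolagents[transformers] extra).
  model = TransformersModel(model_id="HuggingFaceTB/SmolLM2-1.7B-Instruct")
  agent = CodeAgent(tools=[greet], model=model)

  if __name__ == "__main__":
      # Mirrors the default query: the agent should call greet("Alice") and print the result.
      print(agent.run("Use the greet tool to greet Alice."))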

Example Output

  • Gradio UI: Enter "Bob" → Output: "Hello, Bob! Welcome to the MCP server."
  • SmolAgents: Console logs: "Hello, Alice! Welcome to the MCP server." (based on the default query in agent.py).

Notes

  • The MCP server uses HTTP transport on port 6275 to support smolagents integration, as stdio transport is less reliable for agent communication.
  • The SmolLM2-1.7B-Instruct model is used for its efficiency in local inference, but you can modify agent.py to use a different model supported by smolagents.
  • To extend the project, add more tools to server.py (see the example below) or enhance the Gradio UI in app.py with additional features.
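
For example, an additional tool can be registered on the same FastMCP instance in server.py with another @mcp.tool() decorator; the farewell tool below is purely hypothetical:

  @mcp.tool()
  def farewell(name: str) -> str:
      """Hypothetical second tool, added to server.py alongside greet."""
      return f"Goodbye, {name}! Thanks for trying the MCP server."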

Troubleshooting

  • Port Conflicts: Ensure ports 6274, 7860, and 6275 are not in use on your host machine.
  • Docker Issues: Verify Docker is running and you have sufficient permissions (sudo may be required on Linux).
  • Dependencies: If the build fails, check that requirements.txt includes all necessary packages and that your Docker environment has internet access.

License

This project is provided as-is for educational purposes. Refer to the licenses of mcp, gradio, and smolagents for their respective terms.