Agent MCP Server

This project is an implementation of an Agent as an MCP (Model Context Protocol) Server. It acts as an on-demand assistant to automate developer workflows. Based on a user's prompt, it retrieves issues from GitHub, uses an LLM to create a plan, and sends that plan to Slack.

Design Philosophy

This agent follows a modern, action-oriented design philosophy for its MCP (Model Context Protocol) interface. Its core workflow is exposed as a single high-level tool (check_for_github_issues_and_send_slack_summary) that represents a complete conceptual action rather than wrapping a low-level REST API. All the complexity of orchestrating the downstream services is encapsulated on the server side, which keeps the agent's interaction efficient and robust. This aligns with the principle of designing for actions, not just data manipulation, which is critical for effective Agent-Computer Interaction (ACI).
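
For illustration, here is a minimal sketch of how such a tool could be declared with fastmcp. The helper functions are hypothetical stand-ins for the project's agents and services, and the orchestration shown is a simplification of the real workflow:

    from fastmcp import FastMCP

    mcp = FastMCP("agent-mcp-server")

    # Hypothetical stand-ins for the project's GitHub, LLM, and Slack agents.
    async def fetch_recent_issues(owner: str, repo: str, token: str, hours: int) -> list: ...
    async def generate_plan(issues: list) -> str: ...
    async def post_to_slack(channel: str, token: str, text: str) -> None: ...

    @mcp.tool()
    async def check_for_github_issues_and_send_slack_summary(
        owner: str, repo: str, channel: str,
        github_token: str, slack_token: str, hours: int = 24,
    ) -> str:
        """One conceptual action: fetch issues, plan with the LLM, notify Slack."""
        issues = await fetch_recent_issues(owner, repo, github_token, hours)
        plan = await generate_plan(issues)
        await post_to_slack(channel, slack_token, plan)
        return plan

The client sees one action; retries, parsing, and error handling stay on the server side.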

Features

  • MCP Endpoint: A server that listens for prompts from MCP clients, built with fastmcp.
  • GitHub Integration: Fetches recent issues from a specified repository.
  • Slack Integration: Posts messages to a specified Slack channel.
  • LLM Planning: Uses a configurable LLM to analyze issues and generate a summary and plan.
  • Secure: Requires client-side tokens for all external service interactions.

Project Structure

.
├── src/
│   ├── mcp_server/
│   │   ├── agents/
│   │   │   ├── general_question_agent.py    # Handles direct Q&A with the LLM
│   │   │   ├── github_issues_agent.py       # Fetches issues from GitHub
│   │   │   ├── llm_task_processor.py        # Executes specific LLM tasks (planning, parsing)
│   │   │   ├── workflow_agent.py            # Orchestrates the main developer workflow
│   │   │   └── slack_notifications_agent.py # Sends notifications to Slack
│   │   ├── core/
│   │   │   ├── config.py             # Configuration management (Pydantic)
│   │   │   ├── exceptions.py         # Custom exception classes
│   │   │   ├── langfuse_config.py    # Optional Langfuse callback setup
│   │   │   └── logging_config.py     # Structured logging (structlog)
│   │   ├── schemas/
│   │   │   ├── action_parameters.py  # Pydantic model for action parameters
│   │   │   ├── github_issue.py       # Pydantic model for GitHub API validation
│   │   │   ├── llm_plan.py           # Pydantic model for structured LLM output
│   │   │   └── slack_notification.py # Pydantic model for Slack notifications
│   │   ├── services/
│   │   │   ├── github_service.py     # GitHub API client (httpx)
│   │   │   ├── slack_service.py      # Slack API client (slack-sdk)
│   │   │   └── llm_gateway.py        # Generic gateway for LLM calls (LangChain)
│   │   ├── cli.py                  # Command-line interface entry point
│   │   ├── mcp_server.py           # Application factory (create_app)
│   │   ├── mcp_tools.py            # MCP tool implementations (the agent's capabilities)
│   │   └── proxy.py                # LocalProxy object for decoupling tool registration from app creation
│   └── tests/
│       ├── test_core.py
│       ├── test_general_question_agent.py
│       ├── test_github_issues_agent.py
│       ├── test_github_service.py
│       ├── test_llm_gateway.py
│       ├── test_llm_task_processor.py
│       ├── test_slack_notifications_agent.py
│       ├── test_slack_service.py
│       └── test_workflow_agent.py
├── env.example
├── LICENSE
├── logs/
├── poetry.lock
├── poetry.toml
├── pyproject.toml
├── README.md
└── temp.md

Getting Started

Prerequisites

  • Python and Poetry (the supported Python version is declared in pyproject.toml).
  • An API key for your LLM provider, set as LLM_API_KEY.
  • For the main workflow tool: a GitHub token and a Slack bot token, supplied by the client at call time.

Installation

  1. Clone the Repository:

    git clone https://github.com/cliffordru/agent-mcp-server.git
    cd agent-mcp-server
    
  2. Configure Environment Variables: Create a .env file from the example. This file is for configuring the server's own dependencies, such as the LLM provider, GitHub API endpoint, and optional observability tools.

    cp env.example .env
    

    Fill in the required LLM_API_KEY and other desired values in your new .env file. The GITHUB_API_BASE_URL can be modified to point to a GitHub Enterprise instance if needed.
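
    For illustration, a filled-in .env might look like this (values are placeholders; see env.example for the full list of supported variables):

    LLM_API_KEY=sk-your-provider-key
    GITHUB_API_BASE_URL=https://api.github.com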

  3. Install Dependencies: Poetry will create a virtual environment and install all necessary packages.

    poetry install
    

    Note: This project is configured to create the virtual environment (.venv) inside the project directory. This behavior is set in the poetry.toml file and makes it easy for IDEs like Cursor to detect and use the correct interpreter automatically.
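
    The relevant setting in poetry.toml is the standard Poetry option for in-project virtual environments:

    [virtualenvs]
    in-project = true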

Running the Server

Before running, ensure you have created and configured your .env file as described in the installation steps. Then, use the console script entry point:

poetry run agent-mcp-server

This starts the server and makes it available for MCP clients to connect to.
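
The agent-mcp-server command is a console-script entry point declared in pyproject.toml. The exact target is defined in this project's pyproject.toml; a typical Poetry declaration pointing at the cli.py module would look like:

    [tool.poetry.scripts]
    agent-mcp-server = "mcp_server.cli:main"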

Running Tests

To run the test suite located in the src/tests directory, use pytest:

poetry run pytest src/tests

To get a full coverage report, run:

poetry run pytest --cov=mcp_server src/tests

Usage

This server exposes its tools via the Model Context Protocol (MCP). To use it, connect an MCP-compatible client (such as Cursor) to the server.
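
For example, Cursor can register the server in its mcp.json configuration. The snippet below assumes the server uses fastmcp's default stdio transport and that you launch it through Poetry:

    {
      "mcpServers": {
        "agent-mcp-server": {
          "command": "poetry",
          "args": ["run", "agent-mcp-server"]
        }
      }
    }

Depending on your setup, you may need absolute paths so the command resolves from the project directory.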

Available Tools

check_for_github_issues_and_send_slack_summary

Checks for recent GitHub issues, generates an LLM-powered summary, and posts it to a Slack channel.

Arguments:

  • owner (str, required): The owner of the GitHub repository (e.g., 'langchain-ai').
  • repo (str, required): The name of the GitHub repository (e.g., 'langchain').
  • channel (str, required): The Slack channel to send the summary to (e.g., 'dev-ops-alerts').
  • github_token (str, required): A GitHub token with permissions to read repository issues.
  • slack_token (str, required): A Slack bot token with permissions to post in the specified channel.
  • hours (int, optional): The number of hours to look back for new issues. Defaults to 24.
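
A complete invocation might pass arguments like the following (tokens are placeholders):

    {
      "owner": "langchain-ai",
      "repo": "langchain",
      "channel": "dev-ops-alerts",
      "github_token": "ghp_...",
      "slack_token": "xoxb-...",
      "hours": 48
    }
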
ask_a_general_question

Answers a general question by passing it directly to the configured LLM.

Arguments:

  • question (str, required): The question to ask the LLM.

Production Considerations

Disclaimer: This project is a proof-of-concept and is not intended for production use without further hardening.

Before deploying this agent in a production environment, the following critical security and reliability measures should be implemented:

  1. Rate Limiting: Implement rate limiting to prevent denial-of-service attacks and resource exhaustion.
  2. Authentication: The MCP server itself should be protected by an authentication layer (e.g., API keys, OAuth2) to prevent unauthorized access.
  3. Input Sanitization: Strictly sanitize and validate all inputs to prevent injection attacks.
  4. Explicit Timeouts: Ensure all outbound network calls have explicit, reasonable timeouts (see the sketch after this list).
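
As a sketch of item 4, an httpx-based client (the project uses httpx for its GitHub service) can attach explicit timeouts to every request. This is illustrative, not the project's actual client code:

    import httpx

    # Fail fast: 5 seconds to connect, 10 seconds total per request.
    TIMEOUT = httpx.Timeout(10.0, connect=5.0)

    async def fetch_json(url: str, token: str) -> dict:
        async with httpx.AsyncClient(timeout=TIMEOUT) as client:
            response = await client.get(url, headers={"Authorization": f"Bearer {token}"})
            response.raise_for_status()
            return response.json()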

Observability with Langfuse (Optional)

This project includes an optional integration with Langfuse for LLM observability, allowing you to trace, debug, and analyze your agent's performance.

Setup

  1. Run Langfuse Locally: Langfuse provides a Docker Compose setup for easy local deployment.

    git clone https://github.com/langfuse/langfuse.git
    cd langfuse
    docker-compose up -d
    

    The Langfuse UI will be available at http://localhost:3000.

  2. Configure Environment: After setting up a project in the Langfuse UI, get your API keys and update your .env file with the LANGFUSE_SECRET_KEY and LANGFUSE_PUBLIC_KEY. The LANGFUSE_HOST should be http://localhost:3000 if you are running it locally.
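
    For a local instance, the relevant .env entries would look like this (keys are placeholders copied from your Langfuse project settings):

    LANGFUSE_SECRET_KEY=sk-lf-...
    LANGFUSE_PUBLIC_KEY=pk-lf-...
    LANGFUSE_HOST=http://localhost:3000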

If these variables are set, the agent will automatically start sending traces to your local Langfuse instance.
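
As a sketch of how the conditional setup in langfuse_config.py might work (the function name is hypothetical, and the import path shown is the Langfuse v2 SDK's; newer SDK versions expose the handler under a different module):

    import os
    from langfuse.callback import CallbackHandler  # Langfuse v2-style import

    def get_langfuse_callbacks() -> list:
        """Return a Langfuse callback only when the keys are configured."""
        if os.getenv("LANGFUSE_SECRET_KEY") and os.getenv("LANGFUSE_PUBLIC_KEY"):
            # CallbackHandler reads the LANGFUSE_* variables from the environment.
            return [CallbackHandler()]
        return []

    # Hypothetical usage with a LangChain chain or model:
    # llm.invoke(prompt, config={"callbacks": get_langfuse_callbacks()})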