expose-api-through-mcp

pravindeok/expose-api-through-mcp

How to set up a Model Context Protocol (MCP) server and expose an API

The Model Context Protocol (MCP) is an open standard designed to connect AI assistants to various data sources and tools in a standardized, secure, and flexible manner. It enables AI models to dynamically discover, inspect, and invoke tools (including APIs, database queries, and file system interactions) without requiring custom integrations for each data source. Below is an overview of initiatives to reclaim and expose APIs through MCP, a worked example, and steps to test it yourself.

Initiatives to Reclaim and Expose APIs through MCP

MCP facilitates the reclamation and exposure of APIs by providing a unified layer that abstracts and standardizes access to existing APIs, making them more accessible to AI systems. Key initiatives and benefits include:

  1. Standardized AI Integration:

    • MCP allows developers to expose APIs as MCP tools, enabling AI agents to interact with them without bespoke integrations. This reduces the complexity of connecting AI models to diverse data sources like GitHub, Notion, or Slack.
    • Example: Companies like Block and Apollo have integrated MCP to connect internal systems to AI tools, while development platforms like Zed and Replit use MCP to enhance AI-driven coding tasks.
  2. Reusing Existing APIs:

    • MCP doesn’t replace REST or GraphQL APIs but wraps them as MCP tools, allowing reuse in AI workflows. This preserves existing infrastructure while making APIs AI-accessible.
    • Example: A FastAPI endpoint can be exposed as an MCP tool with minimal configuration, enabling AI agents to invoke it seamlessly.
  3. Open-Source Ecosystem:

    • MCP is developed as an open-source project, encouraging community contributions to build and share MCP servers for various services (e.g., Google Drive, GitHub, Postgres).
    • Example: Notion open-sourced its MCP server implementation, allowing developers to integrate Notion with AI applications in minutes.
  4. Enhanced Security and Access Control:

    • MCP’s second revision introduced rudimentary authentication support, though enterprise-grade features like OAuth 2.1 and OIDC integration are still needed for zero-trust environments. This initiative aims to make APIs secure and compliant when exposed to AI systems.
  5. Scalability and Flexibility:

    • MCP supports multiple transport protocols (e.g., HTTP, WebSocket, stdio), enabling APIs to be exposed locally or remotely, catering to both development and production environments.
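To make the "wrap an existing function as a tool" idea concrete, here is a minimal stand-in for the registration pattern using a plain Python decorator instead of the real MCP SDK (so it runs with no dependencies). The `TOOLS` registry and `get_user` function are illustrative stand-ins; the actual SDK additionally generates a JSON schema for each tool from its type hints:

```python
# Minimal stand-in for MCP-style tool registration (illustrative only;
# the real SDK also derives JSON schemas from the type hints).
TOOLS = {}

def tool(name):
    """Register a function under a discoverable tool name."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

@tool("get_user")
def get_user(user_id: int) -> dict:
    # In a real server this would call an existing REST endpoint.
    return {"id": user_id, "name": "example"}

# An AI client can now discover tools by name and invoke them:
print(sorted(TOOLS))         # ['get_user']
print(TOOLS["get_user"](7))  # {'id': 7, 'name': 'example'}
```

The point of the pattern is that the underlying function is unchanged: exposure happens entirely at registration time, which is why existing REST or GraphQL endpoints can be reused as-is.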

Example: Exposing a GitHub API as an MCP Tool

A practical example is building an MCP server to expose GitHub’s API for analyzing pull requests (PRs), allowing an AI model (e.g., Claude) to fetch PR metadata, analyze code changes, and store results in Notion. This is based on a demo project from DataCamp.

Scenario: You want an AI to retrieve and analyze GitHub PR details (e.g., title, author, file changes) and save the analysis to Notion using MCP.

Components:

  • MCP Server: A Python-based server that exposes GitHub API endpoints as MCP tools.
  • MCP Client: Claude Desktop, which interacts with the MCP server to fetch and analyze data.
  • APIs Exposed: GitHub API (for PR data) and Notion API (for storing results).

Steps to Test an MCP Server Yourself

Below are step-by-step instructions to set up and test an MCP server that exposes GitHub’s API for PR analysis, based on the DataCamp tutorial.

Prerequisites
  • Python 3.8+ installed.
  • A GitHub account with a personal access token (for API access).
  • A Notion account with an API key and a target page ID.
  • Claude Desktop (or another MCP client) installed.
  • Basic familiarity with Python and command-line tools.
Step 1: Set Up Your Environment
  1. Create a Project Directory:
    mkdir pr_reviewer
    cd pr_reviewer
    
  2. Set Up a Virtual Environment:
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install Dependencies:
    • Create a requirements.txt file with:
      requests>=2.31.0
      python-dotenv>=1.0.0
      mcp[cli]>=1.4.0
      notion-client>=2.3.0
      
    • Install dependencies:
      pip install -r requirements.txt
      
  4. Set Up Environment Variables:
    • Create a .env file in the project directory:
      GITHUB_TOKEN=your_github_personal_access_token
      NOTION_API_KEY=your_notion_api_key
      NOTION_PAGE_ID=your_notion_page_id
      
    • Obtain the GitHub token from GitHub Settings > Developer Settings > Personal Access Tokens.
    • Obtain the Notion API key and page ID from Notion’s developer portal.
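Before starting the server, it is worth sanity-checking that all three credentials actually load; a small helper like this (variable names taken from the .env file above) catches missing or empty values early:

```python
import os

# The three variables the server expects, as defined in .env above.
REQUIRED = ["GITHUB_TOKEN", "NOTION_API_KEY", "NOTION_PAGE_ID"]

def missing_vars(env) -> list:
    """Return the names of required variables that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars(os.environ)
    if missing:
        print("Missing credentials:", ", ".join(missing))
    else:
        print("All credentials present.")
```

Run it after `source venv/bin/activate` (and after loading the .env, e.g. via python-dotenv) to confirm the environment is complete.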
Step 2: Create the MCP Server
  1. Create github_integration.py:
    • This script handles GitHub API calls to fetch PR data.
    • Example code (simplified):
      import requests
      from dotenv import load_dotenv
      import os
      
      load_dotenv()
      
      def fetch_pr_changes(repo, pr_number):
          headers = {"Authorization": f"token {os.getenv('GITHUB_TOKEN')}"}
          pr_url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}"
          files_url = f"{pr_url}/files"
          pr_response = requests.get(pr_url, headers=headers, timeout=10)
          files_response = requests.get(files_url, headers=headers, timeout=10)
          if pr_response.status_code == 200 and files_response.status_code == 200:
              pr_data = pr_response.json()
              files_data = files_response.json()
              return {
                  "title": pr_data["title"],
                  "author": pr_data["user"]["login"],
                  "files": [{"filename": f["filename"], "changes": f["changes"]} for f in files_data]
              }
          return None
      
  2. Create pr_analyzer.py:
    • This script initializes the MCP server and registers the tools.
    • Example code (simplified; uses the FastMCP API from the official MCP Python SDK):
      from mcp.server.fastmcp import FastMCP
      from notion_client import Client as NotionClient
      from github_integration import fetch_pr_changes
      from dotenv import load_dotenv
      import os
      
      load_dotenv()
      
      class PRAnalyzer:
          def __init__(self):
              self.mcp = FastMCP("github_pr_analysis")
              self.notion = NotionClient(auth=os.getenv("NOTION_API_KEY"))
              # Register bound methods as tools; FastMCP derives each
              # tool's schema from its type hints and docstring.
              self.mcp.tool()(self.fetch_pr)
              self.mcp.tool()(self.save_to_notion)
      
          def fetch_pr(self, repo: str, pr_number: int) -> dict:
              """Fetch title, author, and file changes for a pull request."""
              return fetch_pr_changes(repo, pr_number)
      
          def save_to_notion(self, data: dict) -> dict:
              """Save a PR analysis to the configured Notion page."""
              self.notion.pages.create(
                  parent={"page_id": os.getenv("NOTION_PAGE_ID")},
                  properties={"title": {"title": [{"text": {"content": data["title"]}}]}},
                  children=[{"object": "block", "type": "paragraph", "paragraph": {"rich_text": [{"text": {"content": str(data)}}]}}]
              )
              return {"status": "saved"}
      
          def run(self):
              # Serve over stdio so clients like Claude Desktop can launch it.
              self.mcp.run(transport="stdio")
      
      if __name__ == "__main__":
          PRAnalyzer().run()
      
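The response-parsing step in fetch_pr_changes can be factored into a pure function, which makes it unit-testable with canned payloads instead of live GitHub calls. The sample data below is illustrative, shaped like the GitHub API's pull-request and files responses:

```python
def summarize_pr(pr_data: dict, files_data: list) -> dict:
    """Reduce raw GitHub API responses to the fields the MCP tool returns."""
    return {
        "title": pr_data["title"],
        "author": pr_data["user"]["login"],
        "files": [{"filename": f["filename"], "changes": f["changes"]}
                  for f in files_data],
    }

# Canned (illustrative) payloads mimicking the GitHub API shape:
pr = {"title": "Fix race condition", "user": {"login": "octocat"}}
files = [{"filename": "worker.py", "changes": 12, "status": "modified"}]
print(summarize_pr(pr, files))
```

Separating parsing from transport this way means network failures and data-shaping bugs can be debugged independently, which pays off once an AI agent is in the loop.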
Step 3: Run the MCP Server
  1. Start the MCP Server:
    python pr_analyzer.py
    
    • The server will run locally, exposing tools (fetch_pr and save_to_notion) to MCP clients like Claude Desktop.
Step 4: Test with Claude Desktop
  1. Connect Claude Desktop to the MCP Server:
    • Register the server in Claude Desktop's claude_desktop_config.json so that Claude launches pr_analyzer.py over stdio, then restart Claude Desktop (see Claude Desktop's MCP documentation for the file's location on your OS).
    • Ask Claude to use the fetch_pr tool with parameters (e.g., repo="owner/repo", pr_number=123); a natural-language prompt works, for example:
      "Fetch pull request 123 from owner/repo and summarize the changes."
      
  2. Verify the Output:
    • Claude should return PR details (e.g., title, author, file changes).
    • Invoke the save_to_notion tool to save the results to Notion.
    • Check your Notion page to confirm the data was saved.
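Claude Desktop discovers local MCP servers through its claude_desktop_config.json file. A minimal entry for this server might look like the following (the server name matches the FastMCP name used above; the interpreter and script paths are placeholders you would adjust for your machine):

```json
{
  "mcpServers": {
    "github_pr_analysis": {
      "command": "/path/to/pr_reviewer/venv/bin/python",
      "args": ["/path/to/pr_reviewer/pr_analyzer.py"]
    }
  }
}
```

Pointing `command` at the virtual environment's interpreter ensures the server starts with the dependencies installed in Step 1.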
Step 5: Validate and Debug
  • Check Logs: Ensure the MCP server logs show successful API calls and data transfers.
  • Test Authentication: Verify that GitHub and Notion API keys work correctly.
  • Extend Functionality: Add more tools (e.g., code analysis with Claude) or automate PR processing using GitHub webhooks.
Step 6: Explore Further
  • Use Pre-Built MCP Servers: Check GitHub for production-ready MCP servers (e.g., punkpeye/awesome-mcp-servers).
  • Learn More: Take a course or follow tutorials like those offered by Anthropic or DataCamp.

Notes and Limitations

  • Authentication: MCP’s current authentication support is basic. For enterprise use, ensure proper access controls (e.g., OAuth) are implemented manually.
  • Testing Scope: This example uses Claude Desktop as the MCP client; other clients may require different connection steps.
