
CapCorn Hotel MCP Server

A focused Laravel application that exposes a Model Context Protocol (MCP) server for hotel room discovery and reservations via an upstream CapCorn API. It is designed to be simple, secure, and easy to extend with new MCP tools.

Key capabilities:

  • Search available rooms for flexible stays within a timespan
  • Check direct room availability for specific dates
  • Create reservations with guest details and pricing

MCP endpoint:

  • /mcp/capcorn (web-transport MCP server)

Quick references:

  • Route registration:
  • Server class:
  • Tools:
  • Service configuration:
  • Deployment guide:
  • Useful scripts:

NOTE ON CLEANUP

  • The project was streamlined. Unused scaffolding was removed: legacy auth User model/migration/factory, unused services, and an unused mailable. Config/services.php now only keeps capcorn.base_url.

1) Architecture Overview

  • The MCP HTTP entrypoint is registered via Laravel MCP in the application's route declarations, mounting a server at /mcp/capcorn.
  • The server class (a minimal sketch follows this list) defines:
    • protected string $name and protected string $version
    • protected string $instructions (LLM-facing guidance)
    • protected array $tools: the MCP Tool classes that implement the actual functionality
  • Each tool extends the Laravel MCP Tool base class and implements:
    • protected string $description (displayed to the LLM)
    • public function handle(Request): Response (core logic)
    • public function schema(JsonSchema): array (strongly-typed parameters for tool invocation)
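
A minimal sketch of such a server class, assuming the Laravel MCP base class Laravel\Mcp\Server and the tool classes registered later in this README. The class name, namespace, and the name/version/instructions values below are illustrative, not the project's exact code:

<?php

namespace App\Mcp;

use Laravel\Mcp\Server;

// Illustrative sketch: the actual class name and namespace in this project may differ.
class CapCornServer extends Server
{
    protected string $name = 'CapCorn Hotel MCP Server';

    protected string $version = '1.0.0';

    protected string $instructions = <<<'MARKDOWN'
        Search hotel room availability and create reservations via the upstream CapCorn API.
    MARKDOWN;

    protected array $tools = [
        \App\Mcp\CapCornServer\Tools\SearchRoomsTool::class,
        \App\Mcp\CapCornServer\Tools\SearchRoomAvailabilityTool::class,
        \App\Mcp\CapCornServer\Tools\CreateReservationTool::class,
    ];
}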

Data flow (typical):

  1. Client prompts the LLM
  2. LLM chooses a tool and calls the MCP server (over HTTP)
  3. The tool validates/normalizes input, calls the upstream CapCorn API over HTTP, and formats a human-friendly result (plain-text response)
  4. The LLM presents the tool result to the user, possibly chaining multiple tools

2) Current Tools

  • SearchRoomsTool — flexible stay search within a timespan for a given duration. Generates all date ranges in the period and queries them.
    • Parameters include language, timespan.from/to, duration, adults, children[].
  • SearchRoomAvailabilityTool — direct availability for exact arrival/departure dates and room compositions (an example call is sketched after this list).
    • Parameters include language (0/1), arrival, departure, rooms[*].
  • CreateReservationTool — creates a reservation using a room_type_code from availability results, with guest information and optional services.
    • Validates guest counts and fields; returns a concise confirmation or a formatted error list.
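
For illustration, a raw MCP tools/call request against the availability tool might look like the sketch below. The wire-level tool name and argument values are assumptions (confirm the real names via /mcp/capcorn/meta or the MCP Inspector), and depending on the transport an initialize handshake may be required before tools/call:

# Hypothetical sketch: tool name and argument values are illustrative only.
curl -s -X POST http://localhost:8000/mcp/capcorn \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
          "name": "search-room-availability",
          "arguments": {
            "language": 0,
            "arrival": "2025-07-01",
            "departure": "2025-07-05",
            "rooms": [{"adults": 2, "children": []}]
          }
        }
      }'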

3) Configuration

Only one config block is required for CapCorn:

    • capcorn.base_url: Upstream HTTP base for the CapCorn API used by all tools (a matching config sketch follows).
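
A minimal sketch of the corresponding config/services.php entry, assuming the CAPCORN_BASE_URL environment variable mentioned in the Troubleshooting section:

// config/services.php (sketch; only the CapCorn block shown)
'capcorn' => [
    // Upstream base URL for the CapCorn API; set CAPCORN_BASE_URL in your .env
    'base_url' => env('CAPCORN_BASE_URL'),
],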

Environment:

Auth scaffolding:

  • This app does not ship with a runtime user model or auth routes. Should you add one later, the auth configuration still references a model string via env('AUTH_MODEL'), which is safe even if no model class exists.

4) Running Locally

Prerequisites:

  • PHP 8.2+, Composer, curl

Install:

composer install
cp .env.example .env
php artisan key:generate

Start the app:

php artisan serve
# App: http://localhost:8000
# MCP endpoint: http://localhost:8000/mcp/capcorn (MCP over HTTP POST)
# Metadata (JSON): http://localhost:8000/mcp/capcorn/meta

MCP Inspector (interactive local test):

php artisan mcp:inspector mcp/capcorn

The /mcp/capcorn path speaks MCP (streamed HTTP POST). A GET on /mcp/capcorn will return 405 (expected). Use the /mcp/capcorn/meta helper to discover the server name, version, instructions, and tool list; a raw MCP request is sketched below.
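
As a quick sanity check, you can post a bare tools/list request. The exact headers required can vary with the transport implementation, so treat this as a sketch:

# Sketch: list the registered tools over the MCP web transport.
curl -s -X POST http://localhost:8000/mcp/capcorn \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'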

5) Testing and Verification

PHPUnit:

php artisan test

End-to-end MCP verification:

  • scripts/run_mcp_tests.sh starts a local server (if needed), performs sanity checks, and writes a Markdown report under reports/ (ignored by Git).
bash scripts/run_mcp_tests.sh
# Output: reports/mcp_report-YYYYMMDD-HHMMSS.md

Cloud Run smoke checks (remote):

  • After deployment, you can perform quick probes:
SERVICE_URL="$(gcloud run services describe mcp-hotel-server \
  --region europe-west1 --format='value(status.url)')"

curl -i "$SERVICE_URL/"
curl -i "$SERVICE_URL/mcp/capcorn/meta"  # MCP metadata (JSON)
# Note: GET "$SERVICE_URL/mcp/capcorn" is expected to return 405

CI checks (locally or to simulate CI):

    • A CI-check script lints the repo structure and performs a small set of dry-run checks against your config and workflow.

6) Adding A New MCP Tool

  1. Create a new Tool class under app/Mcp/CapCornServer/Tools, e.g. MyNewTool.php:
<?php

namespace App\Mcp\CapCornServer\Tools;

use Laravel\Mcp\Request;
use Laravel\Mcp\Response;
use Laravel\Mcp\Server\Tool;
use Illuminate\Support\Facades\Http;
use Illuminate\JsonSchema\JsonSchema;

class MyNewTool extends Tool
{
    protected string $description = <<<'MARKDOWN'
        Short summary of what the tool does and when to use it.
    MARKDOWN;

    public function handle(Request $request): Response
    {
        $validated = $request->validate([
            'param' => 'required|string',
        ]);

        // Call upstream or implement your logic
        // $baseUrl = config('services.capcorn.base_url');
        // $resp = Http::post($baseUrl.'/api/v1/some-endpoint', [ 'param' => $validated['param'] ]);

        // Return a textual result (MCP transports plain text here)
        return Response::text("Result for {$validated['param']}");
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'param' => $schema->string()->description('Description of param'),
        ];
    }
}
  2. Register it in the server's tool list by adding your class to protected array $tools:
protected array $tools = [
    \App\Mcp\CapCornServer\Tools\SearchRoomsTool::class,
    \App\Mcp\CapCornServer\Tools\SearchRoomAvailabilityTool::class,
    \App\Mcp\CapCornServer\Tools\CreateReservationTool::class,
    \App\Mcp\CapCornServer\Tools\MyNewTool::class, // <-- add this line
];
  3. Keep the tool's description concise and helpful; ensure schema types match accepted inputs. Always validate inputs in handle() using $request->validate([...]) to safeguard the upstream and provide predictable UX.

  4. If you add new configuration keys, place them under the existing services configuration and read them via config('services.capcorn...') or a new root-level service block; a hypothetical sketch follows.
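
The timeout key and CAPCORN_TIMEOUT variable below are illustrative only, not existing project settings:

// config/services.php (sketch; 'timeout' and CAPCORN_TIMEOUT are hypothetical)
'capcorn' => [
    'base_url' => env('CAPCORN_BASE_URL'),
    'timeout'  => env('CAPCORN_TIMEOUT', 10),
],

// Reading the values inside a tool's handle():
$baseUrl = config('services.capcorn.base_url');
$timeout = (int) config('services.capcorn.timeout');

$response = Http::timeout($timeout)->post($baseUrl.'/api/v1/some-endpoint', $payload);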

7) Deployment (Google Cloud Run)

This repo includes a Dockerfile and a GitHub Actions workflow that:

  • Builds a container image
  • Pushes it to Artifact Registry
  • Deploys it to Cloud Run with public ingress

Workflow:

    • Requires a GitHub secret GCP_SA_KEY (JSON key of a GCP service account with roles: run.admin, artifactregistry.writer, iam.serviceAccountUser)
    • Uses google-github-actions/auth to authenticate with that JSON key secret (a manual equivalent of the pipeline is sketched after this list)
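
The workflow roughly automates the following manual steps. Registry path, repository, and image names are placeholders, and the workflow file in the repository remains authoritative:

# Rough manual equivalent of the CI pipeline (PROJECT_ID and REPO are placeholders).
gcloud auth configure-docker europe-west1-docker.pkg.dev

docker build -t europe-west1-docker.pkg.dev/PROJECT_ID/REPO/mcp-hotel-server:latest .
docker push europe-west1-docker.pkg.dev/PROJECT_ID/REPO/mcp-hotel-server:latest

gcloud run deploy mcp-hotel-server \
  --image europe-west1-docker.pkg.dev/PROJECT_ID/REPO/mcp-hotel-server:latest \
  --region europe-west1 \
  --allow-unauthenticated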

First-time setup steps and full instructions:

After deployment, fetch the service URL:

gcloud run services describe mcp-hotel-server \
  --region europe-west1 --format='value(status.url)'

Public endpoints:

  • Root: GET / (welcome page)
  • MCP Server metadata: GET /mcp/capcorn/meta
  • MCP: POST /mcp/capcorn (MCP transport; GET returns 405 by design)

8) Security Notes

  • Never commit plaintext cloud keys. The repository ignores .env and now also ignores gcp-sa-key.json.
  • Use GitHub secrets for CI/CD (GCP_SA_KEY) and Cloud Run environment variables for runtime configuration.
  • If a JSON key was ever committed, revoke and rotate it in GCP IAM (see the commands below), scrub it from repository history (BFG / git filter-repo), and force-push the cleaned history if needed.
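
If you need to rotate a leaked key, the standard gcloud commands look like this; SA_EMAIL and KEY_ID are placeholders, so verify against your own IAM setup:

# List the service account's keys, then delete the compromised one by ID.
gcloud iam service-accounts keys list --iam-account="SA_EMAIL"
gcloud iam service-accounts keys delete "KEY_ID" --iam-account="SA_EMAIL"

# Create a fresh key and update the GCP_SA_KEY GitHub secret with its contents.
gcloud iam service-accounts keys create new-key.json --iam-account="SA_EMAIL"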

9) Troubleshooting

  • GET to /mcp/capcorn shows 405
    • Correct. MCP web transport expects POST/stream. Use /mcp/capcorn/meta for static JSON metadata.
  • Autoload issues after adding/removing classes
    • Run composer dump-autoload -o
  • Upstream API connectivity
    • Ensure CAPCORN_BASE_URL is reachable from the environment you run in.
  • Container networking / PORT
    • Cloud Run injects PORT; our entrypoint respects it. Locally, default is 8080 unless overridden.

10) Conventions and Guidelines

  • Keep tools small and composable. Present clear, human-readable output strings.
  • Validate every input. Fail fast and return actionable error text.
  • Log errors with context (avoid secrets) to simplify operations; a sketch follows this list.
  • Document new endpoints or parameters in the tool description and the README to keep human operators in the loop.
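
A sketch of that logging convention inside a tool's handle(), assuming Laravel's Http and Log facades; the endpoint path and context keys are illustrative:

// Imports at the top of the tool class:
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;
use Laravel\Mcp\Response;

// Inside handle(), after validation ($payload built from the validated request):
$response = Http::post(config('services.capcorn.base_url').'/api/v1/availability', $payload);

if ($response->failed()) {
    // Enough context to debug the call, but no credentials or guest PII.
    Log::warning('CapCorn availability request failed', [
        'status'  => $response->status(),
        'arrival' => $payload['arrival'] ?? null,
    ]);

    return Response::text('The upstream availability service is currently unavailable. Please try again later.');
}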

Appendix: Project Structure (key paths)

  • Server:
  • Tools:
  • Route registration:
  • Config:
  • Docker/Runtime:
  • CI/CD:
  • Scripts:
  • Reports:

11) Connect from Claude and OpenAI GPT (MCP)

This server exposes a web-transport MCP endpoint at /mcp/capcorn on your local or deployed base URL.

The metadata route lists the server name/version, instructions, and registered tools. You can verify locally by running:

php artisan serve
# POST http://localhost:8000/mcp/capcorn
# GET  http://localhost:8000/mcp/capcorn/meta

A) Connect from Claude (Desktop/Web) via the MCP Connector

Claude Desktop supports MCP servers via its built-in MCP Connector.

Docs:

Steps (Desktop):

  1. Install Claude Desktop (macOS/Windows).
  2. Open Settings → “MCP Servers” (or edit the Claude Desktop config file if you prefer JSON).
  3. Add a new HTTP MCP server pointing to your publicly reachable URL:
  4. Save and restart Claude Desktop if prompted. Claude will discover and list the exposed tools. You can now ask Claude to use these tools (e.g., “Search room availability for …”).

If you need auth headers (e.g., when fronting the server with a proxy), add them in the connector UI (or config JSON). The connector sends them on each request.

Token limits in Claude:

  • Output tokens: In the Claude API, set max_tokens per request (e.g., 512–1024). In Desktop, limits are tied to the model/prompt settings; keep tool outputs concise so the assistant stays within budget.
  • Input tokens: Keep system/instructions short and tool responses compact. Large tool outputs directly increase input token usage on follow‑ups.

Recommended defaults

  • Tool outputs: Aim for ~1–2k characters per call. Use bullet lists, avoid excessive prose.
  • For expensive searches, prefer summarizing top results and include a “use this tool again with X to fetch more” hint.

B) Connect from OpenAI GPT

OpenAI support for MCP is evolving. There are two common ways to integrate today:

Option 1 — Native MCP Connector (if available in your GPT workspace)
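
If your workspace has access to OpenAI's hosted MCP tool in the Responses API, a request might look roughly like the sketch below. The payload shape, field names, and model are assumptions based on OpenAI's public documentation at the time of writing and may change; consult the current docs before relying on it.

# Sketch only: field names, model, and approval settings are illustrative.
curl -s https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4.1",
        "tools": [{
          "type": "mcp",
          "server_label": "capcorn",
          "server_url": "https://YOUR_SERVICE_URL/mcp/capcorn",
          "require_approval": "never"
        }],
        "input": "Check room availability for two adults, 2025-07-01 to 2025-07-05."
      }'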

Option 2 — Use an external MCP client/relay with OpenAI

  • Run a small MCP client relay that connects to this HTTP server and exposes callable functions to OpenAI (Assistants/Responses API).
  • Reference implementations:
  • Your relay maps MCP tools to OpenAI “tools”/“functions” and forwards calls to https://YOUR_SERVICE_URL/mcp/capcorn, returning the tool results back into the OpenAI conversation.

OpenAI token limits

  • Output tokens: Use the model’s max_tokens (or similar) parameter for each run (e.g., 512–1024 for tool results and explanations).
  • Input tokens: Keep tool responses small, chunk large outputs, and consider streaming summarized results if your relay supports it.

C) Example: Claude Desktop JSON (advanced users)

Some Claude Desktop builds allow JSON config for MCP servers. Below is an example sketch for an HTTP server. Your exact schema may differ depending on version; consult the official docs above.

{
  "mcpServers": {
    "capcorn": {
      "transport": {
        "type": "http",
        "url": "https://mcp-hotel-server-336151914785.europe-west1.run.app/mcp/capcorn"
      },
      // Optional: metadata helps tools appear with rich descriptions
      "metadataUrl": "https://mcp-hotel-server-336151914785.europe-west1.run.app/mcp/capcorn/meta",
      // Optional headers if you front with a proxy or require auth
      "headers": {
        // "Authorization": "Bearer YOUR_TOKEN"
      },
      // Optional: network timeouts
      "timeoutMs": 30000
    }
  }
}

Notes

  • If you connect to a local server (http://localhost:8000/mcp/capcorn), keep Claude Desktop and the server on the same machine or ensure routing is configured.
  • For production, serve via HTTPS. Cloud Run deployments already expose a public HTTPS URL.

D) Best practices for MCP usage with LLMs

  • Make tools single‑purpose and idempotent. The LLM can chain steps when needed.
  • Always validate inputs in tools; return actionable errors (e.g., which field and expected format).
  • Keep tool results short, structured, and deterministic. Favor lists/tables; include next‑step guidance if results are truncated.
  • Document tool parameters and units in the tool description. Your “instructions” field should contain end‑user guidance consistent with tool behavior.
  • Use the /meta route to verify the tool inventory in remote environments (e.g., Cloud Run).
  • Token budgeting guidelines:
    • Output tokens (assistant): 512–1024 for most steps is a good default.
    • Input tokens (context): Keep instructions under ~2–3k tokens; keep tool outputs under ~1–2k characters whenever possible.
    • For large datasets, paginate at the tool layer and let the assistant request more (see the sketch below).
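
A sketch of tool-layer pagination, reusing the schema()/handle() pattern from section 6. The parameter names, page size, and the search() helper are hypothetical:

public function schema(JsonSchema $schema): array
{
    return [
        'query' => $schema->string()->description('Search text'),
        'page'  => $schema->integer()->description('1-based results page (10 items per page)'),
    ];
}

public function handle(Request $request): Response
{
    $validated = $request->validate([
        'query' => 'required|string',
        'page'  => 'nullable|integer|min:1',
    ]);

    $page    = $validated['page'] ?? 1;
    $results = $this->search($validated['query']); // hypothetical upstream call
    $slice   = array_slice($results, ($page - 1) * 10, 10);

    // Compact, structured output with a next-step hint when results are truncated.
    $lines  = array_map(fn ($r) => '- '.($r['name'] ?? 'unknown'), $slice);
    $footer = count($results) > $page * 10
        ? "\nMore results available: call this tool again with page=".($page + 1)
        : '';

    return Response::text(implode("\n", $lines).$footer);
}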