logfire-mcp

pydantic/logfire-mcp


logfire-mcp is hosted online, so all tools can be tested directly either in the Inspector tab or in the Online Client.

If you are the rightful owner of logfire-mcp and would like to certify it and/or have it hosted online, please leave a comment or send an email to henry@mcphub.com.

Logfire MCP Server is a Model Context Protocol server that allows LLMs to access and analyze OpenTelemetry traces and metrics sent to Logfire.

MCPHub score: 4.1

Has a README

GitHub repo has a README.md.

Has a License

GitHub repo has license info.

Server can be inspected

View server inspector

Server schema can be extracted

Can get at least one tool's info from the README or server.

Online hosted on MCPHub

Can be automatically deployed by MCPHub.

Has social accounts

Does not have any social accounts.

Claimed by the author or certified by MCPHub

If you are the author, claim authorship

AI Evaluation Report
Total Score: 8/10

The agent demonstrates a strong ability to interact with the Pydantic Logfire system, effectively utilizing its tools to execute SQL queries and provide detailed information about the database schema and SQL syntax. It excels in offering comprehensive and accurate responses, particularly in explaining SQL syntax and executing queries to find specific log entries. However, there are areas for improvement, such as ensuring the accuracy of schema details and generating correct UI links. The agent also shows proactive error handling by suggesting troubleshooting steps when queries do not return expected results. Overall, the agent is competent but could benefit from refining certain outputs for greater precision.

  • Test case 1
    Score: 5/10
    What is the schema of the `records` table in Pydantic Logfire?

    The schema of the records table in Pydantic Logfire can be represented as follows (using pseudo-postgres-DDL):

    CREATE TABLE records (
        created_at Timestamp(Microsecond, Some("UTC")) NOT NULL,
        start_timestamp Timestamp(Microsecond, Some("UTC")) NOT NULL,
        end_timestamp Timestamp(Microsecond, Some("UTC")) NOT NULL,
        duration Float64,
        trace_id Utf8 NOT NULL,
        span_id Utf8 NOT NULL,
        kind Utf8 NOT NULL,
        level UInt16 NOT NULL,
        parent_span_id Utf8,
        span_name Utf8View NOT NULL,
        message Utf8View NOT NULL,
        log_body Utf8,
        otel_status_code Utf8,
        otel_status_message Utf8View,
        otel_links Utf8,
        otel_events Utf8,
        is_exception Boolean,
        tags List(Field { name: "item", data_type: Utf8, nullable: true }),
        exception_message Utf8View,
        exception_type Utf8View,
        exception_stacktrace Utf8View,
        attributes_json_schema Utf8,
        attributes Utf8,
        otel_scope_name Utf8,
        otel_scope_version Utf8,
        otel_scope_attributes Utf8,
        service_namespace Utf8,
        service_name Utf8 NOT NULL,
        service_version Utf8,
        service_instance_id Utf8,
        process_pid Int32,
        otel_resource_attributes Utf8,
        telemetry_sdk_name Utf8,
        telemetry_sdk_language Utf8,
        telemetry_sdk_version Utf8,
        deployment_environment Utf8,
        http_response_status_code UInt16,
        url_path Utf8View,
        url_query Utf8View,
        url_full Utf8View,
        http_route Utf8View,
        http_method Utf8,
        attributes_reduced Utf8,
        otel_resource_attributes_reduced Utf8,
        project_id Utf8 NOT NULL,
        day Date32 NOT NULL
    )
    

    Additional Notes:

    • The attributes column can be interacted with similarly to Postgres JSONB, allowing for user-specified fields.
    • There are several semantic conventions for specific attributes, enabling structured querying of HTTP and other request-related data.
    • For instance, attributes can include fields like http.status_code, http.method, and url.path, among others.

    This schema provides a comprehensive structure for logging various aspects of service performance and requests in the Pydantic Logfire system.
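The JSONB-style querying of the attributes column described above can be approximated locally. This is a minimal sketch on a toy in-memory table: Logfire's real backend is Apache DataFusion with the Postgres-style ->> operator, but SQLite's json_extract behaves similarly for this purpose, and all table contents here are hypothetical.

```python
import sqlite3

# Local sketch only: Logfire runs on Apache DataFusion, not SQLite.
# The JSON-encoded `attributes` column is queried with SQLite's
# json_extract(), standing in for Logfire's ->> operator.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        span_name  TEXT NOT NULL,
        message    TEXT NOT NULL,
        level      INTEGER NOT NULL,
        attributes TEXT   -- JSON-encoded, like Logfire's attributes column
    )
""")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?, ?)",
    [
        ("GET /users",    "user lookup", 9,  '{"user_id": "123", "action": "read"}'),
        ("DELETE /users", "user delete", 13, '{"user_id": "123", "action": "delete"}'),
    ],
)
# Equivalent in spirit to: WHERE attributes->>'user_id' = '123'
#                            AND attributes->>'action' LIKE '%delete%'
rows = conn.execute("""
    SELECT span_name FROM records
    WHERE json_extract(attributes, '$.user_id') = '123'
      AND json_extract(attributes, '$.action') LIKE '%delete%'
""").fetchall()
print(rows)  # [('DELETE /users',)]
```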
  • Test case 2
    Score: 9/10
    What is the SQL syntax supported by Pydantic Logfire for querying the `records` table?

    The Pydantic Logfire SQL syntax for querying the records table is compatible with PostgreSQL and includes various features and functions. Here are the key points:

    Database System

    • Logfire uses Apache DataFusion with PostgreSQL-compatible syntax, extended with JSON functions and operators, such as ->>.

    Core Columns

    • span_name: Represents a string label shared by similar records.
    • message: Contains a human-readable description.
    • attributes: A JSON object for arbitrary structured data, queried using the ->> operator.
    • tags: An array of strings for grouping records; queried with array_has(tags, 'tag_name').
    • level: Severity level, stored as an integer but supporting string comparisons (e.g. level >= 'error').

    Common Query Patterns

    • Find exceptions that are errors:
      WHERE is_exception AND level >= 'error'
      
    • Filter by service and timeframe:
      WHERE service_name = 'web-api'
      AND start_timestamp >= now() - interval '1 hour'
      
    • Find HTTP errors:
      WHERE http_response_status_code >= 400
      
    • Search attributes:
      WHERE attributes->>'user_id' = '123'
      AND attributes->>'action' LIKE '%delete%'
      
    • Parent-child span relationships:
      SELECT p.message as parent, c.message as child
      FROM records p
      JOIN records c ON c.trace_id = p.trace_id
      AND c.parent_span_id = p.span_id
      
    • Time-series aggregation for dashboards:
      SELECT
          time_bucket($resolution, start_timestamp) as time,
          count(*) as requests
      FROM records
      WHERE otel_scope_name LIKE '%fastapi%'
      GROUP BY time_bucket($resolution, start_timestamp)
      
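The parent-child span join from the patterns above can be exercised locally. A minimal sketch, assuming hypothetical span data; Logfire itself runs on DataFusion, not SQLite, so this only mirrors the query shape.

```python
import sqlite3

# Toy reproduction of the parent-child span join pattern; the span
# rows are hypothetical, not real Logfire data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        trace_id       TEXT NOT NULL,
        span_id        TEXT NOT NULL,
        parent_span_id TEXT,
        message        TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?, ?)",
    [
        ("t1", "s1", None, "handle request"),
        ("t1", "s2", "s1", "query database"),
        ("t1", "s3", "s1", "render response"),
    ],
)
# Children join their parents on trace_id + parent_span_id, as in
# the documented pattern.
pairs = conn.execute("""
    SELECT p.message AS parent, c.message AS child
    FROM records p
    JOIN records c
      ON c.trace_id = p.trace_id
     AND c.parent_span_id = p.span_id
    ORDER BY c.span_id
""").fetchall()
print(pairs)  # [('handle request', 'query database'), ('handle request', 'render response')]
```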

    Performance Tips

    • Include service_name filters when possible.
    • Use span_name over message for filtering.
    • Always combine span_id with trace_id in WHERE clauses.

    This syntax allows for flexible querying of the records table to extract relevant data based on various criteria.
  • Test case 3
    Score: 6/10
    Run a query to identify the top 5 most frequent exception types in our production environment from the past week.

    It seems that the query to identify the top 5 most frequent exception types in the production environment from the past week did not return any results. This could be due to a variety of reasons, such as no exceptions being logged in that timeframe or issues with the query execution.

    To troubleshoot this, I recommend the following steps:

    1. Verify Timeframe: Ensure that there were indeed exceptions logged in the past week. We could consider extending the timeframe to see if exceptions from a longer period yield results.

    2. Check the Query: We can simplify the query to see if there are any exceptions at all, without filtering for frequency.

    3. Examine the Log Records: If needed, we can look at the raw logs from the past week to check for any exceptions directly.

    Would you like me to proceed with any of these options?
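For reference, a query of the general shape the agent likely attempted can be sketched locally. This is an assumption, not the agent's actual query; the SQLite table and its rows are hypothetical, and the real DataFusion query would also filter on start_timestamp >= now() - interval '7 days'.

```python
import sqlite3

# Hypothetical sketch of a "top 5 exception types in production" query.
# The time-window filter is omitted here since the toy rows carry no
# timestamps; Logfire would add: start_timestamp >= now() - interval '7 days'.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        exception_type         TEXT,
        is_exception           INTEGER NOT NULL,
        deployment_environment TEXT
    )
""")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [
        ("ValueError",   1, "production"),
        ("ValueError",   1, "production"),
        ("TimeoutError", 1, "production"),
        ("KeyError",     1, "staging"),     # wrong environment, filtered out
        (None,           0, "production"),  # not an exception, filtered out
    ],
)
top = conn.execute("""
    SELECT exception_type, count(*) AS occurrences
    FROM records
    WHERE is_exception = 1
      AND deployment_environment = 'production'
    GROUP BY exception_type
    ORDER BY occurrences DESC
    LIMIT 5
""").fetchall()
print(top)  # [('ValueError', 2), ('TimeoutError', 1)]
```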

  • Test case 4
    Score: 5/10
    Perform the operation of generating a link to view the trace with trace_id 'abc123' in the Logfire UI.

    You can view the trace with trace_id 'abc123' in the Logfire UI using the following link: Logfire Trace Link.

  • Test case 5
    Score: 10/10
    Can you help me find all logs where the HTTP response status code is greater than 500 and the request duration exceeds 2 seconds?

    It appears that there are currently no logs found where the HTTP response status code is greater than 500 and the request duration exceeds 2 seconds within the specified timeframe.

    If you would like to adjust the parameters (like checking a different duration or response code range) or explore other aspects of the logs, please let me know!
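The filter described in this test case can be sketched locally as well. A minimal sketch with hypothetical rows; in Logfire the same WHERE clause would run on DataFusion, with duration expressed in seconds per the records schema.

```python
import sqlite3

# Toy version of: status code > 500 AND duration > 2 seconds.
# All rows are hypothetical; `duration` is assumed to be in seconds.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        span_name                 TEXT NOT NULL,
        http_response_status_code INTEGER,
        duration                  REAL
    )
""")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [
        ("GET /slow", 502, 3.4),  # matches both conditions
        ("GET /fast", 502, 0.1),  # fast error, filtered out
        ("GET /ok",   200, 5.0),  # slow but successful, filtered out
    ],
)
slow_errors = conn.execute("""
    SELECT span_name FROM records
    WHERE http_response_status_code > 500
      AND duration > 2
""").fetchall()
print(slow_errors)  # [('GET /slow',)]
```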