This article is part of a series:

  • Quick start with the latest Docker Sentry-CLI in 1 minute – creating releases
  • Quick start with Docker Sentry-CLI in 30 seconds – uploading Source Maps
  • Sentry For React
  • Sentry For Vue
  • Sentry-CLI usage details
  • Sentry Web Performance Monitoring – Web Vitals
  • Sentry Web performance monitoring – Metrics
  • Sentry Web Performance Monitoring – Trends
  • Sentry Web Front-end Monitoring – Best Practices (Official Tutorial)
  • Sentry Backend Monitoring – Best Practices (Official Tutorial)
  • Sentry Monitoring – Discover, the big data query analysis engine
  • Sentry Monitoring – Dashboards, large screens for visualizing data
  • Sentry Monitoring – Environments, distinguishing event data from different deployment environments
  • Sentry Monitoring – Security Policy Reports
  • Sentry Monitoring – Search
  • Sentry Monitoring – Alerts
  • Sentry Monitoring – Distributed Tracing
  • Sentry Monitoring – Distributed Tracing 101 for Full-stack Developers
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Kafka + Clickhouse)
  • Sentry – Snuba Data Model
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Introduction to Query Processing)
  • Sentry official JavaScript SDK introduction and debugging guide

This guide will walk you through the process of writing and testing Snuba queries.

Explore the Snuba data model

In order to build a Snuba query, the first step is to know which dataset you should query, which entities you should select, and what the schema of each entity is.

For an introduction to data sets and entities, see the Snuba Data Model section.

  • getsentry.github.io/snuba/archi…

Data sets can be found in this module. Each dataset is a class that refers to an entity.

  • Github.com/getsentry/s…

The list of entities in the system can be found by running the snuba entities command:

snuba entities list

The following is returned:

Declared Entities:
discover
errors
events
groups
groupassignee
groupedmessage
.....

Once we find the entity we are interested in, we need to understand the schema and relationships declared on that entity. The same command can also describe an entity:

snuba entities describe groupedmessage

Returns:

Entity groupedmessage
    Entity schema
    --------------------------------
    offset UInt64
    record_deleted UInt8
    project_id UInt64
    id UInt64
    status Nullable(UInt8)
    last_seen Nullable(DateTime)
    first_seen Nullable(DateTime)
    active_at Nullable(DateTime)
    first_release_id Nullable(UInt64)
    Relationships
    --------------------------------
        groups
        --------------------------------
            Destination: events
            Type: LEFT
            Join keys
            --------------------------------
            project_id = LEFT.project_id
            id = LEFT.group_id

It provides a list of columns and their types and relationships to other entities defined in the data model.
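For instance, the groups relationship shown above can be used to join groupedmessage to events in a query. The sketch below is illustrative (not from the original article); the exact MATCH join syntax is covered in the SnQL query language documentation, and the aliases and conditions here are made up for demonstration:

```python
# Illustrative sketch: a SnQL join using the "groups" relationship
# declared on the groupedmessage entity. The syntax follows the pattern
# MATCH (alias: entity) -[relationship]-> (alias: entity).
snql = (
    "MATCH (g: groupedmessage) -[groups]-> (e: events) "
    "SELECT g.id, count() AS event_count BY g.id "
    "WHERE e.project_id = 1 "
    "AND e.timestamp >= toDateTime('2021-01-01') "
    "AND e.timestamp < toDateTime('2021-01-02') "
    "LIMIT 10"
)

# Note that the join keys (project_id, group_id) do not appear in the
# query text; Snuba applies them from the relationship definition.
print(snql)
```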

Prepare the Snuba query

The Snuba query language is called SnQL. It is documented in the SnQL query language section, so it will not be repeated here.

  • getsentry.github.io/snuba/langu…

There is a Python SDK, snuba-sdk, for building Snuba queries. It can be used by any Python client, including Sentry itself.

  • Github.com/getsentry/s…

A Query is represented as a Query object, such as:

query = Query(
    dataset="discover",
    match=Entity("events"),
    select=[
        Column("title"),
        Function("uniq", [Column("event_id")], "uniq_events"),
    ],
    groupby=[Column("title")],
    where=[
        Condition(Column("timestamp"), Op.GT, datetime.datetime(2021, 1, 1)),
        Condition(Column("project_id"), Op.IN, Function("tuple", [1, 2, 3])),
    ],
    limit=Limit(10),
    offset=Offset(0),
    granularity=Granularity(3600),
)

See the SDK documentation for more details on how to build queries.

  • getsentry.github.io/snuba-sdk/

Once the query object is ready, it can be sent to Snuba.
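At the HTTP level, sending a query amounts to a POST against the dataset's snql endpoint. A minimal stdlib-only sketch, assuming a local Snuba on port 1218 and writing the SnQL string by hand (the endpoint and payload shape follow the request format described later in this article):

```python
import json
from urllib import request

# Hypothetical sketch: POST a SnQL query to a local Snuba instance.
# The SnQL string is hand-written here for self-containment; in practice
# snuba-sdk renders a Query object to SnQL for you.
payload = json.dumps({
    "query": (
        "MATCH (events) "
        "SELECT title, uniq(event_id) AS uniq_events BY title "
        "WHERE timestamp > toDateTime('2021-01-01') "
        "AND project_id IN tuple(1, 2, 3) "
        "LIMIT 10"
    ),
    "dataset": "discover",
}).encode("utf-8")

req = request.Request(
    "http://localhost:1218/discover/snql",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment when a local Snuba is actually running:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```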

Use Sentry to send a query to Snuba

The most common use case for querying Snuba is through Sentry. This section explains how to build a query in the Sentry code base and send it to Snuba.

Sentry imports the Snuba SDK described above, which is the recommended way to build Snuba queries.

Once the Query object is created, the Snuba Client API provided by Sentry can and should be used to send queries to Snuba.

The API is in this module. It takes care of caching, retries, and allows batch queries.

  • Github.com/getsentry/s…

This method returns a dictionary containing the data and other metadata in the response:

{
    "data": [{"title": "very bad", "uniq_events": 2}],
    "meta": [
        {"name": "title", "type": "String"},
        {"name": "uniq_events", "type": "UInt64"}
    ],
    "timing": { ... details ... }
}

The data section is a list with one dictionary per row. meta contains the list of columns included in the response, with their data types as inferred by Clickhouse.
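Because data is row-oriented and meta describes the columns, a small helper can pivot the response into per-column lists. This is an illustrative sketch (not part of Sentry), using the example response above:

```python
# Illustrative helper: turn a Snuba-style response into per-column lists,
# preserving the column order given in "meta".
def columnize(response):
    names = [col["name"] for col in response["meta"]]
    return {name: [row.get(name) for row in response["data"]] for name in names}

resp = {
    "data": [{"title": "very bad", "uniq_events": 2}],
    "meta": [
        {"name": "title", "type": "String"},
        {"name": "uniq_events", "type": "UInt64"},
    ],
}
print(columnize(resp))  # {'title': ['very bad'], 'uniq_events': [2]}
```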

Send test queries through the Web UI

Snuba has a minimal Web UI that can be used to send queries. You can run Snuba locally and access the Web UI at http://localhost:1218/[DATASET NAME]/snql.

The SnQL query should be provided in the query attribute of the request body, and the response structure is the same as discussed in the previous section.

Use curl to send a query

The Web UI simply sends the payload as a POST request, so you can achieve the same result with curl or any other HTTP client.
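For example, a hypothetical query against a local Snuba instance (the trailing fallback just keeps the sketch harmless when nothing is listening on port 1218):

```shell
# Hypothetical example: POST a SnQL query to a local Snuba instance.
# The payload attributes (query/dataset/debug) match the request format
# described in the next section.
PAYLOAD='{"query": "MATCH (events) SELECT title WHERE project_id = 1 AND timestamp >= toDateTime('\''2021-05-01'\'') AND timestamp < toDateTime('\''2021-05-11'\'') LIMIT 10", "dataset": "events", "debug": true}'

curl -s -X POST "http://localhost:1218/events/snql" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "no local Snuba running on :1218"
```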

Request and response formats

The request format is as follows:

  • query contains the SnQL query in string form.
  • dataset is the dataset name (if it is not already specified in the URL).
  • debug makes Snuba provide detailed statistics in the response, including the Clickhouse query.
  • consistent forces the Clickhouse query to be executed in single-threaded mode and, if the Clickhouse table is replicated, forces Snuba to always hit the same node. This can guarantee sequential consistency, because that node is the one the consumer writes to by default. It is implemented by setting the in_order load balancing Clickhouse property.
    • clickhouse.tech/docs/en/ope…
  • turbo sets the sampling rate defined by the TURBO_SAMPLE_RATE Snuba setting for the query. It also prevents Snuba from applying FINAL mode to the Clickhouse query where that would otherwise be needed to guarantee correct results after a replacement.

Snuba can respond with one of four HTTP codes. 200 indicates a successful query; 400 means the query failed validation. 500 usually means a Clickhouse-related problem (anything from timeouts to connection issues), although Snuba still cannot identify some invalid queries in advance. Snuba also has an internal rate limiter, so 429 is a possible return code as well.
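In client code, those four cases can be dispatched on directly. A hypothetical sketch:

```python
# Illustrative sketch: map Snuba's four documented HTTP codes to actions.
def classify_snuba_status(code: int) -> str:
    if code == 200:
        return "success"
    if code == 400:
        return "invalid query - fix the SnQL, do not retry"
    if code == 429:
        return "rate limited - back off and retry"
    if code == 500:
        return "Clickhouse/Snuba error - may be transient, retry with care"
    return "unexpected status"

print(classify_snuba_status(429))  # rate limited - back off and retry
```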

The response format for a successful query is the same as discussed above. The full version (in debug mode) is shown below:

{
    "data": [],
    "meta": [
        {"name": "title", "type": "String"}
    ],
    "timing": {
        "timestamp": 1621038379,
        "duration_ms": 95,
        "marks_ms": {
            "cache_get": 1,
            "cache_set": 4,
            "execute": 39,
            "get_configs": 0,
            "prepare_query": 10,
            "rate_limit": 4,
            "validate_schema": 34
        }
    },
    "stats": {
        "clickhouse_table": "errors_local",
        "final": false,
        "referrer": "http://localhost:1218/events/snql",
        "sample": null,
        "project_rate": 0,
        "project_concurrent": 1,
        "global_rate": 0,
        "global_concurrent": 1,
        "consistent": false,
        "result_rows": 0,
        "result_cols": 1,
        "query_id": "f09f3f9e1c632f395792c6a4bfe7c4fe"
    },
    "sql": "SELECT (title AS _snuba_title) FROM errors_local PREWHERE equals((project_id AS _snuba_project_id), 1) WHERE equals(deleted, 0) AND greaterOrEquals((timestamp AS _snuba_timestamp), toDateTime('2021-05-01T00:00:00', 'Universal')) AND less(_snuba_timestamp, toDateTime('2021-05-11T00:00:00', 'Universal')) LIMIT 1000 OFFSET 0"
}

The timing section contains the timestamp and duration of the query. More usefully, the duration is broken down into individual stages in marks_ms.
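To see where the time goes, you can sum marks_ms and compare it with duration_ms. Using the numbers from the example response above, the small gap is overhead not attributed to any stage:

```python
# Stage timings taken from the example debug response above.
marks_ms = {
    "cache_get": 1, "cache_set": 4, "execute": 39, "get_configs": 0,
    "prepare_query": 10, "rate_limit": 4, "validate_schema": 34,
}
duration_ms = 95

attributed = sum(marks_ms.values())
print(attributed)                # 92
print(duration_ms - attributed)  # 3 ms of unattributed overhead
```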

The sql element is the Clickhouse query.

The stats dictionary contains the following keys:

  • clickhouse_table is the table Snuba selected during query processing.
  • final indicates whether Snuba decided to send a FINAL query to Clickhouse, which forces Clickhouse to apply the relevant merges (merge tree) immediately. Details:
    • clickhouse.tech/docs/en/sql…
  • sample is the applied sampling rate.
  • project_rate is the number of requests per second Snuba received for the specific project at the time of the query.
  • project_concurrent is the number of concurrent queries involving the specific project at the time of the query.
  • global_rate is the same as project_rate, but across all projects rather than a single one.
  • global_concurrent is the same as project_concurrent, but across all projects rather than a single one.
  • query_id is a unique identifier for this query.

Query validation errors typically have the following format:

{
    "error": {
        "type": "invalid_query",
        "message": "missing >= condition on column timestamp for entity events"
    }
}

Clickhouse errors have a similar structure. The type field identifies the Clickhouse error, and the message contains details about the exception. Unlike a query validation error, in the case of a Clickhouse error the query was actually executed, so the response carries all the timing and stats information described for a successful query.
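A client can therefore distinguish the two failure modes by the error type. An illustrative sketch, using the validation error payload above:

```python
# Illustrative sketch: inspect a Snuba error response body.
def describe_error(body: dict) -> str:
    err = body.get("error", {})
    if err.get("type") == "invalid_query":
        # Rejected before execution: no timing/stats in the response.
        return f"rejected: {err.get('message')}"
    # Other types (e.g. Clickhouse errors): the query actually ran,
    # so timing/stats are present alongside the error.
    return f"execution failed ({err.get('type')}): {err.get('message')}"

body = {
    "error": {
        "type": "invalid_query",
        "message": "missing >= condition on column timestamp for entity events",
    }
}
print(describe_error(body))
```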