Querying Snowplow data

Basic queries

You will typically find most of your Snowplow data in the events table. If you are using Redshift or Postgres, there will be extra tables for self-describing events and entities — see below.

note

Database and/or schema name will depend on your configuration, but we will use atomic as the schema name in the examples below.

Please refer to the structure of Snowplow data for the principles behind our approach, as well as the descriptions of the various standard columns.

Data models

Querying the events table directly can be useful for exploring your events or building custom analytics. However, for many common use cases it’s much easier to use our data models, which provide a pre-aggregated view of your data.

The simplest query could look like this:

SELECT * FROM atomic.events
WHERE event_name = 'page_view'
caution

With large data volumes (read: any production system), you should always include a filter on the partition key (normally, collector_tstamp), for example:

WHERE ... AND collector_tstamp between timestamp '2023-10-23' and timestamp '2023-11-23'

This ensures that you read from the minimum number of (micro-)partitions necessary, making the query run much faster and reducing compute cost (where applicable).
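Putting the two together, a minimal exploratory query with a partition filter might look like this (the dates are illustrative):

```sql
-- Filter on the partition key (collector_tstamp) as well as the event type
select *
from atomic.events
where event_name = 'page_view'
  and collector_tstamp between timestamp '2023-10-23' and timestamp '2023-11-23'
```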

Self-describing events

Self-describing events can contain their own set of fields, defined by their schema.

For Redshift and Postgres users, self-describing events are not part of the standard atomic.events table. Instead, each type of event is in its own table. The table name and the fields in the table will be determined by the event’s schema. See how schemas translate to the warehouse for more details.
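As a sketch, a standalone query against such a table could look like the following. The table and column names here are hypothetical; yours will be derived from the event's schema vendor, name and version:

```sql
-- Hypothetical table for a com.acme/button_click/1 schema; names are illustrative
select
  root_id,      -- joins back to atomic.events.event_id
  root_tstamp,  -- joins back to atomic.events.collector_tstamp
  button_name   -- a field defined by the event's schema
from atomic.com_acme_button_click_1
```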

You can query just the table for that particular self-describing event, if that's all that's required for your analysis, or join that table back to the atomic.events table:

select 
...
from
atomic.events ev
left join
atomic.my_example_event_table sde
on sde.root_id = ev.event_id and sde.root_tstamp = ev.collector_tstamp
caution

You may need to take care of duplicate events; see Dealing with duplicates below.

Entities

Entities (also known as contexts) provide extra information about the event, such as data describing a product or a user.

For Redshift and Postgres users, entities are not part of the standard atomic.events table. Instead, each type of entity is in its own table. The table name and the fields in the table will be determined by the entity’s schema. See how schemas translate to the warehouse for more details.

Entities can be joined back to the core atomic.events table as follows. Assuming no duplicates, this is a one-to-one join (for a single-record entity) or a one-to-many join (for a multi-record entity).

select 
...
from
atomic.events ev
left join -- assumes no duplicates, and will return all events regardless of if they have this entity
atomic.my_entity ent
on ent.root_id = ev.event_id and ent.root_tstamp = ev.collector_tstamp
caution

You may need to take care of duplicate events; see Dealing with duplicates below.

Failed events

See Exploring failed events.

Dealing with duplicates

In some cases, your data might contain duplicate events (full deduplication before the data lands in the warehouse is optionally available for Redshift, Snowflake and Databricks on AWS).

While our data models deal with duplicates for you, there may be cases where you need to de-duplicate the events table yourself.

In Redshift/Postgres you must first generate a row_number() on your events and use this to de-duplicate.

with unique_events as (
select
...
row_number() over (partition by a.event_id order by a.collector_tstamp) as event_id_dedupe_index
from
atomic.events a
)

select
...
from
unique_events
where
event_id_dedupe_index = 1

Things get a little more complicated if you want to join your event data with a table containing entities.

Suppose your entity is called my_entity. If you know that each of your events has at most one such entity attached, de-duplication requires a row number over event_id to pick each unique event:

with unique_events as (
select
ev.*,
row_number() over (partition by ev.event_id order by ev.collector_tstamp) as event_id_dedupe_index
from
atomic.events ev
),

unique_my_entity as (
select
ent.*,
row_number() over (partition by ent.root_id order by ent.root_tstamp) as my_entity_index
from
atomic.my_entity_1 ent
)

select
...
from
unique_events u_ev
left join
unique_my_entity u_ent
on u_ent.root_id = u_ev.event_id and u_ent.root_tstamp = u_ev.collector_tstamp and u_ent.my_entity_index = 1
where
u_ev.event_id_dedupe_index = 1

If your events might have more than one my_entity attached, the logic is slightly more complex.

Details

First, de-duplicate the events table in the same way as above, but also keep track of the number of duplicates (see event_id_dedupe_count below). In the entity table, generate a row number per unique combination of all fields in the record. Then join on root_id and root_tstamp as before, but with an additional clause that the row number is a multiple of the number of duplicates, to support the 1-to-many join. For example, if an event appears twice and has 3 distinct entity records attached, the entity table contains 6 rows for that event; the row number runs from 1 to 2 within each distinct record, so keeping rows where the index is a multiple of 2 returns exactly one copy of each of the 3 records. This ensures all duplicates are removed while retaining all original records of the entity. It may look like a weird join condition, but it works.

Unfortunately, listing all fields manually can be quite tedious, but we have added support for this in the de-duplication logic of our dbt packages.

with unique_events as (
select
ev.*,
row_number() over (partition by ev.event_id order by ev.collector_tstamp) as event_id_dedupe_index,
count(*) over (partition by ev.event_id) as event_id_dedupe_count
from
atomic.events ev
),

unique_my_entity as (
select
ent.*,
row_number() over (partition by ent.root_id, ent.root_tstamp, ... /* all columns of your entity listed here */ order by ent.root_tstamp) as my_entity_index
from
atomic.my_entity_1 ent
)

select
...
from
unique_events u_ev
left join
unique_my_entity u_ent
on u_ent.root_id = u_ev.event_id and u_ent.root_tstamp = u_ev.collector_tstamp and mod(u_ent.my_entity_index, u_ev.event_id_dedupe_count) = 0
where
u_ev.event_id_dedupe_index = 1