
Normalize Quickstart

Requirements

In addition to dbt being installed:

  • Python 3.7 or later

In addition to the standard privileges required by dbt, our packages by default write to additional schemas beyond just your profile schema. If your connected user does not have create schema privileges, you will need to ensure that the following schemas exist in your warehouse and the user can create tables in them:

  • <profile_schema>_derived
  • <profile_schema>_scratch
  • <profile_schema>_snowplow_manifest

Alternatively, you can override the output schemas our models write to; see the relevant package configuration page for how to do this.
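As a minimal sketch, assuming dbt's standard +schema config override applies to the package models (the exact model paths and variables are listed on the package configuration page), such an override in your dbt_project.yml could look like:

dbt_project.yml
models:
  snowplow_normalize:
    # hypothetical suffix - see the package configuration page for the real options
    +schema: my_custom_schema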

Please refer to the Official Guide on setting up permissions.

Installation

Check dbt Hub for the latest installation instructions, or read the dbt docs for more information on installing packages. If you are using multiple packages you may need to up/downgrade a specific package to ensure compatibility.
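For example, your packages.yml entry could look like the following; the version range shown is a placeholder, so take the current one from dbt Hub:

packages.yml
packages:
  - package: snowplow/snowplow_normalize
    # placeholder version range - check dbt Hub for the latest
    version: [">=0.3.0", "<0.4.0"]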

note

Make sure to run the dbt deps command after updating your packages.yml to ensure you have the specified version of each package installed in your project.

Setup

1. Override the dispatch order in your project

To take advantage of the optimized upsert that the Snowplow packages offer, you need to ensure that certain macros are called from snowplow_utils before dbt-core. This can be achieved by adding the following to the top level of your dbt_project.yml file:

dbt_project.yml
dispatch:
  - macro_namespace: dbt
    search_order: ['snowplow_utils', 'dbt']

If you do not do this, the package will still work, but the incremental upserts will become more costly over time.

2. Adding the selectors.yml file

Within the packages we have provided a suite of suggested selectors to run and test the models within each package. This leverages dbt's selector flag. You can find out more about each selector in the YAML Selectors section.

These are defined in the selectors.yml file (source) within the package. In order to use these selectors you will need to copy this file into your own dbt project directory. This is a top-level file and should therefore sit alongside your dbt_project.yml file. If you are using multiple packages in your project you will need to combine the contents of these into a single file.
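For reference, a dbt selectors.yml generally has the shape below. This is an illustrative sketch only; the name and definition here are assumptions, so copy the package's actual file rather than writing your own:

selectors.yml
selectors:
  - name: snowplow_normalize
    description: Suggested selector for the package models (illustrative)
    definition:
      method: package
      value: snowplow_normalize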

3. Check source data

By default, this package assumes your Snowplow events data is contained in the atomic schema of your target.database. To change this, add the following to your dbt_project.yml file:

dbt_project.yml
vars:
  snowplow_normalize:
    snowplow__atomic_schema: schema_with_snowplow_events
    snowplow__database: database_with_snowplow_events
Databricks only

Please note that your target.database is NULL when using Databricks. Databricks uses schemas and databases interchangeably, and dbt's Databricks implementation therefore always uses the schema value, so adjust your snowplow__atomic_schema value if you need to.

4. Filter your data set

You can specify both the start_date from which to begin processing events and the app_ids to filter on. By default, start_date is set to 2020-01-01 and all app_ids are selected. To change this, add the following to your dbt_project.yml file:

dbt_project.yml
vars:
  snowplow_normalize:
    snowplow__start_date: 'yyyy-mm-dd'
    snowplow__app_id: ['my_app_1','my_app_2']
note

If any events you are going to normalize have no value for the dvce_sent_tstamp field, you need to disable the days-late filter by setting the snowplow__days_late_allowed variable to -1, as shown below; otherwise these events will not be processed.
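For example, in your dbt_project.yml:

dbt_project.yml
vars:
  snowplow_normalize:
    snowplow__days_late_allowed: -1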

5. Install additional Python packages

The script requires only two additional packages (jsonschema and requests) that are not part of the Python standard library. You can install them by running the command below, or by your preferred method.

pip install -r dbt_packages/snowplow_normalize/utils/requirements.txt

6. Set up the generator configuration file

You can use the example provided in utils/example_normalize_config.json as a starting point for your configuration file, which specifies the events, self-describing events, and contexts you wish to include in each table. For more information on this file see the normalize package docs.
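To give a feel for the shape of this file, here is an illustrative sketch; the field names are assumptions based on the shipped example rather than a definitive schema, so treat utils/example_normalize_config.json and the normalize package docs as authoritative:

config.json
{
  "config": {
    "resolver_file_path": "default"
  },
  "events": [
    {
      "event_names": ["page_view"],
      "self_describing_event_schemas": [],
      "context_schemas": ["iglu:com.snowplowanalytics.snowplow/web_page/jsonschema/1-0-0"],
      "table_name": "page_views"
    }
  ]
}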

7. Set up your resolver connection file (optional)

If you are not using Iglu Central as your only Iglu registry, you will need to set up an Iglu resolver file and point to it in your generator config.
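A minimal resolver file follows the standard Iglu resolver-config format; the private registry name and URI below are placeholders for your own registry:

iglu_resolver.json
{
  "schema": "iglu:com.snowplowanalytics.iglu/resolver-config/jsonschema/1-0-1",
  "data": {
    "cacheSize": 500,
    "repositories": [
      {
        "name": "Iglu Central",
        "priority": 0,
        "vendorPrefixes": ["com.snowplowanalytics"],
        "connection": { "http": { "uri": "http://iglucentral.com" } }
      },
      {
        "name": "My private registry",
        "priority": 1,
        "vendorPrefixes": ["com.example"],
        "connection": { "http": { "uri": "https://iglu.example.com/api" } }
      }
    ]
  }
}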

8. Generate your models

At the root of your dbt project, run the following command to generate all the models specified in your configuration:

python dbt_packages/snowplow_normalize/utils/snowplow_normalize_model_gen.py path/to/your/config.json

9. Additional vendor-specific configuration

BigQuery Only

Verify which column your events table is partitioned on. It will likely be partitioned on collector_tstamp or derived_tstamp. If it is partitioned on collector_tstamp you should set snowplow__derived_tstamp_partitioned to false. This will ensure only the collector_tstamp column is used for partition pruning when querying the events table:

dbt_project.yml
vars:
  snowplow_normalize:
    snowplow__derived_tstamp_partitioned: false
Databricks only - setting the databricks_catalog

Add the following variable to your dbt project's dbt_project.yml file:

dbt_project.yml
vars:
  snowplow_normalize:
    snowplow__databricks_catalog: 'hive_metastore'

Depending on your use case, this should be either the catalog (for Unity Catalog users on the Databricks connector 1.1.1 onwards; it defaults to 'hive_metastore') or the same value as your snowplow__atomic_schema (which is 'atomic' unless changed). This is needed to set the database property within models/base/src_base.yml.

A more detailed explanation for how to set up your Databricks configuration properly can be found in Unity Catalog support.

10. Run your model(s)

You can now run your models for the first time by running the below command (see the operation page for more information on running the package):

dbt run --selector snowplow_normalize