Querying your first events
Once you’ve tracked some events, you will want to look at them in your data warehouse or database. The exact steps will depend on your choice of storage and the Snowplow offering.
Connection details
- BDP Enterprise: use the connection details you provided when setting up BDP Enterprise.
- BDP Cloud: you can find the connection details in the Console, under the destination you've selected.
- Try Snowplow: you can find the connection details (hostname, port, database, username and password) in the Try Snowplow UI; request credentials in the UI if you haven't done so. For a step-by-step guide on how to query data in Try Snowplow, see this tutorial.
- Open Source: the connection details depend on your destination, as described below.
Postgres
Your database will be named according to the postgres_db_name Terraform variable. It will contain two schemas:
- atomic: the validated events
- atomic_bad: the failed events
You can connect to the database using the credentials you provided for the loader in the Terraform variables (postgres_db_username and postgres_db_password), along with the postgres_db_address and postgres_db_port Terraform outputs.
If you need to reset your username or password, you can follow these steps.
See the AWS RDS documentation for more details on how to connect.
If you opted for the secure option, you will first need to create a tunnel into your VPC before you can connect to your RDS instance and query the data. A common solution is to configure a bastion host, as described here.
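Once connected, you can run a first query. A minimal sketch, using the standard atomic.events table that holds the validated events:

```sql
-- Look at the ten most recent validated events.
SELECT
    app_id,
    event_name,
    collector_tstamp
FROM atomic.events
ORDER BY collector_tstamp DESC
LIMIT 10;
```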
Redshift
The database name and the schema name will be defined by the redshift_database and redshift_schema variables in Terraform.
There are two ways to log in to the database:
- Use the credentials you configured for the loader in the Terraform variables (redshift_loader_user and redshift_loader_password)
- Grant SELECT permissions on the schema to an existing user
To connect, you can use the Redshift UI or a tool like psql.
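For the second option, the grant itself is standard Redshift SQL. A minimal sketch, assuming the schema is named atomic and granting to a hypothetical user called analyst:

```sql
-- "atomic" and "analyst" are placeholders: substitute your
-- redshift_schema value and a real database user.
GRANT USAGE ON SCHEMA atomic TO analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA atomic TO analyst;
```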
BigQuery
The database will be called <prefix>_snowplow_db, where <prefix> is the prefix you picked in your Terraform variables file. It will contain an atomic schema with your validated events.
You can access the database via the BigQuery UI.
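In the UI you can run standard SQL against your events. A minimal sketch, assuming a prefix of sp and that the validated events live in an events table inside the sp_snowplow_db dataset; verify the exact names in the BigQuery explorer first:

```sql
-- Dataset and table names are assumptions: check them in the
-- BigQuery explorer before running.
SELECT event_name, COUNT(*) AS event_count
FROM `sp_snowplow_db.events`
GROUP BY event_name
ORDER BY event_count DESC;
```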
Snowflake
The database name and the schema name will be defined by the snowflake_database and snowflake_schema variables in Terraform.
There are two ways to log in to the database:
- Use the credentials you configured for the loader in the Terraform variables (snowflake_loader_user and snowflake_loader_password)
- Grant SELECT permissions on the schema to an existing user
To connect, you can use either the Snowflake dashboard or SnowSQL.
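For the second option, note that Snowflake grants privileges to roles rather than directly to users. A minimal sketch, assuming the database and schema are named SNOWPLOW and ATOMIC and the existing role is called ANALYST (all placeholders):

```sql
-- SNOWPLOW, ATOMIC and ANALYST are placeholders: substitute your
-- snowflake_database and snowflake_schema values and a real role.
GRANT USAGE ON DATABASE SNOWPLOW TO ROLE ANALYST;
GRANT USAGE ON SCHEMA SNOWPLOW.ATOMIC TO ROLE ANALYST;
GRANT SELECT ON ALL TABLES IN SCHEMA SNOWPLOW.ATOMIC TO ROLE ANALYST;
```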
Databricks
The database name and the schema name will be defined by the databricks_database and databricks_schema variables in Terraform.
There are two ways to log in to the database:
- Use the credentials you configured for the loader in the Terraform variables (databricks_loader_user and databricks_loader_password, or alternatively the databricks_auth_token)
- Grant SELECT permissions on the schema to an existing user
See the Databricks tutorial for more details on how to connect. The documentation on Unity Catalog is also useful.
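Once connected (for example via a SQL warehouse or notebook), a first query might look like the following sketch; it assumes databricks_schema was set to snowplow and that the validated events are in an events table, so verify both names in your catalog:

```sql
-- Schema and table names are assumptions based on the Terraform
-- variables described above; adjust to match your setup.
SELECT app_id, event_name, collector_tstamp
FROM snowplow.events
ORDER BY collector_tstamp DESC
LIMIT 10;
```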
Writing queries
Follow our querying guide for more information.
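As a starting point, here is a simple sanity check that events are arriving, written in Postgres/Redshift flavor; adjust the schema qualifier to match your warehouse:

```sql
-- Daily counts of validated events.
SELECT
    DATE_TRUNC('day', collector_tstamp) AS event_day,
    COUNT(*) AS event_count
FROM atomic.events
GROUP BY 1
ORDER BY 1;
```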