
Commit

Merge branch 'current' into mirnawong1-patch-12
nghi-ly authored Jun 3, 2023
2 parents 8696bf8 + d725788 commit 853f996
Showing 206 changed files with 2,198 additions and 1,605 deletions.
2 changes: 1 addition & 1 deletion contributing/adding-page-components.md
@@ -1,6 +1,6 @@
## Using warehouse components

- You can use the following components to provide code snippets for each supported warehouse. You can see a real-life example in the docs page [Initialize your project](/docs/quickstarts/dbt-cloud/databricks#initialize-your-dbt-project-and-start-developing).
+ You can use the following components to provide code snippets for each supported warehouse. You can see a real-life example in the docs page [Initialize your project](/quickstarts/databricks?step=6).

Identify code by labeling with the warehouse names:

@@ -30,7 +30,7 @@ In short, a jaffle is:

*See above: Tasty, tasty jaffles.*

- Jaffle Shop is a demo repo referenced in [dbt’s Getting Started Guide](/docs/quickstarts/overview), and its jaffles hold a special place in the dbt community’s hearts, as well as on Data Twitter™.
+ Jaffle Shop is a demo repo referenced in [dbt’s Getting Started Guide](/quickstarts), and its jaffles hold a special place in the dbt community’s hearts, as well as on Data Twitter™.

![jaffles on data twitter](/img/blog/2022-02-08-customer-360-view/image_1.png)

@@ -45,7 +45,7 @@ As a rule of thumb, you can consider that if your table partition length is less
When we designed ingestion partitioning table support with the dbt Labs team, we focused on ease of use and how to have seamless integration with incremental materialization.

- One of the great features of incremental materialization is to be able to proceed with a full refresh. We added support for that feature and, luckily, `MERGE` statements are working as intended for ingestion-time partitioning tables. This is also the approach used by the [dbt BigQuery connector](/reference/warehouse-setups/bigquery-setup).
+ One of the great features of incremental materialization is to be able to proceed with a full refresh. We added support for that feature and, luckily, `MERGE` statements are working as intended for ingestion-time partitioning tables. This is also the approach used by the [dbt BigQuery connector](/docs/core/connect-data-platform/bigquery-setup).

The complexity is hidden in the connector and it’s very intuitive to use. For example, if you have a model with the following SQL:
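The model SQL itself is collapsed in this view; as a minimal sketch of the kind of model being described — table and column names are hypothetical, and the `time_ingestion_partitioning` flag assumes a dbt-bigquery version that supports it:

```sql
{# Hypothetical model: an ingestion-time-partitioned incremental table #}
{{
    config(
        materialized = 'incremental',
        incremental_strategy = 'insert_overwrite',
        partition_by = {
            'field': 'event_time',
            'data_type': 'timestamp',
            'granularity': 'day',
            'time_ingestion_partitioning': true
        }
    )
}}

select
    event_time,   -- loaded into the _PARTITIONTIME pseudocolumn
    user_id,
    event_name
from {{ ref('stg_events') }}
```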

@@ -62,7 +62,7 @@ Before you can get started:
- You must have Python 3.8 or above installed
- You must have dbt version 1.3.0 or above installed
- You should have a basic understanding of [SQL](https://www.sqltutorial.org/)
- - You should have a basic understanding of [dbt](https://docs.getdbt.com/docs/quickstarts/overview)
+ - You should have a basic understanding of [dbt](https://docs.getdbt.com/quickstarts)

### Step 2: Clone the repository

2 changes: 1 addition & 1 deletion website/blog/ctas.yml
@@ -9,4 +9,4 @@
header: "Just Getting Started?"
subheader: Check out guides on getting your warehouse set up and connected to dbt Cloud.
button_text: Learn more
- url: https://docs.getdbt.com/docs/quickstarts/overview
+ url: https://docs.getdbt.com/quickstarts
2 changes: 1 addition & 1 deletion website/docs/docs/build/metrics.md
@@ -490,7 +490,7 @@ You may find some pieces of functionality, like secondary calculations, complica
| Input | Example | Description | Required |
| ----------- | ----------- | ----------- | -----------|
| <VersionBlock firstVersion="1.2">metric_list</VersionBlock><VersionBlock lastVersion="1.1">metric_name</VersionBlock> | <VersionBlock firstVersion="1.2">`metric('some_metric)'`, <br />[`metric('some_metric)'`, <br />`metric('some_other_metric)'`]<br /></VersionBlock><VersionBlock lastVersion="1.1">`'metric_name'`<br /></VersionBlock> | <VersionBlock firstVersion="1.2">The metric(s) to be queried by the macro. If multiple metrics required, provide in list format.</VersionBlock><VersionBlock lastVersion="1.1">The name of the metric</VersionBlock> | Required |
- | grain | `'day'`, `'week'`, <br />`'month'`, `'quarter'`, <br />`'year'`, `'all_time'`<br /> | The time grain that the metric will be aggregated to in the returned dataset | Required |
+ | grain | `'day'`, `'week'`, <br />`'month'`, `'quarter'`, <br />`'year'`<br /> | The time grain that the metric will be aggregated to in the returned dataset | Optional |
| dimensions | [`'plan'`,<br /> `'country'`] | The dimensions you want the metric to be aggregated by in the returned dataset | Optional |
| secondary_calculations | [`metrics.period_over_period( comparison_strategy="ratio", interval=1, alias="pop_1wk")`] | Performs the specified secondary calculation on the metric results. Examples include period over period calculations, rolling calculations, and period to date calculations. | Optional |
| start_date | `'2022-01-01'` | Limits the date range of data used in the metric calculation by not querying data before this date | Optional |
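Taken together, these inputs map onto a `metrics.calculate` call along the lines of the sketch below — the metric name, grain, and dimensions are hypothetical, and the call assumes dbt v1.2+ with the dbt_metrics package installed:

```sql
{# Hypothetical query against a metric; grain is optional as of this change #}
select *
from {{ metrics.calculate(
    metric('new_customers'),
    grain = 'week',
    dimensions = ['plan', 'country'],
    start_date = '2022-01-01'
) }}
```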
2 changes: 1 addition & 1 deletion website/docs/docs/build/models.md
@@ -18,4 +18,4 @@ The top level of a dbt workflow is the project. A project is a directory of a `.

Your organization may need only a few models, but more likely you’ll need a complex structure of nested models to transform the required data. A model is a single file containing a final `select` statement, and a project can have multiple models, and models can even reference each other. Add to that, numerous projects and the level of effort required for transforming complex data sets can improve drastically compared to older methods.

- Learn more about models in [SQL models](/docs/build/sql-models) and [Python models](/docs/build/python-models) pages. If you'd like to begin with a bit of practice, visit our [Getting Started Guide](/docs/quickstarts/overview) for instructions on setting up the Jaffle_Shop sample data so you can get hands-on with the power of dbt.
+ Learn more about models in [SQL models](/docs/build/sql-models) and [Python models](/docs/build/python-models) pages. If you'd like to begin with a bit of practice, visit our [Getting Started Guide](/quickstarts) for instructions on setting up the Jaffle_Shop sample data so you can get hands-on with the power of dbt.
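As a concrete illustration of the "single file containing a final `select` statement" idea above, a minimal, hypothetical model file might look like:

```sql
-- models/customers.sql (hypothetical): one file, one final select
select
    id as customer_id,
    first_name,
    last_name
from {{ ref('stg_customers') }}
```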
6 changes: 3 additions & 3 deletions website/docs/docs/build/projects.md
@@ -76,7 +76,7 @@ After configuring the Project subdirectory option, dbt Cloud will use it as the

You can create new projects and [share them](/docs/collaborate/git-version-control) with other people by making them available on a hosted git repository like GitHub, GitLab, and BitBucket.

- After you set up a connection with your data platform, you can [initialize your new project in dbt Cloud](/docs/quickstarts/overview) and start developing. Or, run [dbt init from the command line](/reference/commands/init) to set up your new project.
+ After you set up a connection with your data platform, you can [initialize your new project in dbt Cloud](/quickstarts) and start developing. Or, run [dbt init from the command line](/reference/commands/init) to set up your new project.

During project initialization, dbt creates sample model files in your project directory to help you start developing quickly.

@@ -89,5 +89,5 @@ If you want to see what a mature, production project looks like, check out the [

## Related docs
* [Best practices: How we structure our dbt projects](/guides/best-practices/how-we-structure/1-guide-overview)
- * [Quickstarts for dbt Cloud](/docs/quickstarts/overview)
- * [Quickstart for dbt Core](/docs/quickstarts/dbt-core/manual-install)
+ * [Quickstarts for dbt Cloud](/quickstarts)
+ * [Quickstart for dbt Core](/quickstarts/manual-install)
2 changes: 1 addition & 1 deletion website/docs/docs/build/python-models.md
@@ -658,7 +658,7 @@ Use the `cluster` submission method with dedicated Dataproc clusters you or your

<Lightbox src="/img/docs/building-a-dbt-project/building-models/python-models/dataproc-connector-initialization.png" title="Add the Spark BigQuery connector as an initialization action"/>

- The following configurations are needed to run Python models on Dataproc. You can add these to your [BigQuery profile](/reference/warehouse-setups/bigquery-setup#running-python-models-on-dataproc) or configure them on specific Python models:
+ The following configurations are needed to run Python models on Dataproc. You can add these to your [BigQuery profile](/docs/core/connect-data-platform/bigquery-setup#running-python-models-on-dataproc) or configure them on specific Python models:
- `gcs_bucket`: Storage bucket to which dbt will upload your model's compiled PySpark code.
- `dataproc_region`: GCP region in which you have enabled Dataproc (for example `us-central1`).
- `dataproc_cluster_name`: Name of Dataproc cluster to use for running Python model (executing PySpark job). Only required if `submission_method: cluster`.
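For illustration, a hedged sketch of where these keys can sit in a BigQuery target in `profiles.yml` — the project, dataset, bucket, and cluster names below are placeholders:

```yaml
my-bigquery-profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project                      # placeholder GCP project
      dataset: dbt_dev
      threads: 4
      # Dataproc settings for Python models (example values)
      gcs_bucket: my-dbt-python-models             # placeholder bucket
      dataproc_region: us-central1
      dataproc_cluster_name: my-dataproc-cluster   # only for submission_method: cluster
```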
2 changes: 1 addition & 1 deletion website/docs/docs/build/sql-models.md
@@ -14,7 +14,7 @@ id: "sql-models"

:::info Building your first models

- If you're new to dbt, we recommend that you read a [quickstart guide](/docs/quickstarts/overview) to build your first dbt project with models.
+ If you're new to dbt, we recommend that you read a [quickstart guide](/quickstarts) to build your first dbt project with models.

:::

2 changes: 1 addition & 1 deletion website/docs/docs/build/tests.md
@@ -32,7 +32,7 @@ There are two ways of defining tests in dbt:
Defining tests is a great way to confirm that your code is working correctly, and helps prevent regressions when your code changes. Because you can use them over and over again, making similar assertions with minor variations, generic tests tend to be much more common—they should make up the bulk of your dbt testing suite. That said, both ways of defining tests have their time and place.

:::tip Creating your first tests
- If you're new to dbt, we recommend that you check out our [quickstart guide](/docs/quickstarts/overview) to build your first dbt project with models and tests.
+ If you're new to dbt, we recommend that you check out our [quickstart guide](/quickstarts) to build your first dbt project with models and tests.
:::

## Singular tests
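The singular-test content is collapsed here; as a minimal, hypothetical example of the pattern it introduces — a one-off SQL query that fails if it returns any rows:

```sql
-- tests/assert_no_negative_order_amounts.sql (hypothetical singular test)
-- dbt flags the test as failing if this query returns one or more rows
select *
from {{ ref('orders') }}
where amount < 0
```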
2 changes: 1 addition & 1 deletion website/docs/docs/cloud/about-cloud-setup.md
@@ -12,7 +12,7 @@ dbt Cloud is the fastest and most reliable way to deploy your dbt jobs. It conta
- [Managing users and licenses](/docs/cloud/manage-access/seats-and-users)
- [Configuring secure access](/docs/cloud/manage-access/about-user-access)

- These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/docs/quickstarts/overview).
+ These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/quickstarts).

If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals on our [dbt Learn online courses site](https://courses.getdbt.com/).

2 changes: 1 addition & 1 deletion website/docs/docs/cloud/about-cloud/dbt-cloud-features.md
@@ -94,7 +94,7 @@ link="/docs/cloud/dbt-cloud-ide/develop-in-the-cloud"
## Related docs

- [dbt Cloud plans and pricing](https://www.getdbt.com/pricing/)
- - [Quickstart guides](/docs/quickstarts/overview)
+ - [Quickstart guides](/quickstarts)
- [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)
- [dbt Cloud support](/docs/dbt-support)
- [Become a contributor](https://docs.getdbt.com/community/contribute)
@@ -4,7 +4,7 @@ id: about-connections
description: "Information about data platform connections"
sidebar_label: "About data platform connections"
---
- dbt Cloud can connect with a variety of data platform providers including:
+ ddbt Cloud can connect with a variety of data platform providers including:
- [Amazon Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb)
- [Apache Spark](/docs/cloud/connect-data-platform/connect-apache-spark)
- [Databricks](/docs/cloud/connect-data-platform/connect-databricks)
@@ -17,7 +17,7 @@ You can connect to your database in dbt Cloud by clicking the gear in the top ri

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choose-a-connection.png" title="Choose a connection"/>

- These connection instructions provide the basic fields required for configuring a data platform connection in dbt Cloud. For more detailed guides, which include demo project data, read our [Quickstart guides](https://docs.getdbt.com/docs/quickstarts/overview)
+ These connection instructions provide the basic fields required for configuring a data platform connection in dbt Cloud. For more detailed guides, which include demo project data, read our [Quickstart guides](https://docs.getdbt.com/quickstarts)

## IP Restrictions

@@ -9,7 +9,10 @@ dbt Cloud supports connecting to an Apache Spark cluster using the HTTP method
or the Thrift method. Note: While the HTTP method can be used to connect to
an all-purpose Databricks cluster, the ODBC method is recommended for all
Databricks connections. For further details on configuring these connection
- parameters, please see the [dbt-spark documentation](https://github.com/dbt-labs/dbt-spark#configuring-your-profile)
+ parameters, please see the [dbt-spark documentation](https://github.com/dbt-labs/dbt-spark#configuring-your-profile).
+
+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Apache Spark-specific configuration](/reference/resource-configs/spark-configs).


The following fields are available when creating an Apache Spark connection using the
HTTP and Thrift connection methods:
@@ -26,4 +29,4 @@
| Auth | Optional, supply if using Kerberos | `KERBEROS` |
| Kerberos Service Name | Optional, supply if using Kerberos | `hive` |

- <Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/spark-connection.png" title="Configuring a Spark connection"/>
\ No newline at end of file
+ <Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/spark-connection.png" title="Configuring a Spark connection"/>
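For readers configuring dbt Core against the same cluster, these connection fields correspond roughly to a dbt-spark `profiles.yml` entry; this is a hedged sketch with placeholder host, cluster, and schema values:

```yaml
spark:
  target: dev
  outputs:
    dev:
      type: spark
      method: http                                   # or thrift
      host: dbc-01234567-89ab.cloud.databricks.com   # placeholder hostname
      port: 443
      cluster: 0123-456789-abcdef12                  # placeholder cluster ID
      token: "{{ env_var('SPARK_TOKEN') }}"          # assumed env var
      schema: analytics
      connect_retries: 3
      connect_timeout: 10
```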
@@ -7,8 +7,9 @@ sidebar_label: "Connect Databricks"

The dbt-databricks adapter is maintained by the Databricks team and is verified by dbt Labs. The Databricks team is committed to supporting and improving the adapter over time, so you can be sure the integrated experience will provide the best of dbt and the best of Databricks. Connecting to Databricks via dbt-spark has been deprecated.

- ## About the dbt-databricks Adapter
- dbt-databricks is compatible with the following versions of dbt Core in dbt Cloud with varying degrees of functionality.
+ ## About the dbt-databricks adapter
+
+ dbt-databricks is compatible with the following versions of dbt Core in dbt Cloud with varying degrees of functionality.

| Feature | dbt Versions |
| ----- | ----------- |
@@ -23,6 +24,8 @@ The dbt-databricks adapter is more opinionated, guiding users to an improved exp
- **Support for Unity Catalog:**
Unity Catalog allows Databricks users to centrally manage all data assets, simplifying access management and improving search and query performance. Databricks users can now get three-part data hierarchies – catalog, schema, model name – which solves a longstanding friction point in data organization and governance.

+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Databricks-specific configuration](/reference/resource-configs/databricks-configs).
+

To set up the Databricks connection, supply the following fields:

@@ -32,4 +35,4 @@
| HTTP Path | The HTTP path of the Databricks cluster or SQL warehouse | /sql/1.0/warehouses/1a23b4596cd7e8fg |
| Catalog | Name of Databricks Catalog (optional) | Production |

- <Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dbt-databricks.png" title="Configuring a Databricks connection using the dbt-databricks adapter"/>
\ No newline at end of file
+ <Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/dbt-databricks.png" title="Configuring a Databricks connection using the dbt-databricks adapter"/>
@@ -40,7 +40,7 @@ Make sure the location of the instance is the same Virtual Private Cloud (VPC) a
</details>


- #### Configuring the Bastion Server in AWS:
+ ### Configuring the Bastion Server in AWS

To configure the SSH tunnel in dbt Cloud, you'll need to provide the hostname/IP of your bastion server, username, and port, of your choosing, that dbt Cloud will connect to. Review the following steps:

@@ -64,3 +64,7 @@ To configure the SSH tunnel in dbt Cloud, you'll need to provide the hostname/IP
- Copy and paste the dbt Cloud generated public key, into the authorized_keys file.

The Bastion server should now be ready for dbt Cloud to use as a tunnel into the Redshift environment.

+ ## Configuration
+
+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Redshift-specific configuration](/reference/resource-configs/redshift-configs).
@@ -5,11 +5,11 @@ description: "Configure Snowflake connection."
sidebar_label: "Connect Snowflake"
---

- The following fields are required when creating a Snowflake connection:
+ The following fields are required when creating a Snowflake connection

| Field | Description | Examples |
| ----- | ----------- | -------- |
- | Account | The Snowflake account to connect to. Take a look [here](/reference/warehouse-setups/snowflake-setup#account) to determine what the account field should look like based on your region.| <Snippet src="snowflake-acct-name" /> |
+ | Account | The Snowflake account to connect to. Take a look [here](/docs/core/connect-data-platform/snowflake-setup#account) to determine what the account field should look like based on your region.| <Snippet src="snowflake-acct-name" /> |
| Role | A mandatory field indicating what role should be assumed after connecting to Snowflake | `transformer` |
| Database | The logical database to connect to and run queries against. | `analytics` |
| Warehouse | The virtual warehouse to use for running queries. | `transforming` |
@@ -63,4 +63,8 @@ In order to successfully fill in the Private Key field, you **must** include the
The OAuth auth method permits dbt Cloud to run development queries on behalf of
a Snowflake user without the configuration of Snowflake password in dbt Cloud. For
more information on configuring a Snowflake OAuth connection in dbt Cloud, please see [the docs on setting up Snowflake OAuth](/docs/cloud/manage-access/set-up-snowflake-oauth).
- <Lightbox src="/img/docs/dbt-cloud/dbt-cloud-enterprise/database-connection-snowflake-oauth.png" title="Configuring Snowflake OAuth connection"/>
\ No newline at end of file
+ <Lightbox src="/img/docs/dbt-cloud/dbt-cloud-enterprise/database-connection-snowflake-oauth.png" title="Configuring Snowflake OAuth connection"/>
+
+ ## Configuration
+
+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Snowflake-specific configuration](/reference/resource-configs/snowflake-configs).
@@ -21,4 +21,8 @@ The following are the required fields for setting up a connection with a [Starbu

## Catalogs and schemas

- <Snippet src="connect-starburst-trino/schema-db-fields" />
\ No newline at end of file
+ <Snippet src="connect-starburst-trino/schema-db-fields" />
+
+ ## Configuration
+
+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [Starburst/Trino-specific configuration](/reference/resource-configs/trino-configs).
@@ -44,3 +44,7 @@ more information on the initial configuration of a BigQuery OAuth connection in

As an end user, if your organization has set up BigQuery OAuth, you can link a project with your personal BigQuery account in your personal Profile in dbt Cloud, like so:
<Lightbox src="/img/docs/dbt-cloud/dbt-cloud-enterprise/gsuite/bq_oauth/bq_oauth_as_user.gif" title="Link Button in dbt Cloud Credentials Screen" />

+ ## Configuration
+
+ To learn how to optimize performance with data platform-specific configurations in dbt Cloud, refer to [BigQuery-specific configuration](/reference/resource-configs/bigquery-configs).
2 changes: 1 addition & 1 deletion website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md
@@ -59,6 +59,6 @@ There are default keyboard shortcuts that can help make development more product

## Related docs

- - [Quickstart guide](/docs/quickstarts/overview)
+ - [Quickstart guide](/quickstarts)
- [About dbt Cloud](/docs/cloud/about-cloud/dbt-cloud-features)
- [Develop in the Cloud](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)
@@ -92,7 +92,7 @@ The Cloud IDE needs explicit action to save your changes. There are three ways y

:::info📌

- New to dbt? Check out our [quickstart guide](/docs/quickstarts/overview) to build your first dbt project in the Cloud IDE!
+ New to dbt? Check out our [quickstart guide](/quickstarts) to build your first dbt project in the Cloud IDE!

:::
