Commit 283aba2

Fix for issue 2893 capitalize YAML [REVIEW] (dbt-labs#3345)
## What are you changing in this pull request and why?

Issue dbt-labs#2893 

Write the Docs contribution. I replaced all plain-text instances of
`Yaml` or `yaml` with `YAML`.

Co-authored-by: mirnawong1 <[email protected]>
vgiurgiu and mirnawong1 authored May 7, 2023
1 parent 4bf2f4a commit 283aba2
Showing 42 changed files with 54 additions and 54 deletions.
2 changes: 1 addition & 1 deletion website/blog/2021-09-15-september-21-product-email.md
@@ -34,7 +34,7 @@ Give Jeremy a win and check out the [blog](http://blog.getdbt.com/getting-ready
### dbt v0.21.0-rc1
- Check out the [#dbt-prereleases](https://getdbt.slack.com/archives/C016X6ABVUK?utm_campaign=Monthly%20Product%20Updates&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8nIpohDBSr7SvpXrqY-5ONmnjdIgW0XMiAPkjQTb9Pgwt24nzqAWNX2Xgtj8LA0LrPoHpD) channel in the dbt Community Slack, and Jeremy's [Discourse post](https://discourse.getdbt.com/t/prerelease-dbt-core-v0-21-louis-kahn/3077?utm_campaign=Monthly%20Product%20Updates&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8nIpohDBSr7SvpXrqY-5ONmnjdIgW0XMiAPkjQTb9Pgwt24nzqAWNX2Xgtj8LA0LrPoHpD)!
- dbt build: Did you catch our teaser last month at [Staging](https://www.youtube.com/watch?v=-XRD_IjWX2U&utm_campaign=Monthly%20Product%20Updates&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8nIpohDBSr7SvpXrqY-5ONmnjdIgW0XMiAPkjQTb9Pgwt24nzqAWNX2Xgtj8LA0LrPoHpD)?
- Defining resource configs in all the places you'd expect (i.e. yaml files)
- Defining resource configs in all the places you'd expect (i.e. YAML files)
- Capture changes to macros in state:modified, for best-yet Slim CI

![Screen Shot 2021-09-20 at 11.34.47 AM (1)](https://hs-8698602.f.hubspotemail.net/hub/8698602/hubfs/Screen%20Shot%202021-09-20%20at%2011.34.47%20AM%20(1).png?upscale=true&width=600&upscale=true&name=Screen%20Shot%202021-09-20%20at%2011.34.47%20AM%20(1).png)
4 changes: 2 additions & 2 deletions website/blog/2023-02-14-passing-the-dbt-certification-exam.md
@@ -27,13 +27,13 @@ In this article, two Montreal Analytics consultants, Jade and Callie, discuss th

**C:** To prepare for the exam I reviewed the official dbt Certification Study Guide and the [official dbt docs](https://docs.getdbt.com/), and attended group study and learning sessions that were hosted by Montreal Analytics for all employees interested in taking the exam. As a group, we prioritized subjects that we felt less familiar with; for the first cohort of test takers this was mainly newer topics that haven’t yet become integral to a typical dbt project, such as [doc blocks](https://docs.getdbt.com/docs/collaborate/documentation#using-docs-blocks) and [configurations versus properties](https://docs.getdbt.com/reference/configs-and-properties). These sessions mainly covered the highlights and common “gotchas” that are experienced using these techniques. The sessions were moderated by a team member who had already successfully completed the dbt Certification, but operated in a very collaborative environment, so everyone could provide additional information, ask questions to the group, and provide feedback to other members of our certification taking group.

I felt comfortable with the breadth of my dbt knowledge and had familiarity with most topics. However in my day-to-day implementation, I am often reliant on documentation or copying and pasting specific configurations in order to get the correct settings. Therefore, my focus was on memorizing important criteria for *how to use* certain features, particularly on the order/nesting of how the key yaml files are configured (dbt_project.yml, table.yml, source.yml).
I felt comfortable with the breadth of my dbt knowledge and had familiarity with most topics. However in my day-to-day implementation, I am often reliant on documentation or copying and pasting specific configurations in order to get the correct settings. Therefore, my focus was on memorizing important criteria for *how to use* certain features, particularly on the order/nesting of how the key YAML files are configured (dbt_project.yml, table.yml, source.yml).

## How did the test go?

**J:** With a cup of coffee I started my exam in high spirits and high stress. I had never taken a proctored exam before, so going into this I had to adjust to being on camera while taking a test and in general taking a test in my living room felt strange!

The first few questions were trickier than I’d anticipated, and my heart started beating a little faster as a result. I found the build-list questions, five lines of code to create a valid yaml or SQL file that accomplishes a certain task, particularly difficult.
The first few questions were trickier than I’d anticipated, and my heart started beating a little faster as a result. I found the build-list questions, five lines of code to create a valid YAML or SQL file that accomplishes a certain task, particularly difficult.

The exam consists of 65 questions, usually containing multiple parts, so by 90 minutes in I started to get tired. I’d flagged several questions and went back to check on those before submitting. At the time, I thought I answered about 60% of these questions right? Having lost my coffee buzz and with shaky confidence I submitted the test to see my result. Failed.

2 changes: 1 addition & 1 deletion website/docs/docs/build/seeds.md
@@ -77,7 +77,7 @@ Seeds are configured in your `dbt_project.yml`, check out the [seed configuratio


## Documenting and testing seeds
You can document and test seeds in yaml by declaring properties — check out the docs on [seed properties](seed-properties) for more information.
You can document and test seeds in YAML by declaring properties — check out the docs on [seed properties](seed-properties) for more information.
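
For readers skimming the diff, a minimal sketch of such a seed properties file (the seed and column names here are hypothetical, not from the docs):

```yaml
# seeds/properties.yml (hypothetical seed and column names)
version: 2

seeds:
  - name: country_codes
    description: "Mapping of ISO country codes to country names"
    columns:
      - name: country_code
        description: "Two-letter ISO 3166-1 code"
        tests:
          - unique
          - not_null
```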

## FAQs
<FAQ src="Seeds/load-raw-data-with-seed" />
@@ -172,7 +172,7 @@ def model(dbt, session):

</File>

There's a limit to how fancy you can get with the `dbt.config()` method. It accepts _only_ literal values (strings, booleans, and numeric types). Passing another function or a more complex data structure is not possible. The reason is that dbt statically analyzes the arguments to `config()` while parsing your model without executing your Python code. If you need to set a more complex configuration, we recommend you define it using the [`config` property](resource-properties/config) in a yaml file.
There's a limit to how fancy you can get with the `dbt.config()` method. It accepts _only_ literal values (strings, booleans, and numeric types). Passing another function or a more complex data structure is not possible. The reason is that dbt statically analyzes the arguments to `config()` while parsing your model without executing your Python code. If you need to set a more complex configuration, we recommend you define it using the [`config` property](resource-properties/config) in a YAML file.
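
As a rough illustration of that recommendation, a non-literal value can live in a YAML `config` block instead of `dbt.config()` (the model name and config values below are hypothetical):

```yaml
# models/properties.yml (hypothetical model name and config values)
version: 2

models:
  - name: my_python_model
    config:
      materialized: table
      # Lists and dictionaries are fine here, unlike in dbt.config()
      packages:
        - "numpy"
        - "pandas"
```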

#### Accessing project context

2 changes: 1 addition & 1 deletion website/docs/docs/collaborate/govern/model-access.md
@@ -51,7 +51,7 @@ models:
</File>
Each model can only belong to one `group`, and groups cannot be nested. If you set a different `group` in that model's yaml or in-file config, it will override the `group` applied at the project level.
Each model can only belong to one `group`, and groups cannot be nested. If you set a different `group` in that model's YAML or in-file config, it will override the `group` applied at the project level.
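
As a sketch, overriding the project-level group from a model's own YAML properties might look like this (model and group names are hypothetical):

```yaml
# models/properties.yml (hypothetical model and group names)
version: 2

models:
  - name: fct_orders
    group: finance   # overrides any group applied in dbt_project.yml
```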

## Access modifiers

2 changes: 1 addition & 1 deletion website/docs/docs/collaborate/govern/model-contracts.md
@@ -83,7 +83,7 @@ models:
</File>

When building a model with a defined contract, dbt will do two things differently:
1. dbt will run a "preflight" check to ensure that the model's query will return a set of columns with names and data types matching the ones you have defined. This check is agnostic to the order of columns specified in your model (SQL) or yaml spec.
1. dbt will run a "preflight" check to ensure that the model's query will return a set of columns with names and data types matching the ones you have defined. This check is agnostic to the order of columns specified in your model (SQL) or YAML spec.
2. dbt will include the column names, data types, and constraints in the DDL statements it submits to the data platform, which will be enforced while building or updating the model's table.
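
For context on the two points above, a minimal sketch of a contracted model's YAML spec (model, columns, and data types are hypothetical):

```yaml
# models/properties.yml (hypothetical model, columns, and data types)
version: 2

models:
  - name: dim_customers
    config:
      contract:
        enforced: true
    columns:
      - name: customer_id
        data_type: int
        constraints:
          - type: not_null
      - name: customer_name
        data_type: string
```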

## FAQs
@@ -6,7 +6,7 @@ sidebar_label: "Autocomplete in IDE and more"
tags: [v1.1.43, January-19-2022, IDE]
---

Some noteworthy improvements include autocomplete snippets for sql and yaml files in the IDE, which are available for use now! We also added a [new metric layer page](https://docs.getdbt.com/docs/dbt-cloud/using-dbt-cloud/cloud-metrics-layer) to docs.getdbt.com to help you begin thinking about the metrics layer in dbt Cloud.
Some noteworthy improvements include autocomplete snippets for sql and YAML files in the IDE, which are available for use now! We also added a [new metric layer page](https://docs.getdbt.com/docs/dbt-cloud/using-dbt-cloud/cloud-metrics-layer) to docs.getdbt.com to help you begin thinking about the metrics layer in dbt Cloud.

#### Performance improvements and enhancements

@@ -87,7 +87,7 @@ To create your dbt project:
### Connect to BigQuery
When developing locally, dbt connects to your <Term id="data-warehouse" /> using a [profile](/docs/core/connection-profiles), which is a yaml file with all the connection details to your warehouse.
When developing locally, dbt connects to your <Term id="data-warehouse" /> using a [profile](/docs/core/connection-profiles), which is a YAML file with all the connection details to your warehouse.
1. Create a file in the `~/.dbt/` directory named `profiles.yml`.
2. Move your BigQuery keyfile into this directory.
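
A minimal sketch of what that `profiles.yml` might contain for BigQuery, assuming service-account keyfile authentication (project, dataset, and file paths are placeholders, not the guide's exact values):

```yaml
# ~/.dbt/profiles.yml (placeholder project, dataset, and keyfile values)
my_dbt_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: my-gcp-project-id
      dataset: dbt_dev
      keyfile: /Users/me/.dbt/dbt-user-creds.json
      threads: 4
```
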
2 changes: 1 addition & 1 deletion website/docs/faqs/Docs/document-all-columns.md
@@ -1,5 +1,5 @@
---
title: Do I need to add a yaml entry for column for it to appear in the docs site?
title: Do I need to add a YAML entry for column for it to appear in the docs site?
description: "All columns appear in your docs site"
sidebar_label: 'Types of columns included in doc site'
id: document-all-columns
2 changes: 1 addition & 1 deletion website/docs/faqs/Project/yaml-file-extension.md
@@ -1,5 +1,5 @@
---
title: Can I use a yaml file extension?
title: Can I use a YAML file extension?
description: "dbt will only search for files with a `.yml` file extension"
sidebar_label: '.yml file extension search'
id: yaml-file-extension
2 changes: 1 addition & 1 deletion website/docs/guides/legacy/getting-help.md
@@ -12,7 +12,7 @@ The docs site you're on is highly searchable, make sure to explore for the answe
We have a handy guide on [debugging errors](debugging-errors) to help out! This guide also helps explain why errors occur, and which docs you might need to search for help.

#### Search for answers using your favorite search engine
We're committed to making more errors searchable, so it's worth checking if there's a solution already out there! Further, some errors related to installing dbt, the SQL in your models, or getting yaml right, are errors that are not specific to dbt, so there may be other resources to check.
We're committed to making more errors searchable, so it's worth checking if there's a solution already out there! Further, some errors related to installing dbt, the SQL in your models, or getting YAML right, are errors that are not specific to dbt, so there may be other resources to check.

#### Experiment!
If the question you have is "What happens when I do `X`", try doing `X` and see what happens! Assuming you have a solid dev environment set up, making mistakes in development won't affect your end users
@@ -81,7 +81,7 @@ And (!): a first-ever entry point for [programmatic invocations](programmatic-in
Run `dbt --help` to see new & improved help documentation :)

### Quick hits
- The [`version: 2` top-level key](project-configs/version) is now **optional** in all yaml files. Also, the [`config-version: 2`](config-version) and `version:` top-level keys are now optional in `dbt_project.yml` files.
- The [`version: 2` top-level key](project-configs/version) is now **optional** in all YAML files. Also, the [`config-version: 2`](config-version) and `version:` top-level keys are now optional in `dbt_project.yml` files.
- [Events and logging](events-logging): Added `node_relation` (`database`, `schema`, `identifier`) to the `node_info` dictionary, available on node-specific events
- Support setting `--project-dir` via environment variable: [`DBT_PROJECT_DIR`](dbt_project.yml)
- More granular [configurations](/reference/global-configs) for logging (to set log format, log levels, and colorization) and cache population
@@ -29,7 +29,7 @@ dbt Core major version 1.0 includes a number of breaking changes! Wherever possi

The two **test types** are now "singular" and "generic" (instead of "data" and "schema", respectively). The `test_type:` selection method accepts `test_type:singular` and `test_type:generic`. (It will also accept `test_type:schema` and `test_type:data` for backwards compatibility.) **Not backwards compatible:** The `--data` and `--schema` flags to dbt test are no longer supported, and tests no longer have the tags `'data'` and `'schema'` automatically applied. Updated docs: [tests](/docs/build/tests), [test selection](test-selection-examples), [selection methods](node-selection/methods).

The `greedy` flag/property has been renamed to **`indirect_selection`**, which is now eager by default. **Note:** This reverts test selection to its pre-v0.20 behavior by default. `dbt test -s my_model` _will_ select multi-parent tests, such as `relationships`, that depend on unselected resources. To achieve the behavior change in v0.20 + v0.21, set `--indirect-selection=cautious` on the CLI or `indirect_selection: cautious` in yaml selectors. Updated docs: [test selection examples](test-selection-examples), [yaml selectors](yaml-selectors).
The `greedy` flag/property has been renamed to **`indirect_selection`**, which is now eager by default. **Note:** This reverts test selection to its pre-v0.20 behavior by default. `dbt test -s my_model` _will_ select multi-parent tests, such as `relationships`, that depend on unselected resources. To achieve the behavior change in v0.20 + v0.21, set `--indirect-selection=cautious` on the CLI or `indirect_selection: cautious` in YAML selectors. Updated docs: [test selection examples](test-selection-examples), [yaml selectors](yaml-selectors).
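
For illustration, a YAML selector that opts into the cautious behavior might look roughly like this (selector and model names are hypothetical):

```yaml
# selectors.yml (hypothetical selector and model names)
selectors:
  - name: my_model_cautious
    definition:
      method: fqn
      value: my_model
      # only include tests whose parents are all selected
      indirect_selection: cautious
```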

### Global macros

@@ -38,7 +38,7 @@ dbt Core v0.21 has reached the end of critical support. No new patch versions wi
- [Test selection examples](test-selection-examples) includes more discussion of indirect selection (a change in v0.20), and the optional "greedy" flag/property (new in v0.21), which you can optionally set to include tests that have a mix of selected + unselected parents

### Elsewhere in Core
- [Resource configs and properties](configs-and-properties) docs have been consolidated and reconciled. New `config` property that makes it possible to configure models, seeds, snapshots, and tests in all yaml files.
- [Resource configs and properties](configs-and-properties) docs have been consolidated and reconciled. New `config` property that makes it possible to configure models, seeds, snapshots, and tests in all YAML files.
- [Configuring incremental models](/docs/build/incremental-models): New optional configuration for incremental models, `on_schema_change`.
- [Environment variables](env_var): Add a log-scrubbing prefix, `DBT_ENV_SECRET_`
- [Test `where` config](where) has been reimplemented as a macro (`get_where_subquery`) that you can optionally reimplement, too
@@ -131,7 +131,7 @@ This configuration will work in dbt v0.17.0 when `config-version: 2` is used, bu

Support for version 1 will be removed in a future release of dbt.

### NativeEnvironment rendering for yaml fields
### NativeEnvironment rendering for YAML fields

In dbt v0.17.0, dbt enabled use of Jinja's Native Environment to render values in
YML files. This Native Environment coerces string values to their
@@ -7,9 +7,9 @@ This section shows a very basic example of linting a project every time a commit

The steps below use [SQLFluff](https://docs.sqlfluff.com/en/stable/) to scan your code and look for linting errors. In the example, it's set to use the `snowflake` dialect, and specifically runs the rules L019, L020, L021, and L022. This is purely for demonstration purposes. You should update this to reflect your code base's [dialect](https://docs.sqlfluff.com/en/stable/dialects.html) and the [rules](https://docs.sqlfluff.com/en/stable/rules.html) you've established for your repo.

### 1. Create a yaml file to define your pipeline
### 1. Create a YAML file to define your pipeline

The yaml files defined below are what tell your code hosting platform the steps to run. In this setup, you’re telling the platform to run a SQLFluff lint job every time a commit is pushed.
The YAML files defined below are what tell your code hosting platform the steps to run. In this setup, you’re telling the platform to run a SQLFluff lint job every time a commit is pushed.

<Tabs
defaultValue="github"
@@ -32,7 +32,7 @@ my_awesome_project

To define the job for our action, let’s add a new file named `lint_on_push.yml` under the `workflows` folder. This file is how we tell the GitHub runner what to execute when the job is triggered.

Below I touch on the important pieces for running a dbt Cloud job, but if you want a full run-down of all the components of this yaml file checkout [this GitHub article](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#understanding-the-workflow-file) on actions.
Below I touch on the important pieces for running a dbt Cloud job, but if you want a full run-down of all the components of this YAML file checkout [this GitHub article](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#understanding-the-workflow-file) on actions.
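
As a rough sketch only (the action versions, dialect, and rule list below are assumptions, not the guide's exact file), a `lint_on_push.yml` could look something like this:

```yaml
# .github/workflows/lint_on_push.yml (illustrative; versions, dialect, and rules are assumptions)
name: lint dbt project on push

on:
  push:
    branches-ignore:
      - 'main'

jobs:
  lint_project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      - name: Install SQLFluff
        run: pip install sqlfluff
      - name: Lint dbt models
        run: sqlfluff lint models --dialect snowflake --rules L019,L020,L021,L022
```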

**Key pieces:**

@@ -145,7 +145,7 @@ pipelines:

### 2. Commit and push your changes to make sure everything works

After you finish creating the yaml files, commit and push your code. Doing this will trigger your pipeline for the first time! If everything goes well, you should see the pipeline in your code platform. When you click into the job you’ll get a log showing that SQLFluff was run. If your code failed linting you’ll get an error in the job with a description of what needs to be fixed. If everything passed the lint check, you’ll see a successful job run.
After you finish creating the YAML files, commit and push your code. Doing this will trigger your pipeline for the first time! If everything goes well, you should see the pipeline in your code platform. When you click into the job you’ll get a log showing that SQLFluff was run. If your code failed linting you’ll get an error in the job with a description of what needs to be fixed. If everything passed the lint check, you’ll see a successful job run.

<Tabs
defaultValue="github"

0 comments on commit 283aba2
