Commit 8e5c0dc

docs(observe): update docs for remote executor, databricks (#10393)
1 parent 3ab4ec9 commit 8e5c0dc

File tree

5 files changed: +15 -15 lines changed


docs/managed-datahub/observe/assertions.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Assertions
 
 :::note Contract Monitoring Support
-Currently we support Snowflake, Databricks, Redshift, and BigQuery for out-of-the-box contract monitoring as part of Acryl Observe.
+Currently we support Snowflake, Redshift, BigQuery, and Databricks for out-of-the-box contract monitoring as part of Acryl Observe.
 :::
 
 An assertion is **a data quality test that finds data that violates a specified rule.**
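The doc defines an assertion as a data quality test that finds data violating a specified rule. Purely as an illustration of that idea (hypothetical helper names, not DataHub's implementation), the core loop is: apply a rule to each record and surface the ones that fail.

```python
# Illustrative sketch only: evaluate a rule against rows and collect violations.
from typing import Callable, Iterable

def run_assertion(rows: Iterable[dict], rule: Callable[[dict], bool]) -> list[dict]:
    """Return every row that violates the given rule."""
    return [row for row in rows if not rule(row)]

# Example rule: the 'amount' field must be non-negative.
violations = run_assertion(
    [{"amount": 10}, {"amount": -3}],
    rule=lambda row: row["amount"] >= 0,
)
print(violations)  # [{'amount': -3}]
```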

docs/managed-datahub/observe/column-assertions.md

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ import FeatureAvailability from '@site/src/components/FeatureAvailability';
 
 Can you remember a time when an important warehouse table column changed dramatically, with little or no notice? Perhaps the number of null values suddenly spiked, or a new value was added to a fixed set of possible values. If the answer is yes, how did you initially find out? We'll take a guess - someone looking at an internal reporting dashboard or worse, a user using your your product, sounded an alarm when a number looked a bit out of the ordinary.
 
-There are many reasons why important columns in your Snowflake, Redshift, or BigQuery tables may change - application code bugs, new feature rollouts, etc. Oftentimes, these changes break important assumptions made about the data used in building key downstream data products like reporting dashboards or data-driven product features.
+There are many reasons why important columns in your Snowflake, Redshift, BigQuery, or Databricks tables may change - application code bugs, new feature rollouts, etc. Oftentimes, these changes break important assumptions made about the data used in building key downstream data products like reporting dashboards or data-driven product features.
 
 What if you could reduce the time to detect these incidents, so that the people responsible for the data were made aware of data issues before anyone else? With Acryl DataHub Column Assertions, you can.
 
@@ -41,7 +41,7 @@ Note that an Ingestion Source _must_ be configured with the data platform of you
 Acryl DataHub's **Ingestion** tab.
 
 > Note that Column Assertions are not yet supported if you are connecting to your warehouse
-> using the DataHub CLI or a Remote Ingestion Executor.
+> using the DataHub CLI.
 
 ## What is a Column Assertion?
 
@@ -121,7 +121,7 @@ another always-increasing number - that can be used to find the "new rows" that
 `Edit Assertions` and `Edit Monitors` privileges for the entity. This is granted to Entity owners by default.
 
 2. **Data Platform Connection**: In order to create a Column Assertion, you'll need to have an **Ingestion Source**
-configured to your Data Platform: Snowflake, BigQuery, or Redshift under the **Ingestion** tab.
+configured to your Data Platform: Snowflake, BigQuery, Redshift, or Databricks under the **Ingestion** tab.
 
 Once these are in place, you're ready to create your Column Assertions!
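The motivation above calls out two concrete column-level failure modes: a sudden spike in null values and a new value appearing in a fixed set of possible values. Purely as an illustration (hypothetical helper names and thresholds, not DataHub code), those two checks amount to:

```python
# Illustrative sketch only: a null-fraction threshold plus an allowed-value set.
def null_fraction(values: list) -> float:
    """Fraction of values that are None."""
    return sum(v is None for v in values) / len(values) if values else 0.0

def column_assertion(values: list, max_null_fraction: float, allowed: set) -> bool:
    """Pass only if nulls stay under the threshold and every non-null value is allowed."""
    under_threshold = null_fraction(values) <= max_null_fraction
    values_allowed = all(v in allowed for v in values if v is not None)
    return under_threshold and values_allowed

statuses = ["PAID", "PENDING", None, "REFUNDED", "PAID"]
print(column_assertion(statuses, max_null_fraction=0.25, allowed={"PAID", "PENDING", "REFUNDED"}))  # True
```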

docs/managed-datahub/observe/custom-sql-assertions.md

Lines changed: 3 additions & 3 deletions
@@ -22,7 +22,7 @@ If the answer is yes, how did you find out? We'll take a guess - someone looking
 a number looked a bit out of the ordinary. Perhaps your table initially tracked purchases made on your company's e-commerce web store, but suddenly began to include purchases made
 through your company's new mobile app.
 
-There are many reasons why an important Table on Snowflake, Redshift, or BigQuery may change in its meaning - application code bugs, new feature rollouts,
+There are many reasons why an important Table on Snowflake, Redshift, BigQuery, or Databricks may change in its meaning - application code bugs, new feature rollouts,
 changes to key metric definitions, etc. Often times, these changes break important assumptions made about the data used in building key downstream data products
 like reporting dashboards or data-driven product features.
 
@@ -49,7 +49,7 @@ Note that an Ingestion Source _must_ be configured with the data platform of you
 tab.
 
 > Note that SQL Assertions are not yet supported if you are connecting to your warehouse
-> using the DataHub CLI or a Remote Ingestion Executor.
+> using the DataHub CLI.
 
 ## What is a Custom SQL Assertion?
 
@@ -120,7 +120,7 @@ The **Assertion Description**: This is a human-readable description of the Asser
 `Edit Assertions`, `Edit Monitors`, **and the additional `Edit SQL Assertion Monitors`** privileges for the entity. This is granted to Entity owners by default.
 
 2. **Data Platform Connection**: In order to create a Custom SQL Assertion, you'll need to have an **Ingestion Source** configured to your
-Data Platform: Snowflake, BigQuery, or Redshift under the **Integrations** tab.
+Data Platform: Snowflake, BigQuery, Redshift, or Databricks under the **Integrations** tab.
 
 Once these are in place, you're ready to create your Custom SQL Assertions!
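The general idea behind a Custom SQL Assertion - a user-defined query whose returned value is checked against a condition - can be sketched as follows. This is illustrative only: sqlite3 stands in for the warehouse connection, and the query and threshold are invented for the example.

```python
# Illustrative sketch only: run a user-defined query that returns one value,
# then compare that value against a condition.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (channel TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?)",
    [("web", 30.0), ("web", 12.5), ("mobile", 8.0)],
)

# Example condition: fewer than half of purchases should come from the mobile app.
sql = "SELECT 1.0 * SUM(channel = 'mobile') / COUNT(*) FROM purchases"
(mobile_share,) = conn.execute(sql).fetchone()
print(mobile_share, mobile_share < 0.5)  # 0.333..., True -> assertion passes
```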

docs/managed-datahub/observe/freshness-assertions.md

Lines changed: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ months without being updated with fresh data?
 
 Perhaps a bug had been introduced into an upstream Airflow DAG
 or worse, the person in charge of maintaining the Table has departed from your organization entirely.
-There are many reasons why an important Table on Snowflake, Redshift, or BigQuery may fail to be updated as often as expected.
+There are many reasons why an important Table on Snowflake, Redshift, BigQuery, or Databricks may fail to be updated as often as expected.
 
 What if you could reduce the time to detect these incidents, so that the people responsible for the data were made aware of data
 issues _before_ anyone else? What if you could communicate commitments about the freshness or change frequency
@@ -49,7 +49,7 @@ Note that an Ingestion Source _must_ be configured with the data platform of you
 tab.
 
 > Note that Freshness Assertions are not yet supported if you are connecting to your warehouse
-> using the DataHub CLI or a Remote Ingestion Executor.
+> using the DataHub CLI.
 
 ## What is a Freshness Assertion?
 
@@ -147,7 +147,7 @@ Freshness Assertions also have an off switch: they can be started or stopped at
 `Edit Assertions` and `Edit Monitors` privileges for the entity. This is granted to Entity owners by default.
 
 2. **Data Platform Connection**: In order to create a Freshness Assertion, you'll need to have an **Ingestion Source** configured to your
-Data Platform: Snowflake, BigQuery, or Redshift under the **Integrations** tab.
+Data Platform: Snowflake, BigQuery, Redshift, or Databricks under the **Integrations** tab.
 
 Once these are in place, you're ready to create your Freshness Assertions!
 
@@ -260,7 +260,7 @@ As part of the **Acryl Observe** module, Acryl DataHub also provides **Smart Ass
 dynamic, AI-powered Freshness Assertions that you can use to monitor the freshness of important warehouse Tables, without
 requiring any manual setup.
 
-If Acryl DataHub is able to detect a pattern in the change frequency of a Snowflake, Redshift, or BigQuery Table, you'll find
+If Acryl DataHub is able to detect a pattern in the change frequency of a Snowflake, Redshift, BigQuery, or Databricks Table, you'll find
 a recommended Smart Assertion under the `Validations` tab on the Table profile page:
 
 <p align="center">
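The doc frames a Freshness Assertion around a Table failing to be updated as often as expected. Purely as an illustration (hypothetical names, not DataHub code), that check boils down to comparing the time since the last change against an allowed staleness window:

```python
# Illustrative sketch only: the table's last change must be newer than the
# allowed staleness window.
from datetime import datetime, timedelta, timezone

def freshness_assertion(last_changed: datetime, max_staleness: timedelta) -> bool:
    """Pass if the table changed within the allowed window."""
    return datetime.now(timezone.utc) - last_changed <= max_staleness

last_changed = datetime.now(timezone.utc) - timedelta(hours=30)
print(freshness_assertion(last_changed, max_staleness=timedelta(hours=24)))  # False: stale
```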

docs/managed-datahub/observe/volume-assertions.md

Lines changed: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ If the answer is yes, how did you find out? We'll take a guess - someone looking
 a number looked a bit out of the ordinary. Perhaps your table initially tracked purchases made on your company's e-commerce web store, but suddenly began to include purchases made
 through your company's new mobile app.
 
-There are many reasons why an important Table on Snowflake, Redshift, or BigQuery may change in its meaning - application code bugs, new feature rollouts,
+There are many reasons why an important Table on Snowflake, Redshift, BigQuery, or Databricks may change in its meaning - application code bugs, new feature rollouts,
 changes to key metric definitions, etc. Often times, these changes break important assumptions made about the data used in building key downstream data products
 like reporting dashboards or data-driven product features.
 
@@ -50,7 +50,7 @@ Note that an Ingestion Source _must_ be configured with the data platform of you
 tab.
 
 > Note that Volume Assertions are not yet supported if you are connecting to your warehouse
-> using the DataHub CLI or a Remote Ingestion Executor.
+> using the DataHub CLI.
 
 ## What is a Volume Assertion?
 
@@ -140,7 +140,7 @@ Volume Assertions also have an off switch: they can be started or stopped at any
 `Edit Assertions` and `Edit Monitors` privileges for the entity. This is granted to Entity owners by default.
 
 2. **Data Platform Connection**: In order to create a Volume Assertion, you'll need to have an **Ingestion Source** configured to your
-Data Platform: Snowflake, BigQuery, or Redshift under the **Integrations** tab.
+Data Platform: Snowflake, BigQuery, Redshift, or Databricks under the **Integrations** tab.
 
 Once these are in place, you're ready to create your Volume Assertions!
 
@@ -238,7 +238,7 @@ As part of the **Acryl Observe** module, Acryl DataHub also provides **Smart Ass
 dynamic, AI-powered Volume Assertions that you can use to monitor the volume of important warehouse Tables, without
 requiring any manual setup.
 
-If Acryl DataHub is able to detect a pattern in the volume of a Snowflake, Redshift, or BigQuery Table, you'll find
+If Acryl DataHub is able to detect a pattern in the volume of a Snowflake, Redshift, BigQuery, or Databricks Table, you'll find
 a recommended Smart Assertion under the `Validations` tab on the Table profile page:
 
 <p align="center">
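The doc frames Volume Assertions around the row count of a Table behaving as expected. Purely as an illustration (hypothetical names and thresholds, not DataHub code), a basic volume check compares the current row count against an expected range and against the previous measurement:

```python
# Illustrative sketch only: row count must stay within an expected range and
# must not shrink relative to the previous measurement.
def volume_assertion(row_count: int, previous_count: int, min_rows: int, max_rows: int) -> bool:
    """Pass if the table is within bounds and has not lost rows."""
    return min_rows <= row_count <= max_rows and row_count >= previous_count

print(volume_assertion(row_count=9_500, previous_count=10_200, min_rows=1_000, max_rows=50_000))  # False: rows dropped
```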
