Acryl DataHub employs a push-based metadata ingestion model.
This approach comes with another benefit: security. By managing your own instance of the agent, you can keep the secrets and credentials within your walled garden. Skip uploading secrets & keys into a third-party cloud tool.
To push metadata into DataHub, Acryl provides an ingestion framework written in Python. Typically, push jobs are run on a schedule at an interval of your choosing. For our step-by-step guide on ingestion, click [here](../../metadata-ingestion/cli-ingestion.md).
Batch ingestion involves extracting metadata from a source system in bulk. Typically, this happens on a predefined schedule using the [Metadata Ingestion](../docs/components.md#ingestion-framework) framework.
The metadata that is extracted includes point-in-time instances of dataset, chart, dashboard, pipeline, user, group, usage, and task metadata.
Make sure you have installed the DataHub CLI before following this guide.

```shell
datahub version
# If you see "command not found", try running this instead:
python3 -m datahub version
```
Your command line should print the installed version of DataHub if these commands execute successfully.
Check out the [CLI Installation Guide](../docs/cli.md#installation) for more installation options and troubleshooting tips.
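If you haven't installed the CLI yet, a minimal installation looks like the following (a sketch assuming a working Python 3 and pip; see the guide above for prerequisites and alternatives):

```shell
# Install or upgrade the DataHub CLI from PyPI
python3 -m pip install --upgrade acryl-datahub
```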
## Installing Connector Plugins
Our CLI follows a plugin architecture. You must install connectors for different data sources individually.
For a list of all supported data sources, see [the open source docs](../docs/cli.md#sources).
Once you've found the connectors you care about, simply install them using `pip install`.
For example, to install the `mysql` connector, you can run:
```shell
pip install --upgrade 'acryl-datahub[mysql]'
```
Check out the [alternative installation options](../docs/cli.md#alternate-installation-options) for more details.
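If you're unsure which plugins are already available in your environment, the CLI can list them:

```shell
# Show the source and sink plugins the installed CLI can see
datahub check plugins
```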
## Configuring a Recipe
Create a [Recipe](recipe_overview.md) YAML file that defines the source and sink for metadata, as shown below.
```yaml
# example-recipe.yml

# MySQL source configuration
source:
  type: mysql
  config:
    username: root
    password: password
    host_port: localhost:3306

# Recipe sink configuration.
sink:
  type: "datahub-rest"
  config:
    server: "https://<your domain name>.acryl.io/gms"
    token: <Your API key>
```
The **source** configuration block defines where to extract metadata from. This can be an OLTP database system, a data warehouse, or something as simple as a file. Each source has custom configuration depending on what is required to access metadata from the source. To see configurations required for each supported source, refer to the [Sources](source_overview.md) documentation.
The **sink** configuration block defines where to push metadata into. Each sink type requires specific configurations, which are covered in the [Sinks](sink_overview.md) documentation.
To configure your instance of DataHub as the destination for ingestion, set the "server" field of your recipe to point to your Acryl instance's domain, suffixed by the path `/gms`, as shown in the example above. That recipe is a complete example: it reads from MySQL and writes into a DataHub instance.
For more information and examples on configuring recipes, please refer to [Recipes](recipe_overview.md).
### Using Recipes with Authentication
In Acryl DataHub deployments, only the `datahub-rest` sink is supported, which simply means that metadata will be pushed to the REST endpoints exposed by your DataHub instance. The required configurations for this sink are:
1. **server**: the location of the REST API exposed by your instance of DataHub
2. **token**: a unique API key used to authenticate requests to your instance's REST API
The token can be retrieved by logging in as an admin: go to the Settings page and generate a Personal Access Token with your desired expiration date.

:::caution
Please keep your API key secure and avoid sharing it.
If you are on Acryl Cloud and your key is compromised for any reason, please reach out to the Acryl team at [email protected].
:::
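Since recipes support environment variable expansion, one way to keep the API key (and other secrets) out of the recipe file itself is to reference variables instead of literal values. A minimal sketch, assuming the variables below are exported in the shell that runs the ingestion (the variable names are illustrative):

```yaml
# example-recipe.yml, with secrets resolved from environment variables
source:
  type: mysql
  config:
    username: ${MYSQL_USERNAME}
    password: ${MYSQL_PASSWORD}
    host_port: localhost:3306

sink:
  type: "datahub-rest"
  config:
    server: "https://<your domain name>.acryl.io/gms"
    token: ${DATAHUB_API_KEY}
```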
## Ingesting Metadata
The final step requires invoking the DataHub CLI to ingest metadata based on your recipe configuration file.
To do so, simply run `datahub ingest` with a pointer to your YAML recipe file:
```shell
datahub ingest -c <path/to/recipe.yml>
```
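To sanity-check a recipe before pushing anything, the CLI also offers dry-run and preview modes (flag availability may vary by CLI version):

```shell
# Run the full ingestion without writing anything to the sink
datahub ingest -c <path/to/recipe.yml> --dry-run

# Process only the first few events for a quick look at the output
datahub ingest -c <path/to/recipe.yml> --preview
```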
## Scheduling Ingestion
Ingestion can be run ad hoc by a system administrator or scheduled for repeated execution. Most commonly, ingestion runs on a daily cadence.
To schedule your ingestion job, we recommend using a job scheduler like [Apache Airflow](https://airflow.apache.org/). For simpler deployments, a cron job on an always-up machine can also work.
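For instance, a crontab entry along these lines would run ingestion daily at 5 AM (illustrative; it assumes the `datahub` CLI is on the machine's PATH):

```shell
# m h dom mon dow  command
0 5 * * * datahub ingest -c /path/to/recipe.yml
```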
Note that each source system will require a separate recipe file. This allows you to schedule ingestion from different sources independently or together.
Learn more about scheduling ingestion in the [Scheduling Ingestion Guide](/metadata-ingestion/schedule_docs/intro.md).
## Reference
Please refer to the following pages for advanced guides on CLI ingestion.
- [UI Ingestion Guide](../docs/ui-ingestion.md)
:::tip Compatibility
DataHub server uses a 3 digit versioning scheme, while the CLI uses a 4 digit scheme. For example, if you're using DataHub server version 0.10.0, you should use CLI version 0.10.0.x, where x is a patch version.
We do this because CLI releases happen at a much higher frequency than server releases, usually every few days vs. twice a month.
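If you want to keep the CLI on a particular server's release line, pip's wildcard pinning is one way to do it (a sketch reusing the 0.10.0 example above):

```shell
# Track the 0.10.0.x CLI release line to match a 0.10.0 server
python3 -m pip install --upgrade 'acryl-datahub==0.10.0.*'
```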
For ingestion sources, any breaking changes will be highlighted in the [release notes](../docs/how/updating-datahub.md). When fields are deprecated or otherwise changed, we will try to maintain backwards compatibility for two server releases, which is about 4-6 weeks. The CLI will also print warnings whenever deprecated options are used.
:::