@@ -52,19 +52,8 @@ class TabularDataset(datasets._ColumnNamesDataset):
 my_dataset = aiplatform.TabularDataset.create(
     display_name="my-dataset", gcs_source=['gs://path/to/my/dataset.csv'])
 ```
-
-The following code shows you how to create and import a tabular
-dataset in two distinct steps.
-
-```py
-my_dataset = aiplatform.TextDataset.create(
-    display_name="my-dataset")
-
-my_dataset.import(
-    gcs_source=['gs://path/to/my/dataset.csv']
-    import_schema_uri=aiplatform.schema.dataset.ioformat.text.multi_label_classification
-)
-```
+Unlike unstructured datasets, a tabular dataset is created and imported
+in a single step.
 
 If you create a tabular dataset with a pandas
 [`DataFrame`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html),
@@ -108,10 +97,11 @@ def create(
     Optional. The URI to one or more Google Cloud Storage buckets that contain
     your datasets. For example, `str: "gs://bucket/file.csv"` or
     `Sequence[str]: ["gs://bucket/file1.csv",
-    "gs://bucket/file2.csv"]`.
+    "gs://bucket/file2.csv"]`. Either `gcs_source` or `bq_source` must be specified.
 bq_source (str):
     Optional. The URI to a BigQuery table that's used as an input source. For
-    example, `bq://project.dataset.table_name`.
+    example, `bq://project.dataset.table_name`. Either `gcs_source`
+    or `bq_source` must be specified.
 project (str):
     Optional. The name of the Google Cloud project to which this
     `TabularDataset` is uploaded. This overrides the project that
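For context, a minimal sketch of the single-step flow this change documents, showing the two mutually exclusive sources described by the updated docstring (the project, bucket, and table names below are placeholders, not values from this commit):

```py
from google.cloud import aiplatform

# Placeholder project and location; substitute your own.
aiplatform.init(project="my-project", location="us-central1")

# Single-step creation from one or more CSV files in Cloud Storage.
gcs_dataset = aiplatform.TabularDataset.create(
    display_name="my-gcs-dataset",
    gcs_source=["gs://my-bucket/data/train.csv"],
)

# Equivalently, from a BigQuery table. Exactly one of gcs_source or
# bq_source is passed, since either one or the other must be specified.
bq_dataset = aiplatform.TabularDataset.create(
    display_name="my-bq-dataset",
    bq_source="bq://my-project.my_dataset.my_table",
)
```

No separate import call is needed for tabular data, which is why the two-step example is removed from the docstring.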