gcloud dataplex datascans update data-discovery

NAME
gcloud dataplex datascans update data-discovery - update a Dataplex data discovery scan job
SYNOPSIS
gcloud dataplex datascans update data-discovery (DATASCAN : --location=LOCATION) [--description=DESCRIPTION] [--display-name=DISPLAY_NAME] [--labels=[KEY=VALUE,…]] [--async | --validate-only] [--bigquery-publishing-connection=BIGQUERY_PUBLISHING_CONNECTION --bigquery-publishing-dataset-location=BIGQUERY_PUBLISHING_DATASET_LOCATION --bigquery-publishing-dataset-project=BIGQUERY_PUBLISHING_DATASET_PROJECT --bigquery-publishing-table-type=BIGQUERY_PUBLISHING_TABLE_TYPE --storage-exclude-patterns=[PATTERN,…] --storage-include-patterns=[PATTERN,…] --csv-delimiter=CSV_DELIMITER --csv-disable-type-inference=CSV_DISABLE_TYPE_INFERENCE --csv-encoding=CSV_ENCODING --csv-header-row-count=CSV_HEADER_ROW_COUNT --csv-quote-character=CSV_QUOTE_CHARACTER --json-disable-type-inference=JSON_DISABLE_TYPE_INFERENCE --json-encoding=JSON_ENCODING] [--on-demand=ON_DEMAND | --schedule=SCHEDULE] [GCLOUD_WIDE_FLAG …]
DESCRIPTION
Allows users to auto-discover BigQuery external and BigLake tables from underlying Cloud Storage buckets.
EXAMPLES
To update the description of a data discovery scan data-discovery-datascan in project test-project located in us-central1, run:
gcloud dataplex datascans update data-discovery data-discovery-datascan --project=test-project --location=us-central1 --description="Description is updated."
POSITIONAL ARGUMENTS
Datascan resource - Arguments and flags that define the Dataplex datascan you want to update a data discovery scan for. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

To set the project attribute:

  • provide the argument datascan on the command line with a fully specified name;
  • provide the argument --project on the command line;
  • set the property core/project.

This must be specified.

DATASCAN
ID of the datascan or fully qualified identifier for the datascan.

To set the dataScans attribute:

  • provide the argument datascan on the command line.

This positional argument must be specified if any of the other arguments in this group are specified.

--location=LOCATION
The location of the Dataplex resource.

To set the location attribute:

  • provide the argument datascan on the command line with a fully specified name;
  • provide the argument --location on the command line;
  • set the property dataplex/location.
FLAGS
--description=DESCRIPTION
Description of the data discovery scan.
--display-name=DISPLAY_NAME
Display name of the data discovery scan.
--labels=[KEY=VALUE,…]
List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
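
For example, to replace the labels on a scan (the scan name, project, location, and label values below are illustrative):

  gcloud dataplex datascans update data-discovery data-discovery-datascan \
    --project=test-project --location=us-central1 \
    --labels=env=prod,team=analytics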

At most one of --async or --validate-only can be specified:

--async
Return immediately, without waiting for the operation in progress to complete.
--validate-only
Validate the update action, but don't actually perform it.
Data spec for the data discovery scan.
BigQuery publishing config arguments for the data discovery scan.
--bigquery-publishing-connection=BIGQUERY_PUBLISHING_CONNECTION
BigQuery connection to use for auto-discovering the cloud resource bucket as BigLake tables. A connection is required for the BIGLAKE BigQuery publishing table type.
--bigquery-publishing-dataset-location=BIGQUERY_PUBLISHING_DATASET_LOCATION
The location of the BigQuery dataset to publish BigLake external or non-BigLake external tables to. If not specified, the dataset location will be set to the location of the data source resource. Refer to https://cloud.google.com/bigquery/docs/locations#supportedLocations for supported locations.
--bigquery-publishing-dataset-project=BIGQUERY_PUBLISHING_DATASET_PROJECT
The project of the BigQuery dataset to publish BigLake external or non-BigLake external tables to. If not specified, the cloud resource bucket project will be used to create the dataset. The format is "projects/{project_id_or_number}".
--bigquery-publishing-table-type=BIGQUERY_PUBLISHING_TABLE_TYPE
BigQuery table type to use for tables discovered in the cloud resource bucket. Can be either EXTERNAL or BIGLAKE. If not specified, the table type will be set to EXTERNAL.
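
For example, to publish discovered tables as BigLake tables through a connection (the resource names below are illustrative; the connection identifier follows the usual projects/…/locations/…/connections/… form):

  gcloud dataplex datascans update data-discovery data-discovery-datascan \
    --project=test-project --location=us-central1 \
    --bigquery-publishing-table-type=BIGLAKE \
    --bigquery-publishing-connection=projects/test-project/locations/us-central1/connections/my-connection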
Storage config arguments for the data discovery scan.
--storage-exclude-patterns=[PATTERN,…]
List of patterns that identify the data to exclude during discovery. These patterns are interpreted as glob patterns used to match object names in the Cloud Storage bucket. Exclude patterns will be applied before include patterns.
--storage-include-patterns=[PATTERN,…]
List of patterns that identify the data to include during discovery when only a subset of the data should be considered. These patterns are interpreted as glob patterns used to match object names in the Cloud Storage bucket.
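
For example, to restrict discovery to CSV objects while skipping anything under a temporary prefix (the patterns below are illustrative glob expressions):

  gcloud dataplex datascans update data-discovery data-discovery-datascan \
    --project=test-project --location=us-central1 \
    --storage-include-patterns="*.csv" \
    --storage-exclude-patterns="tmp/*"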
CSV options arguments for the data discovery scan.
--csv-delimiter=CSV_DELIMITER
Delimiter used to separate values in the CSV file. If not specified, the delimiter will be set to comma (",").
--csv-disable-type-inference=CSV_DISABLE_TYPE_INFERENCE
Whether to disable the inference of data types for CSV data. If true, all columns are registered as strings.
--csv-encoding=CSV_ENCODING
Character encoding of the CSV file. If not specified, the encoding will be set to UTF-8.
--csv-header-row-count=CSV_HEADER_ROW_COUNT
The number of rows to interpret as header rows that should be skipped when reading data rows. The default value is 1.
--csv-quote-character=CSV_QUOTE_CHARACTER
The character used to quote column values. Accepts " (double quotation mark) or ' (single quotation mark). If unspecified, defaults to " (double quotation mark).
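
For example, to scan semicolon-delimited files that carry two header rows, registering every column as a string (the values below are illustrative):

  gcloud dataplex datascans update data-discovery data-discovery-datascan \
    --project=test-project --location=us-central1 \
    --csv-delimiter=";" --csv-header-row-count=2 \
    --csv-disable-type-inference=true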
JSON options arguments for the data discovery scan.
--json-disable-type-inference=JSON_DISABLE_TYPE_INFERENCE
Whether to disable the inference of data types for JSON data. If true, all columns are registered as strings.
--json-encoding=JSON_ENCODING
Character encoding of the JSON file. If not specified, the encoding will be set to UTF-8.
Data discovery scan execution settings.
Data discovery scan scheduling and trigger settings.

At most one of these can be specified:

--on-demand=ON_DEMAND
If set, the scan runs once, shortly after the data discovery scan is updated.
--schedule=SCHEDULE
Cron schedule (https://en.wikipedia.org/wiki/Cron) for running scans periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for RECURRING scans.
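
For example, to run the scan every day at 06:00 New York time (the scan name, project, location, and schedule below are illustrative):

  gcloud dataplex datascans update data-discovery data-discovery-datascan \
    --project=test-project --location=us-central1 \
    --schedule="CRON_TZ=America/New_York 0 6 * * *"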
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This variant is also available:
gcloud alpha dataplex datascans update data-discovery

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-06-17 UTC.