gcloud dataplex zones create

NAME
gcloud dataplex zones create - create a zone
SYNOPSIS
gcloud dataplex zones create (ZONE : --lake=LAKE --location=LOCATION) --resource-location-type=RESOURCE_LOCATION_TYPE --type=TYPE [--async] [--description=DESCRIPTION] [--display-name=DISPLAY_NAME] [--labels=[KEY=VALUE,…]] [--validate-only] [--[no-]discovery-enabled --discovery-exclude-patterns=[EXCLUDE_PATTERNS,…] --discovery-include-patterns=[INCLUDE_PATTERNS,…] --discovery-schedule=DISCOVERY_SCHEDULE --csv-delimiter=CSV_DELIMITER --[no-]csv-disable-type-inference --csv-encoding=CSV_ENCODING --csv-header-rows=CSV_HEADER_ROWS --[no-]json-disable-type-inference --json-encoding=JSON_ENCODING] [GCLOUD_WIDE_FLAG …]
DESCRIPTION
A zone represents a logical group of related assets within a lake. A zone can be used to map to organizational structure or represent stages of data readiness from raw to curated. It provides managing behavior that is shared or inherited by all contained assets.

The Zone ID is used to generate names such as database and dataset names when publishing metadata to Hive Metastore and BigQuery.

  • Must contain only lowercase letters, numbers, and hyphens.
  • Must start with a letter.
  • Must end with a number or a letter.
  • Must be between 1 and 63 characters long.
  • Must be unique across all lakes from all locations in a project.
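The naming rules above can be checked locally before calling the API. A minimal sketch using a POSIX extended regular expression; the pattern is an assumption derived from the rules listed here, not part of gcloud itself:

```shell
# Assumed pattern derived from the zone ID rules above (not gcloud's own
# validation): starts with a lowercase letter, contains only lowercase
# letters, numbers, and hyphens, ends with a letter or number, 1-63 chars.
zone_id_ok() {
  printf '%s' "$1" | grep -qE '^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$'
}

zone_id_ok "test-zone"  && echo "test-zone: valid"
zone_id_ok "1-bad-zone" || echo "1-bad-zone: invalid (must start with a letter)"
zone_id_ok "bad-"       || echo "bad-: invalid (must end with a letter or number)"
```

Note that the uniqueness rule (unique across all lakes in all locations of a project) cannot be checked locally; the service enforces it at creation time.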
EXAMPLES
To create a Dataplex zone with name test-zone within lake test-lake in location us-central1 with type RAW and resource location type SINGLE_REGION, run:
gcloud dataplex zones create test-zone --location=us-central1 --lake=test-lake --resource-location-type=SINGLE_REGION --type=RAW

To create a Dataplex zone with name test-zone within lake test-lake in location us-central1 with type RAW and resource location type SINGLE_REGION, with discovery enabled and discovery schedule 0 * * * *, run:

gcloud dataplex zones create test-zone --location=us-central1 --lake=test-lake --resource-location-type=SINGLE_REGION --type=RAW --discovery-enabled --discovery-schedule="0 * * * *"
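The optional flags described below can be combined in one invocation. A hedged sketch (the zone, lake, label, and description values are illustrative, not from this reference) creating a curated zone asynchronously with a display name and labels:

```shell
gcloud dataplex zones create test-zone \
  --location=us-central1 --lake=test-lake \
  --resource-location-type=SINGLE_REGION --type=CURATED \
  --display-name="Test Zone" --description="Curated analytics zone" \
  --labels=env=dev,team=data \
  --async
```

Adding --validate-only instead of --async checks the request without creating the zone.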
POSITIONAL ARGUMENTS
Zones resource - Arguments and flags that define the Dataplex zone you want to create. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

To set the project attribute:

  • provide the argument zone on the command line with a fully specified name;
  • provide the argument --project on the command line;
  • set the property core/project.

This must be specified.

ZONE
ID of the zone or fully qualified identifier for the zone.

To set the zone attribute:

  • provide the argument zone on the command line.

This positional argument must be specified if any of the other arguments in this group are specified.

--lake=LAKE
The identifier of the Dataplex lake resource.

To set the lake attribute:

  • provide the argument zone on the command line with a fully specified name;
  • provide the argument --lake on the command line.
--location=LOCATION
The location of the Dataplex resource.

To set the location attribute:

  • provide the argument zone on the command line with a fully specified name;
  • provide the argument --location on the command line;
  • set the property dataplex/location.
REQUIRED FLAGS
Settings for resources attached as assets within a zone.

This must be specified.

--resource-location-type=RESOURCE_LOCATION_TYPE
Location type of the resources attached to a zone. RESOURCE_LOCATION_TYPE must be one of:
MULTI_REGION
Resources that are associated with a multi-region location.
SINGLE_REGION
Resources that are associated with a single region.
--type=TYPE
The type of the zone. TYPE must be one of:
CURATED
A zone that contains data that is considered to be ready for broader consumption and analytics workloads. Curated structured data stored in Cloud Storage must conform to certain file formats (Parquet, Avro, and ORC) and be organized in a Hive-compatible directory layout.
RAW
A zone that contains data that needs further processing before it is considered generally ready for consumption and analytics workloads.
OPTIONAL FLAGS
--async
Return immediately, without waiting for the operation in progress to complete.
--description=DESCRIPTION
Description of the zone.
--display-name=DISPLAY_NAME
Display name of the zone.
--labels=[KEY=VALUE,…]
List of label KEY=VALUE pairs to add.

Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
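These key and value rules can likewise be checked locally before submitting. A sketch with patterns inferred from the rules above (assumed, not gcloud's own validation; the sample labels are illustrative):

```shell
# Assumed patterns inferred from the label rules above (not from gcloud):
# keys start with a lowercase letter; keys and values contain only
# hyphens, underscores, lowercase letters, and numbers.
label_key_ok()   { printf '%s' "$1" | grep -qE '^[a-z][a-z0-9_-]*$'; }
label_value_ok() { printf '%s' "$1" | grep -qE '^[a-z0-9_-]*$'; }

label_key_ok "env"     && echo "key 'env': ok"
label_key_ok "Env"     || echo "key 'Env': rejected (uppercase)"
label_value_ok "dev-1" && echo "value 'dev-1': ok"
```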

--validate-only
Validate the create action, but don't actually perform it.
Settings to manage the metadata discovery and publishing.
--[no-]discovery-enabled
Whether discovery is enabled. Use --discovery-enabled to enable and --no-discovery-enabled to disable.
--discovery-exclude-patterns=[EXCLUDE_PATTERNS,…]
The list of patterns to apply for selecting data to exclude during discovery. For Cloud Storage bucket assets, these are interpreted as glob patterns used to match object names. For BigQuery dataset assets, these are interpreted as patterns to match table names.
--discovery-include-patterns=[INCLUDE_PATTERNS,…]
The list of patterns to apply for selecting data to include during discovery if only a subset of the data should be considered. For Cloud Storage bucket assets, these are interpreted as glob patterns used to match object names. For BigQuery dataset assets, these are interpreted as patterns to match table names.
Determines when discovery jobs are triggered.
--discovery-schedule=DISCOVERY_SCHEDULE
Cron schedule for running discovery jobs periodically. Discovery jobs must be scheduled at least 30 minutes apart.
Describe data formats.
Describe CSV and similar semi-structured data formats.
--csv-delimiter=CSV_DELIMITER
The delimiter being used to separate values. This defaults to ','.
--[no-]csv-disable-type-inference
Whether to disable the inference of data type for CSV data. If true, all columns will be registered as strings. Use --csv-disable-type-inference to enable and --no-csv-disable-type-inference to disable.
--csv-encoding=CSV_ENCODING
The character encoding of the data. The default is UTF-8.
--csv-header-rows=CSV_HEADER_ROWS
The number of rows to interpret as header rows that should be skipped whenreading data rows.
Describe JSON data format.
--[no-]json-disable-type-inference
Whether to disable the inference of data type for JSON data. If true, all columns will be registered as their primitive types (string, number, or boolean). Use --json-disable-type-inference to enable and --no-json-disable-type-inference to disable.
--json-encoding=JSON_ENCODING
The character encoding of the data. The default is UTF-8.
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This variant is also available:
gcloud alpha dataplex zones create

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-05-07 UTC.