gcloud beta datastream streams update
- NAME
- gcloud beta datastream streams update - updates a Datastream stream
- SYNOPSIS
gcloud beta datastream streams update (STREAM : --location=LOCATION) [--display-name=DISPLAY_NAME] [--rule-sets=RULE_SETS] [--state=STATE] [--update-labels=[KEY=VALUE,…]] [--update-mask=UPDATE_MASK] [--backfill-none | --backfill-all --mongodb-excluded-objects=MONGODB_EXCLUDED_OBJECTS | --mysql-excluded-objects=MYSQL_EXCLUDED_OBJECTS | --oracle-excluded-objects=ORACLE_EXCLUDED_OBJECTS | --postgresql-excluded-objects=POSTGRESQL_EXCLUDED_OBJECTS | --salesforce-excluded-objects=SALESFORCE_EXCLUDED_OBJECTS | --spanner-excluded-objects=SPANNER_EXCLUDED_OBJECTS | --sqlserver-excluded-objects=SQLSERVER_EXCLUDED_OBJECTS] [--clear-labels | --remove-labels=[KEY,…]] [--destination-name=DESTINATION_NAME --bigquery-destination-config=BIGQUERY_DESTINATION_CONFIG | --gcs-destination-config=GCS_DESTINATION_CONFIG] [--force | --validate-only] [--source-name=SOURCE_NAME --mongodb-source-config=MONGODB_SOURCE_CONFIG | --mysql-source-config=MYSQL_SOURCE_CONFIG | --oracle-source-config=ORACLE_SOURCE_CONFIG | --postgresql-source-config=POSTGRESQL_SOURCE_CONFIG | --salesforce-source-config=SALESFORCE_SOURCE_CONFIG | --spanner-source-config=SPANNER_SOURCE_CONFIG | --sqlserver-source-config=SQLSERVER_SOURCE_CONFIG] [GCLOUD_WIDE_FLAG …]
- DESCRIPTION
(BETA) (DEPRECATED) The Datastream beta version is deprecated. Please use the `gcloud datastream streams update` command instead.
Update a Datastream stream. If successful, the response body contains a newly created instance of Operation. To get the operation result, call: describe OPERATION
- EXAMPLES
- To update a stream with a new source and new display name:
gcloud beta datastream streams update STREAM --location=us-central1 --display-name=my-stream --source-name=source --update-mask=display_name,source_name
- To update a stream's state to RUNNING:
gcloud beta datastream streams update STREAM --location=us-central1 --state=RUNNING --update-mask=state
- To update a stream's oracle source config:
gcloud beta datastream streams update STREAM --location=us-central1 --oracle-source-config=good_oracle_cp.json --update-mask=oracle_source_config.allowlist
- POSITIONAL ARGUMENTS
- Stream resource - The stream to update. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.
To set the project attribute:
- provide the argument stream on the command line with a fully specified name;
- provide the argument --project on the command line;
- set the property core/project.
This must be specified.
STREAM - ID of the stream or fully qualified identifier for the stream.
To set the stream attribute:
- provide the argument stream on the command line.
This positional argument must be specified if any of the other arguments in this group are specified.
--location=LOCATION - The Cloud location for the stream.
To set the location attribute:
- provide the argument stream on the command line with a fully specified name;
- provide the argument --location on the command line.
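For example, the same stream can be identified either by its ID together with --location, or by its fully qualified resource name (the project, location, and stream names shown here are illustrative):
gcloud beta datastream streams update my-stream --location=us-central1 --display-name=my-stream --update-mask=display_name
gcloud beta datastream streams update projects/my-project/locations/us-central1/streams/my-stream --display-name=my-stream --update-mask=display_name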
- FLAGS
--display-name=DISPLAY_NAME - Friendly name for the stream.
--rule-sets=RULE_SETS - Path to a JSON file containing a list of rule sets to be applied to the stream.
The JSON file is formatted as follows, with camelCase field naming:
[{"objectFilter": {"sourceObjectIdentifier": {"oracleIdentifier": {"schema": "schema1", "table": "table1"}}}, "customizationRules": [{"bigqueryClustering": {"columns": ["COL_A"]}}]}, {"objectFilter": {"sourceObjectIdentifier": {"oracleIdentifier": {"schema": "schema2", "table": "table2"}}}, "customizationRules": [{"bigqueryPartitioning": {"timeUnitPartition": {"column": "TIME_COL", "partitioningTimeGranularity": "PARTITIONING_TIME_GRANULARITY_DAY"}}}]}]
--state=STATE - Stream state, can be set to: "RUNNING" or "PAUSED".
--update-labels=[KEY=VALUE,…] - List of label KEY=VALUE pairs to update. If a label exists, its value is modified. Otherwise, a new label is created.
Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
--update-mask=UPDATE_MASK - Used to specify the fields to be overwritten in the stream resource by the update. If the update mask is used, then a field will be overwritten only if it is in the mask. If the user does not provide a mask then all fields will be overwritten. This is a comma-separated list of fully qualified names of fields, written as snake_case or camelCase. Example: "display_name,source_config.oracle_source_config".
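For example, to add or update two labels on a stream (the key-value pairs shown are illustrative):
gcloud beta datastream streams update STREAM --location=us-central1 --update-labels=env=prod,team=data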
- At most one of these can be specified:
--backfill-none - Do not automatically backfill any objects. This flag is equivalent to selecting the Manual backfill type in the Google Cloud console.
- Or at least one of these can be specified:
--backfill-all - Automatically backfill objects included in the stream source configuration. Specific objects can be excluded. This flag is equivalent to selecting the Automatic backfill type in the Google Cloud console.
- At most one of these can be specified:
--mongodb-excluded-objects=MONGODB_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the MongoDB data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"databases": [{"database": "sample_database", "collections": [{"collection": "sample_collection", "fields": [{"field": "sample_field"}]}]}]}
--mysql-excluded-objects=MYSQL_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the MySQL data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"mysqlDatabases": [{"database": "sample_database", "mysqlTables": [{"table": "sample_table", "mysqlColumns": [{"column": "sample_column"}]}]}]}
--oracle-excluded-objects=ORACLE_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the Oracle data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"oracleSchemas": [{"schema": "SAMPLE", "oracleTables": [{"table": "SAMPLE_TABLE", "oracleColumns": [{"column": "COL"}]}]}]}
--postgresql-excluded-objects=POSTGRESQL_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the PostgreSQL data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"postgresqlSchemas": [{"schema": "SAMPLE", "postgresqlTables": [{"table": "SAMPLE_TABLE", "postgresqlColumns": [{"column": "COL"}]}]}]}
--salesforce-excluded-objects=SALESFORCE_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the Salesforce data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"objects": [{"objectName": "SAMPLE"}, {"objectName": "SAMPLE2"}]}
--spanner-excluded-objects=SPANNER_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the Spanner data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"schemas": [{"schema": "SAMPLE_SCHEMA", "tables": [{"table": "SAMPLE_TABLE", "columns": [{"column": "SAMPLE_COLUMN"}]}]}]}
--sqlserver-excluded-objects=SQLSERVER_EXCLUDED_OBJECTS - Path to a YAML (or JSON) file containing the SQL Server data sources to avoid backfilling.
The JSON file is formatted as follows, with camelCase field naming:
{"schemas": [{"schema": "SAMPLE", "tables": [{"table": "SAMPLE_TABLE", "columns": [{"column": "COL"}]}]}]}
- At most one of these can be specified:
--clear-labels - Remove all labels. If --update-labels is also specified then --clear-labels is applied first.
For example, to remove all labels:
gcloud beta datastream streams update --clear-labels
To remove all existing labels and create two new labels, foo and baz:
gcloud beta datastream streams update --clear-labels --update-labels foo=bar,baz=qux
--remove-labels=[KEY,…] - List of label keys to remove. If a label does not exist it is silently ignored. If --update-labels is also specified then --update-labels is applied first.
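For example, to remove the labels env and team from a stream if they are present (the key names are illustrative):
gcloud beta datastream streams update STREAM --location=us-central1 --remove-labels=env,team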
- Connection profile resource - Resource ID of the destination connection profile. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.
To set the project attribute:
- provide the argument --destination-name on the command line with a fully specified name;
- provide the argument --project on the command line;
- set the property core/project.
To set the location attribute:
- provide the argument --destination-name on the command line with a fully specified name;
- provide the argument --location on the command line.
--destination-name=DESTINATION_NAME - ID of the connection_profile or fully qualified identifier for the connection_profile.
To set the connection_profile attribute:
- provide the argument --destination-name on the command line.
- At most one of these can be specified:
--bigquery-destination-config=BIGQUERY_DESTINATION_CONFIG - Path to a YAML (or JSON) file containing the configuration for Google BigQuery Destination Config.
The YAML (or JSON) file should be formatted as follows:
BigQuery configuration with source hierarchy datasets and merge mode (merge mode is the default):
{"sourceHierarchyDatasets": {"datasetTemplate": {"location": "us-central1", "datasetIdPrefix": "my_prefix", "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"}}, "merge": {}, "dataFreshness": "3600s"}
BigQuery configuration with source hierarchy datasets and append-only mode:
{"sourceHierarchyDatasets": {"datasetTemplate": {"location": "us-central1", "datasetIdPrefix": "my_prefix", "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"}}, "appendOnly": {}}
BigQuery configuration with single target dataset and merge mode:
{"singleTargetDataset": {"datasetId": "projectId:my_dataset"}, "merge": {}, "dataFreshness": "3600s"}
BigQuery configuration with BigLake table configuration:
{"singleTargetDataset": {"datasetId": "projectId:datasetId"}, "appendOnly": {}, "blmtConfig": {"bucket": "bucketName", "tableFormat": "ICEBERG", "fileFormat": "PARQUET", "connectionName": "projectId.region.connectionName", "rootPath": "/root"}}
--gcs-destination-config=GCS_DESTINATION_CONFIG - Path to a YAML (or JSON) file containing the configuration for Google Cloud Storage Destination Config.
The JSON file is formatted as follows:
{"path": "some/path", "fileRotationMb": 5, "fileRotationInterval": "15s", "avroFileFormat": {}}
- At most one of these can be specified:
--force - Update the stream without validating it.
--validate-only - Only validate the stream, but do not update any resources. The default is false.
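For example, to check that a display-name change would pass validation without applying it:
gcloud beta datastream streams update STREAM --location=us-central1 --display-name=my-stream --update-mask=display_name --validate-only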
- Connection profile resource - Resource ID of the source connection profile. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.
To set the project attribute:
- provide the argument --source-name on the command line with a fully specified name;
- provide the argument --project on the command line;
- set the property core/project.
To set the location attribute:
- provide the argument --source-name on the command line with a fully specified name;
- provide the argument --location on the command line.
--source-name=SOURCE_NAME - ID of the connection_profile or fully qualified identifier for the connection_profile.
To set the connection_profile attribute:
- provide the argument --source-name on the command line.
- At most one of these can be specified:
--mongodb-source-config=MONGODB_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for MongoDB Source Config.
The JSON file is formatted as follows, with snake_case field naming:
{"includeObjects": {}, "excludeObjects": {"databases": [{"database": "sampleDb", "collections": [{"collection": "sampleCollection", "fields": [{"field": "SAMPLE_FIELD"}]}]}]}}
--mysql-source-config=MYSQL_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for MySQL Source Config.
The JSON file is formatted as follows, with snake_case field naming:
{"allowlist": {}, "rejectlist": {"mysql_databases": [{"database_name": "sample_database", "mysql_tables": [{"table_name": "sample_table", "mysql_columns": [{"column_name": "sample_column"}]}]}]}}
--oracle-source-config=ORACLE_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for Oracle Source Config.
The JSON file is formatted as follows, with snake_case field naming:
{"allowlist": {}, "rejectlist": {"oracle_schemas": [{"schema_name": "SAMPLE", "oracle_tables": [{"table_name": "SAMPLE_TABLE", "oracle_columns": [{"column_name": "COL"}]}]}]}}
--postgresql-source-config=POSTGRESQL_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for PostgreSQL Source Config.
The JSON file is formatted as follows, with camelCase field naming:
{"includeObjects": {}, "excludeObjects": {"postgresqlSchemas": [{"schema": "SAMPLE", "postgresqlTables": [{"table": "SAMPLE_TABLE", "postgresqlColumns": [{"column": "COL"}]}]}]}, "replicationSlot": "SAMPLE_REPLICATION_SLOT", "publication": "SAMPLE_PUBLICATION"}
--salesforce-source-config=SALESFORCE_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for Salesforce Source Config.
The JSON file is formatted as follows, with camelCase field naming:
{"pollingInterval": "3000s", "includeObjects": {}, "excludeObjects": {"objects": [{"objectName": "SAMPLE", "fields": [{"fieldName": "SAMPLE_FIELD"}]}]}}
--spanner-source-config=SPANNER_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for Spanner Source Config.
The JSON file is formatted as follows, with camelCase field naming:
{"includeObjects": {}, "excludeObjects": {"schemas": [{"schema": "SAMPLE", "tables": [{"table": "SAMPLE_TABLE", "columns": [{"column": "COL"}]}]}]}, "maxConcurrentCdcTasks": 1000, "maxConcurrentBackfillTasks": 10, "backfillDataBoostEnabled": false, "fgacRole": "SAMPLE_FGAC_ROLE", "spannerRpcPriority": "MEDIUM"}
--sqlserver-source-config=SQLSERVER_SOURCE_CONFIG - Path to a YAML (or JSON) file containing the configuration for SQL Server Source Config.
The JSON file is formatted as follows, with camelCase field naming:
{"includeObjects": {}, "excludeObjects": {"schemas": [{"schema": "SAMPLE", "tables": [{"table": "SAMPLE_TABLE", "columns": [{"column": "COL"}]}]}]}, "maxConcurrentCdcTasks": 2, "maxConcurrentBackfillTasks": 10, "transactionLogs": {}}
(The "transactionLogs": {} field can be replaced with "changeTables": {} to use change tables instead.)
- GCLOUD WIDE FLAGS
- These flags are available to all commands:
--access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.
Run $ gcloud help for details.
- NOTES
- This command is currently in beta and might change without notice. This variant is also available:
gcloud datastream streams update