gcloud beta dataflow yaml run

NAME
gcloud beta dataflow yaml run - runs a job from the specified path
SYNOPSIS
gcloud beta dataflow yaml run JOB_NAME
    (--yaml-pipeline=YAML_PIPELINE | --yaml-pipeline-file=YAML_PIPELINE_FILE)
    [--additional-experiments=[ADDITIONAL_EXPERIMENTS,…]]
    [--additional-pipeline-options=[ADDITIONAL_PIPELINE_OPTIONS,…]]
    [--additional-user-labels=[ADDITIONAL_USER_LABELS,…]]
    [--dataflow-kms-key=DATAFLOW_KMS_KEY]
    [--disable-public-ips]
    [--enable-streaming-engine]
    [--jinja-variables=JSON_OBJECT]
    [--launcher-machine-type=LAUNCHER_MACHINE_TYPE]
    [--max-workers=MAX_WORKERS]
    [--network=NETWORK]
    [--num-workers=NUM_WORKERS]
    [--pipeline-options=[OPTIONS=VALUE;OPTION=VALUE,…]]
    [--region=REGION_ID]
    [--service-account-email=SERVICE_ACCOUNT_EMAIL]
    [--staging-location=STAGING_LOCATION]
    [--subnetwork=SUBNETWORK]
    [--temp-location=TEMP_LOCATION]
    [--template-file-gcs-location=TEMPLATE_FILE_GCS_LOCATION]
    [--worker-machine-type=WORKER_MACHINE_TYPE]
    [[--[no-]update : --transform-name-mappings=[TRANSFORM_NAME_MAPPINGS,…]]]
    [GCLOUD_WIDE_FLAG …]
DESCRIPTION
(BETA) Runs a job from the specified YAML description or Cloud Storage path.
EXAMPLES
To run a job from YAML, run:
gcloud beta dataflow yaml run my-job --yaml-pipeline-file=gs://yaml-path --region=europe-west1
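The pipeline definition can also be supplied inline with --yaml-pipeline instead of from a file. A minimal sketch, assuming Beam YAML transform types such as ReadFromText (the bucket, job, and path names below are illustrative):

```sh
gcloud beta dataflow yaml run my-inline-job \
  --yaml-pipeline='
pipeline:
  transforms:
    - type: ReadFromText
      config:
        path: gs://my-bucket/input.txt
' \
  --region=us-central1
```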
POSITIONAL ARGUMENTS
JOB_NAME
Unique name to assign to the job.
REQUIRED FLAGS
Exactly one of these must be specified:
--yaml-pipeline=YAML_PIPELINE
Inline definition of the YAML pipeline to run.
--yaml-pipeline-file=YAML_PIPELINE_FILE
Path of a file defining the YAML pipeline to run. (Must be a local file or a URL beginning with 'gs://'.)
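For reference, the file passed to --yaml-pipeline-file contains a YAML pipeline definition. A minimal sketch, assuming Beam YAML conventions (transform types, names, and paths below are illustrative; consult the Beam YAML documentation for the supported transforms):

```yaml
# pipeline.yaml -- a minimal read/write pipeline (illustrative)
pipeline:
  transforms:
    - type: ReadFromText
      name: Read
      config:
        path: gs://my-bucket/input/*.txt
    - type: WriteToText
      name: Write
      input: Read
      config:
        path: gs://my-bucket/output/result
```

Such a file can be passed locally (--yaml-pipeline-file=pipeline.yaml) or from Cloud Storage (--yaml-pipeline-file=gs://my-bucket/pipeline.yaml).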
OPTIONAL FLAGS
--additional-experiments=[ADDITIONAL_EXPERIMENTS,…]
Additional experiments to pass to the job. Example: --additional-experiments=experiment1,experiment2=value2
--additional-pipeline-options=[ADDITIONAL_PIPELINE_OPTIONS,…]
Additional pipeline options to pass to the job. Example: --additional-pipeline-options=option1=value1,option2=value2. For a list of available options, see the Dataflow reference: https://cloud.google.com/dataflow/docs/reference/pipeline-options
--additional-user-labels=[ADDITIONAL_USER_LABELS,…]
Additional user labels to pass to the job. Example: --additional-user-labels='key1=value1,key2=value2'
--dataflow-kms-key=DATAFLOW_KMS_KEY
Cloud KMS key to protect the job resources.
--disable-public-ips
If specified, Cloud Dataflow workers will not use public IP addresses. Overrides the default dataflow/disable_public_ips property value for this command invocation.
--enable-streaming-engine
Enable Streaming Engine for the streaming job. Overrides the default dataflow/enable_streaming_engine property value for this command invocation.
--jinja-variables=JSON_OBJECT
Jinja2 variables, supplied as a JSON object, to be substituted when rendering the YAML pipeline.
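A sketch of how this might be used, assuming the YAML pipeline references Jinja2 placeholders such as {{ input_path }} (the variable, bucket, and job names below are illustrative):

```sh
gcloud beta dataflow yaml run my-templated-job \
  --yaml-pipeline='
pipeline:
  transforms:
    - type: ReadFromText
      config:
        path: {{ input_path }}
' \
  --jinja-variables='{"input_path": "gs://my-bucket/input.txt"}' \
  --region=us-central1
```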
--launcher-machine-type=LAUNCHER_MACHINE_TYPE
The machine type to use for launching the job. The default is n1-standard-1.
--max-workers=MAX_WORKERS
Maximum number of workers to run.
--network=NETWORK
Compute Engine network for launching worker instances to run the pipeline. If not set, the default network is used.
--num-workers=NUM_WORKERS
Initial number of workers to use.
--pipeline-options=[OPTIONS=VALUE;OPTION=VALUE,…]
(DEPRECATED) Pipeline options to pass to the job.

The --pipeline-options flag is deprecated. Pipeline options should be passed using the --additional-pipeline-options flag.
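A migration sketch, using an illustrative option name (see the pipeline-options reference linked above for the options actually available):

```sh
# Deprecated form:
#   gcloud beta dataflow yaml run my-job \
#     --yaml-pipeline-file=pipeline.yaml \
#     --pipeline-options=maxNumWorkers=10
# Preferred form:
gcloud beta dataflow yaml run my-job \
  --yaml-pipeline-file=pipeline.yaml \
  --additional-pipeline-options=maxNumWorkers=10
```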

--region=REGION_ID
Region ID of the job's regional endpoint. Defaults to 'us-central1'.
--service-account-email=SERVICE_ACCOUNT_EMAIL
Service account to run the workers as.
--staging-location=STAGING_LOCATION
Google Cloud Storage location to stage local files. If not set, defaults to the value for --temp-location. (Must be a URL beginning with 'gs://'.)
--subnetwork=SUBNETWORK
Compute Engine subnetwork for launching worker instances to run the pipeline. If not set, the default subnetwork is used.
--temp-location=TEMP_LOCATION
Google Cloud Storage location to stage temporary files. If not set, defaults to the value for --staging-location. (Must be a URL beginning with 'gs://'.)
--template-file-gcs-location=TEMPLATE_FILE_GCS_LOCATION
Google Cloud Storage location of the YAML template to run. (Must be a URL beginning with 'gs://'.)
--worker-machine-type=WORKER_MACHINE_TYPE
Type of machine to use for workers. Defaults to server-specified.
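Several of the optional flags above are commonly combined. A sketch of a run in a custom VPC with private IPs and bounded autoscaling (all resource names, the subnetwork path, and the machine type are illustrative):

```sh
gcloud beta dataflow yaml run my-job \
  --yaml-pipeline-file=gs://my-bucket/pipeline.yaml \
  --region=europe-west1 \
  --network=my-vpc \
  --subnetwork=regions/europe-west1/subnetworks/my-subnet \
  --disable-public-ips \
  --num-workers=2 \
  --max-workers=10 \
  --worker-machine-type=n1-standard-2 \
  --temp-location=gs://my-bucket/temp
```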
--[no-]update
Specify this flag to update a streaming job. Use --update to enable and --no-update to disable.
--transform-name-mappings=[TRANSFORM_NAME_MAPPINGS,…]
Transform name mappings for the streaming update job.
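A sketch of a streaming update that renames a transform between the old and new pipeline graphs. The old=new pair format is inferred from the flag's list syntax and is not confirmed by this page; the job, file, and transform names are illustrative:

```sh
gcloud beta dataflow yaml run my-streaming-job \
  --yaml-pipeline-file=gs://my-bucket/pipeline-v2.yaml \
  --region=us-central1 \
  --update \
  --transform-name-mappings=oldTransformName=newTransformName
```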
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in beta and might change without notice. This variant is also available:
gcloud dataflow yaml run


Last updated 2026-01-27 UTC.