REST Resource: projects.locations.workflowTemplates

Resource: WorkflowTemplate

A Dataproc workflow template resource.

JSON representation
{
  "id": string,
  "name": string,
  "version": integer,
  "createTime": string,
  "updateTime": string,
  "labels": {
    string: string,
    ...
  },
  "placement": {
    object (WorkflowTemplatePlacement)
  },
  "jobs": [
    {
      object (OrderedJob)
    }
  ],
  "parameters": [
    {
      object (TemplateParameter)
    }
  ],
  "dagTimeout": string,
  "encryptionConfig": {
    object (EncryptionConfig)
  }
}
Fields
id

string

name

string

Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{projectId}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{projectId}/locations/{location}/workflowTemplates/{template_id}

version

integer

Optional. Used to perform a consistent read-modify-write.

This field should be left blank for a workflowTemplates.create request. It is required for a workflowTemplates.update request, and must match the current server version. A typical update flow is to fetch the current template with a workflowTemplates.get request, which returns the template with the version field set to the current server version. The user updates the other fields in the template, then returns it as part of the workflowTemplates.update request.
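The read-modify-write flow above can be sketched with a toy in-memory stand-in for the service (FakeTemplateServer is illustrative only, not a real client): the server assigns version 1 on create, returns the current version on get, and rejects an update whose version is stale.

```python
# Illustrative optimistic-concurrency sketch of the version field.
# FakeTemplateServer is a hypothetical stand-in for the Dataproc service.

class VersionMismatchError(Exception):
    pass

class FakeTemplateServer:
    def __init__(self):
        self._store = {}  # template id -> stored template dict

    def create(self, template):
        # version should be left blank on create; the "server" assigns 1.
        stored = dict(template, version=1)
        self._store[stored["id"]] = stored
        return dict(stored)

    def get(self, template_id):
        # get returns the template with the current server version filled in.
        return dict(self._store[template_id])

    def update(self, template):
        current = self._store[template["id"]]
        if template.get("version") != current["version"]:
            raise VersionMismatchError("stale version; re-fetch and retry")
        stored = dict(template, version=current["version"] + 1)
        self._store[stored["id"]] = stored
        return dict(stored)

server = FakeTemplateServer()
server.create({"id": "sparkflow", "dagTimeout": "600s"})

# Typical update flow: get, modify other fields, send back unchanged version.
tmpl = server.get("sparkflow")
tmpl["dagTimeout"] = "1800s"
updated = server.update(tmpl)   # succeeds: version matched

try:
    server.update(tmpl)         # stale: tmpl still carries version 1
except VersionMismatchError:
    pass
```

A concurrent writer that bumped the version between the get and the update would force the second writer to re-fetch, which is exactly the consistency the field provides.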

createTime

string (Timestamp format)

Output only. The time template was created.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
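A minimal sketch of parsing these timestamps with only the standard library; datetime.fromisoformat does not accept a trailing "Z" before Python 3.11, so the helper normalizes it to "+00:00" first (the 9-fractional-digit form also needs 3.11+, so it is omitted here).

```python
from datetime import datetime

def parse_rfc3339(ts: str) -> datetime:
    # Normalize the Z suffix so older Pythons can parse it too.
    if ts.endswith("Z"):
        ts = ts[:-1] + "+00:00"
    return datetime.fromisoformat(ts)

t1 = parse_rfc3339("2014-10-02T15:01:23Z")       # Z-normalized output form
t2 = parse_rfc3339("2014-10-02T15:01:23+05:30")  # non-Z offsets are accepted
```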

updateTime

string (Timestamp format)

Output only. The time template was last updated.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples:"2014-10-02T15:01:23Z","2014-10-02T15:01:23.045123456Z" or"2014-10-02T15:01:23+05:30".

labels

map (key: string, value: string)

Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.

Label keys must contain 1 to 63 characters, and must conform to RFC 1035.

Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035.

No more than 32 labels can be associated with a template.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

placement

object (WorkflowTemplatePlacement)

Required. WorkflowTemplate scheduling information.

jobs[]

object (OrderedJob)

Required. The Directed Acyclic Graph of Jobs to submit.

parameters[]

object (TemplateParameter)

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.

dagTimeout

string (Duration format)

Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
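The documented "600s" to "86400s" window can be checked client-side before submitting a template; this sketch assumes the plain `<seconds>s` form of the JSON duration encoding (fractional seconds are permitted by that encoding, so the parser accepts them too).

```python
import re

def check_dag_timeout(value: str) -> float:
    # dagTimeout is a JSON-encoded duration: decimal seconds plus an "s" suffix.
    m = re.fullmatch(r"(\d+(?:\.\d+)?)s", value)
    if not m:
        raise ValueError(f"not a duration string: {value!r}")
    seconds = float(m.group(1))
    # Documented range: 10 minutes (600s) to 24 hours (86400s).
    if not 600 <= seconds <= 86400:
        raise ValueError("dagTimeout must be between 600s and 86400s")
    return seconds

assert check_dag_timeout("600s") == 600.0
```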

encryptionConfig

object (EncryptionConfig)

Optional. Encryption settings for encrypting workflow template job arguments.

WorkflowTemplatePlacement

Specifies workflow execution target.

Either managedCluster or clusterSelector is required.

JSON representation
{

  // Union field placement can be only one of the following:
  "managedCluster": {
    object (ManagedCluster)
  },
  "clusterSelector": {
    object (ClusterSelector)
  }
  // End of list of possible types for union field placement.
}
Fields
Union field placement. Required. Specifies where workflow executes; either on a managed cluster or an existing cluster chosen by labels. placement can be only one of the following:
managedCluster

object (ManagedCluster)

A cluster that is managed by the workflow.

clusterSelector

object (ClusterSelector)

Optional. A selector that chooses target cluster for jobs based on metadata.

The selector is evaluated at the time each job is submitted.
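Since placement is a required union, a template is only valid when exactly one of the two fields is set; a client-side check along these lines (an illustration, not part of the API) makes the constraint concrete.

```python
def validate_placement(placement: dict) -> str:
    # Exactly one of the union's fields may be populated.
    set_fields = [f for f in ("managedCluster", "clusterSelector") if f in placement]
    if len(set_fields) != 1:
        raise ValueError("exactly one of managedCluster or clusterSelector is required")
    return set_fields[0]

assert validate_placement({"clusterSelector": {"clusterLabels": {"env": "prod"}}}) == "clusterSelector"
```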

ManagedCluster

Cluster that is managed by the workflow.

JSON representation
{
  "clusterName": string,
  "config": {
    object (ClusterConfig)
  },
  "labels": {
    string: string,
    ...
  }
}
Fields
clusterName

string

Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.

The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
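The clusterName prefix rules above collapse into a single regular expression: a lowercase letter first, only a-z, 0-9 and hyphens after, no trailing hyphen, and 2 to 35 characters overall.

```python
import re

# First char: lowercase letter; middle: up to 33 of [a-z0-9-]; last: no hyphen.
_CLUSTER_NAME = re.compile(r"[a-z][a-z0-9-]{0,33}[a-z0-9]")

def is_valid_cluster_name(name: str) -> bool:
    return _CLUSTER_NAME.fullmatch(name) is not None

assert is_valid_cluster_name("etl-cluster")
assert not is_valid_cluster_name("1cluster")   # must begin with a letter
assert not is_valid_cluster_name("cluster-")   # cannot end with a hyphen
```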

config

object (ClusterConfig)

Required. The cluster configuration.

labels

map (key: string, value: string)

Optional. The labels to associate with this cluster.

Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given cluster.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

ClusterSelector

A selector that chooses target cluster for jobs based on metadata.

JSON representation
{
  "zone": string,
  "clusterLabels": {
    string: string,
    ...
  }
}
Fields
zone

string

Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.

If unspecified, the zone of the first cluster matching the selector is used.

clusterLabels

map (key: string, value: string)

Required. The cluster labels. Cluster must have all labels to match.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

OrderedJob

A job executed by the workflow.

JSON representation
{
  "stepId": string,
  "labels": {
    string: string,
    ...
  },
  "scheduling": {
    object (JobScheduling)
  },
  "prerequisiteStepIds": [
    string
  ],

  // Union field job_type can be only one of the following:
  "hadoopJob": {
    object (HadoopJob)
  },
  "sparkJob": {
    object (SparkJob)
  },
  "pysparkJob": {
    object (PySparkJob)
  },
  "hiveJob": {
    object (HiveJob)
  },
  "pigJob": {
    object (PigJob)
  },
  "sparkRJob": {
    object (SparkRJob)
  },
  "sparkSqlJob": {
    object (SparkSqlJob)
  },
  "prestoJob": {
    object (PrestoJob)
  },
  "flinkJob": {
    object (FlinkJob)
  }
  // End of list of possible types for union field job_type.
}
Fields
stepId

string

Required. The step id. The id must be unique among all jobs within the template.

The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
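The stepId constraints above can likewise be expressed as one regex: alphanumeric first and last characters (so no leading or trailing underscore or hyphen), letters, digits, underscores and hyphens in between, 3 to 50 characters total.

```python
import re

# First and last char alphanumeric; 1-48 of [A-Za-z0-9_-] in between.
_STEP_ID = re.compile(r"[A-Za-z0-9][A-Za-z0-9_-]{1,48}[A-Za-z0-9]")

def is_valid_step_id(step_id: str) -> bool:
    return _STEP_ID.fullmatch(step_id) is not None

assert is_valid_step_id("prepare-data")
assert not is_valid_step_id("_prepare")   # cannot begin with underscore
assert not is_valid_step_id("ab")         # too short (minimum 3 characters)
```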

labels

map (key: string, value: string)

Optional. The labels to associate with this job.

Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given job.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

scheduling

object (JobScheduling)

Optional. Job scheduling configuration.

prerequisiteStepIds[]

string

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
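Taken together, stepId and prerequisiteStepIds define the workflow's DAG; a topological sort over them yields one valid submission order. A small sketch with illustrative step ids (not from a real template):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

jobs = [
    {"stepId": "ingest",    "prerequisiteStepIds": []},
    {"stepId": "transform", "prerequisiteStepIds": ["ingest"]},
    {"stepId": "report",    "prerequisiteStepIds": ["transform", "ingest"]},
]

# Map each step to the set of steps that must finish before it starts.
graph = {j["stepId"]: set(j["prerequisiteStepIds"]) for j in jobs}
order = list(TopologicalSorter(graph).static_order())
assert order.index("ingest") < order.index("transform") < order.index("report")
```

TopologicalSorter also raises CycleError on a cyclic prerequisite chain, which mirrors the fact that the jobs must form a directed acyclic graph.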

Union field job_type. Required. The job definition. job_type can be only one of the following:
hadoopJob

object (HadoopJob)

Optional. Job is a Hadoop job.

sparkJob

object (SparkJob)

Optional. Job is a Spark job.

pysparkJob

object (PySparkJob)

Optional. Job is a PySpark job.

hiveJob

object (HiveJob)

Optional. Job is a Hive job.

pigJob

object (PigJob)

Optional. Job is a Pig job.

sparkRJob

object (SparkRJob)

Optional. Job is a SparkR job.

sparkSqlJob

object (SparkSqlJob)

Optional. Job is a SparkSql job.

prestoJob

object (PrestoJob)

Optional. Job is a Presto job.

flinkJob

object (FlinkJob)

Optional. Job is a Flink job.

TemplateParameter

A configurable parameter that replaces one or more fields in the template. Parameterizable fields:

  • Labels
  • File uris
  • Job properties
  • Job arguments
  • Script variables
  • Main class (in HadoopJob and SparkJob)
  • Zone (in ClusterSelector)

JSON representation
{
  "name": string,
  "fields": [
    string
  ],
  "description": string,
  "validation": {
    object (ParameterValidation)
  }
}
Fields
name

string

Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
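The name constraints again reduce to one regex: capital letters, digits and underscores only, not starting with a digit, at most 40 characters.

```python
import re

# First char: capital letter or underscore; then up to 39 of [A-Z0-9_].
_PARAM_NAME = re.compile(r"[A-Z_][A-Z0-9_]{0,39}")

def is_valid_parameter_name(name: str) -> bool:
    return _PARAM_NAME.fullmatch(name) is not None

assert is_valid_parameter_name("CLUSTER_ZONE")
assert not is_valid_parameter_name("9ZONE")   # must not start with a number
assert not is_valid_parameter_name("zone")    # lower case is not allowed
```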

fields[]

string

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.

A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.

Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:

    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:

    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:

    • jobs['step-id'].sparkJob.args[0]
  • Other examples:

    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args
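The path syntax above can be made concrete with a small resolver; this is a hedged sketch of how instantiation might apply a parameter value, not the service's actual implementation. It walks a template dict, addressing the jobs list by stepId, maps by quoted key, and repeated fields by index.

```python
import re

# Tokenize a path into plain names, ('key', k) for ['k'], ('idx', n) for [n].
_TOKEN = re.compile(r"(\w+)|\['([^']+)'\]|\[(\d+)\]")

def tokenize(path):
    out = []
    for name, key, idx in _TOKEN.findall(path):
        if name:
            out.append(name)
        elif key:
            out.append(("key", key))
        else:
            out.append(("idx", int(idx)))
    return out

def _step(node, tok):
    if isinstance(tok, str):
        return node[tok]
    kind, v = tok
    if kind == "idx":
        return node[v]
    if isinstance(node, list):          # jobs['step-id']: select by stepId
        return next(j for j in node if j["stepId"] == v)
    return node[v]                      # map value addressed by key

def set_field(template, path, value):
    tokens = tokenize(path)
    node = template
    for tok in tokens[:-1]:
        node = _step(node, tok)
    last = tokens[-1]
    node[last if isinstance(last, str) else last[1]] = value

template = {
    "placement": {"clusterSelector": {"zone": "", "clusterLabels": {}}},
    "jobs": [{"stepId": "analyze", "sparkJob": {"args": ["placeholder"]}}],
}
set_field(template, "placement.clusterSelector.zone", "us-central1-a")
set_field(template, "jobs['analyze'].sparkJob.args[0]", "gs://bucket/input")
```

Note the resolver can only target individual map values and list items, which is why whole-map paths such as placement.clusterSelector.clusterLabels are invalid.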
description

string

Optional. Brief description of the parameter. Must not exceed 1024 characters.

validation

object (ParameterValidation)

Optional. Validation rules to be applied to this parameter's value.

ParameterValidation

Configuration for parameter validation.

JSON representation
{

  // Union field validation_type can be only one of the following:
  "regex": {
    object (RegexValidation)
  },
  "values": {
    object (ValueValidation)
  }
  // End of list of possible types for union field validation_type.
}
Fields
Union field validation_type. Required. The type of validation to be performed. validation_type can be only one of the following:
regex

object (RegexValidation)

Validation based on regular expressions.

values

object (ValueValidation)

Validation based on a list of allowed values.

RegexValidation

Validation based on regular expressions.

JSON representation
{
  "regexes": [
    string
  ]
}
Fields
regexes[]

string

Required. RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
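The entire-value requirement corresponds to a fullmatch rather than a search; this sketch uses Python's re module, which is not RE2, but behaves the same for simple patterns like the illustrative zone rule below.

```python
import re

def validate(value: str, regexes: list) -> bool:
    # The value must match one of the regexes in its entirety.
    return any(re.fullmatch(r, value) for r in regexes)

zone_rules = [r"[a-z]+-[a-z]+\d-[a-z]"]   # illustrative, e.g. us-central1-a
assert validate("us-central1-a", zone_rules)
assert not validate("zone: us-central1-a", zone_rules)  # substring is not enough
```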

ValueValidation

Validation based on a list of allowed values.

JSON representation
{
  "values": [
    string
  ]
}
Fields
values[]

string

Required. List of allowed values for the parameter.

EncryptionConfig

Encryption settings for encrypting workflow template job arguments.

JSON representation
{
  "kmsKey": string
}
Fields
kmsKey

string

Optional. The Cloud KMS key name to use for encrypting workflow template job arguments.

When this key is provided, the following workflow template job arguments, if present, are CMEK encrypted:

Methods

create

Creates a new workflow template.

delete

Deletes a workflow template.

get

Retrieves the latest workflow template.

getIamPolicy

Gets the access control policy for a resource.

instantiate

Instantiates a template and begins execution.

instantiateInline

Instantiates a template and begins execution.

list

Lists workflows that match the specified filter in the request.

setIamPolicy

Sets the access control policy on the specified resource.

testIamPermissions

Returns permissions that a caller has on the specified resource.

update

Updates (replaces) a workflow template.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-10-15 UTC.