Build configuration file schema

A build config file contains instructions for Cloud Build to perform tasks based on your specifications. For example, your build config file can contain instructions to build, package, and push Docker images.

This page explains the schema of the Cloud Build configuration file. For instructions on creating and using a build config file, see Creating a basic build config file.

Structure of a build config file

Build config files are modeled using the Cloud Build API's Build resource.

You can write the build config file using the YAML or the JSON syntax. If you submit build requests using third-party HTTP tools such as curl, use the JSON syntax.

Note: If you're using VS Code or IntelliJ IDEs, you can use Cloud Code to author your YAML config files. Cloud Code is built in to the Cloud Shell Editor and doesn't require any setup. For more information, see Cloud Code for VS Code and Cloud Code for IntelliJ.

A build config file has the following structure:

YAML

steps:
- name: string
  args: [string, string, ...]
  env: [string, string, ...]
  allowFailure: boolean
  allowExitCodes: [string (int64 format), string (int64 format), ...]
  dir: string
  id: string
  waitFor: [string, string, ...]
  entrypoint: string
  secretEnv: string
  volumes: object(Volume)
  timeout: string (Duration format)
  script: string
  automapSubstitutions: boolean
- name: string
  ...
- name: string
  ...
timeout: string (Duration format)
queueTtl: string (Duration format)
logsBucket: string
options:
  env: [string, string, ...]
  secretEnv: string
  volumes: object(Volume)
  sourceProvenanceHash: enum(HashType)
  machineType: enum(MachineType)
  diskSizeGb: string (int64 format)
  substitutionOption: enum(SubstitutionOption)
  dynamicSubstitutions: boolean
  automapSubstitutions: boolean
  logStreamingOption: enum(LogStreamingOption)
  logging: enum(LoggingMode)
  defaultLogsBucketBehavior: enum(DefaultLogsBucketBehavior)
  pool: object(PoolOption)
  pubsubTopic: string
  requestedVerifyOption: enum(RequestedVerifyOption)
substitutions: map (key: string, value: string)
tags: [string, string, ...]
serviceAccount: string
secrets: object(Secret)
availableSecrets: object(Secrets)
artifacts: object(Artifacts)
  goModules: [object(GoModules), ...]
  mavenArtifacts: [object(MavenArtifact), ...]
  pythonPackages: [object(PythonPackage), ...]
  npmPackages: [object(npmPackage), ...]
images:
- [string, string, ...]

JSON

{"steps":[{"name":"string","args":["string","string","..."],"env":["string","string","..."],"allowFailure":"boolean","allowExitCodes: [            "string(int64format)",            "string(int64format)",            "..."        ],        "dir": "string",        "id": "string",        "waitFor": [            "string",            "string",            "..."        ],        "entrypoint": "string",        "secretEnv": "string",        "volumes": "object(Volume)",        "timeout": "string(Durationformat)",        "script" : "string",        "automapSubstitutions" : "boolean"    },    {        "name": "string"        ...    },    {        "name": "string"        ...    }    ],    "timeout": "string(Durationformat)",    "queueTtl": "string(Durationformat)",    "logsBucket": "string",    "options": {        "sourceProvenanceHash": "enum(HashType)",        "machineType": "enum(MachineType)",        "diskSizeGb": "string(int64format)",        "substitutionOption": "enum(SubstitutionOption)",        "dynamicSubstitutions": "boolean",        "automapSubstitutions": "boolean",        "logStreamingOption": "enum(LogStreamingOption)",        "logging": "enum(LoggingMode)"        "defaultLogsBucketBehavior": "enum(DefaultLogsBucketBehavior)"        "env": [            "string",            "string",            "..."        ],        "secretEnv": "string",        "volumes": "object(Volume)",        "pool": "object(PoolOption)"        "requestedVerifyOption": "enum(RequestedVerifyOption)"    },    "substitutions": "map(key:string,value:string)",    "tags": [        "string",        "string",        "..."    
],    "serviceAccount": "string",    "secrets": "object(Secret)",    "availableSecrets": "object(Secrets)",    "artifacts": "object(Artifacts)",      "goModules": [object(GoModules), ...],      "mavenArtifacts": ["object(MavenArtifact)", ...],      "pythonPackages": ["object(PythonPackage)", ...],      "npmPackages": ["object(npmPackage)", ...],    "images": [        "string",        "string",        "..."    ]}

Each section of the build config file defines a part of the task you want Cloud Build to execute:

Build steps

A build step specifies an action that you want Cloud Build to perform. For each build step, Cloud Build executes a Docker container as an instance of docker run. Build steps are analogous to commands in a script and provide you with the flexibility of executing arbitrary instructions in your build. If you can package a build tool into a container, Cloud Build can execute it as part of your build. By default, Cloud Build executes all steps of a build serially on the same machine. If you have steps that can run concurrently, use the waitFor option.

You can include up to 300 build steps in your config file.

Use the steps field in the build config file to specify a build step. Here's a snippet of the kind of configuration you might set in the steps field:

YAML

steps:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/mydepl', 'my-image=gcr.io/my-project/myimage']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-east4-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project-id/myimage', '.']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/kubectl","args":["set","image""deployment/mydepl""my-image=gcr.io/my-project/myimage"],"env":["CLOUDSDK_COMPUTE_ZONE=us-east4-b","CLOUDSDK_CONTAINER_CLUSTER=my-cluster"]},{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/my-project-id/myimage","."]}]}

name

Use the name field of a build step to specify a cloud builder, which is a container image running common tools. You use a builder in a build step to execute your tasks.

The following snippet shows build steps calling the bazel, gcloud, and docker builders:

YAML

steps:
- name: 'gcr.io/cloud-builders/bazel'
  ...
- name: 'gcr.io/cloud-builders/gcloud'
  ...
- name: 'gcr.io/cloud-builders/docker'
  ...

JSON

{"steps":[{"name":"gcr.io/cloud-builders/bazel"...},{"name":"gcr.io/cloud-builders/gcloud"...},{"name":"gcr.io/cloud-builders/docker"...}]}

args

The args field of a build step takes a list of arguments and passes them to the builder referenced by the name field. Arguments passed to the builder are passed to the tool that's running in the builder, which lets you invoke any command supported by the tool. If the builder used in the build step has an entrypoint, args will be used as arguments to that entrypoint. If the builder does not define an entrypoint, the first element in args will be used as the entrypoint, and the remainder will be used as arguments.

You can create up to 100 arguments per step. The maximum argument length is 10,000 characters.

The following snippet installs Maven dependencies and invokes the docker build command:

YAML

steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['install']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project-id/myimage', '.']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/mvn","args":["install"]},{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/my-project-id/myimage","."]}]}

env

The env field of a build step takes a list of environment variables to be used when running the step. The variables are of the form KEY=VALUE.

In the following build config, the env field of the build step sets the Compute Engine zone and the GKE cluster prior to executing kubectl:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/myimage', 'frontend=gcr.io/myproject/myimage']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-east1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=node-example-cluster'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]},{"name":"gcr.io/cloud-builders/kubectl","args":["set","image","deployment/myimage","frontend=gcr.io/myproject/myimage"],"env":["CLOUDSDK_COMPUTE_ZONE=us-east1-b","CLOUDSDK_CONTAINER_CLUSTER=node-example-cluster"]}]}

dir

Use the dir field in a build step to set a working directory to use when running the step's container. If you set the dir field in the build step, the working directory is set to /workspace/<dir>. If this value is a relative path, it is relative to the build's working directory. If this value is absolute, it may be outside the build's working directory, in which case the contents of the path may not be persisted across build step executions (unless a volume for that path is specified).

The following code snippet sets the working directory for the build step to /workspace/examples/hello_world:

YAML

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
  env: ['PROJECT_ROOT=hello']
  dir: 'examples/hello_world'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/go","args":["install","."],"env":["PROJECT_ROOT=hello"],"dir":"examples/hello_world"}]}

timeout

Use the timeout field in a build step to set a time limit for executing the step. If you don't set this field, the step runs either until it completes or until the build itself times out. The timeout field for a build step must not exceed the timeout value specified for the build.

timeout is specified in seconds (using Duration format) with up to nine fractional digits, terminated by s (for example: 3.5s).

In the following build config, the ubuntu step times out after 500 seconds:

YAML

steps:
- name: 'ubuntu'
  args: ['sleep', '600']
  timeout: 500s
- name: 'ubuntu'
  args: ['echo', 'hello world, after 600s']

JSON

{"steps":[{"name":"ubuntu","args":["sleep","600"],"timeout":"500s"},{"name":"ubuntu","args":["echo","hello world, after 600s"]}]}
Note: The timeout field exists for both the build step and the build. The timeout field of a build step specifies the amount of time the step is allowed to run, and the timeout field of a build specifies the amount of time the build is allowed to run.

script

Use the script field in a build step to specify a shell script to execute in the step. If you specify script in a build step, you cannot specify args or entrypoint in the same step. For instructions on using the script field, see Running bash scripts.
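As a minimal sketch of what the field looks like in practice (the step and the echoed text here are illustrative, not from this page):

```yaml
steps:
- name: 'ubuntu'
  script: |
    #!/usr/bin/env bash
    echo "Hello from a script step"
```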

automapSubstitutions

If set to true, automatically map all substitutions and make them available as environment variables in a single step. If set to false, ignore substitutions for that step. For examples, see Substitute variable values.
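A minimal sketch (the step shown is illustrative): with automapSubstitutions set to true, built-in substitutions such as $PROJECT_ID become available as environment variables inside the step:

```yaml
steps:
- name: 'ubuntu'
  script: |
    echo "Building in project $PROJECT_ID"
  automapSubstitutions: true
```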

id

Use the id field to set a unique identifier for a build step. id is used with the waitFor field to configure the order in which build steps should be run. For instructions on using waitFor and id, see Configuring build step order.

waitFor

Use the waitFor field in a build step to specify which steps must run before the build step is run. If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running. For instructions on using waitFor and id, see Configuring build step order.
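The following sketch (the step IDs are illustrative) shows one way id and waitFor combine: the second step starts immediately because waitFor: ['-'] means wait for nothing, and the third step waits only for the step whose id is build-app:

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  id: 'build-app'
  args: ['build', '.']
- name: 'ubuntu'
  id: 'lint'
  waitFor: ['-']  # '-' means start immediately, in parallel with build-app
  args: ['echo', 'lint checks']
- name: 'gcr.io/cloud-builders/docker'
  waitFor: ['build-app']  # runs after build-app finishes, regardless of lint
  args: ['build', '-t', 'gcr.io/my-project/myimage', '.']
```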

entrypoint

Use the entrypoint field in a build step to specify an entrypoint if you don't want to use the default entrypoint of the builder. If you don't set this field, Cloud Build will use the builder's entrypoint. The following snippet sets the entrypoint for the npm build step:

YAML

steps:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'node'
  args: ['--version']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/npm","entrypoint":"node","args":["--version"]}]}

secretEnv

A list of environment variables which are encrypted using a Cloud KMS crypto key. These values must be specified in the build's secrets. For information on using this field, see Using the encrypted variable in build requests.
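As a hedged sketch (the variable name and login command are illustrative), a step lists the secret variable in secretEnv and then reads it as $$PASSWORD:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=myuser --password=$$PASSWORD']
  secretEnv: ['PASSWORD']
```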

volumes

A Volume is a Docker container volume that is mounted into build steps to persist files across build steps. When Cloud Build runs a build step, it automatically mounts a workspace volume into /workspace. You can specify additional volumes to be mounted into your build steps' containers using the volumes field for your steps.

For example, the following build config file writes a file into a volume in the first step and reads it in the second step. If these steps did not specify the /persistent_volume path as a persistent volume, the first step would write the file at that path, then that file would be discarded before the second step executes. By specifying the volume with the same name in both steps, the contents of /persistent_volume in the first step are persisted to the second step.

YAML

steps:
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!" > /persistent_volume/file
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: ['cat', '/persistent_volume/file']

JSON

{"steps":[{"name":"ubuntu","volumes":[{"name":"vol1","path":"/persistent_volume"}],"entrypoint":"bash","args":["-c","echo \"Hello, world!\" > /persistent_volume/file\n"]},{"name":"ubuntu","volumes":[{"name":"vol1","path":"/persistent_volume"}],"args":["cat","/persistent_volume/file"]}]}

allowFailure

In a build step, if you set the value of the allowFailure field to true, and the build step fails, then the build succeeds as long as all other build steps in that build succeed.

If all of the build steps in a build have allowFailure set to true and all of the build steps fail, then the status of the build is still Successful.

allowExitCodes takes precedence over this field.

The following code snippet allows the build to succeed when the first step fails:

YAML

steps:
- name: 'ubuntu'
  args: ['-c', 'exit 1']
  allowFailure: true
- name: 'ubuntu'
  args: ['echo', 'Hello World']

JSON

{"steps":[{"name":"ubuntu","args":["-c","exit -1"],"allowFailure":true,},{"name":"ubuntu","args":["echo","Hello World"]}]}

allowExitCodes

Use the allowExitCodes field to specify that a build step failure can be ignored when that step returns a particular exit code.

If a build step fails with an exit code matching the value that you have provided in allowExitCodes, Cloud Build will allow this build step to fail without failing your entire build.

If 100% of your build steps fail, but every step exits with a code that you have specified in the allowExitCodes field, then the build is still successful.

However, if the build step fails and produces another exit code, one that does not match the value you have specified in allowExitCodes, then the overall build will fail.

The exit code(s) relevant to your build depend on your software. For example, "1" is a common exit code in Linux. You can also define your own exit codes in your scripts. The allowExitCodes field accepts numbers up to a maximum of 255.

This field takes precedence overallowFailure.

The following code snippet allows the build to succeed when the first step fails with one of the provided exit codes:

YAML

steps:
- name: 'ubuntu'
  args: ['-c', 'exit 1']
  allowExitCodes: [1]
- name: 'ubuntu'
  args: ['echo', 'Hello World']

JSON

{"steps":[{"name":"ubuntu","args":["-c","exit 1"],"allowExitCodes":[1],},{"name":"ubuntu","args":["echo","Hello World"]}]}

timeout

Use the timeout field for a build to set a time limit for executing the build. If this time elapses, work on the build stops and the build status is TIMEOUT.

If timeout isn't set, the build uses a default timeout of 60 minutes. The maximum value for timeout is 24 hours. timeout is specified in seconds, using Duration format, with up to nine fractional digits, terminated by s (for example: 3.5s).

In the following snippet, timeout is set to 660 seconds to prevent the build from timing out because of the sleep:

YAML

steps:
- name: 'ubuntu'
  args: ['sleep', '600']
timeout: 660s

JSON

{"steps":[{"name":"ubuntu","args":["sleep","600"]}],"timeout":"660s"}
Note: The timeout field exists for both the build step and the build. The timeout field of a build step specifies the amount of time the build step is allowed to run, and the timeout field of a build specifies the amount of time the entire build is allowed to run.

queueTtl

Use the queueTtl field to specify the amount of time a build can be queued. If a build is in the queue for longer than the value set in queueTtl, the build expires and the build status is set to EXPIRED. If no value is provided, Cloud Build uses the default value of 3600s (1 hour). queueTtl starts ticking from createTime. queueTtl must be specified in seconds with up to nine fractional digits, terminated by s, for example, 3.5s.

In the following snippet, timeout is set to 20s and queueTtl is set to 10s. queueTtl starts ticking at createTime, which is the time the build is requested, and timeout starts ticking at startTime, which is the time the build starts. Therefore, queueTtl will expire at createTime + 10s unless the build starts by then.

YAML

steps:
- name: 'ubuntu'
  args: ['sleep', '5']
timeout: 20s
queueTtl: 10s

JSON

{"steps":[{"name":"ubuntu","args":["sleep","5"]}],"timeout":"20s","queueTtl":"10s"}

logsBucket

Set the logsBucket field for a build to specify a Cloud Storage bucket where logs must be written. If you don't set this field, Cloud Build will use a default bucket to store your build logs.

The following snippet sets a logs bucket to store the build logs:

YAML

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
logsBucket: 'gs://mybucket'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/go","args":["install","."]}],"logsBucket":"gs://mybucket"}

options

Use the options field to specify the following optional arguments for your build:

enableStructuredLogging: Enables the mapping of specified build log fields to LogEntry fields when the build log is sent to Logging. For example, if your build log contains a message, then the message appears in either textPayload or jsonPayload.message in the resulting log entry. Build log fields that aren't mappable appear in the log entry jsonPayload. For more information, see Map build log fields to log entry fields.

env: A list of global environment variable definitions that will exist for all build steps in this build. If a variable is defined both globally and in a build step, the variable will use the build step value. The elements are of the form KEY=VALUE for the environment variable KEY being given the value VALUE.
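A minimal sketch (the variable name is illustrative) of a global variable visible to every step:

```yaml
steps:
- name: 'ubuntu'
  args: ['bash', '-c', 'echo $MY_FLAG']
options:
  env:
  - 'MY_FLAG=enabled'
```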

secretEnv: A list of global environment variables, encrypted using a Cloud Key Management Service crypto key, that will be available to all build steps in this build. These values must be specified in the build's Secret.

volumes: A list of volumes to mount globally for ALL build steps. Each volume is created as an empty volume prior to starting the build process. Upon completion of the build, volumes and their contents are discarded. Global volume names and paths cannot conflict with the volumes defined in a build step. Using a global volume in a build with only one step is not valid, as it signifies a build request with an incorrect configuration.
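A sketch of a global volume (the volume name and paths are illustrative); note that it requires at least two steps:

```yaml
steps:
- name: 'ubuntu'
  args: ['bash', '-c', 'echo hello > /shared/file']
- name: 'ubuntu'
  args: ['cat', '/shared/file']
options:
  volumes:
  - name: 'shared-vol'
    path: '/shared'
```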

pubsubTopic: Option to provide a Pub/Sub topic name for receiving build status notifications. If you don't provide a name, then Cloud Build uses the default topic name of cloud-builds. The following snippet specifies that the Pub/Sub topic name is my-topic:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  pubsubTopic: 'projects/my-project/topics/my-topic'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]}],"options":{"pubsubTopic":"projects/my-project/topics/my-topic"}}

sourceProvenanceHash: Set the sourceProvenanceHash option to specify the hash algorithm for source provenance. The following snippet specifies that the hash algorithm is SHA256:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  sourceProvenanceHash: ['SHA256']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]}],"options":{"sourceProvenanceHash":["SHA256"]}}

machineType: Cloud Build provides four high-CPU virtual machine types to run your builds: two machine types with 8 CPUs and two machine types with 32 CPUs. Cloud Build also provides two additional virtual machine types with 1 CPU and 2 CPUs to run your builds. The default machine type is e2-standard-2 with 2 CPUs. Requesting a high-CPU virtual machine may increase the startup time of your build. Add the machineType option to request a virtual machine with a higher CPU:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  machineType: 'E2_HIGHCPU_8'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]},],"options":{"machineType":"E2_HIGHCPU_8"}}

For more information on using the machineType option, see Speeding up builds.

diskSizeGb: Use the diskSizeGb option to request a custom disk size for your build. The maximum size you can request is 4000 GB.

The following snippet requests a disk size of 200 GB:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  diskSizeGb: '200'

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]}],"options":{"diskSizeGb":'200'}}

logStreamingOption: Use this option to specify whether you want to stream build logs to Cloud Storage. By default, Cloud Build collects build logs on build completion; this option specifies whether you want to stream build logs in real time through the build process. The following snippet specifies that build logs are streamed to Cloud Storage:

YAML

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
options:
  logStreamingOption: STREAM_ON

JSON

{"steps":[{"name":"gcr.io/cloud-builders/go","args":["install","."]}],"options":{"logStreamingOption":"STREAM_ON"}}

logging: Use this option to specify whether you want to store logs in Cloud Logging or Cloud Storage. If you don't set this option, Cloud Build stores the logs in both Cloud Logging and Cloud Storage. You can set the logging option to GCS_ONLY to store the logs only in Cloud Storage. The following snippet specifies that the logs are stored in Cloud Storage:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: GCS_ONLY

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]}],"options":{"logging":"GCS_ONLY"}}

defaultLogsBucketBehavior: The defaultLogsBucketBehavior option lets you configure Cloud Build to create a default logs bucket within your own project in the same region as your build. For more information, see Store build logs in a user-owned and regionalized bucket.

The following build config sets the defaultLogsBucketBehavior field to the value REGIONAL_USER_OWNED_BUCKET:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/myproject/myrepo/myimage', '.']
options:
  defaultLogsBucketBehavior: REGIONAL_USER_OWNED_BUCKET

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","us-central1-docker.pkg.dev/myproject/myrepo/myimage","."]}],"options":{"defaultLogsBucketBehavior":"REGIONAL_USER_OWNED_BUCKET"}}

dynamicSubstitutions: Use this option to explicitly enable or disable bash parameter expansion in substitutions. If your build is invoked by a trigger, the dynamicSubstitutions field is always set to true and does not need to be specified in your build config file. If your build is invoked manually, you must set the dynamicSubstitutions field to true for bash parameter expansions to be interpreted when running your build.
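For illustration (the substitution names are hypothetical), a substitution that references another substitution is only expanded when dynamicSubstitutions is true:

```yaml
steps:
- name: 'ubuntu'
  args: ['echo', '${_IMAGE}']
substitutions:
  _TAG: 'v1'
  _IMAGE: 'gcr.io/my-project/app:${_TAG}'
options:
  dynamicSubstitutions: true
```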

automapSubstitutions: Automatically map all substitutions to environment variables that will be available throughout the entire build. For examples, see Substitute variable values.

substitutionOption: Set this option along with the substitutions field below to specify the behavior when there is an error in the substitution checks.

pool: Set the value of this field to the resource name of the private pool to run the build. For instructions on running a build on a private pool, see Running builds in a private pool.
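A minimal sketch (project, region, and pool names are hypothetical); the value is the private pool's full resource name:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project/myimage', '.']
options:
  pool:
    name: 'projects/my-project/locations/us-central1/workerPools/my-pool'
```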

requestedVerifyOption: Set the value of requestedVerifyOption to VERIFIED to enable and verify the generation of attestations and provenance metadata for your build. Once set, your builds will only be marked SUCCESS if attestations and provenance are generated.
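As a sketch (the image name is hypothetical), the option sits under options alongside the build's other settings:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/my-project/myimage', '.']
images: ['gcr.io/my-project/myimage']
options:
  requestedVerifyOption: VERIFIED
```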

substitutions

Use substitutions in your build config file to substitute specific variables at build time. Substitutions are helpful for variables whose value isn't known until build time, or to reuse an existing build request with different variable values. By default, the build returns an error if there's a missing substitution variable or a missing substitution. However, you can use the ALLOW_LOOSE option to skip this check.

The following snippet uses substitutions to print "hello world." The ALLOW_LOOSE substitution option is set, which means the build will not return an error if there's a missing substitution variable or a missing substitution.

YAML

steps:
- name: 'ubuntu'
  args: ['echo', 'hello ${_SUB_VALUE}']
substitutions:
  _SUB_VALUE: world
options:
  substitution_option: 'ALLOW_LOOSE'

JSON

{"steps":[{"name":"ubuntu","args":["echo","hello ${_SUB_VALUE}"]}],"substitutions":{"_SUB_VALUE":"world"},"options":{"substitution_option":"ALLOW_LOOSE"}}
Note: If your build is invoked by a trigger, the ALLOW_LOOSE option is set by default. In this case, your build won't return an error if there is a missing substitution variable or a missing substitution. You cannot override the ALLOW_LOOSE option for builds invoked by triggers.

For additional instructions on using substitutions, see Substituting variable values.

tags

Use the tags field to organize your builds into groups and to filter your builds. The following config sets two tags named mytag1 and mytag2:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  ...
- name: 'ubuntu'
  ...
tags: ['mytag1', 'mytag2']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker"},{"name":"ubuntu"}],"tags":["mytag1","mytag2"]}
Note: The tags field is different from the --tag flag on the gcloud builds submit command, which causes Cloud Build to build using a Dockerfile instead of a build config file. For more information, see --tag.

availableSecrets

Use this field to use a secret from Secret Manager with Cloud Build. For more information, see Using secrets.
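A hedged sketch (the secret name and login command are illustrative): availableSecrets maps a Secret Manager secret version to an environment variable name, which a step then requests through secretEnv:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=myuser --password=$$PASSWORD']
  secretEnv: ['PASSWORD']
availableSecrets:
  secretManager:
  - versionName: 'projects/my-project/secrets/docker-password/versions/latest'
    env: 'PASSWORD'
```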

secrets

Secret pairs a set of secret environment variables containing encrypted values with the Cloud KMS key to use to decrypt the value.

Note: Use availableSecrets instead of secrets. For more information, see Using secrets and Using encrypted credentials.

serviceAccount

Use this field to specify the IAM service account to use at build time. For more information, see Configuring user-specified service accounts.

You can't specify the legacy Cloud Build service account in this field.
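A minimal sketch (project and account names are hypothetical); note that builds with a user-specified service account typically also need an explicit logging setting, such as CLOUD_LOGGING_ONLY or a user-specified logs bucket:

```yaml
steps:
- name: 'ubuntu'
  args: ['echo', 'hello']
serviceAccount: 'projects/my-project/serviceAccounts/my-builder@my-project.iam.gserviceaccount.com'
options:
  logging: CLOUD_LOGGING_ONLY
```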

images

The images field in the build config file specifies one or more Linux Docker images to be pushed by Cloud Build to Artifact Registry. You may have a build that performs tasks without producing any Linux Docker images, but if you build images and don't push them to the registry, the images are discarded on build completion. If a specified image is not produced during the build, the build will fail. For more information on storing images, see Store artifacts in Artifact Registry.

The following build config sets the images field to store the built image:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
images: ['gcr.io/myproject/myimage']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","-t","gcr.io/myproject/myimage","."]}],"images":["gcr.io/myproject/myimage"]}

artifacts

The artifacts field in the build config file specifies one or more non-container artifacts to be stored in Cloud Storage. For more information on storing non-container artifacts, see Store build artifacts in Cloud Storage.

The following build config sets the artifacts field to store the built Go package to gs://mybucket/:

YAML

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['build', 'my-package']
artifacts:
  objects:
    location: 'gs://mybucket/'
    paths: ['my-package']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/go","args":["build","my-package"]}],"artifacts":{"objects":{"location":"gs://mybucket/","paths":["my-package"]}}}

goModules

The goModules field lets you upload non-container Go modules to Go repositories in Artifact Registry. For more information, see Build and test Go applications.

The repositoryName field specifies the Artifact Registry repository to store your packages. The sourcePath field specifies the local directory that contains the Go module to upload. This directory must contain a go.mod file.

We recommend using an absolute path for the value of sourcePath. You can use . to refer to the current working directory, but the field cannot be omitted or left empty. For more instructions on using sourcePath, see Build and test Go applications.

The following build config sets the goModules field to upload the module example.com/myapp to the Artifact Registry repository quickstart-go-repo:

YAML

artifacts:
  goModules:
  - repositoryName: 'quickstart-go-repo'
    repositoryLocation: 'us-central1'
    repositoryProjectId: 'argo-local-myname'
    sourcePath: '/workspace/myapp'
    modulePath: 'example.com/myapp'
    moduleVersion: 'v1.0.0'

JSON

{"artifacts":{"goModules":[{"repositoryName":"quickstart-go-repo","repositoryLocation":"us-central1","repositoryProjectId":"argo-local-myname","sourcePath":"/workspace/myapp","modulePath":"example.com/myapp","moduleVersion":"v1.0.0"}]}}

mavenArtifacts

The mavenArtifacts field lets you upload non-container Java artifacts to Maven repositories in Artifact Registry. For more information, see Build and test Java applications.

Upload all Maven files in a folder to an Artifact Registry repository

The following build config sets the mavenArtifacts field to upload all files in the folder /workspace/target/com/mycompany/app/my-app-1/1.0.0/ to the Artifact Registry repository https://us-central1-maven.pkg.dev/my-project-id/my-java-repo:

YAML

artifacts:
  mavenArtifacts:
  - repository: 'https://us-central1-maven.pkg.dev/my-project-id/my-java-repo'
    deployFolder: '/workspace/target'
    artifactId: 'my-app-1'
    groupId: 'com.mycompany.app'
    version: '1.0.0'

JSON

{"artifacts":{"mavenArtifacts":[{"repository":"https://us-central1-maven.pkg.dev/my-project-id/my-java-repo","deployFolder":"/workspace/target","artifactId":"my-app-1","groupId":"com.mycompany.app","version":"1.0.0"}]}}

To deploy your Maven files to /workspace/target/com/mycompany/app/my-app-1/1.0.0/, add the -DaltDeploymentRepository=local::default::file:./workspace/target option to your Maven deploy command.

Upload a packaged Maven file to an Artifact Registry repository

The following build config sets the mavenArtifacts field to upload the packaged file my-app-1.0-SNAPSHOT.jar to the Artifact Registry repository https://us-central1-maven.pkg.dev/my-project-id/my-java-repo:

YAML

artifacts:
  mavenArtifacts:
  - repository: 'https://us-central1-maven.pkg.dev/my-project-id/my-java-repo'
    path: '/workspace/my-app/target/my-app-1.0-SNAPSHOT.jar'
    artifactId: 'my-app-1'
    groupId: 'com.mycompany.app'
    version: '1.0.0'

JSON

{"artifacts":{"mavenArtifacts":[{"repository":"https://us-central1-maven.pkg.dev/my-project-id/my-java-repo","path":"/workspace/my-app/target/my-app-1.0-SNAPSHOT.jar","artifactId":"my-app-1","groupId":"com.mycompany.app","version":"1.0.0"}]}}

pythonPackages

The pythonPackages field lets you upload Python packages to Artifact Registry. For more information, see Build and test Python applications.

The following build config sets the pythonPackages field to upload the Python package dist/my-pkg.whl to the Artifact Registry repository https://us-east1-python.pkg.dev/my-project/my-repo:

YAML

artifacts:
  pythonPackages:
  - repository: 'https://us-east1-python.pkg.dev/my-project/my-repo'
    paths: ['dist/my-pkg.whl']

JSON

{"artifacts":{"pythonPackages":[{"repository":"https://us-east1-python.pkg.dev/my-project/my-repo","paths":["dist/my-pkg.whl"]}]}}

npmPackages

Use the npmPackages field to configure Cloud Build to upload your built npm packages to supported repositories in Artifact Registry. You must provide values for repository and packagePath.

The repository field specifies the Artifact Registry repository to store your packages. The packagePath field specifies the local directory that contains the npm package to upload. This directory must contain a package.json file.

We recommend using an absolute path for the value of packagePath. You can use . to refer to the current working directory, but the field cannot be omitted or left empty. For more instructions on using npmPackages, see Build and test Node.js applications.

The following build config sets the npmPackages field to upload the npm package in the /workspace/my-pkg directory to the Artifact Registry repository https://us-east1-npm.pkg.dev/my-project/my-repo.

YAML

artifacts:
  npmPackages:
  - repository: 'https://us-east1-npm.pkg.dev/my-project/my-repo'
    packagePath: '/workspace/my-pkg'

JSON

{"artifacts":{"npmPackages":[{"repository":"https://us-east1-npm.pkg.dev/my-project/my-repo","packagePath":"/workspace/my-pkg"}]}}

Using Dockerfiles

If you're executing Docker builds in Cloud Build using the gcloud CLI or build triggers, then you can use a Dockerfile without a separate build config file. If you want to make more adjustments to your Docker builds, then you can provide a build config file in addition to the Dockerfile. For instructions on how to build a Docker image using a Dockerfile, see Quickstart: Build.

Cloud Build network

When Cloud Build runs each build step, it attaches the step's container to a local Docker network named cloudbuild. The cloudbuild network hosts Application Default Credentials (ADC) that Google Cloud services can use to automatically find your credentials. If you're running nested Docker containers and want to expose ADC to an underlying container, or if you're using gcloud in a docker step, use the --network flag in your docker build step:

YAML

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--network=cloudbuild', '.']

JSON

{"steps":[{"name":"gcr.io/cloud-builders/docker","args":["build","--network=cloudbuild","."]}]}


Last updated 2026-02-19 UTC.