gcloud functions deploy
- NAME
- gcloud functions deploy - create or update a Google Cloud Function
- SYNOPSIS
```
gcloud functions deploy (NAME : --region=REGION)
    [--[no-]allow-unauthenticated]
    [--concurrency=CONCURRENCY]
    [--docker-registry=DOCKER_REGISTRY]
    [--egress-settings=EGRESS_SETTINGS]
    [--entry-point=ENTRY_POINT]
    [--gen2]
    [--ignore-file=IGNORE_FILE]
    [--ingress-settings=INGRESS_SETTINGS]
    [--retry]
    [--run-service-account=RUN_SERVICE_ACCOUNT]
    [--runtime=RUNTIME]
    [--runtime-update-policy=RUNTIME_UPDATE_POLICY]
    [--security-level=SECURITY_LEVEL; default="secure-always"]
    [--serve-all-traffic-latest-revision]
    [--service-account=SERVICE_ACCOUNT]
    [--source=SOURCE]
    [--stage-bucket=STAGE_BUCKET]
    [--timeout=TIMEOUT]
    [--trigger-location=TRIGGER_LOCATION]
    [--trigger-service-account=TRIGGER_SERVICE_ACCOUNT]
    [--update-labels=[KEY=VALUE,…]]
    [--binary-authorization=BINARY_AUTHORIZATION | --clear-binary-authorization]
    [--build-env-vars-file=FILE_PATH | --clear-build-env-vars
      | --set-build-env-vars=[KEY=VALUE,…]
      | --remove-build-env-vars=[KEY,…] --update-build-env-vars=[KEY=VALUE,…]]
    [--build-service-account=BUILD_SERVICE_ACCOUNT | --clear-build-service-account]
    [--build-worker-pool=BUILD_WORKER_POOL | --clear-build-worker-pool]
    [--clear-docker-repository | --docker-repository=DOCKER_REPOSITORY]
    [--clear-env-vars | --env-vars-file=FILE_PATH | --set-env-vars=[KEY=VALUE,…]
      | --remove-env-vars=[KEY,…] --update-env-vars=[KEY=VALUE,…]]
    [--clear-kms-key | --kms-key=KMS_KEY]
    [--clear-labels | --remove-labels=[KEY,…]]
    [--clear-max-instances | --max-instances=MAX_INSTANCES]
    [--clear-min-instances | --min-instances=MIN_INSTANCES]
    [--clear-secrets
      | --set-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]
      | --remove-secrets=[SECRET_ENV_VAR,/secret_path,/mount_path:/secret_file_path,…]
        --update-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]]
    [--clear-vpc-connector | --vpc-connector=VPC_CONNECTOR]
    [--memory=MEMORY : --cpu=CPU]
    [--trigger-bucket=TRIGGER_BUCKET | --trigger-http
      | --trigger-topic=TRIGGER_TOPIC
      | --trigger-event=EVENT_TYPE --trigger-resource=RESOURCE
      | --trigger-event-filters=[ATTRIBUTE=VALUE,…]
        --trigger-event-filters-path-pattern=[ATTRIBUTE=PATH_PATTERN,…]]
    [GCLOUD_WIDE_FLAG …]
```
- DESCRIPTION
- Create or update a Google Cloud Function.
- EXAMPLES
- To deploy a function that is triggered by write events on the document `/messages/{pushId}`, run:

```shell
gcloud functions deploy my_function --runtime=python37 --trigger-event=providers/cloud.firestore/eventTypes/document.write --trigger-resource=projects/project_id/databases/(default)/documents/messages/{pushId}
```

See https://cloud.google.com/functions/docs/calling for more details of using other types of resource as triggers.
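As a further sketch of the same pattern, an HTTP-triggered deployment might look like this; the function name, region, and runtime are illustrative placeholders, assuming deployable source exists in the current directory:

```shell
# Deploy a 2nd gen HTTP function and allow unauthenticated callers.
# "my-http-function", "us-central1", and "nodejs20" are placeholder values.
gcloud functions deploy my-http-function \
  --gen2 \
  --region=us-central1 \
  --runtime=nodejs20 \
  --trigger-http \
  --allow-unauthenticated
```

The assigned endpoint can then be viewed with `gcloud functions describe my-http-function`.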
- POSITIONAL ARGUMENTS
- Function resource - The Cloud Function name to deploy. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

  To set the `project` attribute:
  - provide the argument `NAME` on the command line with a fully specified name;
  - provide the argument `--project` on the command line;
  - set the property `core/project`.

  This must be specified.

  `NAME` - ID of the function or fully qualified identifier for the function.

  To set the `function` attribute:
  - provide the argument `NAME` on the command line.

  This positional argument must be specified if any of the other arguments in this group are specified.
  `--region=REGION` - The Cloud region for the function. Overrides the default `functions/region` property value for this command invocation.

  To set the `region` attribute:
  - provide the argument `NAME` on the command line with a fully specified name;
  - provide the argument `--region` on the command line;
  - set the property `functions/region`.
- FLAGS
`--[no-]allow-unauthenticated` - If set, makes this a public function. This will allow all callers, without checking authentication. Use `--allow-unauthenticated` to enable and `--no-allow-unauthenticated` to disable.

`--concurrency=CONCURRENCY` - Set the maximum number of concurrent requests allowed per container instance. Leave concurrency unspecified to receive the server default value.
`--docker-registry=DOCKER_REGISTRY` - (DEPRECATED) Docker Registry to use for storing the function's Docker images. The option `artifact-registry` is used by default.

With the general transition from Container Registry to Artifact Registry, the option to specify docker registry is deprecated. All container image storage and management will automatically transition to Artifact Registry. For more information, see https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr

`DOCKER_REGISTRY` must be one of: `artifact-registry`, `container-registry`.

`--egress-settings=EGRESS_SETTINGS` - Egress settings controls what traffic is diverted through the VPC Access Connector resource. By default `private-ranges-only` will be used. `EGRESS_SETTINGS` must be one of: `private-ranges-only`, `all`.

`--entry-point=ENTRY_POINT` - Name of a Google Cloud Function (as defined in source code) that will be executed. Defaults to the resource name suffix (ID of the function), if not specified.
`--gen2` - If enabled, this command will use Cloud Functions (Second generation). If disabled with `--no-gen2`, Cloud Functions (First generation) will be used. If not specified, the value of this flag will be taken from the `functions/gen2` configuration property. If the `functions/gen2` configuration property is not set, defaults to looking up the given function and using its generation.

`--ignore-file=IGNORE_FILE` - Override the .gcloudignore file in the source directory and use the specified file instead. By default, the source directory is your current directory. Note that it could be changed by the `--source` flag, in which case your .gcloudignore file will be searched in the overridden directory. For example, `--ignore-file=.mygcloudignore` combined with `--source=./mydir` would point to `./mydir/.mygcloudignore`.

`--ingress-settings=INGRESS_SETTINGS` - Ingress settings controls what traffic can reach the function. By default `all` will be used. `INGRESS_SETTINGS` must be one of: `all`, `internal-only`, `internal-and-gclb`.

`--retry` - If specified, then the function will be retried in case of a failure.
`--run-service-account=RUN_SERVICE_ACCOUNT` - The email address of the IAM service account associated with the Cloud Run service for the function. The service account represents the identity of the running function, and determines what permissions the function has.

If not provided, the function will use the project's default service account for Compute Engine.
`--runtime=RUNTIME` - Runtime in which to run the function.

Required when deploying a new function; optional when updating an existing function.

For a list of available runtimes, run `gcloud functions runtimes list`.

`--runtime-update-policy=RUNTIME_UPDATE_POLICY` - Runtime update policy for the function being deployed. The option `automatic` is used by default. `RUNTIME_UPDATE_POLICY` must be one of: `automatic`, `on-deploy`.

`--security-level=SECURITY_LEVEL; default="secure-always"` - Security level controls whether a function's URL supports HTTPS only or both HTTP and HTTPS. By default, `secure-always` will be used, meaning only HTTPS is supported. `SECURITY_LEVEL` must be one of: `secure-always`, `secure-optional`.

`--serve-all-traffic-latest-revision` - If specified, the latest function revision will be served all traffic.
`--service-account=SERVICE_ACCOUNT` - The email address of the IAM service account associated with the function at runtime. The service account represents the identity of the running function, and determines what permissions the function has.

If not provided, the function will use the project's default service account for Compute Engine.
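To pin the runtime identity instead of falling back to the Compute Engine default, the flag above can be passed at deploy time; the function name and service-account email below are placeholders:

```shell
# Run the function as a dedicated service account with narrowly scoped
# permissions ("fn-runtime@my-project.iam.gserviceaccount.com" is illustrative).
gcloud functions deploy my-function \
  --service-account=fn-runtime@my-project.iam.gserviceaccount.com
```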
`--source=SOURCE` - Location of source code to deploy.

Location of the source can be one of the following three options:

- Source code in Google Cloud Storage (must be a `.zip` archive),
- Reference to source repository or,
- Local filesystem path (root directory of function source).

Note that, depending on your runtime type, Cloud Functions will look for files with specific names for deployable functions. For Node.js, these filenames are `index.js` or `function.js`. For Python, this is `main.py`.

If you do not specify the `--source` flag:

- The current directory will be used for new function deployments.
- If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
- If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.

The value of the flag will be interpreted as a Cloud Storage location if it starts with `gs://`.

The value will be interpreted as a reference to a source repository if it starts with `https://`.

Otherwise, it will be interpreted as the local filesystem path. When deploying source from the local filesystem, this command skips files specified in the `.gcloudignore` file (see `gcloud topic gcloudignore` for more information). If the `.gcloudignore` file doesn't exist, the command will try to create it.

The minimal source repository URL is: `https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}`

By using the URL above, sources from the root directory of the repository on the revision tagged `master` will be used.

If you want to deploy from a revision different from `master`, append one of the following three sources to the URL: `/revisions/${REVISION}`, `/moveable-aliases/${MOVEABLE_ALIAS}`, `/fixed-aliases/${FIXED_ALIAS}`.

If you'd like to deploy sources from a directory different from the root, you must specify a revision, a moveable alias, or a fixed alias, as above, and append `/paths/${PATH_TO_SOURCES_DIRECTORY}` to the URL.

Overall, the URL should match the following regular expression:

```
^https://source\.developers\.google\.com/projects/(?<accountId>[^/]+)/repos/(?<repoName>[^/]+)(((/revisions/(?<commit>[^/]+))|(/moveable-aliases/(?<branch>[^/]+))|(/fixed-aliases/(?<tag>[^/]+)))(/paths/(?<path>.*))?)?$
```

An example of a validly formatted source repository URL is:

```
https://source.developers.google.com/projects/123456789/repos/testrepo/moveable-aliases/alternate-branch/paths/path-to=source
```
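As a sketch of the two non-local source options, with bucket, object, project, and repository names as placeholders:

```shell
# Deploy from a zip archive previously uploaded to Cloud Storage
# ("my-bucket" and "function-source.zip" are illustrative).
gcloud functions deploy my-function \
  --source=gs://my-bucket/function-source.zip

# Deploy from a source repository branch via a moveable alias
# (project and repo names are illustrative).
gcloud functions deploy my-function \
  --source=https://source.developers.google.com/projects/my-project/repos/my-repo/moveable-aliases/main
```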
`--stage-bucket=STAGE_BUCKET` - When deploying a function from a local directory, this flag's value is the name of the Google Cloud Storage bucket in which source code will be stored. Note that if you set the `--stage-bucket` flag when deploying a function, you will need to specify `--source` or `--stage-bucket` in subsequent deployments to update your source code. To use this flag successfully, the account in use must have permissions to write to this bucket. For help granting access, refer to this guide: https://cloud.google.com/storage/docs/access-control/

`--timeout=TIMEOUT` - The function execution timeout, e.g. 30s for 30 seconds. Defaults to the original value for an existing function, or 60 seconds for new functions.

For GCF 1st gen functions, cannot be more than 540s.

For GCF 2nd gen functions, cannot be more than 3600s.

See `$ gcloud topic datetimes` for information on duration formats.
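For instance, a deploy that raises the execution timeout might look like this (function name is a placeholder):

```shell
# Allow up to 2 minutes per execution -- within the 540s limit that
# applies to 1st gen functions.
gcloud functions deploy my-function --timeout=120s
```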
`--trigger-location=TRIGGER_LOCATION` - The location of the trigger, which must be a region or multi-region where the relevant events originate.
`--trigger-service-account=TRIGGER_SERVICE_ACCOUNT` - The email address of the IAM service account associated with the Eventarc trigger for the function. This is used for authenticated invocation.

If not provided, the function will use the project's default service account for Compute Engine.
`--update-labels=[KEY=VALUE,…]` - List of label KEY=VALUE pairs to update. If a label exists, its value is modified. Otherwise, a new label is created.

Keys must start with a lowercase character and contain only hyphens (`-`), underscores (`_`), lowercase characters, and numbers. Values must contain only hyphens (`-`), underscores (`_`), lowercase characters, and numbers.

Label keys starting with `deployment` are reserved for use by deployment tools and cannot be specified manually.

- At most one of these can be specified:
`--binary-authorization=BINARY_AUTHORIZATION` - Name of the Binary Authorization policy that the function image should be checked against when deploying to Cloud Run.
Example: default
The flag is only applicable to 2nd gen functions.
--clear-binary-authorization- Clears the Binary Authorization policy field.
- At most one of these can be specified:
`--build-env-vars-file=FILE_PATH` - Path to a local YAML file with definitions for all build environment variables. All existing build environment variables will be removed before the new build environment variables are added.

`--clear-build-env-vars` - Remove all build environment variables.

`--set-build-env-vars=[KEY=VALUE,…]` - List of key-value pairs to set as build environment variables. All existing build environment variables will be removed first.

- Only `--update-build-env-vars` and `--remove-build-env-vars` can be used together. If both are specified, `--remove-build-env-vars` will be applied first.

`--remove-build-env-vars=[KEY,…]` - List of build environment variables to be removed.

`--update-build-env-vars=[KEY=VALUE,…]` - List of key-value pairs to set as build environment variables.
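As a sketch of combining the two composable build-variable flags in one deploy (function and variable names are placeholders):

```shell
# Remove one build-time variable and upsert another in a single deploy;
# per the rule above, --remove-build-env-vars is applied first.
gcloud functions deploy my-function \
  --remove-build-env-vars=GOPROXY \
  --update-build-env-vars=NPM_CONFIG_LOGLEVEL=warn
```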
- At most one of these can be specified:
`--build-service-account=BUILD_SERVICE_ACCOUNT` - IAM service account whose credentials will be used for the build step. Must be of the format projects/${PROJECT_ID}/serviceAccounts/${ACCOUNT_EMAIL_ADDRESS}.

If not provided, the function will use the project's default service account for Cloud Build.
--clear-build-service-account- Clears the build service account field.
- At most one of these can be specified:
`--build-worker-pool=BUILD_WORKER_POOL` - Name of the Cloud Build Custom Worker Pool that should be used to build the function. The format of this field is `projects/${PROJECT}/locations/${LOCATION}/workerPools/${WORKERPOOL}` where ${PROJECT} is the project id, ${LOCATION} is the location where the worker pool is defined, and ${WORKERPOOL} is the short name of the worker pool.

`--clear-build-worker-pool` - Clears the Cloud Build Custom Worker Pool field.
- At most one of these can be specified:
--clear-docker-repository- Clears the Docker repository configuration of the function.
`--docker-repository=DOCKER_REPOSITORY` - Sets the Docker repository to be used for storing the Cloud Function's Docker images while the function is being deployed. `DOCKER_REPOSITORY` must be an Artifact Registry Docker repository present in the *same* project and location as the Cloud Function.

**Preview:** for 2nd gen functions, a Docker Artifact registry repository in a different project and/or location may be used. Additional requirements apply, see https://cloud.google.com/functions/docs/building#image_registry

The repository name should match one of these patterns:

- `projects/${PROJECT}/locations/${LOCATION}/repositories/${REPOSITORY}`,
- `{LOCATION}-docker.pkg.dev/{PROJECT}/{REPOSITORY}`,

where `${PROJECT}` is the project, `${LOCATION}` is the location of the repository and `${REPOSITORY}` is a valid repository ID.
- At most one of these can be specified:
--clear-env-vars- Remove all environment variables.
`--env-vars-file=FILE_PATH` - Path to a local YAML file with definitions for all environment variables. All existing environment variables will be removed before the new environment variables are added.

`--set-env-vars=[KEY=VALUE,…]` - List of key-value pairs to set as environment variables. All existing environment variables will be removed first.

- Only `--update-env-vars` and `--remove-env-vars` can be used together. If both are specified, `--remove-env-vars` will be applied first.
--remove-env-vars=[KEY,…]- List of environment variables to be removed.
--update-env-vars=[KEY=VALUE,…]- List of key-value pairs to set as environment variables.
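The difference between the destructive and additive variants can be sketched as follows (function and variable names are placeholders):

```shell
# Replace ALL runtime environment variables with exactly these two.
gcloud functions deploy my-function --set-env-vars=FOO=bar,BAZ=qux

# Upsert one variable, leaving every other existing variable untouched.
gcloud functions deploy my-function --update-env-vars=FOO=new-value
```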
- At most one of these can be specified:
--clear-kms-key- Clears the KMS crypto key used to encrypt the function.
`--kms-key=KMS_KEY` - Sets the user managed KMS crypto key used to encrypt the Cloud Function and its resources.

The KMS crypto key name should match the pattern `projects/${PROJECT}/locations/${LOCATION}/keyRings/${KEYRING}/cryptoKeys/${CRYPTOKEY}` where ${PROJECT} is the project, ${LOCATION} is the location of the key ring, and ${KEYRING} is the key ring that contains the ${CRYPTOKEY} crypto key.

If this flag is set, then a Docker repository created in Artifact Registry must be specified using the `--docker-repository` flag and the repository must be encrypted using the *same* KMS key.
- At most one of these can be specified:
`--clear-labels` - Remove all labels. If `--update-labels` is also specified then `--clear-labels` is applied first.

For example, to remove all labels:

```shell
gcloud functions deploy --clear-labels
```

To remove all existing labels and create two new labels, `foo` and `baz`:

```shell
gcloud functions deploy --clear-labels --update-labels foo=bar,baz=qux
```

`--remove-labels=[KEY,…]` - List of label keys to remove. If a label does not exist it is silently ignored. If `--update-labels` is also specified then `--update-labels` is applied first. Label keys starting with `deployment` are reserved for use by deployment tools and cannot be specified manually.
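A non-destructive variant, adjusting labels without clearing the rest (function and label names are placeholders):

```shell
# Drop the "stage" label and upsert "team" in one deploy, leaving all
# other existing labels in place.
gcloud functions deploy my-function \
  --remove-labels=stage \
  --update-labels=team=payments
```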
- At most one of these can be specified:
--clear-max-instances- Clears the maximum instances setting for the function.
If it's any 2nd gen function or a 1st gen HTTP function, this flag sets maximuminstances to 0, which means there is no limit to maximum instances. If it's anevent-driven 1st gen function, this flag sets maximum instances to 3000, whichis the default value for 1st gen functions.
`--max-instances=MAX_INSTANCES` - Sets the maximum number of instances for the function. A function execution that would exceed max-instances times out.
- At most one of these can be specified:
--clear-min-instances- Clears the minimum instances setting for the function.
`--min-instances=MIN_INSTANCES` - Sets the minimum number of instances for the function. This is helpful for reducing cold start times. Defaults to zero.
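The two scaling bounds are commonly set together; a sketch with placeholder values:

```shell
# Keep one instance warm to reduce cold starts, and cap scale-out at 10
# concurrent instances (values are illustrative).
gcloud functions deploy my-function --min-instances=1 --max-instances=10
```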
- At most one of these can be specified:
--clear-secrets- Remove all secret environment variables and volumes.
`--set-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]` - List of secret environment variables and secret volumes to configure. Existing secrets configuration will be overwritten.

You can reference a secret value referred to as `SECRET_VALUE_REF` in the help text in the following ways.

- Use `${SECRET}:${VERSION}` if you are referencing a secret in the same project, where `${SECRET}` is the name of the secret in secret manager (not the full resource name) and `${VERSION}` is the version of the secret which is either a *positive integer* or the label *latest*. For example, use `SECRET_FOO:1` to reference version `1` of the secret `SECRET_FOO` which exists in the same project as the function.
- Use `projects/${PROJECT}/secrets/${SECRET}/versions/${VERSION}` or `projects/${PROJECT}/secrets/${SECRET}:${VERSION}` to reference a secret version using the full resource name, where `${PROJECT}` is either the project number (preferred) or the project ID of the project which contains the secret, `${SECRET}` is the name of the secret in secret manager (not the full resource name) and `${VERSION}` is the version of the secret which is either a *positive integer* or the label *latest*. For example, use `projects/1234567890/secrets/SECRET_FOO/versions/1` or `projects/project_id/secrets/SECRET_FOO/versions/1` to reference version `1` of the secret `SECRET_FOO` that exists in the project `1234567890` or `project_id` respectively. This format is useful when the secret exists in a different project.

To configure the secret as an environment variable, use `SECRET_ENV_VAR=SECRET_VALUE_REF`. To use the value of the secret, read the environment variable `SECRET_ENV_VAR` as you would normally do in the function's programming language.

We recommend using a *numeric* version for secret environment variables as any updates to the secret value are not reflected until new clones start.

To mount the secret within a volume use `/secret_path=SECRET_VALUE_REF` or `/mount_path:/secret_file_path=SECRET_VALUE_REF`. To use the value of the secret, read the file at `/secret_path` as you would normally do in the function's programming language.

For example, `/etc/secrets/secret_foo=SECRET_FOO:latest` or `/etc/secrets:/secret_foo=SECRET_FOO:latest` will make the value of the *latest* version of the secret `SECRET_FOO` available in a file `secret_foo` under the directory `/etc/secrets`. `/etc/secrets` will be considered as the *mount path* and will *not* be available for any other volume.

We recommend referencing the *latest* version when using secret volumes so that the secret's value changes are reflected immediately.
- Only `--update-secrets` and `--remove-secrets` can be used together. If both are specified, then `--remove-secrets` will be applied first.

`--remove-secrets=[SECRET_ENV_VAR,/secret_path,/mount_path:/secret_file_path,…]` - List of secret environment variable names and secret paths to remove.

Existing secrets configuration of secret environment variable names and secret paths not specified in this list will be preserved.

To remove a secret environment variable, use the name of the environment variable `SECRET_ENV_VAR`.

To remove a file within a secret volume or the volume itself, use the secret path as the key (either `/secret_path` or `/mount_path:/secret_file_path`).

`--update-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]` - List of secret environment variables and secret volumes to update. Existing secrets configuration not specified in this list will be preserved.
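Combining both secret forms in a single deploy can be sketched as follows; the function, secret names, and mount path are placeholders:

```shell
# Expose SECRET_FOO (pinned to version 1, per the numeric-version
# recommendation above) as an env var, and mount SECRET_BAR as a file
# that tracks the latest version.
gcloud functions deploy my-function \
  --set-secrets=API_KEY=SECRET_FOO:1,/etc/secrets/secret_bar=SECRET_BAR:latest
```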
- At most one of these can be specified:
--clear-vpc-connector- Clears the VPC connector field.
- Connector resource - The VPC Access connector that the function can connect to. It can be either the fully-qualified URI, or the short name of the VPC Access connector resource. If the short name is used, the connector must belong to the same project. The format of this field is either `projects/${PROJECT}/locations/${LOCATION}/connectors/${CONNECTOR}` or `${CONNECTOR}`, where `${CONNECTOR}` is the short name of the VPC Access connector. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

  To set the `project` attribute:
  - provide the argument `--vpc-connector` on the command line with a fully specified name;
  - provide the argument `--project` on the command line;
  - set the property `core/project`.

  To set the `region` attribute:
  - provide the argument `--vpc-connector` on the command line with a fully specified name;
  - provide the argument `--region` on the command line;
  - set the property `functions/region`.

  `--vpc-connector=VPC_CONNECTOR` - ID of the connector or fully qualified identifier for the connector.

  To set the `connector` attribute:
  - provide the argument `--vpc-connector` on the command line.
--memory=MEMORY- Limit on the amount of memory the function can use.
Allowed values for v1 are: 128MB, 256MB, 512MB, 1024MB, 2048MB, 4096MB, and 8192MB.

Allowed values for GCF 2nd gen are in the format `<number><unit>` with allowed units of "k", "M", "G", "Ki", "Mi", "Gi". An ending 'b' or 'B' is allowed, but both are interpreted as bytes as opposed to bits.
Examples: 1000000K, 1000000Ki, 256Mb, 512M, 1024Mi, 2G, 4Gi.
By default, a new function is limited to 256MB of memory. When deploying anupdate to an existing function, the function keeps its old memory limit unlessyou specify this flag.
`--cpu=CPU` - The number of available CPUs to set. Only valid when `--memory=MEMORY` is specified.

Examples: .5, 2, 2.0, 2000m.
By default, a new function's available CPUs is determined based on its memoryvalue.
When deploying an update that includes memory changes to an existing function,the function's available CPUs will be recalculated based on the new memoryunless this flag is specified. When deploying an update that does not includememory changes to an existing function, the function's "available CPUs" settingwill keep its old value unless you use this flag to change the setting.
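A sketch of setting both resources explicitly on a 2nd gen function (name and values are placeholders):

```shell
# Raise memory and pin CPU in one deploy; --cpu is only valid when
# --memory is also given, per the flag description above.
gcloud functions deploy my-function --gen2 --memory=512Mi --cpu=1
```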
- If you don't specify a trigger when deploying an update to an existing function it will keep its current trigger. You must specify one of the following when deploying a new function: `--trigger-topic`, `--trigger-bucket`, `--trigger-http`, `--trigger-event` AND `--trigger-resource`, `--trigger-event-filters` and optionally `--trigger-event-filters-path-pattern`.
`--trigger-bucket=TRIGGER_BUCKET` - Google Cloud Storage bucket name. Trigger the function when an object is created or overwritten in the specified Cloud Storage bucket.
`--trigger-http` - Function will be assigned an endpoint, which you can view by using the `describe` command. Any HTTP request (of a supported type) to the endpoint will trigger function execution. Supported HTTP request types are: POST, PUT, GET, DELETE, and OPTIONS.

`--trigger-topic=TRIGGER_TOPIC` - Name of Pub/Sub topic. Every message published in this topic will trigger function execution with message contents passed as input data. Note that this flag does not accept the format of projects/PROJECT_ID/topics/TOPIC_ID. Use this flag to specify the final element TOPIC_ID. The PROJECT_ID will be read from the active configuration.
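For instance, a Pub/Sub-driven deploy might look like this; the function and topic names are placeholders:

```shell
# Trigger on messages published to a topic. Note the short TOPIC_ID form:
# the project is taken from the active gcloud configuration.
gcloud functions deploy my-function --trigger-topic=my-topic
```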
`--trigger-event=EVENT_TYPE` - Specifies which action should trigger the function. For a list of acceptable values, call `gcloud functions event-types list`.

`--trigger-resource=RESOURCE` - Specifies which resource from `--trigger-event` is being observed. E.g. if `--trigger-event` is `providers/cloud.storage/eventTypes/object.change`, `--trigger-resource` must be a bucket name. For a list of expected resources, call `gcloud functions event-types list`.

`--trigger-event-filters=[ATTRIBUTE=VALUE,…]` - The Eventarc matching criteria for the trigger. The criteria can be specified either as a single comma-separated argument or as multiple arguments. The filters must include the `type` attribute, as well as any other attributes that are expected for the chosen type.

`--trigger-event-filters-path-pattern=[ATTRIBUTE=PATH_PATTERN,…]` - The Eventarc matching criteria for the trigger in path pattern format. The criteria can be specified as a single comma-separated argument or as multiple arguments.

The provided attribute/value pair will be used with the `match-path-pattern` operator to configure the trigger. See https://cloud.google.com/eventarc/docs/reference/rest/v1/projects.locations.triggers#eventfilter and https://cloud.google.com/eventarc/docs/path-patterns for more details on how to construct path patterns.

For example, to filter on events for Compute Engine VMs in a given zone:
--trigger-event-filters-path-pattern=resourceName='/projects/*/zones/us-central1-a/instances/*'
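An Eventarc-filtered deploy can be sketched as follows; the event type and bucket name are illustrative assumptions, not taken from this page (check `gcloud functions event-types list` for the values valid in your project):

```shell
# 2nd gen Eventarc trigger on Cloud Storage object-finalize events
# ("google.cloud.storage.object.v1.finalized" and "my-bucket" are
# assumed/illustrative values).
gcloud functions deploy my-function \
  --gen2 \
  --trigger-event-filters=type=google.cloud.storage.object.v1.finalized,bucket=my-bucket
```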
- GCLOUD WIDE FLAGS
- These flags are available to all commands:
`--access-token-file`, `--account`, `--billing-project`, `--configuration`, `--flags-file`, `--flatten`, `--format`, `--help`, `--impersonate-service-account`, `--log-http`, `--project`, `--quiet`, `--trace-token`, `--user-output-enabled`, `--verbosity`.

Run `$ gcloud help` for details.

- NOTES
- These variants are also available:

```shell
gcloud alpha functions deploy
gcloud beta functions deploy
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-22 UTC.