Create custom constraints

Google Cloud Organization Policy gives you centralized, programmatic control over your organization's resources. As the organization policy administrator, you can define an organization policy, which is a set of restrictions called constraints that apply to Google Cloud resources and descendants of those resources in the Google Cloud resource hierarchy. You can enforce organization policies at the organization, folder, or project level.

Organization Policy provides predefined constraints for various Google Cloud services. However, if you want more granular, customizable control over the specific fields that are restricted in your organization policies, you can also create custom constraints and use those custom constraints in an organization policy.

Benefits

You can use a custom organization policy to allow or deny specific operations on Serverless for Apache Spark batches, sessions, and session templates. For example, if a request to create a batch workload fails to satisfy custom constraint validation as set by your organization policy, the request fails and an error is returned to the caller.

Policy inheritance

By default, organization policies are inherited by the descendants of the resources on which you enforce the policy. For example, if you enforce a policy on a folder, Google Cloud enforces the policy on all projects in the folder. To learn more about this behavior and how to change it, refer to Hierarchy evaluation rules.

Pricing

The Organization Policy Service, including predefined and custom constraints, is offered at no charge.

Before you begin

  1. Set up your project
    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Roles required to select or create a project

      • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
      • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
      Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

      Go to project selector

    3. Verify that billing is enabled for your Google Cloud project.

    4. Enable the Serverless for Apache Spark API.

      Roles required to enable APIs

      To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

      Enable the API

    5. Install the Google Cloud CLI.

    6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    7. To initialize the gcloud CLI, run the following command:

      gcloud init
    8. Ensure that you know your organization ID.

Required roles

To get the permissions that you need to manage organization policies, ask your administrator to grant you the Organization policy administrator (roles/orgpolicy.policyAdmin) IAM role on the organization resource. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to manage organization policies. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to manage organization policies:

  • orgpolicy.constraints.list
  • orgpolicy.policies.create
  • orgpolicy.policies.delete
  • orgpolicy.policies.list
  • orgpolicy.policies.update
  • orgpolicy.policy.get
  • orgpolicy.policy.set

You might also be able to get these permissions with custom roles or other predefined roles.

Create a custom constraint

A custom constraint is defined in a YAML file by the resources, methods, conditions, and actions it applies to. Serverless for Apache Spark supports custom constraints that are applied to the CREATE method of batch and session resources, and to the CREATE and UPDATE methods of session template resources.

For more information about how to create a custom constraint, see Creating and managing custom organization policies.

Create a custom constraint for a batch resource

To create a YAML file for a Serverless for Apache Spark custom constraint for a batch resource, use the following format. A filled-in example follows the field descriptions below.

name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION

Replace the following:

  • ORGANIZATION_ID: your organization ID, such as 123456789.

  • CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..

  • CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).

  • ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.

  • DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce batch 'category' label requirement". This field has a maximum length of 200 characters.

  • DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value".
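
For example, combining the sample values above into a complete batch constraint file might look like the following sketch; the organization ID 123456789 is a placeholder:

name: organizations/123456789/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch "category" label requirement
description: Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value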

Create a custom constraint for a session resource

To create a YAML file for a Serverless for Apache Spark custom constraint for a session resource, use the following format:

name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION

Replace the following:

  • ORGANIZATION_ID: your organization ID, such as 123456789.

  • CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..

  • CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").

  • ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.

  • DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session to have a ttl < 2 hours". This field has a maximum length of 200 characters.

  • DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session creation if it sets an allowable TTL".

Note: When you set up custom policies for sessions, you must include a constraint on the sessionTemplate field. This constraint must ensure that either the template ID matches an entry in the approved template IDs allow list, or that the sessionTemplate is left empty. This is essential to maintain organization policy restrictions and prevent users from overriding them using session templates.
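
The following sketch shows one way to express that recommendation. The constraint name, organization ID, and approved template IDs (1 and 2) are illustrative placeholders, and the condition mirrors the sessionTemplate examples later on this page:

name: organizations/123456789/customConstraints/custom.sessionTemplateMustBeEmptyOrApproved
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.sessionTemplate == "" || resource.sessionTemplate.endsWith("/sessionTemplates/1") || resource.sessionTemplate.endsWith("/sessionTemplates/2")
actionType: ALLOW
displayName: Require an empty or approved session template
description: Only allow session creation if the session template is empty or is one of the approved template IDs.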

Create a custom constraint for a session template resource

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

To create a YAML file for a Serverless for Apache Spark custom constraint for a session template resource, use the following format. A filled-in example follows the field descriptions below.

name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION

Replace the following:

  • ORGANIZATION_ID: your organization ID, such as 123456789.

  • CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionTemplateNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..

  • CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").

  • ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.

  • DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session template to have a ttl < 2 hours". This field has a maximum length of 200 characters.

  • DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session template creation if it sets an allowable TTL".
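
For example, a filled-in session template constraint based on the name-suffix example later on this page might look like the following; the organization ID 123456789 is a placeholder:

name: organizations/123456789/customConstraints/custom.denySessionTemplateNameNotEndingWithOrgName
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: '!resource.name.endsWith("org-name")'
actionType: DENY
displayName: Deny session templates whose name does not end with org-name
description: Deny session template creation or update if its name does not end with 'org-name'.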

Set up a custom constraint

After you have created the YAML file for a new custom constraint, you must set it up to make it available for organization policies in your organization. To set up a custom constraint, use the gcloud org-policies set-custom-constraint command:

gcloud org-policies set-custom-constraint CONSTRAINT_PATH

Replace CONSTRAINT_PATH with the full path to your custom constraint file, for example, /home/user/customconstraint.yaml. Once completed, your custom constraints are available as organization policies in your list of Google Cloud organization policies. To verify that the custom constraint exists, use the gcloud org-policies list-custom-constraints command:

gcloud org-policies list-custom-constraints --organization=ORGANIZATION_ID

Replace ORGANIZATION_ID with the ID of your organization resource. For more information, see Viewing organization policies.

Enforce a custom constraint

You can enforce a constraint by creating an organization policy that references it, and then applying that organization policy to a Google Cloud resource.

Console

  1. In the Google Cloud console, go to the Organization policies page.

    Go to Organization policies

  2. From the project picker, select the project for which you want to set the organization policy.
  3. From the list on the Organization policies page, select your constraint to view the Policy details page for that constraint.
  4. To configure the organization policy for this resource, click Manage policy.
  5. On the Edit policy page, select Override parent's policy.
  6. Click Add a rule.
  7. In the Enforcement section, select whether enforcement of this organization policy is on or off.
  8. Optional: To make the organization policy conditional on a tag, click Add condition. Note that if you add a conditional rule to an organization policy, you must add at least one unconditional rule or the policy cannot be saved. For more information, see Setting an organization policy with tags.
  9. Click Test changes to simulate the effect of the organization policy. Policy simulation isn't available for legacy managed constraints. For more information, see Test organization policy changes with Policy Simulator.
  10. To finish and apply the organization policy, click Set policy. The policy requires up to 15 minutes to take effect.

gcloud

To create an organization policy with boolean rules, create a policy YAML file that references the constraint:

name: projects/PROJECT_ID/policies/CONSTRAINT_NAME
spec:
  rules:
  - enforce: true

Replace the following:

  • PROJECT_ID: the project on which you want to enforce your constraint.
  • CONSTRAINT_NAME: the name you defined for your custom constraint. For example, custom.batchMustHaveSpecifiedCategoryLabel.

To enforce the organization policy containing the constraint, run the following command:

gcloud org-policies set-policy POLICY_PATH

Replace POLICY_PATH with the full path to your organization policy YAML file. The policy requires up to 15 minutes to take effect.
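
For example, a policy file that enforces the earlier sample batch constraint in a hypothetical project named my-project (both values are placeholders) might look like this:

name: projects/my-project/policies/custom.batchMustHaveSpecifiedCategoryLabel
spec:
  rules:
  - enforce: true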

Test the custom constraint

This section describes how to test custom constraints for batch, session, andsession template resources.

Test the custom constraint for a batch resource

The following batch creation example assumes a custom constraint has been created and enforced on batch creation to require that the batch has a "category" label attached with a value of "retail", "ads", or "service": ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).

Note: The "category" label in the example doesn't have one of the required values.
gcloud dataproc batches submit spark \
    --region us-west1 \
    --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class org.apache.spark.examples.SparkPi \
    --network default \
    --labels category=foo \
    -- 100

Sample output:

Operation denied by custom org policies: ["customConstraints/custom.batchMustHaveSpecifiedCategoryLabel": "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value"]

Test the custom constraint for a session resource

The following session creation example assumes a custom constraint has been created and enforced on session creation to require that the session has a name starting with orgName.

Note: The session name in the example doesn't start with the string orgName.
gcloud beta dataproc sessions create spark test-session --location us-central1

Sample output:

Operation denied by custom org policy: ["customConstraints/custom.denySessionNameNotStartingWithOrgName": "Deny session creation if its name does not start with 'orgName'"]

Test the custom constraint for a session template resource

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

The following session template creation example assumes a custom constraint has been created and enforced on session template creation and update to require that the session template has a name starting with orgName.

Note: The session template name in the example doesn't start with the string orgName.
gcloud beta dataproc session-templates import test-session-template --source=saved-template.yaml

Sample output:

Operation denied by custom org policy: ["customConstraints/custom.denySessionTemplateNameNotStartingWithOrgName": "Deny session template creation or update if its name does not start with 'orgName'"]

Constraints on resources and operations

This section lists the available Google Cloud Serverless for Apache Spark custom constraints for batch, session, and session template resources.

Supported batch constraints

The following Serverless for Apache Spark custom constraints are available to use when you create (submit) a batch workload:

General

  • resource.labels

PySparkBatch

  • resource.pysparkBatch.mainPythonFileUri
  • resource.pysparkBatch.args
  • resource.pysparkBatch.pythonFileUris
  • resource.pysparkBatch.jarFileUris
  • resource.pysparkBatch.fileUris
  • resource.pysparkBatch.archiveUris

SparkBatch

  • resource.sparkBatch.mainJarFileUri
  • resource.sparkBatch.mainClass
  • resource.sparkBatch.args
  • resource.sparkBatch.jarFileUris
  • resource.sparkBatch.fileUris
  • resource.sparkBatch.archiveUris

SparkRBatch

  • resource.sparkRBatch.mainRFileUri
  • resource.sparkRBatch.args
  • resource.sparkRBatch.fileUris
  • resource.sparkRBatch.archiveUris

SparkSqlBatch

  • resource.sparkSqlBatch.queryFileUri
  • resource.sparkSqlBatch.queryVariables
  • resource.sparkSqlBatch.jarFileUris

RuntimeConfig

  • resource.runtimeConfig.version
  • resource.runtimeConfig.containerImage
  • resource.runtimeConfig.properties
  • resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
  • resource.runtimeConfig.autotuningConfig.scenarios
  • resource.runtimeConfig.cohort

ExecutionConfig

  • resource.environmentConfig.executionConfig.serviceAccount
  • resource.environmentConfig.executionConfig.networkUri
  • resource.environmentConfig.executionConfig.subnetworkUri
  • resource.environmentConfig.executionConfig.networkTags
  • resource.environmentConfig.executionConfig.kmsKey
  • resource.environmentConfig.executionConfig.idleTtl
  • resource.environmentConfig.executionConfig.ttl
  • resource.environmentConfig.executionConfig.stagingBucket
  • resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType

PeripheralsConfig

  • resource.environmentConfig.peripheralsConfig.metastoreService
  • resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster

Supported session constraints

The following session attributes are available to use when you create custom constraints on serverless sessions:

General

  • resource.name
  • resource.sparkConnectSession
  • resource.user
  • resource.sessionTemplate

JupyterSession

  • resource.jupyterSession.kernel
  • resource.jupyterSession.displayName

RuntimeConfig

  • resource.runtimeConfig.version
  • resource.runtimeConfig.containerImage
  • resource.runtimeConfig.properties
  • resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
  • resource.runtimeConfig.autotuningConfig.scenarios
  • resource.runtimeConfig.cohort

ExecutionConfig

  • resource.environmentConfig.executionConfig.serviceAccount
  • resource.environmentConfig.executionConfig.networkUri
  • resource.environmentConfig.executionConfig.subnetworkUri
  • resource.environmentConfig.executionConfig.networkTags
  • resource.environmentConfig.executionConfig.kmsKey
  • resource.environmentConfig.executionConfig.idleTtl
  • resource.environmentConfig.executionConfig.ttl
  • resource.environmentConfig.executionConfig.stagingBucket
  • resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType

PeripheralsConfig

  • resource.environmentConfig.peripheralsConfig.metastoreService
  • resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster

Supported session template constraints

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

The following session template attributes are available to use when you create custom constraints on serverless session templates:

General

  • resource.name
  • resource.description
  • resource.sparkConnectSession

JupyterSession

  • resource.jupyterSession.kernel
  • resource.jupyterSession.displayName

RuntimeConfig

  • resource.runtimeConfig.version
  • resource.runtimeConfig.containerImage
  • resource.runtimeConfig.properties
  • resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
  • resource.runtimeConfig.autotuningConfig.scenarios
  • resource.runtimeConfig.cohort

ExecutionConfig

  • resource.environmentConfig.executionConfig.serviceAccount
  • resource.environmentConfig.executionConfig.networkUri
  • resource.environmentConfig.executionConfig.subnetworkUri
  • resource.environmentConfig.executionConfig.networkTags
  • resource.environmentConfig.executionConfig.kmsKey
  • resource.environmentConfig.executionConfig.idleTtl
  • resource.environmentConfig.executionConfig.ttl
  • resource.environmentConfig.executionConfig.stagingBucket
  • resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType

PeripheralsConfig

  • resource.environmentConfig.peripheralsConfig.metastoreService
  • resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster

Example custom constraints for common use cases

This section includes example custom constraints for common use cases for batch, session, and session template resources.

Example custom constraints for a batch resource

The following table provides examples of Serverless for Apache Spark batch custom constraints:

Batch must attach a "category" label with allowed values.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
  actionType: ALLOW
  displayName: Enforce batch "category" label requirement.
  description: Only allow batch creation if it attaches a "category" label with an allowable value.
Batch must set an allowed runtime version.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseAllowedVersion
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
  actionType: ALLOW
  displayName: Enforce batch runtime version.
  description: Only allow batch creation if it sets an allowable runtime version.
Must use SparkSQL.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseSparkSQL
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (has(resource.sparkSqlBatch))
  actionType: ALLOW
  displayName: Enforce batch only use SparkSQL Batch.
  description: Only allow creation of SparkSQL Batch.
Batch must set TTL less than 2 hours.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustSetLessThan2hTtl
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
  actionType: ALLOW
  displayName: Enforce batch TTL.
  description: Only allow batch creation if it sets an allowable TTL.
Batch can't set more than 20 Spark initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of batch Spark executor instances.
  description: Deny batch creation if it specifies more than 20 Spark executor instances.
Batch can't set more than 20 Spark dynamic allocation initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of batch dynamic allocation initial executors.
  description: Deny batch creation if it specifies more than 20 Spark dynamic allocation initial executors.
Batch must not allow more than 20 dynamic allocation executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationMaxExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: (resource.runtimeConfig.properties['spark.dynamicAllocation.enabled'] == 'false') || (('spark.dynamicAllocation.maxExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.maxExecutors']) <= 20))
  actionType: ALLOW
  displayName: Enforce batch maximum number of dynamic allocation executors.
  description: Only allow batch creation if dynamic allocation is disabled or the maximum number of dynamic allocation executors is set to less than or equal to 20.
Batch must set the KMS key to an allowed pattern.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchKmsPattern
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
  actionType: ALLOW
  displayName: Enforce batch KMS Key pattern.
  description: Only allow batch creation if it sets the KMS key to an allowable pattern.
Batch must set the staging bucket prefix to an allowed value.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchStagingBucketPrefix
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
  actionType: ALLOW
  displayName: Enforce batch staging bucket prefix.
  description: Only allow batch creation if it sets the staging bucket prefix to ALLOWED_PREFIX.
Batch executor memory setting must end with a suffix m and be less than 20000 m.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.batchExecutorMemoryMax
  resourceTypes:
  - dataproc.googleapis.com/Batch
  methodTypes:
  - CREATE
  condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000)
  actionType: ALLOW
  displayName: Enforce batch executor maximum memory.
  description: Only allow batch creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.

Example custom constraints for a session resource

The following table provides examples of Serverless for Apache Spark session custom constraints:

Session must set sessionTemplate to an empty string.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustBeEmpty
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: resource.sessionTemplate == ""
  actionType: ALLOW
  displayName: Enforce empty session templates.
  description: Only allow session creation if session template is empty string.
sessionTemplate must be equal to approved template IDs.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateIdMustBeApproved
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: resource.sessionTemplate.startsWith("https://www.googleapis.com/compute/v1/projects/") && resource.sessionTemplate.contains("/locations/") && resource.sessionTemplate.contains("/sessionTemplates/") && (resource.sessionTemplate.endsWith("/1") || resource.sessionTemplate.endsWith("/2") || resource.sessionTemplate.endsWith("/13"))
  actionType: ALLOW
  displayName: Enforce templateId must be 1, 2, or 13.
  description: Only allow session creation if session template ID is in the approved list, that is, 1, 2 and 13.
Session must use end user credentials to authenticate the workload.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.AllowEUCSessions
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType == "END_USER_CREDENTIALS"
  actionType: ALLOW
  displayName: Require end user credential authenticated sessions.
  description: Allow session creation only if the workload is authenticated using end-user credentials.
Session must set an allowed runtime version.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustUseAllowedVersion
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
  actionType: ALLOW
  displayName: Enforce session runtime version.
  description: Only allow session creation if it sets an allowable runtime version.
Session must set TTL less than 2 hours.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustSetLessThan2hTtl
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
  actionType: ALLOW
  displayName: Enforce session TTL.
  description: Only allow session creation if it sets an allowable TTL.
Session can't set more than 20 Spark initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of session Spark executor instances.
  description: Deny session creation if it specifies more than 20 Spark executor instances.
Session can't set more than 20 Spark dynamic allocation initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionDynamicAllocationInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of session dynamic allocation initial executors.
  description: Deny session creation if it specifies more than 20 Spark dynamic allocation initial executors.
Session must set the KMS key to an allowed pattern.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionKmsPattern
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
  actionType: ALLOW
  displayName: Enforce session KMS Key pattern.
  description: Only allow session creation if it sets the KMS key to an allowable pattern.
Session must set the staging bucket prefix to an allowed value.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionStagingBucketPrefix
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
  actionType: ALLOW
  displayName: Enforce session staging bucket prefix.
  description: Only allow session creation if it sets the staging bucket prefix to ALLOWED_PREFIX.
Session executor memory setting must end with a suffix m and be less than 20000 m.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionExecutorMemoryMax
  resourceTypes:
  - dataproc.googleapis.com/Session
  methodTypes:
  - CREATE
  condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000)
  actionType: ALLOW
  displayName: Enforce session executor maximum memory.
  description: Only allow session creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.

Example custom constraints for a session template resource

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

The following table provides examples of Serverless for Apache Spark session template custom constraints:

Session template name must end with org-name.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.denySessionTemplateNameNotEndingWithOrgName
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: '!resource.name.endsWith(''org-name'')'
  actionType: DENY
  displayName: DenySessionTemplateNameNotEndingWithOrgName
  description: Deny session template creation/update if its name does not end with 'org-name'
Session template must set an allowed runtime version.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustUseAllowedVersion
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
  actionType: ALLOW
  displayName: Enforce session template runtime version.
  description: Only allow session template creation or update if it sets an allowable runtime version.
Session template must set TTL less than 2 hours.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustSetLessThan2hTtl
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
  actionType: ALLOW
  displayName: Enforce session template TTL.
  description: Only allow session template creation or update if it sets an allowable TTL.
Session template can't set more than 20 Spark initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of session Spark executor instances.
  description: Deny session template creation or update if it specifies more than 20 Spark executor instances.
Session template can't set more than 20 Spark dynamic allocation initial executors.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateDynamicAllocationInitialExecutorMax20
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20)
  actionType: DENY
  displayName: Enforce maximum number of session dynamic allocation initial executors.
  description: Deny session template creation or update if it specifies more than 20 Spark dynamic allocation initial executors.
Session template must set the KMS key to an allowed pattern.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateKmsPattern
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
  actionType: ALLOW
  displayName: Enforce session KMS Key pattern.
  description: Only allow session template creation or update if it sets the KMS key to an allowable pattern.
Session template must set the staging bucket prefix to an allowed value.
  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateStagingBucketPrefix
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
  actionType: ALLOW
  displayName: Enforce session staging bucket prefix.
  description: Only allow session template creation or update if it sets the staging bucket prefix to ALLOWED_PREFIX.
Session template executor memory setting must end with a suffix m and be less than 20000 m.

  name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateExecutorMemoryMax
  resourceTypes:
  - dataproc.googleapis.com/SessionTemplate
  methodTypes:
  - CREATE
  - UPDATE
  condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000)
  actionType: ALLOW
  displayName: Enforce session executor maximum memory.
  description: Only allow session template creation or update if the executor memory setting ends with a suffix 'm' and is less than 20000 m.

What's next
