Create custom constraints
Google Cloud Organization Policy gives you centralized, programmatic control over your organization's resources. As the organization policy administrator, you can define an organization policy, which is a set of restrictions called constraints that apply to Google Cloud resources and descendants of those resources in the Google Cloud resource hierarchy. You can enforce organization policies at the organization, folder, or project level.
Organization Policy provides predefined constraints for various Google Cloud services. However, if you want more granular, customizable control over the specific fields that are restricted in your organization policies, you can also create custom constraints and use those custom constraints in an organization policy.
Benefits
You can use a custom organization policy to allow or deny specific operations on Serverless for Apache Spark batches, sessions, and session templates. For example, if a request to create a batch workload fails to satisfy custom constraint validation as set by your organization policy, the request fails and an error is returned to the caller.
Policy inheritance
By default, organization policies are inherited by the descendants of the resources on which you enforce the policy. For example, if you enforce a policy on a folder, Google Cloud enforces the policy on all projects in the folder. To learn more about this behavior and how to change it, refer to Hierarchy evaluation rules.
Pricing
The Organization Policy Service, including predefined and custom constraints, is offered at no charge.
Before you begin
- Set up your project
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the Serverless for Apache Spark API.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
To initialize the gcloud CLI, run the following command:

    gcloud init
- Ensure that you know your organization ID.
Required roles
To get the permissions that you need to manage organization policies, ask your administrator to grant you the Organization policy administrator (roles/orgpolicy.policyAdmin) IAM role on the organization resource. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to manage organization policies. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to manage organization policies:
- orgpolicy.constraints.list
- orgpolicy.policies.create
- orgpolicy.policies.delete
- orgpolicy.policies.list
- orgpolicy.policies.update
- orgpolicy.policy.get
- orgpolicy.policy.set
You might also be able to get these permissions with custom roles or other predefined roles.
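For illustration, an administrator could grant this role with the gcloud CLI. The following sketch assumes hypothetical placeholder values for the organization ID and the member email:

    # Grant the Organization Policy Administrator role at the organization level.
    # ORGANIZATION_ID and the member email are hypothetical placeholders.
    gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
        --member="user:org-policy-admin@example.com" \
        --role="roles/orgpolicy.policyAdmin"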
Create a custom constraint
A custom constraint is defined in a YAML file by the resources, methods, conditions, and actions it applies to. Serverless for Apache Spark supports custom constraints that are applied to the CREATE method of batch and session resources, and to the CREATE and UPDATE methods of the session template resource (Preview).
For more information about how to create a custom constraint, see Creating and managing custom organization policies.
Create a custom constraint for a batch resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a batch resource, use the following format:
    name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
    resourceTypes:
    - dataproc.googleapis.com/Batch
    methodTypes:
    - CREATE
    condition: CONDITION
    actionType: ACTION
    displayName: DISPLAY_NAME
    description: DESCRIPTION

Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce batch 'category' label requirement". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value".
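For example, assembling the sample values above produces a complete constraint file like the following (the same constraint appears in the examples table later on this page):

    name: organizations/123456789/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
    resourceTypes:
    - dataproc.googleapis.com/Batch
    methodTypes:
    - CREATE
    condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
    actionType: ALLOW
    displayName: Enforce batch "category" label requirement.
    description: Only allow batch creation if it attaches a "category" label with an allowable value.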
Create a custom constraint for a session resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a session resource, use the following format:
    name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
    resourceTypes:
    - dataproc.googleapis.com/Session
    methodTypes:
    - CREATE
    condition: CONDITION
    actionType: ACTION
    displayName: DISPLAY_NAME
    description: DESCRIPTION

Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session to have a ttl < 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session creation if it sets an allowable TTL".
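For example, the testing section later on this page assumes a constraint named custom.denySessionNameNotStartingWithOrgName. A sketch of such a file follows; the condition shown is an assumption inferred from that constraint's description:

    name: organizations/123456789/customConstraints/custom.denySessionNameNotStartingWithOrgName
    resourceTypes:
    - dataproc.googleapis.com/Session
    methodTypes:
    - CREATE
    # Assumed condition: deny when the session resource name doesn't start with 'orgName'.
    condition: '!resource.name.startsWith("orgName")'
    actionType: DENY
    displayName: Deny session names that do not start with orgName.
    description: Deny session creation if its name does not start with 'orgName'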
Note: To prevent sessions from bypassing organization policy restrictions through a template, also create a custom constraint on the session's sessionTemplate field. This constraint must ensure that either the template ID matches an entry in the approved template IDs allow list, or that the sessionTemplate is left empty. This is essential to maintain organization policy restrictions and prevent users from overriding them using session templates.
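For example, combining the two session example conditions shown later on this page, a single ALLOW constraint can accept either an empty sessionTemplate or an approved template ID (the IDs 1, 2, and 13 are illustrative):

    resource.sessionTemplate == "" ||
    (resource.sessionTemplate.contains("/sessionTemplates/") &&
     (resource.sessionTemplate.endsWith("/1") ||
      resource.sessionTemplate.endsWith("/2") ||
      resource.sessionTemplate.endsWith("/13")))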
Create a custom constraint for a session template resource

Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
To create a YAML file for a Serverless for Apache Spark custom constraint for a session template resource, use the following format:
    name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
    resourceTypes:
    - dataproc.googleapis.com/SessionTemplate
    methodTypes:
    - CREATE
    - UPDATE
    condition: CONDITION
    actionType: ACTION
    displayName: DISPLAY_NAME
    description: DESCRIPTION

Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionTemplateNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session template to have a ttl < 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session template creation if it sets an allowable TTL".
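For example, the following file (also shown in the examples table later on this page) denies session template creation or update when the template name doesn't end with org-name:

    name: organizations/ORGANIZATION_ID/customConstraints/custom.denySessionTemplateNameNotEndingWithOrgName
    resourceTypes:
    - dataproc.googleapis.com/SessionTemplate
    methodTypes:
    - CREATE
    - UPDATE
    condition: '!resource.name.endsWith(''org-name'')'
    actionType: DENY
    displayName: DenySessionTemplateNameNotEndingWithOrgName
    description: Deny session template creation/update if its name does not end with 'org-name'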
Set up a custom constraint
After you have created the YAML file for a new custom constraint, you must set it up to make it available for organization policies in your organization. To set up a custom constraint, use the gcloud org-policies set-custom-constraint command:

    gcloud org-policies set-custom-constraint CONSTRAINT_PATH
Replace CONSTRAINT_PATH with the full path to your custom constraint file, for example, /home/user/customconstraint.yaml. Once completed, your custom constraints are available as organization policies in your list of Google Cloud organization policies.

To verify that the custom constraint exists, use the gcloud org-policies list-custom-constraints command:

    gcloud org-policies list-custom-constraints --organization=ORGANIZATION_ID

Replace ORGANIZATION_ID with the ID of your organization resource. For more information, see Viewing organization policies.
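For example, with the batch constraint file from earlier saved locally (the file path is a hypothetical placeholder):

    # Set up the custom constraint from its YAML definition.
    gcloud org-policies set-custom-constraint ~/batch-category-constraint.yaml

    # Verify that the constraint is now available in the organization.
    gcloud org-policies list-custom-constraints --organization=123456789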
Enforce a custom constraint

You can enforce a constraint by creating an organization policy that references it, and then applying that organization policy to a Google Cloud resource.

Console
- In the Google Cloud console, go to the Organization policies page.
- From the project picker, select the project for which you want to set the organization policy.
- From the list on the Organization policies page, select your constraint to view the Policy details page for that constraint.
- To configure the organization policy for this resource, click Manage policy.
- On the Edit policy page, select Override parent's policy.
- Click Add a rule.
- In the Enforcement section, select whether enforcement of this organization policy is on or off.
- Optional: To make the organization policy conditional on a tag, click Add condition. Note that if you add a conditional rule to an organization policy, you must add at least one unconditional rule or the policy cannot be saved. For more information, see Setting an organization policy with tags.
- Click Test changes to simulate the effect of the organization policy. Policy simulation isn't available for legacy managed constraints. For more information, see Test organization policy changes with Policy Simulator.
- To finish and apply the organization policy, click Set policy. The policy requires up to 15 minutes to take effect.
gcloud
To create an organization policy with boolean rules, create a policy YAML file that references the constraint:
    name: projects/PROJECT_ID/policies/CONSTRAINT_NAME
    spec:
      rules:
      - enforce: true
Replace the following:
- PROJECT_ID: the project on which you want to enforce your constraint.
- CONSTRAINT_NAME: the name you defined for your custom constraint, for example, custom.batchMustHaveSpecifiedCategoryLabel.
To enforce the organization policy containing the constraint, run the following command:
    gcloud org-policies set-policy POLICY_PATH
Replace POLICY_PATH with the full path to your organization policy YAML file. The policy requires up to 15 minutes to take effect.
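For example, to enforce the batch label constraint from earlier on a hypothetical project named my-project, you could save the following file as policy.yaml:

    name: projects/my-project/policies/custom.batchMustHaveSpecifiedCategoryLabel
    spec:
      rules:
      - enforce: true

Then apply it:

    gcloud org-policies set-policy policy.yaml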
Test the custom constraint
This section describes how to test custom constraints for batch, session, andsession template resources.
Test the custom constraint for a batch resource
The following batch creation example assumes a custom constraint has been created and enforced on batch creation to require that the batch has a "category" label attached with a value of "retail", "ads", or "service": ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
    gcloud dataproc batches submit spark \
        --region us-west1 \
        --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
        --class org.apache.spark.examples.SparkPi \
        --network default \
        --labels category=foo \
        -- 100

Sample output:
    Operation denied by custom org policies: ["customConstraints/custom.batchMustHaveSpecifiedCategoryLabel": "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value"]
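By contrast, a request that attaches an allowed label value, such as category=retail, should pass the constraint validation (assuming no other policy blocks it):

    gcloud dataproc batches submit spark \
        --region us-west1 \
        --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
        --class org.apache.spark.examples.SparkPi \
        --network default \
        --labels category=retail \
        -- 100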
Test the custom constraint for a session resource

The following session creation example assumes a custom constraint has been created and enforced on session creation to require that the session has a name starting with orgName.
Note: The request fails because the session name in the example doesn't start with the string orgName.

    gcloud beta dataproc sessions create spark test-session --location us-central1

Sample output:
    Operation denied by custom org policy: ["customConstraints/custom.denySessionNameNotStartingWithOrgName": "Deny session creation if its name does not start with 'orgName'"]
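A session name that begins with orgName should satisfy the constraint, for example (assuming the name also meets the service's session naming rules):

    gcloud beta dataproc sessions create spark orgName-test-session --location us-central1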
Test the custom constraint for a session template resource

Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
The following session template creation example assumes a custom constraint has been created and enforced on session template creation and update to require that the session template has a name starting with orgName.
Note: The request fails because the session template name in the example doesn't start with the string orgName.

    gcloud beta dataproc session-templates import test-session-template --source=saved-template.yaml

Sample output:
    Operation denied by custom org policy: ["customConstraints/custom.denySessionTemplateNameNotStartingWithOrgName": "Deny session template creation or update if its name does not start with 'orgName'"]

Constraints on resources and operations
This section lists the available Google Cloud Serverless for Apache Spark custom constraints for batch, session, and session template resources.
Supported batch constraints
The following Serverless for Apache Spark custom constraints are available to use when you create (submit) a batch workload:
General
- resource.labels
PySparkBatch
- resource.pysparkBatch.mainPythonFileUri
- resource.pysparkBatch.args
- resource.pysparkBatch.pythonFileUris
- resource.pysparkBatch.jarFileUris
- resource.pysparkBatch.fileUris
- resource.pysparkBatch.archiveUris
SparkBatch
- resource.sparkBatch.mainJarFileUri
- resource.sparkBatch.mainClass
- resource.sparkBatch.args
- resource.sparkBatch.jarFileUris
- resource.sparkBatch.fileUris
- resource.sparkBatch.archiveUris
SparkRBatch
- resource.sparkRBatch.mainRFileUri
- resource.sparkRBatch.args
- resource.sparkRBatch.fileUris
- resource.sparkRBatch.archiveUris
SparkSqlBatch
- resource.sparkSqlBatch.queryFileUri
- resource.sparkSqlBatch.queryVariables
- resource.sparkSqlBatch.jarFileUris
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
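As an illustration of how these fields are referenced, the following CEL condition (a sketch; the service account domain is a hypothetical value) could be used with actionType: ALLOW to require that batch workloads run as a service account from a specific project:

    has(resource.environmentConfig.executionConfig.serviceAccount) &&
    resource.environmentConfig.executionConfig.serviceAccount.endsWith("@my-project.iam.gserviceaccount.com")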
Supported session constraints
The following session attributes are available to use when you create custom constraints on serverless sessions:
General
- resource.name
- resource.sparkConnectSession
- resource.user
- resource.sessionTemplate
JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
Supported session template constraints
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
The following session template attributes are available to use when you create custom constraints on serverless session templates:
General
- resource.name
- resource.description
- resource.sparkConnectSession
JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
Example custom constraints for common use cases
This section includes example custom constraints for common use cases for batch, session, and session template resources.
Example custom constraints for a batch resource
The following table provides examples of Serverless for Apache Spark batch custom constraints:
| Description | Constraint syntax |
|---|---|
| Batch must attach a "category" label with allowed values. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']) actionType: ALLOW displayName: Enforce batch "category" label requirement. description: Only allow batch creation if it attaches a "category" label with an allowable value. |
| Batch must set an allowed runtime version. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseAllowedVersion resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"]) actionType: ALLOW displayName: Enforce batch runtime version. description: Only allow batch creation if it sets an allowable runtime version. |
| Must use SparkSQL. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseSparkSQL resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.sparkSqlBatch)) actionType: ALLOW displayName: Enforce batch only use SparkSQL Batch. description: Only allow creation of SparkSQL Batch. |
| Batch must set TTL less than 2 hours. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustSetLessThan2hTtl resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h')) actionType: ALLOW displayName: Enforce batch TTL. description: Only allow batch creation if it sets an allowable TTL. |
| Batch can't set more than 20 Spark initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20) actionType: DENY displayName: Enforce maximum number of batch Spark executor instances. description: Deny batch creation if it specifies more than 20 Spark executor instances. |
| Batch can't set more than 20 Spark dynamic allocation initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20) actionType: DENY displayName: Enforce maximum number of batch dynamic allocation initial executors. description: Deny batch creation if it specifies more than 20 Spark dynamic allocation initial executors. |
| Batch must not allow more than 20 dynamic allocation executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationMaxExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (resource.runtimeConfig.properties['spark.dynamicAllocation.enabled'] == 'false') || (('spark.dynamicAllocation.maxExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.maxExecutors']) <= 20)) actionType: ALLOW displayName: Enforce batch maximum number of dynamic allocation executors. description: Only allow batch creation if dynamic allocation is disabled or the maximum number of dynamic allocation executors is set to less than or equal to 20. |
| Batch must set the KMS key to an allowed pattern. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchKmsPattern resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$') actionType: ALLOW displayName: Enforce batch KMS Key pattern. description: Only allow batch creation if it sets the KMS key to an allowable pattern. |
| Batch must set the staging bucket prefix to an allowed value. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchStagingBucketPrefix resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX) actionType: ALLOW displayName: Enforce batch staging bucket prefix. description: Only allow batch creation if it sets an allowable staging bucket prefix. |
| Batch executor memory setting must end with the suffix m and be less than 20000m. | name: organizations/ORGANIZATION_ID/customConstraints/custom.batchExecutorMemoryMax resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000) actionType: ALLOW displayName: Enforce batch executor maximum memory. description: Only allow batch creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m. |
Example custom constraints for a session resource
The following table provides examples of Serverless for Apache Spark session custom constraints:
| Description | Constraint syntax |
|---|---|
| Session must set sessionTemplate to an empty string. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustBeEmpty resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.sessionTemplate == "" actionType: ALLOW displayName: Enforce empty session templates. description: Only allow session creation if session template is empty string. |
| sessionTemplate must be equal to approved template IDs. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateIdMustBeApproved resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.sessionTemplate.startsWith("https://www.googleapis.com/compute/v1/projects/") && resource.sessionTemplate.contains("/locations/") && resource.sessionTemplate.contains("/sessionTemplates/") && (resource.sessionTemplate.endsWith("/1") || resource.sessionTemplate.endsWith("/2") || resource.sessionTemplate.endsWith("/13")) actionType: ALLOW displayName: Enforce templateId must be 1, 2, or 13. description: Only allow session creation if session template ID is in the approved list, that is, 1, 2 and 13. |
| Session must use end user credentials to authenticate the workload. | name: organizations/ORGANIZATION_ID/customConstraints/custom.AllowEUCSessions resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType == "END_USER_CREDENTIALS" actionType: ALLOW displayName: Require end user credential authenticated sessions. description: Allow session creation only if the workload is authenticated using end-user credentials. |
| Session must set an allowed runtime version. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustUseAllowedVersion resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"]) actionType: ALLOW displayName: Enforce session runtime version. description: Only allow session creation if it sets an allowable runtime version. |
| Session must set TTL less than 2 hours. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustSetLessThan2hTtl resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h')) actionType: ALLOW displayName: Enforce session TTL. description: Only allow session creation if it sets an allowable TTL. |
| Session can't set more than 20 Spark initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20) actionType: DENY displayName: Enforce maximum number of session Spark executor instances. description: Deny session creation if it specifies more than 20 Spark executor instances. |
| Session can't set more than 20 Spark dynamic allocation initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionDynamicAllocationInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20) actionType: DENY displayName: Enforce maximum number of session dynamic allocation initial executors. description: Deny session creation if it specifies more than 20 Spark dynamic allocation initial executors. |
| Session must set the KMS key to an allowed pattern. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionKmsPattern resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$') actionType: ALLOW displayName: Enforce session KMS Key pattern. description: Only allow session creation if it sets the KMS key to an allowable pattern. |
| Session must set the staging bucket prefix to an allowed value. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionStagingBucketPrefix resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX) actionType: ALLOW displayName: Enforce session staging bucket prefix. description: Only allow session creation if it sets an allowable staging bucket prefix. |
| Session executor memory setting must end with the suffix m and be less than 20000m. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionExecutorMemoryMax resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000) actionType: ALLOW displayName: Enforce session executor maximum memory. description: Only allow session creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m. |
Example custom constraints for a session template resource
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
The following table provides examples of Serverless for Apache Spark session template custom constraints:
| Description | Constraint syntax |
|---|---|
| Session template name must end with org-name. | name: organizations/ORGANIZATION_ID/customConstraints/custom.denySessionTemplateNameNotEndingWithOrgName resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: '!resource.name.endsWith(''org-name'')' actionType: DENY displayName: DenySessionTemplateNameNotEndingWithOrgName description: Deny session template creation/update if its name does not end with 'org-name' |
| Session template must set an allowed runtime version. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustUseAllowedVersion resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"]) actionType: ALLOW displayName: Enforce session template runtime version. description: Only allow session template creation or update if it sets an allowable runtime version. |
| Session template must set TTL less than 2 hours. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustSetLessThan2hTtl resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h')) actionType: ALLOW displayName: Enforce session template TTL. description: Only allow session template creation or update if it sets an allowable TTL. |
| Session template can't set more than 20 Spark initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20) actionType: DENY displayName: Enforce maximum number of session Spark executor instances. description: Deny session template creation or update if it specifies more than 20 Spark executor instances. |
| Session template can't set more than 20 Spark dynamic allocation initial executors. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateDynamicAllocationInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20) actionType: DENY displayName: Enforce maximum number of session dynamic allocation initial executors. description: Deny session template creation or update if it specifies more than 20 Spark dynamic allocation initial executors. |
| Session template must set the KMS key to an allowed pattern. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateKmsPattern resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$') actionType: ALLOW displayName: Enforce session KMS Key pattern. description: Only allow session template creation or update if it sets the KMS key to an allowable pattern. |
| Session template must set the staging bucket prefix to an allowed value. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateStagingBucketPrefix resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX) actionType: ALLOW displayName: Enforce session template staging bucket prefix. description: Only allow session template creation or update if it sets an allowable staging bucket prefix. |
| Session template executor memory setting must end with the suffix m and be less than 20000m. | name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateExecutorMemoryMax resourceTypes: - dataproc.googleapis.com/SessionTemplate methodTypes: - CREATE - UPDATE condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000) actionType: ALLOW displayName: Enforce session executor maximum memory. description: Only allow session template creation or update if the executor memory setting ends with a suffix 'm' and is less than 20000 m. |
What's next
- For more information about organization policies, see Introduction to the Organization Policy Service.
- Learn more about how to create and manage organization policies.
- See the full list of predefined Organization Policy constraints.