Create Cloud Composer environments
This page explains how to create a Cloud Composer environment.
- For more information about environments, see Environment architecture.
- For more information about creating an environment with Terraform, see Create environments (Terraform).
Before you begin
Enable the Cloud Composer API. For the full list of services used by Cloud Composer, see Services required by Cloud Composer.
The approximate time to create an environment is 25 minutes.
If you create an environment with Terraform, the service account used by Terraform must have a role with the composer.environments.create permission enabled. For more information about the service account for Terraform, see Google Provider Configuration Reference.
For more information about using Terraform to create a Cloud Composer environment, see Terraform documentation.
For more information about additional parameters, see Terraform Argument Reference.
VPC SC: To deploy Cloud Composer environments inside a security perimeter, see Configuring VPC SC. When used with Cloud Composer, VPC Service Controls have several known limitations.
Step 1. Create or choose an environment's service account
When you create an environment, you specify a service account. This service account is called the environment's service account. Your environment uses this service account to perform most of the operations.
The service account for your environment is not a user account. A service account is a special kind of account used by an application or a virtual machine (VM) instance, not a person.
You can't change the service account of your environment later.
Warning: Your environment's service account can have overly broad permissions on your project. Because your environment runs DAGs on behalf of your environment's service account, users who can add and modify DAGs in your environment's bucket can run their code on behalf of the environment's service account and exercise all permissions of this account. Make sure that you are familiar with security considerations for environment's service accounts and understand how this account interacts with permissions and roles that you grant to individual users in your project.
If you don't have a service account for Cloud Composer environments in your project yet, create it.
See Create environments (Terraform) for an extended example of creating a service account for your environment in Terraform.
Important: You can use the same service account for more than one Cloud Composer environment. In this case, if you grant extra permissions to access resources in your project to this account, all environments that use it will get the same permissions.
To create a new service account for your environment:
Create a new service account as described in the Identity and Access Management documentation.
Grant a role to it, as described in the Identity and Access Management documentation. The required role is Composer Worker (composer.worker).
To access other resources in your Google Cloud project, grant extra permissions to access those resources to this service account. The Composer Worker (composer.worker) role provides the required set of permissions in most cases. Add extra permissions to this service account only when it's necessary for the operation of your DAGs.
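The two steps above can also be done with the gcloud CLI. This is a minimal sketch, not the official procedure; the project ID (example-project) and account name (composer-env-account) are placeholder assumptions:

```shell
# Create a service account for Cloud Composer environments.
# "composer-env-account" and "example-project" are placeholder names.
gcloud iam service-accounts create composer-env-account \
    --project example-project \
    --display-name "Composer environment service account"

# Grant the Composer Worker role to the new service account.
gcloud projects add-iam-policy-binding example-project \
    --member "serviceAccount:composer-env-account@example-project.iam.gserviceaccount.com" \
    --role "roles/composer.worker"
```

These commands require an authenticated gcloud session with permission to manage IAM in the project.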
Step 2. Basic setup
This step creates a Cloud Composer environment with defaultparameters in the specified location.
Note: All other steps in this guide explain how to customize and configure different aspects of your environment. All of the remaining steps are optional.
Console
In the Google Cloud console, go to the Create environment page.
In the Name field, enter a name for your environment.
The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
In the Location drop-down list, choose a location for your environment.
A location is the region where the environment is located.
In the Image version drop-down list, select a Cloud Composer image with the required version of Airflow.
In the Service account drop-down list, select a service account for your environment.
If you don't have a service account for your environment yet, see Create or choose an environment's service account.
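The naming rules above can be checked locally before you attempt to create an environment. This is a sketch whose regular expression is derived from the stated rules, not taken from Google's own validation code:

```shell
# Returns exit code 0 if the name is a valid environment name:
# starts with a lowercase letter, then up to 62 lowercase letters,
# digits, or hyphens, and does not end with a hyphen.
is_valid_env_name() {
  printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_env_name "example-environment" && echo "valid"
is_valid_env_name "Example-Environment" || echo "invalid: uppercase letters"
is_valid_env_name "env-" || echo "invalid: trailing hyphen"
```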
gcloud
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version IMAGE_VERSION \
    --service-account "SERVICE_ACCOUNT"

Replace:
- ENVIRONMENT_NAME with the name of the environment. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- SERVICE_ACCOUNT with the service account for your environment.
- IMAGE_VERSION with the name of a Cloud Composer image.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com"

API
Construct an environments.create API request. Specify the configuration in the Environment resource.

{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "softwareConfig": {
      "imageVersion": "IMAGE_VERSION"
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- PROJECT_ID with the Project ID.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- ENVIRONMENT_NAME with the environment name. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- IMAGE_VERSION with the name of a Cloud Composer image.
- SERVICE_ACCOUNT with the service account for your environment.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments

{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "softwareConfig": {
      "imageVersion": "composer-3-airflow-2.10.5-build.23"
    },
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
To create an environment with default parameters in a specified location, add the following resource block to your Terraform configuration and run terraform apply.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    software_config {
      image_version = "IMAGE_VERSION"
    }
    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- ENVIRONMENT_NAME with the name of the environment. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- IMAGE_VERSION with the name of a Cloud Composer image.
- SERVICE_ACCOUNT with the service account for your environment.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    software_config {
      image_version = "composer-3-airflow-2.10.5-build.23"
    }
    node_config {
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 3. (Optional) Configure environment scale and performance parameters
To specify the scale and performance configuration for your environment,select the environment size and workloads configuration.
You can change all performance and scale parameters after you create an environment.
The following parameters control the scale and performance:
Environment size. Controls the performance parameters of the managed Cloud Composer infrastructure that includes the Airflow database. Consider selecting a larger environment size if you want to run a large number of DAGs and tasks with higher infrastructure performance. For example, a larger environment size increases the amount of Airflow task log entries that your environment can process with minimal delay.
Workloads configuration. Controls the scale and performance of Airflow components that run in a GKE cluster of your environment.
Airflow scheduler. Parses DAG definition files, schedules DAG runs based on the schedule interval, and queues tasks for execution by Airflow workers.
Your environment can run more than one Airflow scheduler at the same time. Use multiple schedulers to distribute load between several scheduler instances for better performance and reliability.
Increasing the number of schedulers does not always improve Airflow performance. For example, having only one scheduler might provide better performance than having two. This might happen when the extra scheduler is not utilized, and thus consumes resources of your environment without contributing to overall performance. The actual scheduler performance depends on the number of Airflow workers, the number of DAGs and tasks that run in your environment, and the configuration of both Airflow and the environment.
We recommend starting with two schedulers and then monitoring the performance of your environment. If you change the number of schedulers, you can always scale your environment back to the original number of schedulers.
For more information about configuring multiple schedulers, see Airflow documentation.
Airflow triggerer. Asynchronously monitors all deferred tasks in your environment. If you have at least one triggerer instance in your environment (or at least two in highly resilient environments), you can use deferrable operators in your DAGs.
In Cloud Composer 3, the Airflow triggerer is enabled by default. If you want to create an environment without a triggerer, set the number of triggerers to zero.
Airflow DAG processor. Processes DAG files and turns them into DAG objects. In Cloud Composer 3, this part of the scheduler runs as a separate environment component.
Airflow web server. Runs the Airflow web interface where you can monitor, manage, and visualize your DAGs.
Airflow workers. Execute tasks that are scheduled by Airflow schedulers. The number of workers in your environment scales dynamically between the configured minimum and maximum, depending on the number of tasks in the queue.
Console
You can select a preset for your environment. When you select a preset, the scale and performance parameters for that preset are automatically selected. You also have an option to select a custom preset and specify all scale and performance parameters for your environment.
To select the scale and performance configuration for your environment, on the Create environment page:
To use predefined values, in the Environment resources section, click Small, Medium, Large, or Extra Large.
To specify custom values for the scale and performance parameters:
In the Environment resources section, click Custom.
In the Scheduler section, set the number of schedulers you want to use, and the resource allocation for their CPU, memory, and storage.
In the Triggerer section, use the Number of triggerers field to enter the number of triggerers in your environment.
If you don't want to use deferrable operators in your DAGs, set the number of triggerers to zero.
If you set at least one triggerer for your environment, use the CPU and Memory fields to configure resource allocation for your triggerers.
In the DAG processor section, specify the number of DAG processors in your environment and the amount of CPUs, memory, and storage for each DAG processor.
Highly resilient environments require at least two DAG processors.
In the Web server section, specify the amount of CPUs, memory, and storage for the web server.
In the Worker section, specify:
- The minimum and maximum number of workers for autoscaling limits in your environment.
- The CPU, memory, and storage allocation for your workers.
In the Core infrastructure section, in the Environment size drop-down list, select the environment size.
gcloud
When you create an environment, the following arguments control the scale and performance parameters of your environment.
Note: If you omit an argument, Cloud Composer uses the default value.
- --environment-size specifies the environment size.
- --scheduler-count specifies the number of schedulers.
- --scheduler-cpu specifies the number of CPUs for an Airflow scheduler.
- --scheduler-memory specifies the amount of memory for an Airflow scheduler.
- --scheduler-storage specifies the amount of disk space for an Airflow scheduler.
- --triggerer-count specifies the number of Airflow triggerers in your environment. The default value for this flag is 0. You need triggerers if you want to use deferrable operators in your DAGs.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0 or a value between 2 and 10.
- --triggerer-cpu specifies the number of CPUs for an Airflow triggerer, in vCPU units. Allowed values: 0.5, 0.75, 1. The default value is 0.5.
- --triggerer-memory specifies the amount of memory for an Airflow triggerer, in GB. The default value is 0.5. The minimum required memory is equal to the number of CPUs allocated for the triggerers. The maximum allowed value is equal to the number of triggerer CPUs multiplied by 6.5.
  For example, if you set the --triggerer-cpu flag to 1, the minimum value for --triggerer-memory is 1 and the maximum value is 6.5.
- --dag-processor-count specifies the number of DAG processors in your environment. Highly resilient environments require at least two DAG processors.
- --dag-processor-cpu specifies the number of CPUs for the DAG processor.
- --dag-processor-memory specifies the amount of memory for the DAG processor.
- --dag-processor-storage specifies the amount of disk space for the DAG processor.
- --web-server-cpu specifies the number of CPUs for the Airflow web server.
- --web-server-memory specifies the amount of memory for the Airflow web server.
- --web-server-storage specifies the amount of disk space for the Airflow web server.
- --worker-cpu specifies the number of CPUs for an Airflow worker.
- --worker-memory specifies the amount of memory for an Airflow worker.
- --worker-storage specifies the amount of disk space for an Airflow worker.
- --min-workers specifies the minimum number of Airflow workers. Your environment's cluster runs at least this number of workers.
- --max-workers specifies the maximum number of Airflow workers. Your environment's cluster runs at most this number of workers.
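The triggerer memory rule (minimum equal to the CPU count, maximum 6.5 times the CPU count) can be sketched as a quick calculation. The function name here is illustrative, not part of the gcloud CLI:

```shell
# Print the allowed --triggerer-memory range (in GB) for a given
# --triggerer-cpu value, per the rule: min = cpu, max = cpu * 6.5.
triggerer_memory_bounds() {
  awk -v cpu="$1" 'BEGIN { printf "min=%gGB max=%gGB\n", cpu, cpu * 6.5 }'
}

triggerer_memory_bounds 1     # min=1GB max=6.5GB
triggerer_memory_bounds 0.5   # min=0.5GB max=3.25GB
```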
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --environment-size ENVIRONMENT_SIZE \
    --scheduler-count SCHEDULER_COUNT \
    --scheduler-cpu SCHEDULER_CPU \
    --scheduler-memory SCHEDULER_MEMORY \
    --scheduler-storage SCHEDULER_STORAGE \
    --triggerer-count TRIGGERER_COUNT \
    --triggerer-cpu TRIGGERER_CPU \
    --triggerer-memory TRIGGERER_MEMORY \
    --dag-processor-count DAG_PROCESSOR_COUNT \
    --dag-processor-cpu DAG_PROCESSOR_CPU \
    --dag-processor-memory DAG_PROCESSOR_MEMORY \
    --dag-processor-storage DAG_PROCESSOR_STORAGE \
    --web-server-cpu WEB_SERVER_CPU \
    --web-server-memory WEB_SERVER_MEMORY \
    --web-server-storage WEB_SERVER_STORAGE \
    --worker-cpu WORKER_CPU \
    --worker-memory WORKER_MEMORY \
    --worker-storage WORKER_STORAGE \
    --min-workers WORKERS_MIN \
    --max-workers WORKERS_MAX

Replace:
- ENVIRONMENT_SIZE with small, medium, large, or extra-large.
- SCHEDULER_COUNT with the number of schedulers.
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler.
- SCHEDULER_STORAGE with the disk size for a scheduler.
- TRIGGERER_COUNT with the number of triggerers.
- TRIGGERER_CPU with the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY with the amount of memory for a triggerer, in GB.
- DAG_PROCESSOR_COUNT with the number of DAG processors.
- DAG_PROCESSOR_CPU with the number of CPUs for the DAG processor.
- DAG_PROCESSOR_MEMORY with the amount of memory for the DAG processor.
- DAG_PROCESSOR_STORAGE with the amount of disk space for the DAG processor.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server.
- WEB_SERVER_STORAGE with the amount of disk space for the web server.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker.
- WORKER_STORAGE with the disk size for a worker.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.

For memory and storage amounts, you can use KB for kilobyte, MB for megabyte, or GB for gigabyte. The default size unit is GB. For example, 10GB produces 10 gigabytes of storage.

Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --environment-size small \
    --scheduler-count 1 \
    --scheduler-cpu 0.5 \
    --scheduler-memory 2.5GB \
    --scheduler-storage 2GB \
    --triggerer-count 1 \
    --triggerer-cpu 0.5 \
    --triggerer-memory 0.5GB \
    --dag-processor-count 1 \
    --dag-processor-cpu 0.5 \
    --dag-processor-memory 2GB \
    --dag-processor-storage 1GB \
    --web-server-cpu 1 \
    --web-server-memory 2.5GB \
    --web-server-storage 2GB \
    --worker-cpu 1 \
    --worker-memory 2GB \
    --worker-storage 2GB \
    --min-workers 2 \
    --max-workers 4

API
When you create an environment, in the Environment > EnvironmentConfig > WorkloadsConfig resource, specify environment scale and performance parameters.

{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "workloadsConfig": {
      "scheduler": {
        "cpu": SCHEDULER_CPU,
        "memoryGb": SCHEDULER_MEMORY,
        "storageGb": SCHEDULER_STORAGE,
        "count": SCHEDULER_COUNT
      },
      "triggerer": {
        "count": TRIGGERER_COUNT,
        "cpu": TRIGGERER_CPU,
        "memoryGb": TRIGGERER_MEMORY
      },
      "dagProcessor": {
        "count": DAG_PROCESSOR_COUNT,
        "cpu": DAG_PROCESSOR_CPU,
        "memoryGb": DAG_PROCESSOR_MEMORY,
        "storageGb": DAG_PROCESSOR_STORAGE
      },
      "webServer": {
        "cpu": WEB_SERVER_CPU,
        "memoryGb": WEB_SERVER_MEMORY,
        "storageGb": WEB_SERVER_STORAGE
      },
      "worker": {
        "cpu": WORKER_CPU,
        "memoryGb": WORKER_MEMORY,
        "storageGb": WORKER_STORAGE,
        "minCount": WORKERS_MIN,
        "maxCount": WORKERS_MAX
      }
    },
    "environmentSize": "ENVIRONMENT_SIZE",
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE with the disk size for a scheduler, in GB.
- SCHEDULER_COUNT with the number of schedulers.
- TRIGGERER_COUNT with the number of triggerers. The default value is 0. You need triggerers if you want to use deferrable operators in your DAGs.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0 or a value between 2 and 10.
  If you use at least one triggerer, you must also specify the TRIGGERER_CPU and TRIGGERER_MEMORY parameters:
  - TRIGGERER_CPU specifies the number of CPUs for a triggerer, in vCPU units. Allowed values: 0.5, 0.75, 1.
  - TRIGGERER_MEMORY configures the amount of memory for a triggerer. The minimum required memory is equal to the number of CPUs allocated for the triggerers. The maximum allowed value is equal to the number of triggerer CPUs multiplied by 6.5. For example, if you set TRIGGERER_CPU to 1, the minimum value for TRIGGERER_MEMORY is 1 and the maximum value is 6.5.
- DAG_PROCESSOR_COUNT with the number of DAG processors. Highly resilient environments require at least two DAG processors.
- DAG_PROCESSOR_CPU with the number of CPUs for the DAG processor, in vCPU units.
- DAG_PROCESSOR_MEMORY with the amount of memory for the DAG processor, in GB.
- DAG_PROCESSOR_STORAGE with the amount of disk space for the DAG processor, in GB.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE with the disk size for the web server, in GB.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker, in GB.
- WORKER_STORAGE with the disk size for a worker, in GB.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- ENVIRONMENT_SIZE with the environment size: ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, ENVIRONMENT_SIZE_LARGE, or ENVIRONMENT_SIZE_EXTRA_LARGE.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments

{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "workloadsConfig": {
      "scheduler": {
        "cpu": 2.5,
        "memoryGb": 2.5,
        "storageGb": 2,
        "count": 1
      },
      "triggerer": {
        "cpu": 0.5,
        "memoryGb": 0.5,
        "count": 1
      },
      "dagProcessor": {
        "count": 1,
        "cpu": 0.5,
        "memoryGb": 2,
        "storageGb": 1
      },
      "webServer": {
        "cpu": 1,
        "memoryGb": 2.5,
        "storageGb": 2
      },
      "worker": {
        "cpu": 1,
        "memoryGb": 2,
        "storageGb": 2,
        "minCount": 2,
        "maxCount": 4
      }
    },
    "environmentSize": "ENVIRONMENT_SIZE_SMALL",
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
When you create an environment, the following arguments control the scale and performance parameters of your environment.

In the config block:
- The environment_size field controls the environment size.

In the workloads_config block:
- The scheduler.cpu field specifies the number of CPUs for an Airflow scheduler.
- The scheduler.memory_gb field specifies the amount of memory for an Airflow scheduler.
- The scheduler.storage_gb field specifies the amount of disk space for a scheduler.
- The scheduler.count field specifies the number of schedulers in your environment.
- The triggerer.cpu field specifies the number of CPUs for an Airflow triggerer.
- The triggerer.memory_gb field specifies the amount of memory for an Airflow triggerer.
- The triggerer.count field specifies the number of triggerers in your environment.
- The dag_processor.cpu field specifies the number of CPUs for a DAG processor.
- The dag_processor.memory_gb field specifies the amount of memory for a DAG processor.
- The dag_processor.storage_gb field specifies the amount of disk space for a DAG processor.
- The dag_processor.count field specifies the number of DAG processors. Highly resilient environments require at least two DAG processors.
- The web_server.cpu field specifies the number of CPUs for the Airflow web server.
- The web_server.memory_gb field specifies the amount of memory for the Airflow web server.
- The web_server.storage_gb field specifies the amount of disk space for the Airflow web server.
- The worker.cpu field specifies the number of CPUs for an Airflow worker.
- The worker.memory_gb field specifies the amount of memory for an Airflow worker.
- The worker.storage_gb field specifies the amount of disk space for an Airflow worker.
- The worker.min_count field specifies the minimum number of workers in your environment.
- The worker.max_count field specifies the maximum number of workers in your environment.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    workloads_config {
      scheduler {
        cpu        = SCHEDULER_CPU
        memory_gb  = SCHEDULER_MEMORY
        storage_gb = SCHEDULER_STORAGE
        count      = SCHEDULER_COUNT
      }
      triggerer {
        count     = TRIGGERER_COUNT
        cpu       = TRIGGERER_CPU
        memory_gb = TRIGGERER_MEMORY
      }
      dag_processor {
        cpu        = DAG_PROCESSOR_CPU
        memory_gb  = DAG_PROCESSOR_MEMORY
        storage_gb = DAG_PROCESSOR_STORAGE
        count      = DAG_PROCESSOR_COUNT
      }
      web_server {
        cpu        = WEB_SERVER_CPU
        memory_gb  = WEB_SERVER_MEMORY
        storage_gb = WEB_SERVER_STORAGE
      }
      worker {
        cpu        = WORKER_CPU
        memory_gb  = WORKER_MEMORY
        storage_gb = WORKER_STORAGE
        min_count  = WORKERS_MIN
        max_count  = WORKERS_MAX
      }
    }

    environment_size = "ENVIRONMENT_SIZE"

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- ENVIRONMENT_NAME with the name of the environment.
- LOCATION with the region where the environment is located.
- SERVICE_ACCOUNT with the service account for your environment.
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE with the disk size for a scheduler, in GB.
- SCHEDULER_COUNT with the number of schedulers.
- TRIGGERER_COUNT with the number of triggerers.
- TRIGGERER_CPU with the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY with the amount of memory for a triggerer, in GB.
- DAG_PROCESSOR_CPU with the number of CPUs for the DAG processor, in vCPU units.
- DAG_PROCESSOR_MEMORY with the amount of memory for the DAG processor, in GB.
- DAG_PROCESSOR_STORAGE with the amount of disk space for the DAG processor, in GB.
- DAG_PROCESSOR_COUNT with the number of DAG processors.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE with the disk size for the web server, in GB.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker, in GB.
- WORKER_STORAGE with the disk size for a worker, in GB.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- ENVIRONMENT_SIZE with the environment size: ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, ENVIRONMENT_SIZE_LARGE, or ENVIRONMENT_SIZE_EXTRA_LARGE.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    workloads_config {
      scheduler {
        cpu        = 2.5
        memory_gb  = 2.5
        storage_gb = 2
        count      = 1
      }
      triggerer {
        count     = 1
        cpu       = 0.5
        memory_gb = 0.5
      }
      dag_processor {
        cpu        = 1
        memory_gb  = 2
        storage_gb = 1
        count      = 1
      }
      web_server {
        cpu        = 1
        memory_gb  = 2.5
        storage_gb = 2
      }
      worker {
        cpu        = 1
        memory_gb  = 2
        storage_gb = 2
        min_count  = 2
        max_count  = 4
      }
    }

    environment_size = "ENVIRONMENT_SIZE_SMALL"

    node_config {
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 4. (Optional) Enable high resilience mode
Highly resilient (Highly Available) Cloud Composer environments are environments that use built-in redundancy and failover mechanisms that reduce the environment's susceptibility to zonal failures and single point of failure outages.
In Cloud Composer 3, highly resilient environments are available starting from Airflow builds composer-3-airflow-2.10.2-build.13 and composer-3-airflow-2.9.3-build.20.
Important: We strongly recommend using highly resilient environments for production use cases.
A highly resilient environment is multi-zonal and runs across at least two zones of a selected region. The following components run in separate zones:
Exactly two Airflow schedulers
At least two triggerers (if the number of triggerers isn't set to 0)
At least two DAG processors
Two web servers
The minimum number of workers is set to two, and your environment's cluster distributes worker instances between zones. In case of a zonal outage, affected worker instances are rescheduled in a different zone. The Cloud SQL component of a highly resilient environment has a primary instance and a standby instance that are distributed between zones.
Console
On the Create environment page:
In the Resilience mode section, select High resilience.
In the Environment resources section, select scale parameters for a highly resilient environment. Highly resilient environments require exactly two schedulers, zero or between two and ten triggerers, and at least two workers:
Click Custom.
In the Number of schedulers drop-down list, select 2.
In the Number of triggerers drop-down list, select 0, or a value between 2 and 10. Configure the CPU and Memory allocation for your triggerers.
In the Minimum number of workers drop-down list, select 2 or more, depending on the required number of workers.
In the Network configuration section:
In the Networking type, select Private IP environment.
If required, specify other networking parameters.
gcloud
When you create an environment, the --enable-high-resilience argument enables the high resilience mode.
Set the following arguments:
- --enable-high-resilience
- --enable-private-environment, and other networking parameters for a Private IP environment, if required
- --scheduler-count to 2
- --triggerer-count to 0 or a value between 2 and 10. If you use triggerers, the --triggerer-cpu and --triggerer-memory flags are also required for environment creation. For more information about the --triggerer-count, --triggerer-cpu, and --triggerer-memory flags, see Configure environment scale and performance parameters.
- --min-workers to 2 or more

gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --enable-high-resilience \
    --enable-private-environment \
    --scheduler-count 2 \
    --triggerer-count 2 \
    --triggerer-cpu 0.5 \
    --triggerer-memory 0.5 \
    --min-workers 2

API
When you create an environment, in the Environment > EnvironmentConfig resource, enable the high resilience mode.

{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "resilience_mode": "HIGH_RESILIENCE",
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Example:

// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments

{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "resilience_mode": "HIGH_RESILIENCE",
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
Note: An update to the resilience mode field causes a failure instead of recreating the Cloud Composer environment.
When you create an environment, the resilience_mode field in the config block enables the high resilience mode.

resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    resilience_mode = "HIGH_RESILIENCE"

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Example:

resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    resilience_mode = "HIGH_RESILIENCE"

    node_config {
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 5. (Optional) Specify a zone for the environment's database
You can specify a preferred Cloud SQL zone when creating a standardresilience environment.
Note: If your environment uses high resilience mode, you can't specify a zone for the database. Instead, Cloud Composer automatically selects two zones in the region where the environment is located. If you enable high resilience for an existing environment with a preferred Cloud SQL zone, the preferred zone configuration is removed and two zones are selected automatically.
Console
On theCreate environment page:
In the Advanced configuration section, expand the Show advanced configuration item.
In the Airflow database zone list, select a preferred Cloud SQL zone.
gcloud
When you create an environment, the --cloud-sql-preferred-zone argument specifies a preferred Cloud SQL zone.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --cloud-sql-preferred-zone SQL_ZONE

Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --cloud-sql-preferred-zone us-central1-a

API
When you create an environment, in the Environment > DatabaseConfig resource, specify the preferred Cloud SQL zone.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "databaseConfig": {
      "zone": "SQL_ZONE"
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "databaseConfig": {
      "zone": "us-central1-a"
    },
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
When you create an environment, the zone field in the database_config block specifies the preferred Cloud SQL zone.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    database_config {
      zone = "SQL_ZONE"
    }

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
Step 6. (Optional) Configure your environment's networking
You can configure Cloud Composer 3 networking in the following ways:
- In a Public IP environment, your environment's Airflow components can access the internet.
- In a Private IP environment, your environment's Airflow components do not have access to the internet.
- Private IP and Public IP environments can connect to your VPC network as a separate option.
- You can specify the internal IP range of your environment. This range can't be changed later.

You can enable access to the internet when installing PyPI packages. For example, your Private IP environment can still install PyPI packages from Python Package Index if you enable this option.

For a Shared VPC environment, you must do additional networking setup for the host project, then create a Public or a Private IP environment in a service project. Follow the instructions on the Configuring Shared VPC page.
Console
Make sure that your networking is configured for the type of environment that you want to create.
In the Network configuration section, expand the Show network configuration item.
If you want to connect your environment to a VPC network, in the Network attachment field, select a network attachment. You can also create a new network attachment. For more information, see Connect an environment to a VPC network.
If you want to create a Private IP environment, in the Networking type section, select the Private IP environment option.
If you want to add network tags, see Add network tags for more information.
gcloud
Make sure that your networking is configured for the type of environment that you want to create.
When you create an environment, the following arguments control the networking parameters. If you omit a parameter, the default value is used.

- --enable-private-environment enables a Private IP environment.
- --network specifies your VPC network ID.
- --subnetwork specifies your VPC subnetwork ID.
- --composer-internal-ipv4-cidr-block specifies the environment's internal IP range. This range is used by Cloud Composer in the tenant project of your environment.
Example (Private IP environment with a connected VPC network):
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --enable-private-environment \
    --network NETWORK_ID \
    --subnetwork SUBNETWORK_ID

Replace:
- NETWORK_ID with your VPC network ID.
- SUBNETWORK_ID with your VPC subnetwork ID.
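The internal IP range mentioned earlier can be pinned in the same command. The following is a sketch only, combining the arguments above; the network, subnetwork, and CIDR values are illustrative assumptions, and the range must not overlap with your VPC subnets:

```shell
# Sketch: create a Private IP environment connected to a VPC network and
# pin the internal IP range used by Cloud Composer in the tenant project.
# All resource names and the CIDR value below are illustrative.
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --enable-private-environment \
    --network example-network \
    --subnetwork example-subnetwork \
    --composer-internal-ipv4-cidr-block 100.64.128.0/20
```

Remember that this range can't be changed after the environment is created.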
Step 7. (Optional) Add network tags
Network tags are applied to all node VMs in your environment's cluster. Tags are used to identify valid sources or targets for network firewalls. Each tag within the list must comply with RFC 1035.
For example, you might want to add network tags if you plan to restrict traffic for a Private IP environment with firewall rules.
Caution: You can't change network tags later. If you want to later configure firewall rules that target only VMs in your environment's cluster, specify network tags when you create your environment.

Console
On the Create environment page:

- Locate the Network configuration section.
- In the Network tags field, enter network tags for your environment.
gcloud
When you create an environment, the following argument controls network tags:

- --tags specifies a comma-separated list of network tags applied to all node VMs.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --tags TAGS

Replace:
- TAGS with a comma-separated list of network tags.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --tags group1,production

API
When you create an environment, in the Environment > EnvironmentConfig resource, specify network tags for your environment.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "nodeConfig": {
      "tags": [
        "TAG"
      ],
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- TAG with a network tag.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "nodeConfig": {
      "tags": [
        "group1",
        "production"
      ],
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
When you create an environment, the following field defines network tags for your environment:

- The tags field in the node_config block specifies a list of network tags applied to all node VMs.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    node_config {
      tags            = ["TAGS"]
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- TAGS with a comma-separated list of network tags.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    node_config {
      tags            = ["group1", "production"]
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 8. (Optional) Configure web server network access
The Airflow web server access parameters do not depend on the type of your environment. Instead, you can configure web server access separately. For example, a Private IP environment can still have the Airflow UI accessible from the internet.
You can't configure the allowed IP ranges using private IP addresses.
Caution: In some cases, changes to the web server access parameters can take up to 10 minutes to propagate.

Console
On theCreate environment page:
In the Network configuration section, expand the Show network configuration item.

In the Web server network access control section:

- To provide access to the Airflow web server from all IP addresses, select Allow access from all IP addresses.
- To restrict access only to specific IP ranges, select Allow access only from specific IP addresses. In the IP range field, specify an IP range in CIDR notation. In the Description field, specify an optional description for this range. If you want to specify more than one range, click Add IP range.
- To forbid access for all IP addresses, select Allow access only from specific IP addresses and click Delete item next to the empty range entry.
gcloud
When you create an environment, the following arguments control the web server access level:

- --web-server-allow-all provides access to Airflow from all IP addresses. This is the default option.
- --web-server-allow-ip restricts access only to specific source IP ranges. To specify several IP ranges, use this argument multiple times.
- --web-server-deny-all forbids access for all IP addresses.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --web-server-allow-ip ip_range=WS_IP_RANGE,description=WS_RANGE_DESCRIPTION

Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --web-server-allow-ip ip_range=192.0.2.0/24,description="office net 1" \
    --web-server-allow-ip ip_range=192.0.4.0/24,description="office net 3"

API
When you create an environment, in the Environment > EnvironmentConfig resource, specify web server access parameters.
- To provide access to the Airflow web server from all IP addresses, omit webServerNetworkAccessControl.
- To restrict access only to specific IP ranges, specify one or more ranges in allowedIpRanges.
- To forbid access for all IP addresses, add allowedIpRanges and make it an empty list. Do not specify IP ranges in it.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "webServerNetworkAccessControl": {
      "allowedIpRanges": [
        {
          "value": "WS_IP_RANGE",
          "description": "WS_RANGE_DESCRIPTION"
        }
      ]
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "webServerNetworkAccessControl": {
      "allowedIpRanges": [
        {
          "value": "192.0.2.0/24",
          "description": "office net 1"
        },
        {
          "value": "192.0.4.0/24",
          "description": "office net 3"
        }
      ]
    },
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
When you create an environment, the allowed_ip_range block in the web_server_network_access_control block contains IP ranges that can access the web server.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    web_server_network_access_control {
      allowed_ip_range {
        value       = "WS_IP_RANGE"
        description = "WS_RANGE_DESCRIPTION"
      }
    }

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    web_server_network_access_control {
      allowed_ip_range {
        value       = "192.0.2.0/24"
        description = "office net 1"
      }
      allowed_ip_range {
        value       = "192.0.4.0/24"
        description = "office net 3"
      }
    }

    node_config {
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 9. (Optional) Specify Airflow configuration overrides and environment variables
You can set up Airflow configuration overrides and environment variables when you create an environment. As an alternative, you can do it later, after your environment is created.

Some Airflow configuration options are blocked and you can't override them.

For the list of available Airflow configuration options, see Configuration reference for Airflow 2 and Airflow 1.10.*
To specify Airflow configuration overrides and environment variables:
Console
On the Create environment page:

In the Environment variables section, click Add environment variable.
Enter the Name and Value for the environment variable.
In the Airflow configuration overrides section, click Add Airflow configuration override.
Enter the Section, Key, and Value for the configuration option override.
For example:
Section   | Key             | Value
----------|-----------------|------
webserver | dag_orientation | TB
gcloud
When you create an environment, the following arguments control environment variables and Airflow configuration overrides:

- --env-variables specifies a comma-separated list of environment variables. Variable names may contain upper and lowercase letters, digits, and underscores, but they may not begin with a digit.
- --airflow-configs specifies a comma-separated list of keys and values for Airflow configuration overrides.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --env-variables ENV_VARS \
    --airflow-configs CONFIG_OVERRIDES

Replace:
- ENV_VARS with a list of comma-separated NAME=VALUE pairs for environment variables.
- CONFIG_OVERRIDES with a list of comma-separated SECTION-KEY=VALUE pairs for configuration overrides. Separate the name of the configuration section with a - symbol, followed by the key name. For example: core-dags_are_paused_at_creation.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --env-variables SENDGRID_MAIL_FROM=user@example.com,SENDGRID_API_KEY=example-key \
    --airflow-configs core-dags_are_paused_at_creation=True,webserver-dag_orientation=TB

API
When you create an environment, in the Environment > EnvironmentConfig resource, specify environment variables and Airflow configuration overrides.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "softwareConfig": {
      "airflowConfigOverrides": {
        "SECTION-KEY": "OVERRIDE_VALUE"
      },
      "envVariables": {
        "VAR_NAME": "VAR_VALUE"
      }
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- SECTION with the section in the configuration file where the Airflow configuration option is located.
- KEY with the name of the Airflow configuration option.
- OVERRIDE_VALUE with a value of the Airflow configuration option.
- VAR_NAME with the name of the environment variable.
- VAR_VALUE with the value of the environment variable.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "softwareConfig": {
      "airflowConfigOverrides": {
        "core-dags_are_paused_at_creation": "True",
        "webserver-dag_orientation": "TB"
      },
      "envVariables": {
        "SENDGRID_MAIL_FROM": "user@example.com",
        "SENDGRID_API_KEY": "example-key"
      }
    },
    "nodeConfig": {
      "serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Terraform
When you create an environment, the following blocks control environment variables and Airflow configuration overrides:

- The env_variables block in the software_config block specifies environment variables. Variable names may contain upper and lowercase letters, digits, and underscores, but they may not begin with a digit.
- The airflow_config_overrides block in the software_config block specifies Airflow configuration overrides.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    software_config {
      airflow_config_overrides = {
        "SECTION-KEY" = "OVERRIDE_VALUE"
      }
      env_variables = {
        VAR_NAME = "VAR_VALUE"
      }
    }

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- SECTION with the section in the configuration file where the Airflow configuration option is located.
- KEY with the name of the Airflow configuration option.
- OVERRIDE_VALUE with a value of the Airflow configuration option.
- VAR_NAME with the name of the environment variable.
- VAR_VALUE with the value of the environment variable.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    software_config {
      airflow_config_overrides = {
        "core-dags_are_paused_at_creation" = "True"
        "webserver-dag_orientation"        = "TB"
      }
      env_variables = {
        SENDGRID_MAIL_FROM = "user@example.com"
        SENDGRID_API_KEY   = "example-key"
      }
    }

    node_config {
      service_account = "example-account@example-project.iam.gserviceaccount.com"
    }
  }
}

Step 10. (Optional) Specify maintenance windows
Default maintenance windows in Cloud Composer 3 are defined in the following way:

- All times are in the local time zone of the region where your environment is located, but with daylight saving time ignored.
- On Tuesday, Wednesday, Thursday, and Friday, maintenance windows are from 00:00:00 to 02:00:00.
- On Saturday, Sunday, and Monday, maintenance windows are from 00:00:00 to 04:00:00.
To specify custom maintenance windows for your environment:
Console
On the Create environment page:

Locate the Maintenance windows section.
In the Timezone drop-down list, choose a time zone for maintenance windows.
Set Start time, Days, and Length, so that:

- At least 12 hours are allocated in a single week.
- You can use several time slots, but each slot duration must be at least 4 hours.

For example, a period of 4 hours every Monday, Wednesday, and Friday provides the required amount of time.
gcloud
The following arguments define maintenance window parameters:

- --maintenance-window-start sets the start time of a maintenance window.
- --maintenance-window-end sets the end time of a maintenance window.
- --maintenance-window-recurrence sets the maintenance window recurrence.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --maintenance-window-start 'DATETIME_START' \
    --maintenance-window-end 'DATETIME_END' \
    --maintenance-window-recurrence 'MAINTENANCE_RECURRENCE'

Replace:
- ENVIRONMENT_NAME with the name of the environment.
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.
The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January 2023 date is ignored.
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --maintenance-window-start '2023-01-01T01:00:00Z' \
    --maintenance-window-end '2023-01-01T07:00:00Z' \
    --maintenance-window-recurrence 'FREQ=WEEKLY;BYDAY=SU,WE,SA'

API
When you create an environment, in the Environment > EnvironmentConfig resource, specify maintenance window parameters:
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "config": {
    "maintenanceWindow": {
      "startTime": "DATETIME_START",
      "endTime": "DATETIME_END",
      "recurrence": "MAINTENANCE_RECURRENCE"
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.

The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January 2023 date is ignored.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "config": {
    "maintenanceWindow": {
      "startTime": "2023-01-01T01:00:00Z",
      "endTime": "2023-01-01T07:00:00Z",
      "recurrence": "FREQ=WEEKLY;BYDAY=SU,WE,SA"
    },
    "nodeConfig": {
      "serviceAccount": "SERVICE_ACCOUNT"
    }
  }
}

Terraform
The maintenance_window block specifies the maintenance windows for your environment:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  config {
    maintenance_window {
      start_time = "DATETIME_START"
      end_time   = "DATETIME_END"
      recurrence = "MAINTENANCE_RECURRENCE"
    }

    node_config {
      service_account = "SERVICE_ACCOUNT"
    }
  }
}

Replace:
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.

The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January 2023 date is ignored.
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  config {
    maintenance_window {
      start_time = "2023-01-01T01:00:00Z"
      end_time   = "2023-01-01T07:00:00Z"
      recurrence = "FREQ=WEEKLY;BYDAY=SU,WE,SA"
    }
  }
}

Step 11. (Optional) Data lineage integration
Data lineage is a Dataplex Universal Catalog feature that lets you track data movement.
Data lineage integration is available in all versions of Cloud Composer 3. Data lineage integration is automatically enabled in a new Cloud Composer environment if the following conditions are met:
- The Data Lineage API is enabled in your project. For more information, see Enabling Data Lineage API in the Dataplex Universal Catalog documentation.
- A custom Lineage Backend isn't configured in Airflow.

Caution: After you enable the Data Lineage API, Dataplex Universal Catalog automatically starts ingesting data for BigQuery, Cloud Data Fusion, and Dataproc. This happens even if you don't create a Cloud Composer environment with data lineage integration.
You can disable data lineage integration when you create an environment, for example, if you want to override the automatic behavior, or if you choose to enable data lineage later, after the environment is created.
Console
To disable data lineage integration, on the Create environment page:
In the Advanced configuration section, expand the Show advanced configuration item.
In the Dataplex data lineage integration section, select Disable integration with Dataplex data lineage.
gcloud
When you create an environment, the --disable-cloud-data-lineage-integration argument disables the data lineage integration.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --disable-cloud-data-lineage-integration

Replace:
- ENVIRONMENT_NAME with the name of the environment.
- LOCATION with the region where the environment is located.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --disable-cloud-data-lineage-integration

Step 12. (Optional) Configure data encryption (CMEK)
By default, data in your environment is encrypted with a key provided byGoogle.
To use customer-managed encryption keys (CMEK) to encrypt data in your environment, follow the instructions outlined in Using customer-managed encryption keys.
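As a sketch of what such a creation command can look like, assuming an existing Cloud KMS key in the same region that the Composer service agents are allowed to use (the key resource name below is illustrative; see the page linked above for the required IAM setup):

```shell
# Sketch: create an environment encrypted with a customer-managed key.
# The key must be in the same region as the environment, and the
# Cloud Composer service agents must have permission to use it.
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --kms-key "projects/example-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key"
```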
Step 13. (Optional) Use a custom environment's bucket
Warning: Because your environment runs DAGs on behalf of your environment's service account, users who can add and modify DAGs in the custom environment's bucket can run their code on behalf of the environment's service account and exercise all permissions of this account. Make sure that you are familiar with security considerations for environment's service accounts and understand how this account interacts with permissions and roles that you grant to individual users in your project.

When you create an environment, Cloud Composer creates a bucket for your environment automatically.
As an alternative, you can specify a custom Cloud Storage bucket from yourproject. Your environment uses this bucket in the same way as the automaticallycreated bucket.
To use a custom environment's bucket, follow the instructions outlined in Use a custom environment's bucket.
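As a hedged sketch, assuming an existing Cloud Storage bucket named example-bucket in your project (the bucket name is illustrative, and the bucket must be prepared as described in the page linked above), the custom bucket can be passed with the --storage-bucket argument:

```shell
# Sketch: create an environment that uses an existing Cloud Storage bucket
# instead of the automatically created one.
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --storage-bucket example-bucket
```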
Step 14. (Optional) Configure database retention
If you enable database retention in your environment, then Cloud Composer periodically removes records related to DAG executions and user sessions older than the specified time period from the Airflow database. The most recent DAG run information is always retained.
By default, database retention is enabled. To configure the retention period for a new environment or to disable database retention, follow the instructions outlined in Configure database retention policy. You can also configure database retention later.
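As a sketch of setting the retention period at creation time, assuming the --airflow-database-retention-days argument described in the database retention policy page (the 30-day value is illustrative):

```shell
# Sketch: keep DAG run and session records in the Airflow database
# for 30 days. Records older than this are periodically removed.
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --airflow-database-retention-days 30
```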
Step 15. (Optional) Specify environment labels
You can assign labels to your environments to break down billing costs based on these labels.
Console
On the Create environment page, in the Labels section:

Click Add label.
In the Key and Value fields, specify key and value pairs for the environment labels.
gcloud
When you create an environment, the --labels argument specifies a comma-separated list of keys and values with environment labels.
gcloud composer environments create ENVIRONMENT_NAME \
    --location LOCATION \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "SERVICE_ACCOUNT" \
    --labels LABELS

Replace:
- LABELS with a list of comma-separated KEY=VALUE pairs for environment labels.
Example:
gcloud composer environments create example-environment \
    --location us-central1 \
    --image-version composer-3-airflow-2.10.5-build.23 \
    --service-account "example-account@example-project.iam.gserviceaccount.com" \
    --labels owner=engineering-team,env=production

API
When you create an environment, in the Environment resource, specify labels for your environment.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
  "labels": {
    "LABEL_KEY": "LABEL_VALUE"
  }
}

Replace:
- LABEL_KEY with a key of the environment label.
- LABEL_VALUE with a value of the environment label.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
  "name": "projects/example-project/locations/us-central1/environments/example-environment",
  "labels": {
    "owner": "engineering-team",
    "env": "production"
  }
}

Terraform
When you create an environment, specify labels in the labels block (outside of the config block).
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "ENVIRONMENT_NAME"
  region   = "LOCATION"

  labels = {
    LABEL_KEY = "LABEL_VALUE"
  }
}

Replace:
- LABEL_KEY with a key of the environment label.
- LABEL_VALUE with a value of the environment label.
Example:
resource "google_composer_environment" "example" {
  provider = google-beta
  name     = "example-environment"
  region   = "us-central1"

  labels = {
    owner = "engineering-team"
    env   = "production"
  }
}

What's next
- Troubleshooting environment creation
- Configuring Shared VPC
- Configuring VPC Service Controls
- Adding and updating DAGs
- Accessing Airflow UI
- Updating and deleting environments
- About Cloud Composer versions
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-17 UTC.