Cloud Composer overview
This page provides a brief introduction to Airflow and DAGs, and describes the features and capabilities of Cloud Composer.
For more information about new features in Cloud Composer releases, see Release notes.
About Cloud Composer
Cloud Composer is a fully managed workflow orchestration service, enabling you to create, schedule, monitor, and manage workflow pipelines that span across clouds and on-premises data centers.
Cloud Composer is built on the popular Apache Airflow open source project and operates using the Python programming language.
By using Cloud Composer instead of a local instance of Apache Airflow, you can benefit from the best of Airflow with no installation or management overhead. Cloud Composer helps you create managed Airflow environments quickly and use Airflow-native tools, such as the powerful Airflow web interface and command-line tools, so you can focus on your workflows and not your infrastructure.
Differences between Cloud Composer versions
For more information about differences between major versions of Cloud Composer, see Cloud Composer versioning overview.
Airflow and Airflow DAGs (workflows)
In data analytics, a workflow represents a series of tasks for ingesting, transforming, analyzing, or utilizing data. In Airflow, workflows are created using DAGs, or "Directed Acyclic Graphs".
A DAG is a collection of tasks that you want to schedule and run, organized in a way that reflects their relationships and dependencies. DAGs are created in Python files, which define the DAG structure using code. The DAG's purpose is to ensure that each task is executed at the right time and in the right order.
Each task in a DAG can represent almost anything. For example, one task might perform any of the following functions:
- Preparing data for ingestion
- Monitoring an API
- Sending an email
- Running a pipeline
In addition to running a DAG on a schedule, you can trigger DAGs manually or in response to events, such as changes in a Cloud Storage bucket. For more information, see Schedule and trigger DAGs.
For more information about DAGs and tasks, see the Apache Airflow documentation.
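To make this concrete, here is a minimal sketch of a DAG defined with the Airflow 2 Python API. The DAG ID, schedule, and task IDs are hypothetical examples; the `>>` operator declares that the second task depends on the first.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def _prepare_data():
    # Placeholder for a real data-preparation step.
    print("Preparing data for ingestion")


# The DAG ID, schedule, and task IDs below are hypothetical examples.
with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # daily schedule; the DAG can also be triggered manually
    catchup=False,
) as dag:
    prepare_data = PythonOperator(
        task_id="prepare_data",
        python_callable=_prepare_data,
    )
    run_pipeline = BashOperator(
        task_id="run_pipeline",
        bash_command="echo 'running the pipeline'",
    )

    # prepare_data must complete successfully before run_pipeline starts.
    prepare_data >> run_pipeline
```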
Cloud Composer environments
Cloud Composer environments are self-contained Airflow deployments based on Google Kubernetes Engine. They work with other Google Cloud services using connectors built into Airflow. You can create one or more environments in a single Google Cloud project, in any supported region.
Cloud Composer provisions Google Cloud services that run your workflows and all Airflow components. The main components of an environment are:
- GKE cluster: Airflow components such as Airflow schedulers, triggerers, and workers run as GKE workloads in a single cluster created for your environment, and are responsible for processing and executing DAGs.

  The cluster also hosts other Cloud Composer components like Composer Agent and Airflow Monitoring, which help manage the Cloud Composer environment, gather logs to store in Cloud Logging, and gather metrics to upload to Cloud Monitoring.

- Airflow web server: The web server runs the Apache Airflow UI.

- Airflow database: The database holds the Apache Airflow metadata.

- Cloud Storage bucket: Cloud Composer associates a Cloud Storage bucket with your environment. This bucket, also called the environment's bucket, stores the DAGs, logs, custom plugins, and data for the environment (a sketch of uploading a DAG to this bucket follows this list). For more information about the environment's bucket, see Data stored in Cloud Storage.
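Because DAG files live in the environment's bucket, deploying a DAG can be as simple as uploading a file. The following is a minimal sketch using the google-cloud-storage Python client; the bucket name and file name are hypothetical examples (your environment's actual bucket name is shown on its details page).

```python
from google.cloud import storage

# Hypothetical example values; substitute your environment's bucket
# name and your own DAG file.
BUCKET_NAME = "us-central1-example-environment-bucket"
DAG_FILE = "example_pipeline.py"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Objects under dags/ in the environment's bucket are picked up by the
# Airflow schedulers and workers running in the environment.
blob = bucket.blob(f"dags/{DAG_FILE}")
blob.upload_from_filename(DAG_FILE)
```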
For an in-depth look at the components of an environment, see Environment architecture.
Cloud Composer interfaces
Cloud Composer provides interfaces for managing environments, Airflow instances that run within environments, and individual DAGs.
For example, you can create and configure Cloud Composer environments in Google Cloud console, Google Cloud CLI, Cloud Composer API, or Terraform.
As another example, you can manage DAGs from Google Cloud console, native Airflow UI, or by running Google Cloud CLI and Airflow CLI commands.
Airflow features in Cloud Composer
When using Cloud Composer, you can manage and use Airflow features such as:

- Airflow DAGs: You can add, update, remove, or trigger Airflow DAGs in Google Cloud console or using the native Airflow UI.
- Airflow configuration options: You can change Airflow configuration options from default values used by Cloud Composer to custom values. In Cloud Composer, some of the configuration options are blocked, and you cannot change their values.
- Custom plugins: You can install custom Airflow plugins, such as custom, in-house Apache Airflow operators, hooks, sensors, or interfaces, into your Cloud Composer environment (a sketch of a custom operator follows this list).
- Python dependencies: You can install Python dependencies from the Python Package Index in your environment or from a private package repository, including Artifact Registry repositories. If the dependencies are not in the package index, you can also use plugins.
- Logging and monitoring for DAGs, Airflow components, and Cloud Composer environments. You can view:
  - Airflow logs that are associated with single DAG tasks in the Airflow web interface and in the logs folder in the environment's bucket.
  - Cloud Monitoring logs and environment metrics for Cloud Composer environments.
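As an illustration of a custom plugin component, here is a minimal sketch of an in-house operator. `HelloOperator` and its `name` parameter are hypothetical; a DAG could instantiate it like any built-in operator, for example `HelloOperator(task_id="greet", name="Composer")`.

```python
from airflow.models.baseoperator import BaseOperator


class HelloOperator(BaseOperator):
    """A hypothetical in-house operator that logs a greeting."""

    def __init__(self, name: str, **kwargs):
        super().__init__(**kwargs)
        self.name = name

    def execute(self, context):
        # execute() runs when the task is scheduled; by default its
        # return value is pushed to XCom.
        message = f"Hello, {self.name}!"
        self.log.info(message)
        return message
```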
Access control in Cloud Composer
You manage security at the Google Cloud project level and can assign IAM roles that allow individual users to modify or create environments. If someone does not have access to your project or does not have an appropriate Cloud Composer IAM role, that person cannot access any of your environments.
In addition to IAM, you can use Airflow UI access control, which is based on the Apache Airflow Access Control model.
For more information about security features in Cloud Composer, see Cloud Composer security overview.
Environment networking
Cloud Composer supports several networking configurations for environments, with many configuration options. For example, in a Private IP environment, DAGs and Airflow components are fully isolated from the public internet.
For more information about networking in Cloud Composer, see the pages for individual networking features:
- Public IP and Private IP environments
- Private Service Connect environments
- Shared VPC environments
- Configuring VPC Service Controls
- Authorized networks
- IP Masquerade agent
- Privately used public IP ranges
Other features of Cloud Composer
Other Cloud Composer features include:
- Autoscaling environments
- Development with local Airflow environments
- Highly resilient environments
- Environment snapshots
- Encryption with customer-managed encryption keys (CMEK)
- Data lineage integration with Dataplex Universal Catalog
Frequently Asked Questions
What version of Apache Airflow does Cloud Composer use?
Cloud Composer environments are based on Cloud Composer images. When you create an environment, you can select an image with a specific Airflow version:
- Cloud Composer 3 supports Airflow 2.
- Cloud Composer 2 supports Airflow 2.
- Cloud Composer 1 supports Airflow 1 and Airflow 2.
You have control over the Apache Airflow version of your environment. You can decide to upgrade your environment to a later version of the Cloud Composer image. Each Cloud Composer release supports several Apache Airflow versions.
Can I use native Airflow UI and CLI?
You can access the Apache Airflow web interface of your environment. Each of your environments has its own Airflow UI. For more information about accessing the Airflow UI, see Airflow web interface.
To run Airflow CLI commands in your environments, use gcloud commands. For more information about running Airflow CLI commands in Cloud Composer environments, see Airflow command-line interface.
Can I use my own database as the Airflow database?
Cloud Composer uses a managed database service for the Airflow database. It is not possible to use a user-provided database as the Airflow database.
Can I use my own cluster as a Cloud Composer cluster?
Cloud Composer uses the Google Kubernetes Engine service to create, manage, and delete environment clusters where Airflow components run. These clusters are fully managed by Cloud Composer.
It is not possible to build a Cloud Composer environment based on a self-managed Google Kubernetes Engine cluster.
Can I use my own container registry?
Cloud Composer uses the Artifact Registry service to manage container image repositories used by Cloud Composer environments. It is not possible to replace it with a user-provided container registry.
Are Cloud Composer environments zonal or regional?
When you create an environment, you specify a region for it:
- Standard Cloud Composer environments have a zonal Airflow database and a multi-zonal Airflow execution layer. The Airflow database is located in one of the zones in the specified region and the Airflow components are distributed between several zones.
- Highly resilient (Highly Available) Cloud Composer environments have a multi-zonal Airflow database and a multi-zonal Airflow execution layer. A highly resilient environment runs across at least two zones of the selected region. Cloud Composer automatically distributes the components of your environment between zones. The Cloud SQL component that stores the Airflow database has a primary instance and a standby instance distributed between zones in the selected region.