Step 1: Create a cluster

You are currently viewing version 1.10 of the Apigee hybrid documentation. This version is end of life. You should upgrade to a newer version. For more information, see Supported versions.
Upgrading: If you are upgrading to Apigee hybrid version 1.10.5, see Upgrading Apigee hybrid for instructions. If you are performing a new installation, continue with the configurations described below.

This step explains how to create the cluster in which you will run Apigee hybrid. The instructions vary depending on the platform in which you are running hybrid. Before you begin, make sure to review the following information:

Note: You can create a new, dedicated cluster for Apigee hybrid or you can install it in a cluster that is running other workloads. Creating a dedicated cluster for Apigee hybrid adds isolation and simplifies the overall effort required to maintain the cluster and its Apigee hybrid workloads.

If you install Apigee hybrid in a cluster running other workloads, you need to upgrade and maintain your cluster at the versions and features required in common for Apigee hybrid and for your other workloads. You may want to develop a plan to migrate one or more workloads if conflicts arise between supported versions and requirements.

The remainder of the installation instructions assume you are installing on a dedicated cluster.

Create your cluster

Follow the steps for your selected platform:

GKE

Create a cluster on GKE

These steps explain how to configure and create a GKE cluster in your Google Cloud project.

Caution: Apigee does not support GKE Sandbox or gVisor for hybrid installations on GKE.

Apigee recommends you create a regional cluster rather than a zonal cluster. If you are unfamiliar with the distinction between regions and zones, see Regions and zones. The available regions are listed in Available regions and zones. Be aware that, for example, us-west1 is a valid region name, while us-west1-a is a zone in that region.

  1. Make sure you are using a version of GKE that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. (GKE private clusters only) If you are creating a private cluster, add a firewall rule that allows port 9443 for communication between GKE master nodes and GKE worker nodes, and that allows the GKE masters to access the Apigee mutating webhooks. Follow the procedure in Adding firewall rules for specific use cases in the Google Kubernetes Engine documentation. For more information, see Private clusters in GKE.

    You do not need to add this rule if you are creating a standard or public cluster.
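As an illustration, such a firewall rule might look like the following. This is a hedged sketch, not the official procedure: the rule name, project, network, source range, and target tag are all placeholders you must replace with your own values (the source range should be your cluster's control-plane CIDR, and the target tag the one applied to your worker nodes).

```shell
# Hypothetical example: allow the GKE control plane to reach the Apigee
# webhooks on port 9443. Every name and range below is a placeholder.
gcloud compute firewall-rules create allow-apigee-webhook \
  --project my-project \
  --network my-cluster-network \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:9443 \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-node-tag
```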

  4. Create a standard cluster by following the instructions at Create a regional cluster with a multi-zone node pool. It's okay to create the cluster with just the default node pool. You will configure and create the required Apigee hybrid node pools in the next step.

    Important: Create a standard cluster instead of an Autopilot cluster. Apigee hybrid does not support Autopilot clusters, because Autopilot clusters do not support custom node pools. If you are using the Google Cloud console UI, make sure to select SWITCH TO STANDARD CLUSTER. (Screenshot: the Create cluster screen heading showing the Switch to Standard Cluster choice.)

    Tip: If you are not creating your cluster in the default network, follow the instructions in Creating a cluster in an existing subnet.

    Go to the next step only after the cluster creation completes successfully.
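For reference, a minimal regional Standard cluster can be created from the command line as well. This is a sketch: the cluster name, region, and project below are placeholder values, and one node per zone is enough here because the Apigee node pools are added in the next step.

```shell
# Hypothetical example: create a minimal regional Standard cluster.
gcloud container clusters create apigee-hybrid \
  --project my-project \
  --region us-west1 \
  --num-nodes 1
```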

  5. Create two node pools by following the instructions in Add and manage node pools. Be sure to configure the node pools with the minimum requirements listed in the table below.

    Minimum node pool requirements

    Be sure to satisfy these minimum requirements when creating the node pools. If using the Cloud console, be sure to configure both the Node pool details and Nodes sections.

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage node pools.
    • apigee-data: A stateful node pool used for the Cassandra database. Minimum nodes: 1 per zone (3 per region). Minimum machine type: e2-standard-4 (4 vCPU, 16 GB memory).
    • apigee-runtime: A stateless node pool used by the runtime message processor. Minimum nodes: 1 per zone (3 per region). Minimum machine type: e2-standard-4 (4 vCPU, 16 GB memory).

    For more details about node pool configuration, see Configure dedicated node pools.
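As a command-line alternative to the console, the two pools can be created at the minimum size like this. The cluster name, region, and project are placeholders; note that --num-nodes is per zone, so 1 yields 3 nodes per region in a regional cluster.

```shell
# Hypothetical example: create the two Apigee node pools at minimum size.
gcloud container node-pools create apigee-data \
  --cluster apigee-hybrid \
  --region us-west1 \
  --project my-project \
  --machine-type e2-standard-4 \
  --num-nodes 1

gcloud container node-pools create apigee-runtime \
  --cluster apigee-hybrid \
  --region us-west1 \
  --project my-project \
  --machine-type e2-standard-4 \
  --num-nodes 1
```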

  6. (Optional) If you wish, you can delete the default node pool. See Delete a node pool.
  7. Create the following environment variables. These variables are used in the gcloud commands that follow.

    Linux / MacOS

    export CLUSTER_NAME="YOUR_CLUSTER_NAME"
    export CLUSTER_LOCATION="YOUR_CLUSTER_LOCATION"
    export PROJECT_ID="YOUR_PROJECT_ID"

    Windows

    set CLUSTER_NAME=YOUR_CLUSTER_NAME
    set CLUSTER_LOCATION=YOUR_CLUSTER_LOCATION
    set PROJECT_ID=YOUR_PROJECT_ID

    Where:

    • CLUSTER_NAME: The name of your cluster.
    • CLUSTER_LOCATION: The region (regional clusters) or zone (zonal clusters) in which you created your cluster.
    • PROJECT_ID: The ID of your Google Cloud project.
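Because the gcloud commands that follow fail in confusing ways when a variable is unset, a quick bash sanity check like the following can save time. The exported values here are placeholders for illustration only.

```shell
#!/usr/bin/env bash
# Placeholder values for illustration; replace with your own.
export CLUSTER_NAME="apigee-hybrid"
export CLUSTER_LOCATION="us-west1"
export PROJECT_ID="my-project"

# Fail fast if any required variable is empty (bash indirect expansion).
for v in CLUSTER_NAME CLUSTER_LOCATION PROJECT_ID; do
  [ -n "${!v}" ] || { echo "ERROR: $v is not set" >&2; exit 1; }
done
echo "OK: all required variables are set"
```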
  8. Verify the node pool configurations:

    Regional clusters

    gcloud container node-pools list \
      --cluster=${CLUSTER_NAME} \
      --region=${CLUSTER_LOCATION} \
      --project=${PROJECT_ID}

    Zonal clusters

    gcloud container node-pools list \
      --cluster=${CLUSTER_NAME} \
      --zone=${CLUSTER_LOCATION} \
      --project=${PROJECT_ID}
  9. Make sure your cluster is set as the default cluster for kubectl by getting the gcloud credentials of the cluster you just created:

    Regional clusters

      gcloud container clusters get-credentials ${CLUSTER_NAME} \
        --region ${CLUSTER_LOCATION} \
        --project ${PROJECT_ID}

    Zonal clusters

      gcloud container clusters get-credentials ${CLUSTER_NAME} \
        --zone ${CLUSTER_LOCATION} \
        --project ${PROJECT_ID}

    See Set a default cluster for kubectl commands.

  10. Configure persistent solid state disk (SSD) storage for Cassandra. We do not support using local SSDs. For more information, see Change the default storage class in the Kubernetes documentation.

    1. Get the name of the current default StorageClass:
      kubectl get sc

      For example:

      kubectl get sc
      NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      premium-rwo              pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
      standard                 kubernetes.io/gce-pd    Delete          Immediate              true                   15h
      standard-rwo (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
    2. Describe the StorageClass named standard-rwo. Note that its type is pd-balanced:
      kubectl describe sc standard-rwo

      For example:

      kubectl describe sc standard-rwo
      Name:                  standard-rwo
      IsDefaultClass:        Yes
      Annotations:           components.gke.io/layer=addon,storageclass.kubernetes.io/is-default-class=false
      Provisioner:           pd.csi.storage.gke.io
      Parameters:            type=pd-balanced
      AllowVolumeExpansion:  True
      MountOptions:          <none>
      ReclaimPolicy:         Delete
      VolumeBindingMode:     WaitForFirstConsumer
      Events:                <none>
    3. Create a new file called storageclass.yaml.
    4. Add this code to the file. Note that the name of the new class is apigee-sc. You can use any name you like. Also, note that the storage type is pd-ssd:
      ---
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: "apigee-sc"
      provisioner: kubernetes.io/gce-pd
      parameters:
        type: pd-ssd
        replication-type: none
      volumeBindingMode: WaitForFirstConsumer
      allowVolumeExpansion: true
    5. Apply the new StorageClass to your Kubernetes cluster:
      kubectl apply -f storageclass.yaml
    6. Execute the following two commands to change the default StorageClass:
      kubectl patch storageclass standard-rwo \
        -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
      kubectl patch storageclass apigee-sc \
        -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    7. Execute this command to verify that the new default StorageClass is called apigee-sc:
      kubectl get sc

      For example:

      kubectl get sc
      NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      apigee-sc (default)   kubernetes.io/gce-pd    Delete          WaitForFirstConsumer   true                   14h
      premium-rwo           pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
      standard              kubernetes.io/gce-pd    Delete          Immediate              true                   15h
      standard-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
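If you want an extra check that the new default class actually provisions volumes, one option (not part of the official steps) is a throwaway PersistentVolumeClaim. The claim name and size here are arbitrary placeholders.

```shell
# Hypothetical smoke test: create a small PVC against the new default
# StorageClass, confirm the default class was assigned, then clean up.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: apigee-sc-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# Because volumeBindingMode is WaitForFirstConsumer, the PVC stays
# Pending until a pod uses it; checking its storageClassName is enough.
kubectl get pvc apigee-sc-test -o jsonpath='{.spec.storageClassName}'
kubectl delete pvc apigee-sc-test
```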
  11. Enable Workload Identity for the cluster. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services. This operation can take up to 30 minutes:

    Regional clusters

    gcloud container clusters update ${CLUSTER_NAME} \
      --workload-pool=${PROJECT_ID}.svc.id.goog \
      --project ${PROJECT_ID} \
      --region ${CLUSTER_LOCATION}

    Zonal clusters

    gcloud container clusters update ${CLUSTER_NAME} \
      --workload-pool=${PROJECT_ID}.svc.id.goog \
      --zone ${CLUSTER_LOCATION} \
      --project ${PROJECT_ID}

    For more information, see Enable Workload Identity.

  12. Verify that Workload Identity was enabled successfully with the following command:

    Regional clusters

    gcloud container clusters describe ${CLUSTER_NAME} \
      --project ${PROJECT_ID} \
      --region ${CLUSTER_LOCATION} | grep -i "workload"

    Zonal clusters

    gcloud container clusters describe ${CLUSTER_NAME} \
      --zone ${CLUSTER_LOCATION} \
      --project ${PROJECT_ID} | grep -i "workload"

When you have a cluster installed and running, go to the next step.

GKE on-prem

Create a cluster on GKE on-prem

These steps explain how to configure and create a GKE on-prem cluster for Apigee hybrid.

  1. Make sure you are using a version of Anthos on-premises (VMware) that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. Create the cluster by following the instructions at Create basic clusters. It's okay to create the cluster with just the default node pool. You will configure and create the required Apigee hybrid node pools in the next step.

    Go to the next step only after the cluster creation completes successfully.

  4. Create two node pools by following the instructions in Creating and managing node pools. Configure the node pools with the minimum requirements listed in the table below.

    Minimum node pool requirements

    Be sure to satisfy these minimum requirements when creating the node pools.

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage node pools.
    • apigee-data: A stateful node pool used for the Cassandra database. Minimum nodes: 1 per zone (3 per region). Minimum machine type: e2-standard-4 (4 vCPU, 16 GB memory).
    • apigee-runtime: A stateless node pool used by the runtime message processor. Minimum nodes: 1 per zone (3 per region). Minimum machine type: e2-standard-4 (4 vCPU, 16 GB memory).

    For more details about node pool configuration, see Configure dedicated node pools.

  5. (Optional) If you wish, you can delete the default node pool. See Delete a node pool.
  6. Configure persistent solid state disk (SSD) storage for Cassandra. We do not support using local SSDs. For more information, see Change the default storage class in the Kubernetes documentation.

    1. Get the name of the current default StorageClass:
      kubectl get sc

      For example:

      kubectl get sc
      NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      premium-rwo              pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
      standard                 kubernetes.io/gce-pd    Delete          Immediate              true                   15h
      standard-rwo (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
    2. Describe the StorageClass named standard-rwo. Note that its type is pd-balanced:
      kubectl describe sc standard-rwo

      For example:

      kubectl describe sc standard-rwo
      Name:                  standard-rwo
      IsDefaultClass:        Yes
      Annotations:           components.gke.io/layer=addon,storageclass.kubernetes.io/is-default-class=false
      Provisioner:           pd.csi.storage.gke.io
      Parameters:            type=pd-balanced
      AllowVolumeExpansion:  True
      MountOptions:          <none>
      ReclaimPolicy:         Delete
      VolumeBindingMode:     WaitForFirstConsumer
      Events:                <none>
    3. Create a new file called storageclass.yaml.
    4. Add this code to the file. Note that the name of the new class is apigee-sc. You can use any name you like. Also, note that the storage type is pd-ssd:
      ---
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: "apigee-sc"
      provisioner: kubernetes.io/gce-pd
      parameters:
        type: pd-ssd
        replication-type: none
      volumeBindingMode: WaitForFirstConsumer
      allowVolumeExpansion: true
    5. Apply the new StorageClass to your Kubernetes cluster:
      kubectl apply -f storageclass.yaml
    6. Execute the following two commands to change the default StorageClass:
      kubectl patch storageclass standard-rwo \
        -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
      kubectl patch storageclass apigee-sc \
        -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    7. Execute this command to verify that the new default StorageClass is called apigee-sc:
      kubectl get sc

      For example:

      kubectl get sc
      NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      apigee-sc (default)   kubernetes.io/gce-pd    Delete          WaitForFirstConsumer   true                   14h
      premium-rwo           pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h
      standard              kubernetes.io/gce-pd    Delete          Immediate              true                   15h
      standard-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   15h

When you have a cluster installed and running, go to the next step.

Anthos on bare metal

Create a cluster on Anthos on bare metal

These steps explain how to configure and create a cluster for Apigee hybrid on Anthos on bare metal. Anthos on bare metal lets you run Kubernetes clusters directly on your own machine resources.

  1. Make sure you are using a version of Anthos on Bare Metal that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. Review the Installation prerequisites overview and Creating clusters: overview.
  4. Create the cluster with two node pools configured as described below:

    • apigee-data: A stateful node pool used for the Cassandra database. Minimum nodes: 1 per zone (3 per region); CPU: 4; RAM: 15.
    • apigee-runtime: A stateless node pool used by the runtime message processor. Minimum nodes: 1 per zone (3 per region); CPU: 4; RAM: 15.

When you have a cluster installed and running, go to the next step.

AKS

Create a cluster on AKS

These steps explain how to configure and create a cluster for Apigee hybrid on AKS.

  1. Make sure you are using a version of AKS that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. Create the cluster using either the Azure CLI or the Azure Portal, and create two node pools as described below.

    The minimum configurations for your cluster are:

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage AKS node pools.
    Stateful node pool (label name: apigee-data): used for the Cassandra database.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: dynamic
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    Stateless node pool (label name: apigee-runtime): used by the runtime message processor.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: Managed with the ApigeeDeployment CRD
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    For more details on minimum cluster configuration, see Minimum cluster configurations.
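As an Azure CLI illustration (not the official procedure), the cluster and both pools might be created as follows. Everything here is a placeholder or assumption: resource group, cluster and pool names (AKS pool names are short lowercase alphanumerics, so the apigee-data/apigee-runtime names go into node labels instead), the VM size (Standard_D4s_v3 has 4 vCPU and 16 GB), and the label key, which you should verify against the node selector your Apigee overrides file expects.

```shell
# Hypothetical example: first pool serves as the stateful pool.
az aks create \
  --resource-group my-rg \
  --name apigee-hybrid \
  --nodepool-name apigeedata \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --nodepool-labels cloud.google.com/gke-nodepool=apigee-data

# Add the stateless pool with its own label.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name apigee-hybrid \
  --name apigeerun \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --labels cloud.google.com/gke-nodepool=apigee-runtime
```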

  4. When you have a cluster installed and running, go to the next step.

EKS

Create a cluster on EKS

These steps explain how to configure and create a cluster for Apigee hybrid on EKS.

  1. Make sure you are using a version of EKS that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. If you are using Kubernetes version 1.24 or newer, make sure you have installed the Kubernetes CSI driver for Amazon EBS.
  4. Use the following instructions to create a user cluster, and create two node pools as described below.

    The minimum configurations for your cluster are:

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage node pools.
    Stateful node pool (label name: apigee-data): used for the Cassandra database.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: dynamic
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    Stateless node pool (label name: apigee-runtime): used by the runtime message processor.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: Managed with the ApigeeDeployment CRD
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    For more details on minimum cluster configuration, see Minimum cluster configurations.
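As an eksctl illustration (not the official procedure), a cluster with the two node groups might be created as follows. The cluster name, region, instance type (m5.xlarge has 4 vCPU and 16 GiB), and label key are placeholders / assumptions; verify the label key against the node selector your Apigee overrides file expects.

```shell
# Hypothetical example: create the cluster with the stateful node group.
eksctl create cluster \
  --name apigee-hybrid \
  --region us-east-1 \
  --nodegroup-name apigee-data \
  --nodes 3 \
  --node-type m5.xlarge \
  --node-labels "cloud.google.com/gke-nodepool=apigee-data"

# Add the stateless node group with its own label.
eksctl create nodegroup \
  --cluster apigee-hybrid \
  --region us-east-1 \
  --name apigee-runtime \
  --nodes 3 \
  --node-type m5.xlarge \
  --node-labels "cloud.google.com/gke-nodepool=apigee-runtime"
```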

When you have a cluster installed and running, go to the next step.

GKE on AWS

Create a cluster on GKE on AWS

These steps explain how to configure and create a cluster for Apigee hybrid on GKE on AWS.

  1. Make sure you are using a version of GKE on AWS that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. Use the following instructions to create a user cluster, and create two node pools as described below.

    The minimum configurations for your cluster are:

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage node pools.
    Stateful node pool (label name: apigee-data): used for the Cassandra database.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: dynamic
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    Stateless node pool (label name: apigee-runtime): used by the runtime message processor.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: Managed with the ApigeeDeployment CRD
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    For more details on minimum cluster configuration, see Minimum cluster configurations.

When you have a cluster installed and running, go to the next step.

OpenShift

Create a cluster on OpenShift

These steps explain how to configure and create a cluster for Apigee hybrid on OpenShift.

  1. Make sure you are using a version of OpenShift that is supported for hybrid version 1.10.5. See Apigee hybrid supported platforms and versions.
  2. Ensure the clocks on all nodes and application servers are synchronized with Network Time Protocol (NTP), as explained in the Prerequisites. The Cassandra database relies on NTP synchronization to maintain data consistency. If you plan to install hybrid into multiple regions, make sure the nodes are synchronized with NTP across all regions.
  3. Build the OpenShift cluster to deploy on the runtime plane, install Apigee on your OpenShift user cluster, and create two node pools. You can find additional information about OpenShift cluster creation and management on Google Cloud in the OpenShift documentation, for example: Installing a cluster quickly on Google Cloud. For help setting up your OpenShift cluster on Google Cloud VMs, see How to Set Up Apigee hybrid on OpenShift on Google Cloud VMs.

    As part of the OpenShift install, install and configure the oc CLI tool. See Getting started with the OpenShift CLI in the OpenShift documentation.

    The minimum configurations for your cluster are:

    Note: The node pool configuration values described in the following table represent minimum values. This minimum configuration is suitable for testing; however, based on your needs, you can upsize the nodes now or in the future. For information on managing cluster nodes, see Add and manage node pools.
    Stateful node pool (label name: apigee-data): used for the Cassandra database.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: dynamic
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    Stateless node pool (label name: apigee-runtime): used by the runtime message processor.
      • Number of nodes: 1 per zone (3 per region)
      • CPU: 4
      • RAM: 15
      • Storage: Managed with the ApigeeDeployment CRD
      • Minimum disk IOPS: 2000 IOPS with SAN or directly attached storage. NFS is not recommended even if it can support the required IOPS.
      • Network bandwidth for each machine instance type: 1 Gbps

    For more details on minimum cluster configuration, see Minimum cluster configurations.
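On OpenShift, the "node pools" above are typically realized by labeling groups of worker nodes. As a hedged sketch (node names and the label key are placeholders; verify the selector key your Apigee overrides file expects), labeling with oc might look like:

```shell
# Hypothetical example: dedicate three workers to the stateful pool and
# three to the stateless pool via node labels.
oc label node worker-0 worker-1 worker-2 \
  cloud.google.com/gke-nodepool=apigee-data

oc label node worker-3 worker-4 worker-5 \
  cloud.google.com/gke-nodepool=apigee-runtime
```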

When you have installed a cluster, go to the next step.


Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-17 UTC.