Upgrading Apigee hybrid to version 1.12

You are currently viewing version 1.12 of the Apigee hybrid documentation. This version is end of life. You should upgrade to a newer version. For more information, see Supported versions.

This procedure covers upgrading from Apigee hybrid version 1.11.x to Apigee hybrid version 1.12.4 and from previous releases of hybrid 1.12.x to version 1.12.4.

Use the same procedures for minor version upgrades (for example version 1.11 to 1.12) and for patch release upgrades (for example 1.12.0 to 1.12.4).

If you are upgrading from Apigee hybrid version 1.10 or older, you must first upgrade to hybrid version 1.11 before upgrading to version 1.12.4. See the instructions for Upgrading Apigee hybrid to version 1.11.

Changes from Apigee hybrid v1.11

Apigee hybrid version 1.12 introduces the following changes that impact the upgrade process. For a complete list of features in v1.12, see the hybrid v1.12.0 Release Notes.

Considerations before starting an upgrade to version 1.12

Cassandra considerations

Upgrading from Apigee hybrid version 1.11 to version 1.12 includes an upgrade of the Cassandra database from version 3.11.x to version 4.x. While the Cassandra upgrade is handled as part of the Apigee hybrid upgrade procedure, plan for the considerations described in the following sections.

Considerations before upgrading a single-region installation

If you need to roll back to a previous version of Apigee hybrid, the process may require downtime. Therefore, if you are upgrading a single-region installation, you may want to create a second region and then upgrade only one region at a time in the following sequence:

  1. Add a second region to your existing installation using the same hybrid version. See Multi-region deployment in the version 1.11 documentation.
  2. Back up and validate data from the first region before starting an upgrade. See Cassandra backup overview in the version 1.11 documentation.
  3. Upgrade the newly added region to hybrid 1.12.
  4. Switch the traffic to the new region and validate traffic.
  5. Once validated, upgrade the older region with hybrid 1.12.
  6. Switch all the traffic back to the older region and validate traffic.
  7. Decommission the new region.

Considerations before upgrading a multi-region installation

Apigee recommends the following sequence for upgrading a multi-region installation:

  1. Back up and validate data from each region before starting the upgrade.
  2. Upgrade the Hybrid version in one region and make sure all the pods are in a running state to validate the upgrade.
  3. Validate traffic in the newly upgraded region.
  4. Upgrade each subsequent region only after validating the traffic on the previous region.
  5. In case you need to roll back an upgrade in a multi-region deployment, prepare to switch traffic away from failed regions, and consider adding enough capacity in the region where traffic will be diverted to handle the traffic for both regions.

Prerequisites

Before upgrading to hybrid version 1.12, make sure your installation meets the following requirements (a quick verification sketch follows this list):

  • An Apigee hybrid version 1.11 installation managed with Helm.
  • Helm version v3.14.2+.
  • kubectl version 1.27, 1.28, or 1.29 (recommended).
  • cert-manager version v1.13.0. If needed, you will upgrade cert-manager in the Prepare to upgrade to version 1.12 section below.
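
To confirm these prerequisites before you begin, a quick check like the following can help. This is a minimal sketch; the cert-manager check assumes the default cert-manager namespace and deployment name:

    # Check the Helm and kubectl client versions against the prerequisites.
    helm version --short
    kubectl version --client

    # Inspect the running cert-manager image tag (assumes the default
    # cert-manager namespace and deployment name).
    kubectl get deployment cert-manager -n cert-manager \
      -o jsonpath='{.spec.template.spec.containers[0].image}'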

Limitations

Keep the following limitations in mind when planning your upgrade from Apigee hybrid version 1.11 to version 1.12. Planning can help reduce the need for downtime if you need to roll back or restore after the upgrade.

  • Backups from Hybrid 1.12 cannot be restored in Hybrid 1.11 and vice versa, due to incompatibility between the two versions.
  • You cannot scale datastore pods during the upgrade to version 1.12. Address your scaling needs in all regions before you start to upgrade your hybrid installation.
  • In a single-region hybrid installation, you cannot roll back the datastore component once the datastore upgrade process has finished. You cannot roll a Cassandra 4.x datastore back to a Cassandra 3.x datastore. Rolling back requires restoring from your most recent backup of the Cassandra 3.x data (from your hybrid version 1.11 installation).
  • Deleting or adding a region is not supported during upgrade. In a multi-region upgrade, you must complete the upgrade of all regions before you can add or delete regions.

Upgrading to version 1.12.4 overview

Upgrading to Apigee hybrid version 1.12 may require downtime.

When upgrading the Apigee controller to version 1.12.4, all Apigee deployments undergo a rolling restart. To minimize downtime in production hybrid environments during a rolling restart, make sure you are running at least two clusters (in the same or different region/data center). Divert all production traffic to a single cluster and take the cluster you are about to upgrade offline, and then proceed with the upgrade process. Repeat the process for each cluster.

Apigee recommends upgrading all clusters as soon as possible to reduce the chances of production impact. There is no time limit on when all remaining clusters must be upgraded after the first one is upgraded. However, until all remaining clusters are upgraded, the following operations will be impacted:

  • Cassandra backup and restore cannot work with mixed versions. For example, a backup from Hybrid 1.11 cannot be used to restore a Hybrid 1.12 instance.
  • Cassandra data streaming will not work between mixed Hybrid versions. Therefore, your Cassandra clusters cannot scale horizontally.
  • Region expansion and decommissioning will be impacted, because these operations depend on Cassandra data streaming.
Note: Management plane changes do not need to be suspended during an upgrade. Any required suspensions of management plane changes are included in the upgrade instructions.

The procedures for upgrading Apigee hybrid are organized in the following sections:

  1. Prepare to upgrade.
    • Back up Cassandra.
    • Back up your hybrid installation directories.
  2. Install hybrid runtime version 1.12.4.

Prepare to upgrade to version 1.12

Back up Cassandra

  • Back up your Cassandra database in all applicable regions and validate the data in your hybrid version 1.11 installation before starting the upgrade. See Monitoring backups in the version 1.11 documentation.
  • Restart all the Cassandra pods in the cluster before you start the upgrade process, so any lingering issues can surface.

    To restart and test the Cassandra pods, delete each pod individually, one pod at a time, and then validate that it comes back in a running state and that the readiness probe passes (these steps can be scripted; see the sketch after this list):

    1. List the Cassandra pods:
      kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

      For example:

      kubectl get pods -n apigee -l app=apigee-cassandra

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          2h
      apigee-cassandra-default-1   1/1     Running   0          2h
      apigee-cassandra-default-2   1/1     Running   0          2h
      . . .
    2. Delete a pod:
      kubectl delete pod -n APIGEE_NAMESPACE CASSANDRA_POD_NAME

      For example:

      kubectl delete pod -n apigee apigee-cassandra-default-0
    3. Check the status by listing the Cassandra pods again:
      kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

      For example:

      kubectl get pods -n apigee -l app=apigee-cassandra

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          16s
      apigee-cassandra-default-1   1/1     Running   0          2h
      apigee-cassandra-default-2   1/1     Running   0          2h
      . . .
  • Apply the last known overrides file again to confirm that no changes have been made to it, so that you can use the same configuration to upgrade to hybrid version 1.12.
  • Ensure that all Cassandra nodes in all regions are in the UN (Up / Normal) state. If any Cassandra node is in a different state, address that first before starting the upgrade.

    You can validate the state of your Cassandra nodes with the following commands:

    1. List the Cassandra pods:
      kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

      For example:

      kubectl get pods -n apigee -l app=apigee-cassandra

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          2h
      apigee-cassandra-default-1   1/1     Running   0          2h
      apigee-cassandra-default-2   1/1     Running   0          2h
      apigee-cassandra-default-3   1/1     Running   0          16m
      apigee-cassandra-default-4   1/1     Running   0          14m
      apigee-cassandra-default-5   1/1     Running   0          13m
      apigee-cassandra-default-6   1/1     Running   0          9m
      apigee-cassandra-default-7   1/1     Running   0          9m
      apigee-cassandra-default-8   1/1     Running   0          8m
    2. Check the state of the nodes for each Cassandra pod with the nodetool status command:
      kubectl -n APIGEE_NAMESPACE exec -it CASSANDRA_POD_NAME -- nodetool status

      For example:

      kubectl -n apigee exec -it apigee-cassandra-default-0 -- nodetool status

      Datacenter: us-east1
      ====================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address      Load        Tokens       Owns (effective)  Host ID                               Rack
      UN  10.16.2.6    690.17 KiB  256          48.8%             b02089d1-0521-42e1-bbed-900656a58b68  ra-1
      UN  10.16.4.6    705.55 KiB  256          51.6%             dc6b7faf-6866-4044-9ac9-1269ebd85dab  ra-1
      UN  10.16.11.11  674.36 KiB  256          48.3%             c7906366-6c98-4ff6-a4fd-17c596c33cf7  ra-1
      UN  10.16.1.11   697.03 KiB  256          49.8%             ddf221aa-80aa-497d-b73f-67e576ff1a23  ra-1
      UN  10.16.5.13   703.64 KiB  256          50.9%             2f01ac42-4b6a-4f9e-a4eb-4734c24def95  ra-1
      UN  10.16.8.15   700.42 KiB  256          50.6%             a27f93af-f8a0-4c88-839f-2d653596efc2  ra-1
      UN  10.16.11.3   697.03 KiB  256          49.8%             dad221ff-dad1-de33-2cd3-f1.672367e6f  ra-1
      UN  10.16.14.16  704.04 KiB  256          50.9%             1feed042-a4b6-24ab-49a1-24d4cef95473  ra-1
      UN  10.16.16.1   699.82 KiB  256          50.6%             beef93af-fee0-8e9d-8bbf-efc22d653596  ra-1
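
The per-pod restart and validation steps above can be scripted. The following is a minimal sketch, not taken from the Apigee reference material; it assumes the apigee namespace and the app=apigee-cassandra label shown above, and the timeout should be adjusted for your environment:

    # Restart each Cassandra pod one at a time and wait for it to become Ready.
    for pod in $(kubectl get pods -n apigee -l app=apigee-cassandra -o name); do
      kubectl delete -n apigee "$pod"
      sleep 10   # give the StatefulSet controller time to recreate the pod
      kubectl wait -n apigee "$pod" --for=condition=Ready --timeout=300s
    done

    # Print any Cassandra node that is not in the UN (Up/Normal) state.
    kubectl exec -n apigee apigee-cassandra-default-0 -- nodetool status \
      | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN"'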

Back up your hybrid installation directories

  1. These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change to this directory and define the variable with the following command:

    Linux

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Mac OS

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Windows

    set APIGEE_HELM_CHARTS_HOME=%CD%
    echo %APIGEE_HELM_CHARTS_HOME%
  2. Make a backup copy of your version 1.11 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:
    tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.11-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
  3. Back up your Cassandra database following the instructions in Cassandra backup and recovery.
  4. If you are using service cert files (.json) in your overrides to authenticate service accounts, make sure your service account cert files reside in the correct Helm chart directory. Helm charts cannot read files outside of each chart directory.

    This step is not required if you are using Kubernetes secrets or Workload Identity to authenticate service accounts.

    The following table shows the destination for each service account file, depending on your type of installation:

    Prod

    | Service account     | Default filename                    | Helm chart directory                       |
    |---------------------|-------------------------------------|--------------------------------------------|
    | apigee-cassandra    | PROJECT_ID-apigee-cassandra.json    | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/ |
    | apigee-logger       | PROJECT_ID-apigee-logger.json       | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
    | apigee-mart         | PROJECT_ID-apigee-mart.json         | $APIGEE_HELM_CHARTS_HOME/apigee-org/       |
    | apigee-metrics      | PROJECT_ID-apigee-metrics.json      | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
    | apigee-runtime      | PROJECT_ID-apigee-runtime.json      | $APIGEE_HELM_CHARTS_HOME/apigee-env/       |
    | apigee-synchronizer | PROJECT_ID-apigee-synchronizer.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/       |
    | apigee-udca         | PROJECT_ID-apigee-udca.json         | $APIGEE_HELM_CHARTS_HOME/apigee-org/       |
    | apigee-watcher      | PROJECT_ID-apigee-watcher.json      | $APIGEE_HELM_CHARTS_HOME/apigee-org/       |

    Non-prod

    Make a copy of the apigee-non-prod service account file in each of the following directories (this can be scripted; see the sketch after this list):

    | Service account | Default filename                | Helm chart directories                     |
    |-----------------|---------------------------------|--------------------------------------------|
    | apigee-non-prod | PROJECT_ID-apigee-non-prod.json | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/ |
    |                 |                                 | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
    |                 |                                 | $APIGEE_HELM_CHARTS_HOME/apigee-org/       |
    |                 |                                 | $APIGEE_HELM_CHARTS_HOME/apigee-env/       |
  5. Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
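
For non-prod installations, copying the single service account file into each chart directory can be scripted. A minimal sketch, assuming the default file name shown above (PROJECT_ID is a placeholder for your Google Cloud project ID):

    SA_FILE="PROJECT_ID-apigee-non-prod.json"

    # Copy the non-prod service account key into each chart that needs it.
    for chart in apigee-datastore apigee-telemetry apigee-org apigee-env; do
      cp "$SA_FILE" "$APIGEE_HELM_CHARTS_HOME/$chart/"
    done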

Upgrade your Kubernetes version

Check your Kubernetes platform version and if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.11 and hybrid 1.12. Follow your platform's documentation if you need help.

Supported platforms by Apigee hybrid version:

| Platform | 1.10 not supported(3) | 1.11 | 1.12 |
|---|---|---|---|
| GKE on Google Cloud | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| GKE on AWS | 1.13.x (K8s v1.24.x), 1.14.x (K8s v1.25.x), 1.26.x(12), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x (K8s v1.25.x), 1.26.x(12), 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x(12), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| GKE on Azure | 1.13.x, 1.14.x, 1.26.x(12), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.26.x(12), 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x(12), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| Google Distributed Cloud (software only) on VMware(1)(13) | 1.13.x, 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x(12), 1.29.x (≥ 1.12.1) |
| Google Distributed Cloud (software only) on bare metal(1) | 1.13.x, 1.14.x (K8s v1.25.x), 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.27.x(12) (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x(12) (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x(12), 1.29.x (≥ 1.12.1) |
| EKS(7) | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| AKS(7) | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| OpenShift(7) | 4.11, 4.12, 4.14 (≥ 1.10.5), 4.15 (≥ 1.10.5) | 4.12, 4.13, 4.14, 4.15 (≥ 1.11.2), 4.16 (≥ 1.11.2) | 4.12, 4.13, 4.14, 4.15, 4.16 (≥ 1.12.1) |
| Rancher Kubernetes Engine (RKE) | v1.26.2+rke2r1, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | v1.26.2+rke2r1, v1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) |
| VMware Tanzu | N/A | N/A | v1.26.x |

Components

| Component | 1.10 | 1.11 | 1.12 |
|---|---|---|---|
| Cloud Service Mesh | 1.17.x(10) | 1.18.x(10) | 1.19.x(10) |
| JDK | JDK 11 | JDK 11 | JDK 11 |
| cert-manager | 1.10.x, 1.11.x, 1.12.x | 1.11.x, 1.12.x, 1.13.x | 1.11.x, 1.12.x, 1.13.x |
| Cassandra | 3.11 | 3.11 | 4.0 |
| Kubernetes | 1.24.x, 1.25.x, 1.26.x | 1.25.x, 1.26.x, 1.27.x | 1.26.x, 1.27.x, 1.28.x |
| kubectl | | | 1.27.x, 1.28.x, 1.29.x |
| Helm | 3.10+ | 3.10+ | 3.14.2+ |
| Secret Store CSI driver | | 1.3.4 | 1.4.1 |
| Vault | | 1.13.x | 1.15.2 |

(1) On Anthos on-premises (Google Distributed Cloud) version 1.13, follow these instructions to avoid conflict with cert-manager: Conflicting cert-manager installation

(2) Support available with Apigee hybrid version 1.7.2 and newer.

(3) The official EOL dates for Apigee hybrid versions 1.10 and older have been reached. Regular monthly patches are no longer available. These releases are no longer officially supported except for customers with explicit and official exceptions for continued support. Other customers must upgrade.

(4) Anthos on-premises (Google Distributed Cloud) versions 1.12 and earlier are out of support. See the Distributed Cloud on bare metal Version Support Policy and the Distributed Cloud on VMware supported versions list.

(5) Google Distributed Cloud on bare metal or VMware requires Cloud Service Mesh 1.14 or later. We recommend that you upgrade to hybrid v1.8 and switch to Apigee ingress gateway, which no longer requires you to install Cloud Service Mesh on your hybrid cluster.

(6) Support available with Apigee hybrid version 1.8.4 and newer.

(7) Not supported with Apigee hybrid version 1.8.4 and newer.

(8) Support available with Apigee hybrid version 1.7.6 and newer.

(9) Not supported with Apigee hybrid version 1.8.5 and newer.

(10) Cloud Service Mesh is automatically installed with Apigee hybrid 1.9 and newer.

(11) Support available with Apigee hybrid version 1.9.2 and newer.

(12) GKE on AWS version numbers now reflect the Kubernetes versions. See GKE Enterprise version and upgrade support for version details and recommended patches.

(13) Vault is not certified on Google Distributed Cloud for VMware.

(14) Support available with Apigee hybrid version 1.10.5 and newer.

(15) Support available with Apigee hybrid version 1.11.2 and newer.

(16) Support available with Apigee hybrid version 1.12.1 and newer.

About attached clusters

For Apigee hybrid versions 1.7.x and older, you must use GKE attached clusters if you want to run Apigee hybrid in a multi-cloud context on Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or another supported third-party Kubernetes service provider. Cluster attachment allows Google to measure the usage of Cloud Service Mesh.

For Apigee hybrid version 1.8.x, GKE attached clusters are required if you are using Cloud Service Mesh for your ingress gateway. If you are using Apigee ingress gateway, attached clusters are optional.

Note: Hybrid installations are not currently supported on Autopilot clusters.

Install the hybrid 1.12.4 runtime

Caution: Do not create new environments during the upgrade process.

Prepare for the Helm charts upgrade

This upgrade procedure assumes you are using the same namespace and service accounts for the upgraded installation. If you are making any configuration changes, be sure to reflect those changes in your overrides file before installing the Helm charts.
  1. Pull the Apigee Helm charts.

    Apigee hybrid charts are hosted in Google Artifact Registry:

    oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts

    Using the pull command, copy all of the Apigee hybrid Helm charts to your local storage with the following commands:

    export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
    export CHART_VERSION=1.12.4
    helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
  2. Upgrade cert-manager if needed.

    If you need to upgrade your cert-manager version, install the new version with the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
  3. Install the updated Apigee CRDs:
    Note: From this step onwards, all commands should be run from the chart repo root directory.
    Note: This is the only supported method for installing Apigee CRDs. Do not use kubectl apply without -k, and do not omit --server-side.
    Note: This step requires elevated cluster permissions.
    1. Use the kubectl dry-run feature by running the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
    2. After validating with the dry-run command, run the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false
    3. Validate the installation with the kubectl get crds command:
      kubectl get crds | grep apigee

      Your output should look something like the following:

      apigeedatastores.apigee.cloud.google.com                    2023-10-09T14:48:30Z
      apigeedeployments.apigee.cloud.google.com                   2023-10-09T14:48:30Z
      apigeeenvironments.apigee.cloud.google.com                  2023-10-09T14:48:31Z
      apigeeissues.apigee.cloud.google.com                        2023-10-09T14:48:31Z
      apigeeorganizations.apigee.cloud.google.com                 2023-10-09T14:48:32Z
      apigeeredis.apigee.cloud.google.com                         2023-10-09T14:48:33Z
      apigeerouteconfigs.apigee.cloud.google.com                  2023-10-09T14:48:33Z
      apigeeroutes.apigee.cloud.google.com                        2023-10-09T14:48:33Z
      apigeetelemetries.apigee.cloud.google.com                   2023-10-09T14:48:34Z
      cassandradatareplications.apigee.cloud.google.com           2023-10-09T14:48:35Z
  4. Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data, and runtime pods are scheduled on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file (a quick verification sketch follows this list).

    For more information, see Configuring dedicated node pools.
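
Before running the chart upgrades, a quick check can confirm the node labels and the pulled chart versions. A minimal sketch, run from the chart repo root directory:

    # Show the node pool label on each node (-L adds a label column).
    kubectl get nodes -L cloud.google.com/gke-nodepool

    # Confirm the pulled charts are at the expected version.
    grep ^version apigee-operator/Chart.yaml apigee-datastore/Chart.yaml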

Install the Apigee hybrid Helm charts

Note: Before executing any of the Helm upgrade/install commands, use the Helm dry-run feature by adding --dry-run at the end of the command. See helm -h to list supported commands, options, and usage.
  1. If you have not already, navigate to your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
  2. Upgrade the Apigee Operator/Controller:
    Note: This step requires elevated cluster permissions. Run helm -h or helm upgrade -h for details.

    Dry run:

    helm upgrade operator apigee-operator/ \
      --install \
      --create-namespace \
      --namespace apigee-system \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade operator apigee-operator/ \
      --install \
      --create-namespace \
      --namespace apigee-system \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade operator apigee-operator/ \
      --install \
      --create-namespace \
      --namespace apigee-system \
      --force \
      -f OVERRIDES_FILE

    Verify the Apigee Operator installation:

    helm ls -n apigee-system

    NAME       NAMESPACE       REVISION   UPDATED                                STATUS     CHART                    APP VERSION
    operator   apigee-system   3          2023-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.12.4   1.12.4

    Verify it is up and running by checking its availability:

    kubectl -n apigee-system get deploy apigee-controller-manager

    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-controller-manager   1/1     1            1           7d20h
  3. Upgrade the Apigee datastore:

    Dry run:

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --force \
      -f OVERRIDES_FILE

    Verify apigeedatastore is up and running by checking its state:

    kubectl -n apigee get apigeedatastore default

    NAME      STATE     AGE
    default   running   2d
  4. Upgrade Apigee telemetry:

    Dry run:

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --force \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its state:

    kubectl -n apigee get apigeetelemetry apigee-telemetry

    NAME               STATE     AGE
    apigee-telemetry   running   2d
  5. Upgrade Apigee Redis:

    Dry run:

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade redis apigee-redis/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --force \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its state:

    kubectl -n apigee get apigeeredis default

    NAME      STATE     AGE
    default   running   2d
  6. Upgrade Apigee ingress manager:

    Dry run:

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --force \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its availability:

    kubectl -n apigee get deployment apigee-ingressgateway-manager

    NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-ingressgateway-manager   2/2     2            2           2d
  7. Upgrade the Apigee organization:

    Dry run:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run

    Upgrade the chart:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --force \
      -f OVERRIDES_FILE

    Verify it is up and running by checking the state of the respective org:

    kubectl -n apigee get apigeeorg

    NAME                STATE     AGE
    apigee-org1-xxxxx   running   2d
  8. Upgrade the environment.

    You must install one environment at a time. Specify the environment with --set env=ENV_NAME:

    Dry run:

    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set env=ENV_NAME \
      -f OVERRIDES_FILE \
      --dry-run
    • ENV_RELEASE_NAME is the name with which you previously installed the apigee-env chart. In hybrid v1.10 it is usually apigee-env-ENV_NAME. In hybrid v1.11 and newer it is usually ENV_NAME.
    • ENV_NAME is the name of the environment you are upgrading.
    • OVERRIDES_FILE is your new overrides file for v1.12.4.

    Upgrade the chart:

    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set env=ENV_NAME \
      -f OVERRIDES_FILE
    Note: For installations migrated from apigeectl to Helm, use:
    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set env=ENV_NAME \
      --force \
      -f OVERRIDES_FILE

    Verify it is up and running by checking the state of the respective env:

    kubectl -n apigee get apigeeenv

    NAME                  STATE     AGE   GATEWAYTYPE
    apigee-org1-dev-xxx   running   2d
  9. Upgrade the environment groups (virtualhosts).
    1. You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

      Dry run:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set envgroup=ENV_GROUP_NAME \
        -f OVERRIDES_FILE \
        --dry-run

      ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. In hybrid v1.10 it is usually apigee-virtualhost-ENV_GROUP_NAME. In hybrid v1.11 and newer it is usually ENV_GROUP_NAME.

      Upgrade the chart:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set envgroup=ENV_GROUP_NAME \
        -f OVERRIDES_FILE
      Note: For installations migrated from apigeectl to Helm, use:
      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set envgroup=ENV_GROUP_NAME \
        --force \
        -f OVERRIDES_FILE
      Note: ENV_GROUP_RELEASE_NAME must be unique within the apigee namespace.

      For example, if you have an environment named prod and an environment group named prod, set the value of ENV_GROUP_RELEASE_NAME to something unique, like prod-envgroup.

    2. Check the state of the ApigeeRoute (AR).

      Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group details from the control plane. Therefore, check that the corresponding AR's state is running (a consolidated verification sketch follows these steps):

      kubectl -n apigee get arc

      NAME                     STATE   AGE
      apigee-org1-dev-egroup           2d

      kubectl -n apigee get ar

      NAME                            STATE     AGE
      apigee-org1-dev-egroup-xxxxxx   running   2d
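
After completing all of the chart upgrades, a consolidated check can confirm the release versions and pod health in one pass. A minimal sketch, assuming the apigee namespace used in the examples above:

    # Every release should now report chart version 1.12.4.
    helm ls -n apigee
    helm ls -n apigee-system

    # All pods should be Running (or Completed, for one-off job pods).
    kubectl get pods -n apigee
    kubectl get pods -n apigee-system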

Validate policies after upgrade to 1.12.4

Use this procedure to validate the behavior of the JavaCallout policy after upgrading from version 1.12.3 or earlier to version 1.12.4 or later.

  1. Check whether the Java JAR files request unnecessary permissions.

    After the policy is deployed, check the runtime logs to see if the following log message is present: "Failed to load and initialize class ...". If you observe this message, it suggests that the deployed JAR requested unnecessary permissions. To resolve this issue, investigate the Java code and update the JAR file. (A log-search sketch follows these steps.)

  2. Investigate and update the Java code.

    Review any Java code (including dependencies) to identify the cause of potentially unallowed operations. When found, modify the source code as required.

  3. Test policies with the security check enabled.

    In a non-production environment, enable the security check flag and redeploy your policies with an updated JAR. To set the flag:

    • In the apigee-env/values.yaml file, set conf_security-secure.constructor.only to true under runtime:cwcAppend:. For example:
      # Apigee Runtime
      runtime:
        cwcAppend:
          conf_security-secure.constructor.only: true
    • Update the apigee-env chart for the environment to apply the change. For example:
      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set env=ENV_NAME \
        -f OVERRIDES_FILE

        ENV_RELEASE_NAME is a name used to keep track of installations and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

    If the log message"Failed to load and initialize class ..." is still present, continue modifying and testing the JAR until the log message no longer appears.

  4. Enable the security check in the production environment.

    After you have thoroughly tested and verified the JAR file in the non-production environment, enable the security check in your production environment by setting the flag conf_security-secure.constructor.only to true and updating the apigee-env chart for the production environment to apply the change.
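
To search the runtime logs for the failure message from step 1, a loop over the runtime pods can help. A minimal sketch, assuming the apigee namespace and the app=apigee-runtime pod label (adjust both for your installation):

    # Search the last hour of runtime logs for the security-check failure.
    for pod in $(kubectl get pods -n apigee -l app=apigee-runtime -o name); do
      kubectl logs -n apigee "$pod" --since=1h \
        | grep "Failed to load and initialize class" \
        && echo "Found in $pod"
    done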

Congratulations! You have upgraded to Apigee hybrid version 1.12.4. To test your upgrade, call a proxy against the new installation. For an example, see Step 10: Deploy an API proxy in the Apigee hybrid 1.12 installation guide.

Rolling back to a previous version

This section is divided into subsections depending on the state of your apigee-datastore component after upgrading to Apigee hybrid version 1.12. There are procedures for single-region or multi-region rollback with the apigee-datastore component in a good state, and procedures for recovery or restore from a backup when apigee-datastore is in a bad state.

Single region rollback and recovery

Rolling back when apigee-datastore is in a good state

This procedure explains how to roll back every Apigee hybrid component from v1.12 to v1.11 except apigee-datastore. The v1.12 apigee-datastore component is backwards compatible with hybrid v1.11 components.

Warning: Rolling back apigee-datastore is not possible. Data incompatibility between different versions of the Cassandra database prevents rolling back to the previous version.

To roll back your single-region installation to version 1.11:

  1. Before starting rollback, validate that all the pods are in a running state:
    kubectl get pods -n APIGEE_NAMESPACE
    kubectl get pods -n apigee-system
  2. Validate the releases of the components using Helm:
    helm -n APIGEE_NAMESPACE list
    helm -n apigee-system list

    For example:

    helm -n apigee list

    NAME              NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                           APP VERSION
    datastore         apigee      2          2024-03-29 17:08:07.917848253 +0000 UTC   deployed   apigee-datastore-1.12.0         1.12.0
    ingress-manager   apigee      2          2024-03-29 17:21:02.917333616 +0000 UTC   deployed   apigee-ingress-manager-1.12.0   1.12.0
    redis             apigee      2          2024-03-29 17:19:51.143728084 +0000 UTC   deployed   apigee-redis-1.12.0             1.12.0
    telemetry         apigee      2          2024-03-29 17:16:09.883885403 +0000 UTC   deployed   apigee-telemetry-1.12.0         1.12.0
    myhybridorg       apigee      2          2024-03-29 17:21:50.899855344 +0000 UTC   deployed   apigee-org-1.12.0               1.12.0
  3. Roll back each component except apigee-datastore with the following commands:
    1. Create the following environment variable:
      • PREVIOUS_HELM_CHARTS_HOME: The directory where the previous Apigee hybrid Helm charts are installed. This is the version you are rolling back to.
    2. Roll back the virtualhosts. Repeat the following command for each environment group mentioned in the overrides file.
      helm upgrade ENV_GROUP_RELEASE_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-virtualhost/ \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=ENV_GROUP_NAME \
        -f PREVIOUS_OVERRIDES_FILE

      ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. In hybrid v1.10 it is usually apigee-virtualhost-ENV_GROUP_NAME. In hybrid v1.11 and newer it is usually ENV_GROUP_NAME.

    3. Roll back the environments. Repeat the following command for each environment mentioned in the overrides file.
      helm upgrade ENV_RELEASE_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=ENV_NAME \
        -f PREVIOUS_OVERRIDES_FILE

      ENV_RELEASE_NAME is the name with which you previously installed the apigee-env chart. In hybrid v1.10 it is usually apigee-env-ENV_NAME. In hybrid v1.11 and newer it is usually ENV_NAME.

      Verify it is up and running by checking the state of the respective env:

      kubectl -n apigee get apigeeenv
      NAME                  STATE     AGE   GATEWAYTYPE
      apigee-org1-dev-xxx   running   2d
    4. Roll back Org:
      helm upgrade ORG_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking the state of the respective org:

      kubectl -n apigee get apigeeorg
      NAME                STATE     AGE
      apigee-org1-xxxxx   running   2d
    5. Roll back the Ingress Manager:
      helm upgrade ingress-manager $PREVIOUS_HELM_CHARTS_HOME/apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its availability:

      kubectl -n apigee get deployment apigee-ingressgateway-manager
      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           2d
    6. Roll back Redis:
      helm upgrade redis $PREVIOUS_HELM_CHARTS_HOME/apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its state:

      kubectl -n apigee get apigeeredis default
      NAME      STATE     AGE
      default   running   2d
    7. Roll back Apigee Telemetry:
      helm upgrade telemetry $PREVIOUS_HELM_CHARTS_HOME/apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its state:

      kubectl -n apigee get apigeetelemetry apigee-telemetry
      NAME               STATE     AGE
      apigee-telemetry   running   2d
    8. Roll back the Apigee Controller:
      helm upgrade operator $PREVIOUS_HELM_CHARTS_HOME/apigee-operator/ \
        --install \
        --namespace apigee-system \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify Apigee Operator installation:

      helm ls -n apigee-system
      NAME       NAMESPACE       REVISION   UPDATED                                STATUS     CHART                    APP VERSION
      operator   apigee-system   3          2023-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.12.4   1.12.4

      Verify it is up and running by checking its availability:

      kubectl -n apigee-system get deploy apigee-controller-manager
      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           7d20h
    9. Roll back the Apigee hybrid CRDs:
      kubectl apply -k $PREVIOUS_HELM_CHARTS_HOME/apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false
  4. Validate that all the pods are either in a running or completed state:
    kubectl get pods -n APIGEE_NAMESPACE
    kubectl get pods -n apigee-system
  5. Validate the releases of all components. All components should be on the previous version except for datastore (a version-check sketch follows this list):
    helm -n APIGEE_NAMESPACE list
    helm -n apigee-system list

    For example:

    helm -n apigee list

    NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                           APP VERSION
    datastore         apigee     2         2024-03-29 18:47:55.979671057 +0000 UTC  deployed  apigee-datastore-1.12.0         1.12.0
    ingress-manager   apigee     3         2024-03-14 19:14:57.905700154 +0000 UTC  deployed  apigee-ingress-manager-1.11.0   1.11.0
    redis             apigee     3         2024-03-14 19:15:49.406917944 +0000 UTC  deployed  apigee-redis-1.11.0             1.11.0
    telemetry         apigee     3         2024-03-14 19:17:04.803421424 +0000 UTC  deployed  apigee-telemetry-1.11.0         1.11.0
    myhybridorg       apigee     3         2024-03-14 19:13:17.807673713 +0000 UTC  deployed  apigee-org-1.11.0               1.11.0
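
To confirm the rollback landed as expected, you can list each release with its chart version; every release except datastore should report a 1.11.x chart. A minimal sketch using Helm's JSON output and jq (jq is assumed to be available):

    # Print each release name with its chart version.
    helm ls -n apigee -o json | jq -r '.[] | "\(.name)\t\(.chart)"'

    # Anything listed here other than datastore has not been rolled back.
    helm ls -n apigee -o json \
      | jq -r '.[] | select(.chart | contains("-1.12.")) | .name'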

Restoring when apigee-datastore is not in a good state

If the upgrade of the apigee-datastore component was not successful, you cannot roll back apigee-datastore from version 1.12 to version 1.11. Instead, you must restore from a backup of your v1.11 installation. Use the following sequence to restore your previous version.

  1. If you do not have an active installation of Apigee hybrid version 1.11 (for example in another region), create a new installation of v1.11 using your backed-up charts and overrides files. See the Apigee hybrid version 1.11 installation instructions.
  2. Restore the v1.11 region (or new installation) from your backup following the instructions in Cassandra backup and recovery.
  3. Verify traffic to the restored installation.
  4. Optional: Remove the version 1.12 installation following the instructions in Uninstall hybrid runtime.

Multi-region rollback and recovery

Rolling back when apigee-datastore is in a good state

Note: Use this rollback method to roll back an upgraded hybrid region to the previous hybrid version. This method is intended to recover from a failed upgrade when not all of the regions have been upgraded yet.

This procedure explains how to roll back every Apigee hybrid component from v1.12 to v1.11 except apigee-datastore. The v1.12 apigee-datastore component is backwards compatible with hybrid v1.11 components.

  1. Before starting rollback, validate that all the pods are in a running state:
    kubectl get pods -n APIGEE_NAMESPACE
    kubectl get pods -n apigee-system
  2. Ensure that all Cassandra nodes in all regions are in the UN (Up / Normal) state. If any Cassandra node is in a different state, address that first before proceeding.

    You can validate the state of your Cassandra nodes with the following commands:

    1. List the Cassandra pods:
      kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

      For example:

      kubectl get pods -n apigee -l app=apigee-cassandra

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          2h
      apigee-cassandra-default-1   1/1     Running   0          2h
      apigee-cassandra-default-2   1/1     Running   0          2h
      apigee-cassandra-default-3   1/1     Running   0          16m
      apigee-cassandra-default-4   1/1     Running   0          14m
      apigee-cassandra-default-5   1/1     Running   0          13m
      apigee-cassandra-default-6   1/1     Running   0          9m
      apigee-cassandra-default-7   1/1     Running   0          9m
      apigee-cassandra-default-8   1/1     Running   0          8m
    2. Check the state of the nodes for each Cassandra pod with the nodetool status command:
      kubectl -n APIGEE_NAMESPACE exec -it CASSANDRA_POD_NAME -- nodetool -u JMX_USER -pw JMX_PASSWORD status

      For example:

      kubectl -n apigee exec -it apigee-cassandra-default-0 -- nodetool -u jmxuser -pw JMX_PASSWORD status

      Datacenter: us-east1
      ====================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address      Load        Tokens   Owns (effective)   Host ID                                Rack
      UN  10.16.2.6    690.17 KiB  256      48.8%              b02089d1-0521-42e1-bbed-900656a58b68   ra-1
      UN  10.16.4.6    705.55 KiB  256      51.6%              dc6b7faf-6866-4044-9ac9-1269ebd85dab   ra-1
      UN  10.16.11.11  674.36 KiB  256      48.3%              c7906366-6c98-4ff6-a4fd-17c596c33cf7   ra-1
      UN  10.16.1.11   697.03 KiB  256      49.8%              ddf221aa-80aa-497d-b73f-67e576ff1a23   ra-1
      UN  10.16.5.13   703.64 KiB  256      50.9%              2f01ac42-4b6a-4f9e-a4eb-4734c24def95   ra-1
      UN  10.16.8.15   700.42 KiB  256      50.6%              a27f93af-f8a0-4c88-839f-2d653596efc2   ra-1
      UN  10.16.11.3   697.03 KiB  256      49.8%              dad221ff-dad1-de33-2cd3-f1.672367e6f   ra-1
      UN  10.16.14.16  704.04 KiB  256      50.9%              1feed042-a4b6-24ab-49a1-24d4cef95473   ra-1
      UN  10.16.16.1   699.82 KiB  256      50.6%              beef93af-fee0-8e9d-8bbf-efc22d653596   ra-1

    If not all Cassandra pods are in a UN state, follow the instructions in Remove DOWN nodes from Cassandra Cluster.

  3. Navigate to the directory where the previous Apigee hybrid Helm charts are installed.
  4. Change the context to the region that was upgraded:
    kubectl config use-context UPGRADED_REGION_CONTEXT
  5. Validate that all the pods are in a running state:
    kubectl get pods -n APIGEE_NAMESPACE
    kubectl get pods -n apigee-system
  6. Use the helm command to make sure all the releases were upgraded to hybrid v1.12:
    helm -n APIGEE_NAMESPACE list
    helm -n apigee-system list

    For example:

    helm -n apigee list

    NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                           APP VERSION
    datastore        apigee     2         2024-03-29 17:08:07.917848253 +0000 UTC  deployed  apigee-datastore-1.12.0         1.12.0
    ingress-manager  apigee     2         2024-03-29 17:21:02.917333616 +0000 UTC  deployed  apigee-ingress-manager-1.12.0   1.12.0
    redis            apigee     2         2024-03-29 17:19:51.143728084 +0000 UTC  deployed  apigee-redis-1.12.0             1.12.0
    telemetry        apigee     2         2024-03-29 17:16:09.883885403 +0000 UTC  deployed  apigee-telemetry-1.12.0         1.12.0
    myhybridorg      apigee     2         2024-03-29 17:21:50.899855344 +0000 UTC  deployed  apigee-org-1.12.0               1.12.0
  7. Roll back each component except apigee-datastore with the following commands:
    1. Create the following environment variable:
      • PREVIOUS_HELM_CHARTS_HOME: The directory where the previous Apigee hybrid Helm charts are installed. This is the version you are rolling back to.
    2. Roll back the virtualhosts. Repeat the following command for each environment group mentioned in the overrides file.
      helm upgrade ENV_GROUP_RELEASE_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-virtualhost/ \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=ENV_GROUP_NAME \
        -f PREVIOUS_OVERRIDES_FILE

      ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. In hybrid v1.10 it is usually apigee-virtualhost-ENV_GROUP_NAME. In hybrid v1.11 and newer it is usually ENV_GROUP_NAME.

    3. Roll back the environments. Repeat the following command for each environment mentioned in the overrides file.
      helm upgrade ENV_RELEASE_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=ENV_NAME \
        -f PREVIOUS_OVERRIDES_FILE

      ENV_RELEASE_NAME is the name with which you previously installed the apigee-env chart. In hybrid v1.10 it is usually apigee-env-ENV_NAME. In hybrid v1.11 and newer it is usually ENV_NAME.

      Verify each env is up and running by checking the state of the respective env:

      kubectl -n apigee get apigeeenv
      NAME                  STATE     AGE   GATEWAYTYPE
      apigee-org1-dev-xxx   running   2d
    4. Roll back Org:
      helm upgrade ORG_NAME $PREVIOUS_HELM_CHARTS_HOME/apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking the state of the respective org:

      kubectl -n apigee get apigeeorg
      NAME                STATE     AGE
      apigee-org1-xxxxx   running   2d
    5. Roll back the Ingress Manager:
      helm upgrade ingress-manager $PREVIOUS_HELM_CHARTS_HOME/apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its availability:

      kubectl -n apigee get deployment apigee-ingressgateway-manager
      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           2d
    6. Roll back Redis:
      helm upgrade redis $PREVIOUS_HELM_CHARTS_HOME/apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its state:

      kubectl -n apigee get apigeeredis default
      NAME      STATE     AGE
      default   running   2d
    7. Roll back Apigee Telemetry:
      helm upgrade telemetry $PREVIOUS_HELM_CHARTS_HOME/apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify it is up and running by checking its state:

      kubectl -n apigee get apigeetelemetry apigee-telemetry
      NAME               STATE     AGE
      apigee-telemetry   running   2d
    8. Roll back the Apigee Controller:
      helm upgrade operator $PREVIOUS_HELM_CHARTS_HOME/apigee-operator/ \
        --install \
        --namespace apigee-system \
        --atomic \
        -f PREVIOUS_OVERRIDES_FILE

      Verify Apigee Operator installation:

      helm ls -n apigee-system
      NAME       NAMESPACE       REVISION   UPDATED                                STATUS     CHART                    APP VERSION
      operator   apigee-system   3          2023-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.12.4   1.12.4

      Verify it is up and running by checking its availability:

      kubectl -n apigee-system get deploy apigee-controller-manager
      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           7d20h
    9. Roll back the Apigee hybrid CRDs:
      kubectl apply -k $PREVIOUS_HELM_CHARTS_HOME/apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false
  8. Validate the releases of all components. All components should be on the previous version except for datastore (see the sketch after this list for checking every region):
    helm -n APIGEE_NAMESPACE list
    helm -n apigee-system list

    For example:

    helm -n apigee list

    NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                           APP VERSION
    datastore         apigee     2         2024-03-29 18:47:55.979671057 +0000 UTC  deployed  apigee-datastore-1.12.0         1.12.0
    ingress-manager   apigee     3         2024-03-14 19:14:57.905700154 +0000 UTC  deployed  apigee-ingress-manager-1.11.0   1.11.0
    redis             apigee     3         2024-03-14 19:15:49.406917944 +0000 UTC  deployed  apigee-redis-1.11.0             1.11.0
    telemetry         apigee     3         2024-03-14 19:17:04.803421424 +0000 UTC  deployed  apigee-telemetry-1.11.0         1.11.0
    myhybridorg       apigee     3         2024-03-14 19:13:17.807673713 +0000 UTC  deployed  apigee-org-1.11.0               1.11.0

    At this point all the releases except datastore have been rolled back to the previous version.
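
In a multi-region rollback it helps to repeat the same release check in every cluster context. A minimal sketch, where CONTEXT_1 and CONTEXT_2 are placeholders for your kubectl contexts:

    # Compare Helm release versions across regions.
    for ctx in CONTEXT_1 CONTEXT_2; do
      echo "--- $ctx ---"
      kubectl config use-context "$ctx"
      helm ls -n apigee
    done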

Important: Currently, the Cassandra datastore is partially upgraded. We recommend attempting the upgrade again by addressing any issues encountered during the previous attempt and retrying the process. If it is not feasible to attempt another upgrade, the recommendation is to recover the datastore using the instructions in Recovering a multi-region installation to a previous version.

Recovering a multi-region installation to a previous version

Recover the region where the upgrade failed by removing references to it from the remaining regions of the installation. This method is only possible when there is at least one live region on hybrid 1.11. The v1.12 datastore is compatible with v1.11 components.

To recover failed region(s) from a healthy region, perform the following steps:

  1. Redirect the API traffic from the impacted region(s) to the good working region. Plan the capacity accordingly to support the diverted traffic from the failed region(s).
  2. Decommission the impacted region. For each impacted region, follow the steps outlined in Decommission a hybrid region. Wait for decommissioning to complete before moving on to the next step.

  3. Clean up the failed region following the instructions in Recover a region from a failed upgrade.
  4. Recover the impacted region. To recover, create a new region, as described in Multi-region deployment on GKE, GKE on-prem, and AKS.

Restoring a multi-region installation from a backup with apigee-datastore in a bad state

If the upgrade of the apigee-datastore component was not successful, you cannot roll back from version 1.12 to version 1.11. Instead, you must restore from a backup of your v1.11 installation. Use the following sequence to restore your previous version.

Tip: To minimize downtime in multi-region installations, Apigee recommends rebuilding and restoring one region at a time.
  1. If you do not have an active installation of Apigee hybrid version 1.11 (for example in another region), create a new installation of v1.11 using your backed-up charts and overrides files. See the Apigee hybrid version 1.11 installation instructions.
  2. Restore the v1.11 region (or new installation) from your backup following the instructions in Cassandra backup and recovery.
  3. Verify traffic to the restored installation.
  4. For multi-region installations, rebuild and restore the next region. See the instructions in Restoring from a backup in Restoring in multiple regions.
  5. Remove the version 1.12 installation following the instructions in Uninstall hybrid runtime.

APPENDIX: Recover a region from a failed upgrade

Use this procedure to remove a datacenter if the upgrade from 1.11 to 1.12 fails.

  1. Validate Cassandra cluster status from a live region:
    1. Switch the kubectl context to a live region:
      kubectl config use-context CONTEXT_OF_LIVE_REGION
    2. List the Cassandra pods:
      kubectl get pods -n APIGEE_NAMESPACE -l app=apigee-cassandra

      For example:

      kubectl get pods -n apigee -l app=apigee-cassandra

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          2h
      apigee-cassandra-default-1   1/1     Running   0          2h
      apigee-cassandra-default-2   1/1     Running   0          2h
    3. Exec into one of the Cassandra pods:
      kubectl exec -it -n APIGEE_NAMESPACE CASSANDRA_POD_NAME -- /bin/bash
    4. Check the status of the Cassandra cluster:
      nodetool -u JMX_USER -pw JMX_PASSWORD status

      The output should look something like the following:

      Datacenter: dc-1
      ================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address      Load        Tokens       Owns (effective)  Host ID                               Rack
      UN  10.48.12.16  813.84 KiB  256          100.0%            a6340ad9-37ba-4ec8-a8c2-f7b7ac931807  ra-1
      UN  10.48.14.16  859.89 KiB  256          100.0%            39f03c51-e387-4dac-8360-6d8732e690a7  ra-1
      UN  10.48.0.18   888.95 KiB  256          100.0%            0d57df49-52e4-4c01-832d-d9df845ab732  ra-1
    5. Describe the cluster to verify that you only see IPs of Cassandra pods from the live region, and that all of them are on the same schema version:
      nodetool -u JMX_USER -pw JMX_PASSWORD describecluster

      The output should look something like the following:

      nodetool -u JMX_USER -pw JMX_PASSWORD describecluster

      Schema versions:
          4bebf2de-0582-31b4-9c5f-e36f60127e1b: [10.48.14.16, 10.48.12.16, 10.48.0.18]
  2. Clean up Cassandra keyspace replication:
    1. Get the user-setup job and delete it. A new user-setup job will be created immediately.
      kubectl get jobs -n APIGEE_NAMESPACE

      For example:

      kubectl get jobs -n apigee

      NAME                                                        COMPLETIONS   DURATION   AGE
      apigee-cassandra-schema-setup-myhybridorg-8b3e61d           1/1           6m35s      3h5m
      apigee-cassandra-schema-val-myhybridorg-8b3e61d-28499150    1/1           10s        9m22s
      apigee-cassandra-user-setup-myhybridorg-8b3e61d             0/1           21s        21s
      kubectl delete jobs USER_SETUP_JOB_NAME -n APIGEE_NAMESPACE

      The output should show the new job starting:

      kubectl delete jobs apigee-cassandra-user-setup-myhybridorg-8b3e61d -n apigee

      apigee-cassandra-user-setup-myhybridorg-8b3e61d-wl92b   0/1     Init:0/1   0     1s
    2. Validate the Cassandra keyspace replication settings by creating a client container following the instructions in Create the client container.
    3. Get all the keyspaces. Exec into the cassandra-client pod and then start a cqlsh client:
      kubectl exec -it -n APIGEE_NAMESPACE cassandra-client -- /bin/bash

      Connect to the Cassandra server with the ddl user, as it has the permissions required to run the following commands:

      cqlsh apigee-cassandra-default.apigee.svc.cluster.local -u DDL_USER -p DDL_PASSWORD --ssl

      Get the keyspaces:

      select * from system_schema.keyspaces;

      The output should look like the following, where dc-1 is the live DC:

      select * from system_schema.keyspaces;

       keyspace_name            | durable_writes | replication
      --------------------------+----------------+--------------------------------------------------------------------------------
         kvm_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                    system_auth |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                  system_schema |           True |                        {'class': 'org.apache.cassandra.locator.LocalStrategy'}
       quota_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
       cache_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
         rtc_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
             system_distributed |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                         system |           True |                        {'class': 'org.apache.cassandra.locator.LocalStrategy'}
                         perses |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                  system_traces |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
         kms_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}

      (11 rows)
    4. If for some reason the user-setup job continues to error out and validation is failing, use the following commands to correct the replication settings in the keyspaces. Caution: Use the following commands with extreme care.
      kubectl exec -it -n APIGEE_NAMESPACE cassandra-client -- /bin/bash

      Connect to the Cassandra server with the ddl user, as it has the permissions required to run the following commands:

      cqlsh apigee-cassandra-default.apigee.svc.cluster.local -u DDL_USER -p DDL_PASSWORD --ssl

      Get the keyspaces:

      select * from system_schema.keyspaces;

      Use the keyspace names returned by the query above in place of the names in the following examples. A shell loop that generates these statements appears after this procedure.

      alter keyspace quota_myhybridorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace kms_myhybridorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace kvm_myhybridorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace cache_myhybridorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace perses WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace rtc_myhybridorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace system_distributed WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
      alter keyspace system_traces WITH replication = {'class': 'NetworkTopologyStrategy', 'LIVE_DC_NAME':'3'};
    5. Validate that all the keyspaces are replicating in the right region with the following cqlsh command (a scripted version of this check appears after this procedure):
      select * from system_schema.keyspaces;

      For example:

      select * from system_schema.keyspaces;

       keyspace_name            | durable_writes | replication
      --------------------------+----------------+--------------------------------------------------------------------------------
         kvm_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                    system_auth |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                  system_schema |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
       quota_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
       cache_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
         rtc_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
             system_distributed |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                         system |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
                         perses |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
                  system_traces |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}
         kms_myhybridorg_hybrid |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'dc-1': '3'}

      (11 rows)
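
If you had to correct many keyspaces in step 4, you can generate the ALTER statements with a short shell loop instead of typing each one. This is a minimal sketch, not part of the official procedure: the keyspace list and the LIVE_DC_NAME value are assumptions taken from the examples above, so substitute the names returned by your own system_schema.keyspaces query.

  # Sketch only: emit the ALTER KEYSPACE statements from step 4.
  # LIVE_DC_NAME and the keyspace list are assumptions -- replace them
  # with your live DC name and the keyspaces in your own cluster.
  LIVE_DC_NAME="dc-1"
  for ks in kms_myhybridorg_hybrid kvm_myhybridorg_hybrid quota_myhybridorg_hybrid \
            cache_myhybridorg_hybrid rtc_myhybridorg_hybrid perses \
            system_auth system_distributed system_traces; do
    echo "alter keyspace ${ks} WITH replication = {'class': 'NetworkTopologyStrategy', '${LIVE_DC_NAME}': '3'};"
  done

Paste the generated statements into the cqlsh session you opened in step 4.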

At this stage you have completely removed all references to the dead DC from the Cassandra cluster.
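
If you prefer to script this final check, the following sketch runs the keyspace query non-interactively through the cassandra-client pod and greps for the dead DC name. It assumes the pod, service, and credential names used in the examples above; DEAD_DC_NAME is a placeholder for the name of the DC you removed.

  # Sketch only: verify that no keyspace still references the dead DC.
  # DEAD_DC_NAME, DDL_USER, and DDL_PASSWORD are placeholders.
  DEAD_DC_NAME="dc-2"
  if kubectl exec -n apigee cassandra-client -- \
      cqlsh apigee-cassandra-default.apigee.svc.cluster.local \
        -u "$DDL_USER" -p "$DDL_PASSWORD" --ssl \
        -e "select keyspace_name, replication from system_schema.keyspaces;" \
      | grep -q "$DEAD_DC_NAME"; then
    echo "WARNING: some keyspaces still replicate to $DEAD_DC_NAME"
  else
    echo "OK: no keyspace references $DEAD_DC_NAME"
  fi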

APPENDIX: Remove DOWN nodes from the Cassandra cluster

Use this procedure when you are rolling back a multi-region installation and not all Cassandra pods are in an Up / Normal (UN) state.

  1. Exec into one of the Cassandra pods:
    kubectl exec -it -n APIGEE_NAMESPACE CASSANDRA_POD_NAME -- /bin/bash
  2. Check the status of the Cassandra cluster:
    nodetool -u JMX_USER -pw JMX_PASSWORD status
  3. Validate that the node is actually down (DN). Exec into a healthy Cassandra pod in the region where the failing pod is not able to come up:
    Datacenter: dc-1
    ================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
    UN  10.48.12.16  1.15 MiB    256     100.0%            a6340ad9-37ba-4ec8-a8c2-f7b7ac931807  ra-1
    UN  10.48.0.18   1.21 MiB    256     100.0%            0d57df49-52e4-4c01-832d-d9df845ab732  ra-1
    UN  10.48.14.16  1.18 MiB    256     100.0%            39f03c51-e387-4dac-8360-6d8732e690a7  ra-1

    Datacenter: us-west1
    ====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
    DN  10.8.4.4     432.42 KiB  256     100.0%            cd672398-5c45-4c88-a424-86d757951e53  rc-1
    UN  10.8.19.6    5.8 MiB     256     100.0%            84f771f3-3632-4155-b27f-a67125d73bc5  rc-1
    UN  10.8.21.5    5.74 MiB    256     100.0%            f6f21b70-348d-482d-89fa-14b7147a5042  rc-1
  4. Remove the reference to the down (DN) node. In the example above, we remove the reference to host 10.8.4.4. A scripted version of steps 3 through 5 appears after this procedure.
    kubectl exec -it -n apigee apigee-cassandra-default-2 -- /bin/bash
    nodetool -u JMX_USER -pw JMX_PASSWORD removenode HOST_ID
  5. After the reference is removed, terminate the pod. The new Cassandra pod should come up and join the cluster:
    kubectl delete pod -n APIGEE_NAMESPACE POD_NAME
  6. Validate that the new Cassandra pod has joined the cluster.
    Datacenter: dc-1
    ================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
    UN  10.48.12.16  1.16 MiB    256     100.0%            a6340ad9-37ba-4ec8-a8c2-f7b7ac931807  ra-1
    UN  10.48.0.18   1.22 MiB    256     100.0%            0d57df49-52e4-4c01-832d-d9df845ab732  ra-1
    UN  10.48.14.16  1.19 MiB    256     100.0%            39f03c51-e387-4dac-8360-6d8732e690a7  ra-1

    Datacenter: us-west1
    ====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
    UN  10.8.19.6    5.77 MiB    256     100.0%            84f771f3-3632-4155-b27f-a67125d73bc5  rc-1
    UN  10.8.4.5     246.99 KiB  256     100.0%            0182e675-eec8-4d68-a465-69211b621601  rc-1
    UN  10.8.21.5    5.69 MiB    256     100.0%            f6f21b70-348d-482d-89fa-14b7147a5042  rc-1

At this point you can proceed with the upgrade or rollback of the remaining regions of the cluster.
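
The steps above can also be scripted once you know which node is down. The following is a minimal sketch under the same assumptions as the examples (namespace apigee, a healthy peer pod, and JMX credentials exported in your shell); HOST_ID and STUCK_POD_NAME are placeholders you must fill in from the nodetool output.

  # Sketch only: list DN nodes, remove one by Host ID, then recycle the pod.
  NAMESPACE=apigee
  PEER_POD=apigee-cassandra-default-2   # any healthy pod in the same cluster

  # Load prints as "<value> <unit>", so the Host ID is field 7 of a DN row.
  kubectl exec -n "$NAMESPACE" "$PEER_POD" -- \
    nodetool -u "$JMX_USER" -pw "$JMX_PASSWORD" status | awk '/^DN/ {print $2, $7}'

  # Remove the dead node's reference, then delete the stuck pod so a
  # replacement pod is scheduled and rejoins the ring.
  kubectl exec -n "$NAMESPACE" "$PEER_POD" -- \
    nodetool -u "$JMX_USER" -pw "$JMX_PASSWORD" removenode HOST_ID
  kubectl delete pod -n "$NAMESPACE" STUCK_POD_NAME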

APPENDIX: Troubleshooting: apigee-datastore in a stuck state after rollback

Use this procedure when you have rolled back apigee-datastore to hybrid 1.11 after an upgrade and it is in a stuck state.

  1. Before correcting the datastore controller state, validate that it is in a releasing state and that the pods are not coming up, and check the state of the Cassandra cluster.
    1. Using Helm, validate that the datastore was rolled back:
      helm -n APIGEE_NAMESPACE list

      For example:

      helm -n apigee list

      NAME              NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                           APP VERSION
      datastore         apigee     3         2024-04-04 22:15:08.792539892 +0000 UTC   deployed  apigee-datastore-1.11.0         1.11.0
      ingress-manager   apigee     1         2024-04-02 22:24:27.564184968 +0000 UTC   deployed  apigee-ingress-manager-1.12.0   1.12.0
      redis             apigee     1         2024-04-02 22:23:59.938637491 +0000 UTC   deployed  apigee-redis-1.12.0             1.12.0
      telemetry         apigee     1         2024-04-02 22:23:39.458134303 +0000 UTC   deployed  apigee-telemetry-1.12           1.12.0
      myhybridorg       apigee     1         2024-04-02 23:36:32.614927914 +0000 UTC   deployed  apigee-org-1.12.0               1.12.0
    2. Get the status of the Cassandra pods:
      kubectl get pods -n APIGEE_NAMESPACE

      For example:

      kubectl get pods -n apigee

      NAME                         READY   STATUS             RESTARTS      AGE
      apigee-cassandra-default-0   1/1     Running            0             2h
      apigee-cassandra-default-1   1/1     Running            0             2h
      apigee-cassandra-default-2   0/1     CrashLoopBackOff   4 (13s ago)   2m13s
    3. Validate that the apigeeds controller is stuck in a releasing state:
      kubectl get apigeeds -n APIGEE_NAMESPACE

      For example:

      kubectl get apigeeds -n apigee

      NAME      STATE       AGE
      default   releasing   46h
    4. Validate the Cassandra node status (notice that one node is in a DN state; it is the node stuck in CrashLoopBackOff):
      kubectl exec apigee-cassandra-default-0 -n APIGEE_NAMESPACE -- nodetool -u JMX_USER -pw JMX_PASSWORD status

      For example:

      kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u jmxuser -pw JMX_PASSWORD status

      Defaulted container "apigee-cassandra" out of: apigee-cassandra, apigee-cassandra-ulimit-init (init)
      Datacenter: us-west1
      ====================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --   Address       Load       Tokens   Owns (effective)   Host ID                                Rack
      UN   10.68.7.28    2.12 MiB   256      100.0%             4de9df37-3997-43e7-8b5b-632d1feb14d3   rc-1
      UN   10.68.10.29   2.14 MiB   256      100.0%             a54e673b-ec63-4c08-af32-ea6c00194452   rc-1
      DN   10.68.6.26    5.77 MiB   256      100.0%             0fe8c2f4-40bf-4ba8-887b-9462159cac45   rc-1
  2. Upgrade the datastore using the 1.12 charts. Note: The controller will go into a releasing state again, but this time it will create pods with the images for Hybrid 1.12. These pods will come up, and Cassandra will join the cluster and become UN (Up/Normal).
    helm upgrade datastore APIGEE_HELM_1.12.0_HOME/apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f overrides.yaml
  3. Validate that all the pods are Running and that the Cassandra cluster is healthy again. (A readiness-wait sketch follows this procedure.)
    1. Validate that all the pods are READY again:
      kubectl get pods -n APIGEE_NAMESPACE

      For example:

      kubectl get pods -n apigee

      NAME                         READY   STATUS    RESTARTS   AGE
      apigee-cassandra-default-0   1/1     Running   0          29h
      apigee-cassandra-default-1   1/1     Running   0          29h
      apigee-cassandra-default-2   1/1     Running   0          60m
    2. Validate Cassandra cluster status:
      kubectl exec apigee-cassandra-default-0 -n APIGEE_NAMESPACE -- nodetool -u JMX_USER -pw JMX_PASSWORD status

      For example:

      kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u jmxuser -pw JMX_PASSWORD status

      Datacenter: us-west1
      ====================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --   Address       Load      Tokens   Owns (effective)   Host ID                                Rack
      UN   10.68.4.15    2.05 MiB  256      100.0%             0fe8c2f4-40bf-4ba8-887b-9462159cac45   rc-1
      UN   10.68.7.28    3.84 MiB  256      100.0%             4de9df37-3997-43e7-8b5b-632d1feb14d3   rc-1
      UN   10.68.10.29   3.91 MiB  256      100.0%             a54e673b-ec63-4c08-af32-ea6c00194452   rc-1
    3. Validate the status of the apigeeds controller:
      kubectl get apigeeds -n APIGEE_NAMESPACE

      For example:

      kubectl get apigeeds -n apigee

      NAME      STATE     AGE
      default   running   2d1h
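
To avoid re-running the commands in step 3 by hand while the pods recover, you can block on pod readiness with kubectl. This is a minimal sketch: the label selector is an assumption, so verify the labels on your Cassandra pods first with kubectl get pods -n apigee --show-labels.

  # Sketch only: block until every Cassandra pod reports Ready.
  # The label selector below is an assumption -- verify it on your cluster.
  kubectl wait --for=condition=Ready pod \
    -l app=apigee-cassandra -n apigee --timeout=15m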

At this point you have fixed the datastore and it should be in a running state.
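
If you want to watch the controller leave the releasing state rather than re-running kubectl get apigeeds by hand, a small loop like the following works. It assumes the STATE column shown by kubectl get apigeeds is backed by the .status.state field; verify this on your cluster first with kubectl get apigeeds default -n apigee -o yaml.

  # Sketch only: poll the apigeeds resource until it reports running.
  # Assumes STATE maps to .status.state -- verify on your cluster first.
  until [ "$(kubectl get apigeeds default -n apigee \
      -o jsonpath='{.status.state}')" = "running" ]; do
    echo "apigeeds/default not running yet; retrying in 30s..."
    sleep 30
  done
  echo "apigeeds/default is running"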
