Upgrading Apigee hybrid to version 1.14

You are currently viewing version 1.14 of the Apigee hybrid documentation. For more information, see Supported versions.

This procedure covers upgrading from Apigee hybrid version 1.13.x to Apigee hybrid version 1.14.3 and from previous releases of hybrid 1.14.x to version 1.14.3.

Use the same procedures for minor version upgrades (for example, version 1.13 to 1.14) and for patch release upgrades (for example, 1.14.2 to 1.14.3).

If you are upgrading from Apigee hybrid version 1.12 or older, you must first upgrade to hybrid version 1.13 before upgrading to version 1.14.3. See the instructions for Upgrading Apigee hybrid to version 1.13.

Changes from Apigee hybrid v1.13

Please note the following changes:

  • Recurring fees for monetization: Starting in version 1.14.3, Apigee hybrid supports recurring fees for monetization. For information, see Enabling monetization for Apigee hybrid.
  • Wildcards (*) in proxy basepaths: Starting in v1.14.3, the use of wildcards is supported in Apigee proxy basepaths. To implement this change, follow the procedure in Known issue 378686709.
  • New data pipeline: Starting in version 1.14, data plane components write data directly to the control plane by default. This provides increased reliability and compliance for analytics and debug data. See Configure hybrid to use the new data pipeline.
  • Anthos (on bare metal or VMware) is now Google Distributed Cloud (for bare metal or VMware): For more information, see the product overviews for Google Distributed Cloud for bare metal and Google Distributed Cloud for VMware.
  • Stricter class instantiation checks: Starting in Apigee hybrid version 1.14.1, the JavaCallout policy includes additional security checks during Java class instantiation. The enhanced security measure prevents the deployment of policies that directly or indirectly attempt actions requiring permissions that are not allowed.

    In most cases, existing policies will continue to function as expected without any issues. However, policies that rely on third-party libraries, or that contain custom code which indirectly triggers operations requiring elevated permissions, could be affected.

    Important: To test your installation, follow the procedure in Validate policies after upgrade to 1.14.1 to validate policy behavior.

For additional information about features in hybrid version 1.14, see the Apigee hybrid v1.14.0 release notes.

Prerequisites

Before upgrading to hybrid version 1.14, make sure your installation meets the following requirements:

Before you upgrade to 1.14.3 - limitations and important notes

  • Apigee hybrid 1.14.3 introduces a new enhanced per-environment proxy limit that lets you deploy more proxies and shared flows in a single environment. See Limits: API Proxies to understand the limits on the number of proxies and shared flows you may deploy per environment. This feature is available only on newly created hybrid organizations, and cannot be applied to upgraded orgs. To use this feature, perform a fresh installation of hybrid 1.14.3 and create a new organization.

    This feature is available exclusively as part of the 2024 subscription plan, and is subject to the entitlements granted under that subscription. See Enhanced per-environment proxy limits to learn more about this feature.

  • Upgrading to Apigee hybrid version 1.14 may require downtime.

    When upgrading the Apigee controller to version 1.14.3, all Apigee deployments undergo a rolling restart. To minimize downtime in production hybrid environments during a rolling restart, make sure you are running at least two clusters (in the same or different regions or data centers). Divert all production traffic to another cluster, take the cluster you are about to upgrade offline, and then proceed with the upgrade. Repeat the process for each cluster.

    Apigee recommends that once you begin the upgrade, you upgrade all clusters as soon as possible to reduce the chance of production impact. There is no time limit on when the remaining clusters must be upgraded after the first one. However, until all remaining clusters are upgraded, Cassandra backup and restore do not work across mixed versions: for example, a backup from hybrid 1.13 cannot be used to restore a hybrid 1.14 instance.

  • Management plane changes do not need to be fully suspended during an upgrade. Any required temporary suspensions to management plane changes are noted in the upgrade instructions below.

Upgrading to version 1.14.3 overview

The procedures for upgrading Apigee hybrid are organized in the following sections:

  1. Prepare to upgrade.
  2. Install hybrid runtime version 1.14.3.

Prepare to upgrade to version 1.14

Back up your hybrid installation

  1. These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change directory into this directory and define the variable with the following commands:

    Linux

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Mac OS

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Windows

    set APIGEE_HELM_CHARTS_HOME=%CD%
    echo %APIGEE_HELM_CHARTS_HOME%
  2. Make a backup copy of your version 1.13 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:
    tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.13-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
  3. Back up your Cassandra database following the instructions in Cassandra backup and recovery. Important: Starting in version 1.14, guardrails include the following backup-related checks enforced during the Apigee hybrid upgrade:
    1. Backup must be enabled to support restoring to the previous version if needed.
    2. If CSI backup is enabled, a backup must have been made within the 24 hours prior to the upgrade. This minimizes potential data loss if a restore to the previous version is needed.
    For additional information about guardrails checks, see Guardrails.
  4. If you are using service cert files (.json) in your overrides to authenticate service accounts, make sure your service account cert files reside in the correct Helm chart directory. Helm charts cannot read files outside of each chart directory.

    This step is not required if you are using Kubernetes secrets or Workload Identity to authenticate service accounts.

    The following table shows the destination for each service account file, depending on your type of installation:

    Prod

    Service account       Default filename                      Helm chart directory
    apigee-cassandra      PROJECT_ID-apigee-cassandra.json      $APIGEE_HELM_CHARTS_HOME/apigee-datastore/
    apigee-logger         PROJECT_ID-apigee-logger.json         $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/
    apigee-mart           PROJECT_ID-apigee-mart.json           $APIGEE_HELM_CHARTS_HOME/apigee-org/
    apigee-metrics        PROJECT_ID-apigee-metrics.json        $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/
    apigee-runtime        PROJECT_ID-apigee-runtime.json        $APIGEE_HELM_CHARTS_HOME/apigee-env/
    apigee-synchronizer   PROJECT_ID-apigee-synchronizer.json   $APIGEE_HELM_CHARTS_HOME/apigee-env/
    apigee-udca           PROJECT_ID-apigee-udca.json           $APIGEE_HELM_CHARTS_HOME/apigee-org/
    apigee-watcher        PROJECT_ID-apigee-watcher.json        $APIGEE_HELM_CHARTS_HOME/apigee-org/

    Non-prod

    Make a copy of the apigee-non-prod service account file in each of the following directories:

    Service account   Default filename                  Helm chart directories
    apigee-non-prod   PROJECT_ID-apigee-non-prod.json   $APIGEE_HELM_CHARTS_HOME/apigee-datastore/
                                                        $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/
                                                        $APIGEE_HELM_CHARTS_HOME/apigee-org/
                                                        $APIGEE_HELM_CHARTS_HOME/apigee-env/
  5. Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
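The file checks in steps 4 and 5 can be scripted before you start the upgrade. This is a minimal bash sketch; the check_files helper and the example filenames are hypothetical, not part of the Apigee tooling:

```shell
# Verify that expected files exist in a chart directory before upgrading.
# Prints OK/MISSING for each file and returns non-zero if anything is missing.
check_files() {
  local dir="$1"; shift
  local missing=0
  local f
  for f in "$@"; do
    if [ -f "${dir}/${f}" ]; then
      echo "OK: ${dir}/${f}"
    else
      echo "MISSING: ${dir}/${f}"
      missing=1
    fi
  done
  return "$missing"
}

# Example (hypothetical certificate filenames):
# check_files "$APIGEE_HELM_CHARTS_HOME/apigee-virtualhost" mydomain.crt mydomain.key
```

Run it once per chart directory listed in the tables above and fix any MISSING entries before proceeding.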

Upgrade your Kubernetes version

Check your Kubernetes platform version and if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.13 and hybrid 1.14. Follow your platform's documentation if you need help.
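As a quick pre-flight check, you can compare your cluster's Kubernetes minor version against the range supported by both hybrid 1.13 and 1.14 (1.28.x through 1.30.x, per the Kubernetes rows in the tables below). A minimal bash sketch; the function name and bounds are illustrative, not an official tool:

```shell
# Succeeds if a Kubernetes version string (e.g. "v1.29.3") has a minor
# version within [min_minor, max_minor].
k8s_minor_in_range() {
  local version="$1" min_minor="$2" max_minor="$3"
  local minor
  # Strip an optional leading "v" and everything after the minor number.
  minor=$(printf '%s' "$version" | sed -E 's/^v?1\.([0-9]+).*/\1/')
  [ "$minor" -ge "$min_minor" ] && [ "$minor" -le "$max_minor" ]
}

# Example against a live cluster (requires kubectl and jq):
# k8s_minor_in_range "$(kubectl version -o json | jq -r .serverVersion.gitVersion)" 28 30 \
#   && echo "supported by both 1.13 and 1.14" || echo "upgrade Kubernetes first"
```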

Supported platforms

Platform | 1.12 (not supported)(2) | 1.13 | 1.14
GKE on Google Cloud | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x
GKE on AWS | 1.26.x(4), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8), 1.30.x | 1.28.x, 1.29.x (≥ 1.12.1)(8), 1.30.x, 1.31.x
GKE on Azure | 1.26.x(4), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8), 1.30.x | 1.28.x, 1.29.x (≥ 1.12.1)(8), 1.30.x, 1.31.x
Google Distributed Cloud (software only) on VMware(5) | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x (≥ 1.12.1)(8) | 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x, 1.30.x | 1.28.x(4), 1.29.x, 1.30.x, 1.31.x
Google Distributed Cloud (software only) on bare metal | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x (≥ 1.12.1)(8) | 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x, 1.30.x | 1.28.x(4), 1.29.x, 1.30.x, 1.31.x
EKS | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x
AKS | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x
OpenShift(9) | 4.12, 4.13, 4.14, 4.15, 4.16 (≥ 1.12.1)(8) | 4.12, 4.13, 4.14, 4.15, 4.16 | 4.13, 4.14, 4.15, 4.16, 4.17
Rancher Kubernetes Engine (RKE) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x (≥ 1.12.1)(8) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x, 1.30.x | v1.27.x, 1.28.x, 1.29.x, 1.30.x, 1.31.x
VMware Tanzu | v1.26.x | v1.26.x | v1.26.x

Components

Component | 1.12 (not supported)(2) | 1.13 | 1.14
Cloud Service Mesh | 1.18.x(3) | 1.19.x(3) | 1.22.x(3)
JDK | JDK 11 | JDK 11 | JDK 11
cert-manager | 1.11.x, 1.12.x, 1.13.x | 1.13.x, 1.14.x, 1.15.x(10) | 1.14.x, 1.15.x(10), 1.16.x(10)
Cassandra | 4.0 | 4.0 | 4.0
Kubernetes | 1.26.x, 1.27.x, 1.28.x, 1.29.x | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x
kubectl | 1.26.x, 1.27.x, 1.28.x, 1.29.x | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x
Helm | 3.14.2+ | 3.14.2+ | 3.14.2+
Secret Store CSI driver | 1.4.1 | 1.4.1 | 1.4.6
Vault | 1.15.2 | 1.15.2 | 1.17.2

(1) On Anthos on-premises (Google Distributed Cloud) version 1.13, follow these instructions to avoid conflict with cert-manager: Conflicting cert-manager installation.

(2) The official EOL dates for Apigee hybrid versions 1.12 and older have been reached. Regular monthly patches are no longer available. These releases are no longer officially supported except for customers with explicit and official exceptions for continued support. Other customers must upgrade.

(3) Cloud Service Mesh is automatically installed with Apigee hybrid 1.9 and newer.

(4) GKE on AWS version numbers now reflect the Kubernetes versions. See GKE Enterprise version and upgrade support for version details and recommended patches.

(5) Vault is not certified on Google Distributed Cloud for VMware.

(6) Support available with Apigee hybrid version 1.10.5 and newer.

(7) Support available with Apigee hybrid version 1.11.2 and newer.

(8) Support available with Apigee hybrid version 1.12.1 and newer.

(9) Apigee hybrid is tested and certified on OpenShift using the Kubernetes version bundled with each specific OCP version.

(10) Some versions of cert-manager have an issue where the webhook TLS server may fail to automatically renew its CA certificate. To avoid this, Apigee recommends:

  • Hybrid v1.13: use cert-manager version 1.15.5+.
  • Hybrid v1.14: use cert-manager versions 1.15.5+ or 1.16.3+.
Note: Hybrid installations are not currently supported on Autopilot clusters.

Install the hybrid 1.14.3 runtime

Caution: Do not create new environments during the upgrade process.

Configure the data collection pipeline.

Starting with hybrid v1.14, a new analytics and debug data pipeline is enabled by default for all Apigee hybrid orgs. You must follow the steps in Enable analytics publisher access to configure the authorization flow.

Prepare for the Helm charts upgrade

This upgrade procedure assumes you are using the same namespace and service accounts for the upgraded installation. If you are making any configuration changes, be sure to reflect those changes in your overrides file before installing the Helm charts.
  1. Pull the Apigee Helm charts.

    Apigee hybrid charts are hosted in Google Artifact Registry:

    oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts

    Using the pull command, copy all of the Apigee hybrid Helm charts to your local storage with the following commands:

    export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
    export CHART_VERSION=1.14.3
    helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
    helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
  2. Upgrade cert-manager if needed.

    If you need to upgrade your cert-manager version, install the new version with the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml

    See Supported platforms and versions: cert-manager for a list of supported versions.

    Note: Some versions of cert-manager have an issue where the webhook TLS server may fail to automatically renew its CA certificate. To avoid this, Apigee recommends using cert-manager versions 1.15.5+ or 1.16.3+.
  3. If your Apigee namespace is not apigee, edit the apigee-operator/etc/crds/default/kustomization.yaml file and replace the namespace value with your Apigee namespace.

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: APIGEE_NAMESPACE

    If you are using apigee as your namespace, you do not need to edit the file.

  4. Install the updated Apigee CRDs.

    Note: From this step onwards, run all commands from the chart repo root directory.
    Note: This is the only supported method for installing Apigee CRDs. Do not use kubectl apply without -k, and do not omit --server-side.
    Note: This step requires elevated cluster permissions.
    1. Use the kubectl dry-run feature by running the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
    2. After validating with the dry-run command, run the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ \
        --server-side \
        --force-conflicts \
        --validate=false
    3. Validate the installation with the kubectl get crds command:
      kubectl get crds | grep apigee

      Your output should look something like the following:

      apigeedatastores.apigee.cloud.google.com                    2024-08-21T14:48:30Z
      apigeedeployments.apigee.cloud.google.com                   2024-08-21T14:48:30Z
      apigeeenvironments.apigee.cloud.google.com                  2024-08-21T14:48:31Z
      apigeeissues.apigee.cloud.google.com                        2024-08-21T14:48:31Z
      apigeeorganizations.apigee.cloud.google.com                 2024-08-21T14:48:32Z
      apigeeredis.apigee.cloud.google.com                         2024-08-21T14:48:33Z
      apigeerouteconfigs.apigee.cloud.google.com                  2024-08-21T14:48:33Z
      apigeeroutes.apigee.cloud.google.com                        2024-08-21T14:48:33Z
      apigeetelemetries.apigee.cloud.google.com                   2024-08-21T14:48:34Z
      cassandradatareplications.apigee.cloud.google.com           2024-08-21T14:48:35Z
  5. Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data, and runtime pods are scheduled on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file.

    For more information, see Configuring dedicated node pools.
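The eight helm pull commands in step 1 can also be generated from a single chart list, which is easier to keep in sync if the chart set changes. A hedged sketch that only prints the commands for review; pipe the output to sh to actually run them:

```shell
CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
CHART_VERSION=1.14.3

# Print one "helm pull" command per hybrid chart.
emit_pull_commands() {
  local chart
  for chart in apigee-operator apigee-datastore apigee-env apigee-ingress-manager \
               apigee-org apigee-redis apigee-telemetry apigee-virtualhost; do
    echo "helm pull $CHART_REPO/$chart --version $CHART_VERSION --untar"
  done
}

emit_pull_commands          # review the output first
# emit_pull_commands | sh   # then run the pulls
```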

Install the Apigee hybrid Helm charts

Note: Before executing any of the Helm upgrade/install commands, use the Helm dry-run feature by adding --dry-run at the end of the command. See helm -h to list supported commands, options, and usage.
  1. If you have not already done so, change to your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
  2. Upgrade the Apigee Operator/Controller. Note: This step requires elevated cluster permissions. Run helm -h or helm upgrade -h for details.

    Dry run:

    helm upgrade operator apigee-operator/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade operator apigee-operator/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify the Apigee Operator installation:

    helm ls -n APIGEE_NAMESPACE
    NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
    operator   apigee      3          2024-08-21 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.14.3   1.14.3

    Verify it is up and running by checking its availability:

    kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-controller-manager   1/1     1            1           7d20h
  3. Upgrade the Apigee datastore:

    Dry run:

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify apigeedatastore is up and running by checking its state:

    kubectl -n APIGEE_NAMESPACE get apigeedatastore default
    NAME      STATE     AGE
    default   running   2d
  4. Upgrade Apigee telemetry:

    Dry run:

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its state:

    kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry
    NAME               STATE     AGE
    apigee-telemetry   running   2d
  5. Upgrade Apigee Redis:

    Dry run:

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its state:

    kubectl -n APIGEE_NAMESPACE get apigeeredis default
    NAME      STATE     AGE
    default   running   2d
  6. Upgrade Apigee ingress manager:

    Dry run:

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify it is up and running by checking its availability:

    kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager
    NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-ingressgateway-manager   2/2     2            2           2d
  7. Upgrade the Apigee organization:

    Dry run:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE \
      --dry-run=server

    Upgrade the chart:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      -f OVERRIDES_FILE

    Verify it is up and running by checking the state of the respective org:

    kubectl -n APIGEE_NAMESPACE get apigeeorg
    NAME                STATE     AGE
    apigee-org1-xxxxx   running   2d

    Note: If the upgrade command fails with the error Forbidden: state: releasing, the existing components are still in releasing status. Wait for the current update to complete, and then retry. Check the release status with the following command:
    kubectl get org -n APIGEE_NAMESPACE

    Note: During the Apigee hybrid upgrade from older versions to 1.14.2 or later, the presence of existing istio.io CRDs may cause failed readiness probes in the discovery containers of the apigee-ingressgateway-manager pods. This is a known issue. For a workaround, see Known issue 416634326.

    If gateway.networking.k8s.io/v1 is installed in your cluster, apigee-ingressgateway-manager may fail to upgrade. For example, gateway.networking.k8s.io/v1 is usually installed in clusters running on Google Distributed Cloud (software only) on bare metal v1.32 or later. For a description of the issue and workaround, see Known issue 419856132.

  8. Upgrade the environment.

    You must upgrade one environment at a time. Specify the environment with --set env=ENV_NAME.

    Dry run:

    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set env=ENV_NAME \
      -f OVERRIDES_FILE \
      --dry-run=server
    • ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.
    • ENV_NAME is the name of the environment you are upgrading.
    • OVERRIDES_FILE is your new overrides file for version 1.14.3.

    Upgrade the chart:

    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set env=ENV_NAME \
      -f OVERRIDES_FILE

    Verify it is up and running by checking the state of the respective env:

    kubectl -n APIGEE_NAMESPACE get apigeeenv
    NAME                  STATE     AGE   GATEWAYTYPE
    apigee-org1-dev-xxx   running   2d
  9. Note: If the upgrade command fails with the error Forbidden: state: releasing, the existing components are still in releasing status. Wait for the current update to complete, and then retry. Check the release status with the following command:
    kubectl get env -n APIGEE_NAMESPACE
  10. Upgrade the environment groups (virtualhosts).
    1. You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

      Dry run:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set envgroup=ENV_GROUP_NAME \
        -f OVERRIDES_FILE \
        --dry-run=server

      ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. It is usually ENV_GROUP_NAME.

      Upgrade the chart:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set envgroup=ENV_GROUP_NAME \
        -f OVERRIDES_FILE

      Note: ENV_GROUP_RELEASE_NAME must be unique within the apigee namespace.

      For example, if you have an environment named prod and an environment group named prod, set the value of ENV_GROUP_RELEASE_NAME to something unique, like prod-envgroup.

    2. Check the state of the ApigeeRoute (AR).

      Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

      kubectl -n APIGEE_NAMESPACE get arc
      NAME                     STATE   AGE
      apigee-org1-dev-egroup           2d

      kubectl -n APIGEE_NAMESPACE get ar
      NAME                            STATE     AGE
      apigee-org1-dev-egroup-xxxxxx   running   2d
  11. After you have verified that all the installations upgraded successfully, delete the older apigee-operator release from the apigee-system namespace.
    1. Uninstall the old operator release:
      helm delete operator -n apigee-system
    2. Delete the apigee-system namespace:
      kubectl delete namespace apigee-system
  12. Upgrade operator again in your Apigee namespace to re-install the deleted cluster-scoped resources:
    helm upgrade operator apigee-operator/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --atomic \
      -f overrides.yaml
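If you have several environments and environment groups, steps 8 and 10 repeat once per item. The loop below is a hedged sketch that only prints each helm upgrade command for review before you run them; the environment and group names (dev, prod, default-group) are hypothetical placeholders, and it assumes each release name matches the environment or group name (see the release-name caveat in step 8):

```shell
APIGEE_NAMESPACE=apigee          # adjust to your namespace
OVERRIDES_FILE=overrides.yaml    # adjust to your overrides file
ENVS="dev prod"                  # hypothetical environment names
ENV_GROUPS="default-group"       # hypothetical environment group names

# Print the per-environment and per-environment-group upgrade commands.
emit_upgrade_commands() {
  local env eg
  for env in $ENVS; do
    echo "helm upgrade $env apigee-env/ --install --namespace $APIGEE_NAMESPACE --set env=$env -f $OVERRIDES_FILE"
  done
  for eg in $ENV_GROUPS; do
    echo "helm upgrade $eg apigee-virtualhost/ --install --namespace $APIGEE_NAMESPACE --set envgroup=$eg -f $OVERRIDES_FILE"
  done
}

emit_upgrade_commands
```

If an environment and an environment group share a name, give one of them a distinct release name (for example, prod-envgroup) before running the printed commands.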

Validate policies after upgrade to 1.14.1

Use this procedure to validate the behavior of the JavaCallout policy after upgrading from 1.14.0 or earlier to 1.14.1 or later.

  1. Check whether the Java JAR files request unnecessary permissions.

    After the policy is deployed, check the runtime logs to see if the following log message is present: "Failed to load and initialize class ...". If you observe this message, it suggests that the deployed JAR requested unnecessary permissions. To resolve this issue, investigate the Java code and update the JAR file.

  2. Investigate and update the Java code.

    Review any Java code (including dependencies) to identify the cause of potentially unallowed operations. When found, modify the source code as required.

  3. Test policies with the security check enabled.

    In a non-production environment, enable the security check flag and redeploy your policies with an updated JAR. To set the flag:

    • In the apigee-env/values.yaml file, set conf_security-secure.constructor.only to true under runtime:cwcAppend:. For example:

      # Apigee Runtime
      runtime:
        cwcAppend:
          conf_security-secure.constructor.only: true
    • Update the apigee-env chart for the environment to apply the change. For example:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set env=ENV_NAME \
        -f OVERRIDES_FILE

        ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

    If the log message "Failed to load and initialize class ..." is still present, continue modifying and testing the JAR until the log message no longer appears.

  4. Enable the security check in the production environment.

    After you have thoroughly tested and verified the JAR file in the non-production environment, enable the security check in your production environment by setting the flag conf_security-secure.constructor.only to true and updating the apigee-env chart for the production environment to apply the change.
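The log check in step 1 can be scripted against saved runtime logs. A minimal sketch; has_init_failure is a hypothetical helper, and in practice you would feed it something like kubectl logs output:

```shell
# Succeeds (exit 0) if stdin contains the JavaCallout class-initialization
# failure message described in step 1.
has_init_failure() {
  grep -q "Failed to load and initialize class"
}

# Example with sample log text; against a live cluster you might run:
#   kubectl logs -n APIGEE_NAMESPACE RUNTIME_POD | has_init_failure
if echo "2024-08-21 ERROR Failed to load and initialize class com.example.Callout" | has_init_failure; then
  echo "JAR requested disallowed permissions; review the Java code"
fi
```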

Congratulations! You have upgraded to Apigee hybrid version 1.14.3. To test your upgrade, call a proxy against the new installation. For an example, see Step 10: Deploy an API proxy in the Apigee hybrid 1.14 installation guide.

Rolling back to a previous version

To roll back to the previous version, use the older chart versions to reverse the upgrade process. Start with apigee-virtualhost and work your way back to apigee-operator, and then revert the CRDs.

Tip: If you know the last release version, you can use the helm rollback command rather than the helm upgrade commands described below.
  1. Revert all the charts from apigee-virtualhost to apigee-datastore. The following commands assume you are using the charts from the previous version (v1.13.x).

    Run the following command for each environment group:

    helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
      --install \
      --namespace apigee \
      --atomic \
      --set envgroup=ENV_GROUP_NAME \
      -f 1.13_OVERRIDES_FILE

    Run the following command for each environment:

    helm upgrade ENV_RELEASE_NAME apigee-env/ \
      --install \
      --namespace apigee \
      --atomic \
      --set env=ENV_NAME \
      -f 1.13_OVERRIDES_FILE

    Revert the remaining charts except for apigee-operator:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace apigee \
      --atomic \
      -f 1.13_OVERRIDES_FILE

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace apigee \
      --atomic \
      -f 1.13_OVERRIDES_FILE

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace apigee \
      --atomic \
      -f 1.13_OVERRIDES_FILE

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace apigee \
      --atomic \
      -f 1.13_OVERRIDES_FILE

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace apigee \
      --atomic \
      -f 1.13_OVERRIDES_FILE
  2. Create the apigee-system namespace:
    kubectl create namespace apigee-system
  3. Patch the resource annotation back to the apigee-system namespace:
    kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='apigee-system'
  4. If you changed the release name as well, update the annotation with the operator release name:
    kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-name='operator'
  5. Install apigee-operator back into the apigee-system namespace:
    helm upgrade operator apigee-operator/ \
      --install \
      --namespace apigee-system \
      --atomic \
      -f 1.13_OVERRIDES_FILE
  6. Revert the CRDs by reinstalling the older CRDs:
    kubectl apply -k apigee-operator/etc/crds/default/ \
      --server-side \
      --force-conflicts \
      --validate=false
  7. Clean up the apigee-operator release from the APIGEE_NAMESPACE namespace to complete the rollback process:
    helm uninstall operator -n APIGEE_NAMESPACE
  8. Some cluster-scoped resources, such as clusterIssuer, are deleted when operator is uninstalled. Reinstall them with the following command:
    helm upgrade operator apigee-operator/ \
      --install \
      --namespace apigee-system \
      --atomic \
      -f 1.13_OVERRIDES_FILE

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-18 UTC.