Installing and managing Apigee hybrid with Helm charts

Preview — Installing Apigee hybrid with Helm charts

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

Note: This preview document is applicable for Apigee hybrid v1.10.x.

This document guides you through the step-by-step process of installing Apigee hybrid v1.10 using Helm charts.

Version

The Apigee hybrid Helm charts are for use with Apigee hybrid v1.10.x. See the Apigee hybrid release history for the list of hybrid releases.

Prerequisites

Scope

Note: To manage existing Apigee hybrid clusters that were installed with apigeectl, see the Apigee hybrid Helm migration tool.

Supported Kubernetes platforms and versions

Platform | Versions
GKE | 1.24, 1.25, 1.26
AKS | 1.24, 1.25, 1.26
EKS | 1.24, 1.25, 1.26
OpenShift | 4.11, 4.12

Limitations

  • Helm charts do not fully support CRDs, so this guide uses the kubectl -k command to install and upgrade them. We aim to follow community and Google best practices for Kubernetes management. CRD deployment through Helm has not yet reached broad community support or demand, so manage the Apigee CRDs with kubectl as described in this document.
  • With apigeectl, service account and certificate files were referenced throughout overrides.yaml; however, Helm does not support referencing files outside of the chart directory. Pick one of the following options for service account and cert files:
    • Place copies of the relevant files within each chart directory.
    • Create symbolic links within each chart directory for each file or folder (see the sketch after this list). Helm follows symbolic links out of the chart directory, but prints a warning like the following:
      apigee-operator/gsa -> ../gsa
    • Use Kubernetes secrets. For example, for service accounts:
      kubectl create secret generic SECRET_NAME \
        --from-file="client_secret.json=CLOUD_IAM_FILE_NAME.json" \
        -n apigee
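
For example, here is a minimal sketch of the symbolic-link approach. It assumes the downloaded chart directories and a gsa/ folder containing the service account key files sit side by side in the current directory; the gsa/ name and the loop are illustrative only:

# Create a gsa symlink inside each chart directory so Helm can resolve the files.
# Helm follows the links but prints the warning shown above.
for chart in apigee-operator apigee-datastore apigee-env apigee-ingress-manager \
             apigee-org apigee-redis apigee-telemetry apigee-virtualhost; do
  ln -s ../gsa "${chart}/gsa"
done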

Supported Kubernetes Platform and versions

For a list of supported platforms, see the v1.10 column in the Apigee hybrid supported platforms table.

Permissions required

This table lists the resources and permissions required for Kubernetes and Apigee.


Category | Resource | Resource type | Kubernetes RBAC permissions
Datastore | apigeedatastores.apigee.cloud.google.com | Apigee | create delete patch update
Datastore | certificates.cert-manager.io | Kubernetes | create delete patch update
Datastore | cronjobs.batch | Kubernetes | create delete patch update
Datastore | jobs.batch | Kubernetes | create delete patch update
Datastore | secrets | Kubernetes | create delete patch update
Env | apigeeenvironments.apigee.cloud.google.com | Apigee | create delete patch update
Env | secrets | Kubernetes | create delete patch update
Env | serviceaccounts | Kubernetes | create delete patch update
Ingress manager | certificates.cert-manager.io | Kubernetes | create delete patch update
Ingress manager | configmaps | Kubernetes | create delete patch update
Ingress manager | deployments.apps | Kubernetes | create get delete patch update
Ingress manager | horizontalpodautoscalers.autoscaling | Kubernetes | create delete patch update
Ingress manager | issuers.cert-manager.io | Kubernetes | create delete patch update
Ingress manager | serviceaccounts | Kubernetes | create delete patch update
Ingress manager | services | Kubernetes | create delete patch update
Operator | apigeedatastores.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeedatastores.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeedatastores.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeedeployments.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeedeployments.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeedeployments.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeeenvironments.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeeenvironments.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeeenvironments.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeeissues.apigee.cloud.google.com | Apigee | create delete get list watch
Operator | apigeeorganizations.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeeorganizations.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeeorganizations.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeeredis.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeeredis.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeeredis.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeerouteconfigs.apigee.cloud.google.com | Apigee | get list
Operator | apigeeroutes.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeeroutes.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeeroutes.apigee.cloud.google.com/status | Apigee | get patch update
Operator | apigeetelemetries.apigee.cloud.google.com | Apigee | create delete get list patch update watch
Operator | apigeetelemetries.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | apigeetelemetries.apigee.cloud.google.com/status | Apigee | get list patch update
Operator | cassandradatareplications.apigee.cloud.google.com | Apigee | get list patch update watch
Operator | cassandradatareplications.apigee.cloud.google.com/finalizers | Apigee | get patch update
Operator | cassandradatareplications.apigee.cloud.google.com/status | Apigee | get patch update
Operator | *.networking.x.k8s.io | Kubernetes | get list watch
Operator | apiservices.apiregistration.k8s.io | Kubernetes | create delete get list patch update watch
Operator | certificates.cert-manager.io | Kubernetes | create delete get list patch update watch
Operator | certificates.cert-manager.io/finalizers | Kubernetes | create delete get list patch update watch
Operator | certificatesigningrequests.certificates.k8s.io | Kubernetes | create delete get update watch
Operator | certificatesigningrequests.certificates.k8s.io/approval | Kubernetes | create delete get update watch
Operator | certificatesigningrequests.certificates.k8s.io/status | Kubernetes | create delete get update watch
Operator | clusterissuers.cert-manager.io | Kubernetes | create get watch
Operator | clusterrolebindings.rbac.authorization.k8s.io | Kubernetes | create delete get list patch update watch
Operator | clusterroles.rbac.authorization.k8s.io | Kubernetes | create delete get list patch update watch
Operator | configmaps | Kubernetes | create delete get list patch update watch
Operator | configmaps/status | Kubernetes | get patch update
Operator | cronjobs.batch | Kubernetes | create delete get list patch update watch
Operator | customresourcedefinitions.apiextensions.k8s.io | Kubernetes | get list watch
Operator | daemonsets.apps | Kubernetes | create delete get list patch update watch
Operator | deployments.apps | Kubernetes | get list watch
Operator | deployments.extensions | Kubernetes | get list watch
Operator | destinationrules.networking.istio.io | Kubernetes | create delete get list patch update watch
Operator | endpoints | Kubernetes | get list watch
Operator | endpointslices.discovery.k8s.io | Kubernetes | get list watch
Operator | events | Kubernetes | create delete get list patch update watch
Operator | gateways.networking.istio.io | Kubernetes | create delete get list patch update watch
Operator | horizontalpodautoscalers.autoscaling | Kubernetes | create delete get list patch update watch
Operator | ingressclasses.networking.k8s.io | Kubernetes | get list watch
Operator | ingresses.networking.k8s.io/status | Kubernetes | all verbs
Operator | issuers.cert-manager.io | Kubernetes | create delete get list patch update watch
Operator | jobs.batch | Kubernetes | create delete get list patch update watch
Operator | leases.coordination.k8s.io | Kubernetes | create get list update
Operator | namespaces | Kubernetes | create get list watch
Operator | nodes | Kubernetes | get list watch
Operator | peerauthentications.security.istio.io | Kubernetes | create delete get list patch update watch
Operator | persistentvolumeclaims | Kubernetes | create delete get list patch update watch
Operator | persistentvolumes | Kubernetes | get list watch
Operator | poddisruptionbudgets.policy | Kubernetes | create delete get list patch update watch
Operator | pods | Kubernetes | create delete get list patch update watch
Operator | pods/exec | Kubernetes | create
Operator | replicasets.apps | Kubernetes | create delete get list patch update watch
Operator | replicasets.extensions | Kubernetes | get list watch
Operator | resourcequotas | Kubernetes | create delete get list patch update watch
Operator | rolebindings.rbac.authorization.k8s.io | Kubernetes | create delete get list patch update watch
Operator | roles.rbac.authorization.k8s.io | Kubernetes | create delete get list patch update watch
Operator | secrets | Kubernetes | batch create delete get list patch update watch
Operator | securitycontextconstraints.security.openshift.io | Kubernetes | create get list
Operator | serviceaccounts | Kubernetes | create delete get list patch update watch
Operator | services | Kubernetes | batch create delete get list patch update watch
Operator | signers.certificates.k8s.io | Kubernetes | approve
Operator | statefulsets.apps | Kubernetes | create delete get list patch update watch
Operator | subjectaccessreviews.authorization.k8s.io | Kubernetes | create get list
Operator | tokenreviews.authentication.k8s.io | Kubernetes | create
Operator | virtualservices.networking.istio.io | Kubernetes | create delete get list patch update watch
Org | apigeeorganizations.apigee.cloud.google.com | Apigee | create delete patch update
Org | secrets | Kubernetes | create delete patch update
Org | serviceaccounts | Kubernetes | create delete patch update
Redis | apigeeredis.apigee.cloud.google.com | Apigee | create delete patch update
Redis | secrets | Kubernetes | create delete patch update
Telemetry | apigeetelemetry.apigee.cloud.google.com | Apigee | create delete patch update
Telemetry | secrets | Kubernetes | create delete patch update
Telemetry | serviceaccounts | Kubernetes | create delete patch update
Virtual host | apigeerouteconfigs.apigee.cloud.google.com | Apigee | create delete patch update
Virtual host | secrets | Kubernetes | create delete patch update
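
You can spot-check whether your current Kubernetes credentials hold any of the permissions listed above with kubectl auth can-i. For example (illustrative, using two rows from the table):

kubectl auth can-i create apigeedatastores.apigee.cloud.google.com -n apigee
kubectl auth can-i patch deployments.apps -n apigee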


Prepare for installation

Apigee hybrid charts are hosted in Google Artifact Registry:

oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts

Pull Apigee Helm charts

Use the helm pull command to copy all of the Apigee hybrid Helm charts to your local storage:

export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.10.5
helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
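
After the pulls complete, you should have one local directory per chart. A quick check (the listing below simply reflects the chart names above):

ls
apigee-datastore  apigee-env  apigee-ingress-manager  apigee-operator  apigee-org  apigee-redis  apigee-telemetry  apigee-virtualhost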

Install Apigee hybrid

Installation sequence overview

Install the components in sequence from left to right as shown in the following figure. Components that are stacked vertically in the figure can be installed together, in any order. After you have installed a component, you can update that component individually at any time (for example, its replica count, memory, CPU, and so on).

Installation sequence: cert-manager → Apigee CRDs → Apigee operator → (Redis, Datastore, Telemetry, Ingress manager) → Org → (Env, Virtual host).

Prepare to install Apigee hybrid with Helm charts

  1. Create the namespace that will be used for apigee resources. It must match the namespace field in the overrides.yaml file. If it is not set in overrides.yaml, the default is apigee.
    1. Check if the namespace already exists:

      kubectl get namespace apigee

      If the namespace exists, your output includes:

        NAME     STATUS   AGE
        apigee   Active   1d
    2. If the namespace does not already exist, create it:

      kubectl create namespace apigee
  2. Create the apigee-system namespace used by the Apigee operator resources.
    1. Check if the namespace already exists:

      kubectl get namespace apigee-system
    2. If the namespace does not already exist, create it:

      kubectl create namespace apigee-system
  3. Create the service accounts and assign the appropriate IAM roles to them. Apigee hybrid uses the following service accounts:

    Service account | IAM roles
    apigee-cassandra | Storage Object Admin
    apigee-logger | Logs Writer
    apigee-mart | Apigee Connect Agent
    apigee-metrics | Monitoring Metric Writer
    apigee-runtime | No role required
    apigee-synchronizer | Apigee Synchronizer Manager
    apigee-udca | Apigee Analytics Agent
    apigee-watcher | Apigee Runtime Agent

    Apigee provides a tool, create-service-account, in the apigee-operator/etc/tools directory:

    APIGEE_HELM_CHARTS_HOME/
    └── apigee-operator/
        └── etc/
            └── tools/
                └── create-service-account

    This tool creates the service accounts, assigns the IAM roles to each account, and downloads the certificate files in JSON format for each account.

    1. Create the directory where you want to download the service account cert files. You will specify this directory in place of SERVICE_ACCOUNTS_PATH in the following command.
    2. Create all of the service accounts with a single command:
      APIGEE_HELM_CHARTS_HOME/apigee-operator/etc/tools/create-service-account --env prod --dir SERVICE_ACCOUNTS_PATH
    3. List the names of your service accounts for your overrides file:
      ls service-accounts
      my_project-apigee-cassandra.json    my_project-apigee-runtime.json
      my_project-apigee-logger.json       my_project-apigee-synchronizer.json
      my_project-apigee-mart.json         my_project-apigee-udca.json
      my_project-apigee-metrics.json      my_project-apigee-watcher.json


  4. Before installing, review the overrides.yaml file to verify the settings.

    Important: Apigee hybrid v1.10.5 has been updated with a critical hotfix release. If you wish, you can apply the required configuration settings for the hotfix directly in the configuration overrides file described in this step. If you add the upgrade configurations now, you do not have to perform any further hotfix upgrades later. The configuration settings for the hotfix (v1.10.5-hotfix.1) are listed in the upgrade guide. See also the release note for the hotfix release.
    instanceID: UNIQUE_ID_TO_IDENTIFY_THIS_CLUSTER
    namespace: apigee # required for Helm charts installation

    # By default, logger and metrics are enabled and require the details below.
    # Google Cloud project and cluster
    gcp:
      projectID: PROJECT_ID
      region: REGION

    k8sCluster:
      name: CLUSTER_NAME
      region: REGION

    org: ORG_NAME

    envs:
    - name: "ENV_NAME"
      serviceAccountPaths:
        runtime: "PATH_TO_RUNTIME_SVC_ACCOUNT"
        synchronizer: "PATH_TO_SYNCHRONIZER_SVC_ACCOUNT"
        udca: "PATH_TO_UDCA_SVC_ACCOUNT"

    ingressGateways:
    - name: GATEWAY_NAME # maximum 17 characters, eg: "ingress-1". See known issue 243167389.
      replicaCountMin: 1
      replicaCountMax: 2
      svcType: LoadBalancer

    virtualhosts:
    - name: ENV_GROUP_NAME
      selector:
        app: apigee-ingressgateway
        ingress_name: GATEWAY_NAME
      sslSecret: SECRET_NAME

    mart:
      serviceAccountPath: "PATH_TO_MART_SVC_ACCOUNT"

    logger:
      enabled: TRUE_FALSE # lowercase without quotes, eg: true
      serviceAccountPath: "PATH_TO_LOGGER_SVC_ACCOUNT"

    metrics:
      enabled: TRUE_FALSE # lowercase without quotes, eg: true
      serviceAccountPath: "PATH_TO_METRICS_SVC_ACCOUNT"

    udca:
      serviceAccountPath: "PATH_TO_UDCA_SVC_ACCOUNT"

    connectAgent:
      serviceAccountPath: "PATH_TO_MART_SVC_ACCOUNT"

    watcher:
      serviceAccountPath: "PATH_TO_WATCHER_SVC_ACCOUNT"

    This is the same overrides configuration you will use for this Helm installation. For more settings, see the Configuration property reference.

    For more examples of overrides files, see Step 6: Configure the hybrid runtime.

  5. Enable synchronizer access. This is a prerequisite for installing Apigee hybrid.
    1. Check to see if synchronizer access is already enabled with the following commands:

      export TOKEN=$(gcloud auth print-access-token)
      curl -X POST -H "Authorization: Bearer $TOKEN" \
        -H "Content-Type:application/json" \
        "https://apigee.googleapis.com/v1/organizations/ORG_NAME:getSyncAuthorization" \
        -d ''

      Your output should look something like the following:

      {
        "identities":[
           "serviceAccount:SYNCHRONIZER_SERVICE_ACCOUNT_ID"
        ],
        "etag":"BwWJgyS8I4w="
      }
    2. If the output does not include the service account ID, enable synchronizer access. Your account must have the Apigee Organization Admin IAM role (roles/apigee.admin) to perform this task.

      curl -X POST -H "Authorization: Bearer $TOKEN" \
        -H "Content-Type:application/json" \
        "https://apigee.googleapis.com/v1/organizations/ORG_NAME:setSyncAuthorization" \
        -d '{"identities":["'"serviceAccount:SYNCHRONIZER_SERVICE_ACCOUNT_ID"'"]}'

      See Step 7: Enable Synchronizer access in the Apigee hybrid installation documentation for more detailed information.

  6. Install Cert Manager with the following command:
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.1/cert-manager.yaml
  7. Install the Apigee CRDs:

    Note: From this step onwards, run all commands from the chart repo root.
    Note: This is the only supported method for installing the Apigee CRDs. Do not use kubectl apply without -k, and do not omit --server-side.
    Note: This step requires elevated cluster permissions.
    1. Use the kubectl dry-run feature by running the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
    2. After validating with the dry-run command, run the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false
    3. Validate the installation with the kubectl get crds command:
      kubectl get crds | grep apigee

      Your output should look something like the following:

      apigeedatastores.apigee.cloud.google.com                    2023-10-09T14:48:30Z
      apigeedeployments.apigee.cloud.google.com                   2023-10-09T14:48:30Z
      apigeeenvironments.apigee.cloud.google.com                  2023-10-09T14:48:31Z
      apigeeissues.apigee.cloud.google.com                        2023-10-09T14:48:31Z
      apigeeorganizations.apigee.cloud.google.com                 2023-10-09T14:48:32Z
      apigeeredis.apigee.cloud.google.com                         2023-10-09T14:48:33Z
      apigeerouteconfigs.apigee.cloud.google.com                  2023-10-09T14:48:33Z
      apigeeroutes.apigee.cloud.google.com                        2023-10-09T14:48:33Z
      apigeetelemetries.apigee.cloud.google.com                   2023-10-09T14:48:34Z
      cassandradatareplications.apigee.cloud.google.com           2023-10-09T14:48:35Z
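
      Optionally, you can also wait for the CRDs to reach the Established condition before continuing. A sketch for one CRD (repeat for the others as needed):

      kubectl wait --for condition=established --timeout=60s crd/apigeedatastores.apigee.cloud.google.com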
  8. Check the existing labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data and runtime pods on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file.

    For more information, see Configuring dedicated node pools.
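
    For example, to see which node pool label each node currently carries (this assumes the default GKE label key mentioned above; adjust the key for other platforms):

    kubectl get nodes -L cloud.google.com/gke-nodepool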

Install the Apigee hybrid Helm charts

Note: Before executing any of the Helm upgrade/install commands, use the Helm dry-run feature by adding --dry-run at the end of the command. See helm -h to list supported commands, options, and usage.
Important: Migration tool: If you migrated your cluster to Helm management with the Apigee hybrid Helm migration tool, you have existing resources in your migrated cluster. To avoid accidentally deleting resources if the helm upgrade command fails, do not use the --atomic flag.

For example, in the first step, the dry-run command is:

helm upgrade operator apigee-operator/ \
  --install \
  --namespace apigee-system \
  -f overrides.yaml \
  --dry-run
  1. Install Apigee Operator/Controller:

    Note: This step requires elevated cluster permissions. Run helm -h or helm install -h for details.
    helm upgrade operator apigee-operator/ \
      --install \
      --namespace apigee-system \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify Apigee Operator installation:

    helm ls -n apigee-system
    NAME       NAMESPACE       REVISION   UPDATED                                 STATUS     CHART                    APP VERSION
    operator   apigee-system   3          2023-06-26 00:42:44.492009 -0800 PST    deployed   apigee-operator-1.10.5   1.10.5

    Verify it is up and running by checking its availability:

    kubectl -n apigee-system get deploy apigee-controller-manager
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-controller-manager   1/1     1            1           7d20h
  2. Install Apigee datastore:

    helm upgrade datastore apigee-datastore/ \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify apigeedatastore is up and running by checking its state:

    kubectl -n apigee get apigeedatastore default
    NAME      STATE     AGE
    default   running   2d
  3. Install Apigee telemetry:

    helm upgrade telemetry apigee-telemetry/ \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify it is up and running by checking its state:

    kubectl -n apigee get apigeetelemetry apigee-telemetry
    NAME               STATE     AGE
    apigee-telemetry   running   2d
  4. Install Apigee Redis:

    helm upgrade redis apigee-redis/ \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify it is up and running by checking its state:

    kubectl -n apigee get apigeeredis default
    NAME      STATE     AGE
    default   running   2d
  5. Install Apigee ingress manager:

    helm upgrade ingress-manager apigee-ingress-manager/ \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify it is up and running by checking its availability:

    kubectl -n apigee get deployment apigee-ingressgateway-manager
    NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
    apigee-ingressgateway-manager   2/2     2            2           2d
  6. Install Apigee organization:

    helm upgrade ORG_NAME apigee-org/ \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify it is up and running by checking the state of the respective org:

    kubectl -n apigee get apigeeorg
    NAME                STATE     AGE
    apigee-org1-xxxxx   running   2d
  7. Install the environment.

    You must install one environment at a time. Specify the environment with --set env=ENV_NAME:

    helm upgrade apigee-env-ENV_NAME apigee-env/ \
      --install \
      --namespace apigee \
      --atomic \
      --set env=ENV_NAME \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.

    Verify it is up and running by checking the state of the respective env:

    kubectl -n apigee get apigeeenv
    NAME                  STATE     AGE   GATEWAYTYPE
    apigee-org1-dev-xxx   running   2d
  8. Create the TLS certificates. You are required to provide TLS certificates for the runtime ingress gateway in your Apigee hybrid configuration.
    1. Create the certificates. In a production environment, you will need to use signed certificates. You can use either a certificate and key pair or a Kubernetes secret.

      For demonstration and testing, the runtime gateway can accept self-signed credentials. In the following example, openssl is used to generate the self-signed credentials:

      openssl req -nodes -new -x509 \
        -keyout PATH_TO_CERTS_DIRECTORY/keystore_ENV_GROUP_NAME.key \
        -out PATH_TO_CERTS_DIRECTORY/keystore_ENV_GROUP_NAME.pem \
        -subj '/CN='YOUR_DOMAIN'' -days 3650

      For more information, see Step 5: Create TLS certificates.

    2. Create the Kubernetes secret to reference the certs:

      kubectl create secret generic NAME \
        --from-file="cert=PATH_TO_CRT_FILE" \
        --from-file="key=PATH_TO_KEY_FILE" \
        -n apigee
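
      You can confirm the secret exists and contains both the cert and key entries before installing the virtual host:

      kubectl -n apigee describe secret NAME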
  9. Install virtual host.

    You must install one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME:

    # Repeat the following command for each env group mentioned in the overrides.yaml file.
    helm upgrade apigee-virtualhost-ENV_GROUP_NAME apigee-virtualhost/ \
      --install \
      --namespace apigee \
      --atomic \
      --set envgroup=ENV_GROUP_NAME \
      -f overrides.yaml
    Important: Omit the --atomic flag for migrated clusters.
    Note: ENV_GROUP_NAME must be unique within the apigee namespace. For example, if you have both a prod env and a prod env group, set this release name to something like prod-envgroup; the env group name itself (passed with --set envgroup) should still be prod.

    This creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls the env group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

    kubectl -n apigee get arc
    NAME                     STATE   AGE
    apigee-org1-dev-egroup           2d
    kubectl -n apigee get ar
    NAME                            STATE     AGE
    apigee-org1-dev-egroup-xxxxxx   running   2d
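
    Once the routes are running, you can optionally send a test request through the runtime ingress. This is a sketch only: it assumes an API proxy is already deployed at the base path /hello in the environment, and that EXTERNAL_IP is the external IP of the ingress service; -k is acceptable only with the self-signed test certificate:

    kubectl -n apigee get svc -l app=apigee-ingressgateway
    curl -k "https://YOUR_DOMAIN/hello" --resolve "YOUR_DOMAIN:443:EXTERNAL_IP"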

Additional use cases for Helm charts with Apigee hybrid

Cassandra backup and restore

  1. To enable backup:
    1. Update the Cassandra backup details in the overrides.yaml file:

      cassandra:
        backup:
          enabled: true
          serviceAccountPath: PATH_TO_GSA_FILE
          dbStorageBucket: BUCKET_LINK
          schedule: "45 23 * * 6"
      Note: When using a Cloud Storage bucket, a Google Cloud service account apigee-cassandra-back needs to be created and granted the appropriate roles and permissions to write to the bucket specified in dbStorageBucket.
    2. Run the Helm upgrade on the apigee-datastore chart:

      helm upgrade datastore apigee-datastore/ \
        --namespace apigee \
        --atomic \
        -f overrides.yaml
      Important: Omit the --atomic flag for migrated clusters.
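
      After the upgrade, you can confirm that the backup job was scheduled (a sketch; the CronJob name in your cluster may differ):

      kubectl -n apigee get cronjobs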
  2. Similarly, to enable restore:
    1. Update the Cassandra restore details in the overrides.yaml file:

      cassandra:
        restore:
          enabled: true
          snapshotTimestamp: TIMESTAMP
          serviceAccountPath: PATH_TO_GSA_FILE
          cloudProvider: "CSI"
    2. Run the Helm upgrade on the apigee-datastore chart:

      helm upgrade datastore apigee-datastore/ \
        --namespace apigee \
        --atomic \
        -f overrides.yaml
      Important: Omit the --atomic flag for migrated clusters.

See Cassandra backup overview for more details on Cassandra backup and restore.

Multi-region expansion

Multi-region setup with Helm charts has the same prerequisites as the current apigeectl procedures. For details, see Prerequisites for multi-region deployments.

The procedure for configuring hybrid for multi-region is the same as the existing apigeectl procedure up through configuring the multi-region seed host and setting up the Kubernetes cluster and context.

Configure the first region

Use the following steps to configure the first region and prepare for configuring the second region:

  1. Follow the steps in Configure Apigee hybrid for multi-region to Configure the multi-region seed host on your platform.
  2. For the first region created, get the pods in the apigee namespace:

    kubectl get pods -o wide -n apigee
  3. Identify the multi-region seed host address for Cassandra in this region, for example 10.0.0.11 (see the sketch after this list).
  4. Prepare the overrides.yaml file for the second region and add the seed host IP address as follows:

    cassandra:
      multiRegionSeedHost: "SEED_HOST_IP_ADDRESS"
      datacenter: "DATACENTER_NAME"
      rack: "RACK_NAME"
      clusterName: CLUSTER_NAME
      hostNetwork: false

    Replace the following:

    • SEED_HOST_IP_ADDRESS with the seed host IP address, for example 10.0.0.11.
    • DATACENTER_NAME with the datacenter name, for example dc-2.
    • RACK_NAME with the rack name, for example ra-1.
    • CLUSTER_NAME with the name of your Apigee cluster. By default the value is apigeecluster. If you use a different cluster name, you must specify a value for cassandra.clusterName. This value must be the same in all regions.
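
For example, to narrow the pod listing from step 2 down to the Cassandra pods and their IP addresses (a sketch):

kubectl get pods -o wide -n apigee | grep apigee-cassandra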

Configure the second region

To set up the new region:

  1. Install cert-manager in region 2:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.1/cert-manager.yaml
  2. Copy your certificate from the existing cluster to the new cluster. The copied CA root is used by Cassandra and other hybrid components for mTLS, so it is essential to have consistent certificates across clusters.
    1. Set the context to the original cluster:

      kubectl config use-context ORIGINAL_CLUSTER_NAME
    2. Export the current namespace configuration to a file:

      kubectl get namespace apigee -o yaml > apigee-namespace.yaml
    3. Export the apigee-ca secret to a file:

      kubectl -n cert-manager get secret apigee-ca -o yaml > apigee-ca.yaml
    4. Set the context to the new region's cluster name:

      kubectl config use-context NEW_CLUSTER_NAME
    5. Import the namespace configuration to the new cluster. Be sure to update the namespace in the file if you're using a different namespace in the new region:

      kubectl apply -f apigee-namespace.yaml
    6. Import the secret to the new cluster:

      kubectl -n cert-manager apply -f apigee-ca.yaml
  3. Now install Apigee hybrid in the new region with the following Helm chart commands, as you did in region 1:

    Note: You can reuse the same CRDs for consistency by running from the same folder where you downloaded the charts. Keep the overrides file for each region unique, for example by naming the second region's file something like overrides2.yaml; the commands below use overrides-DATACENTER_NAME.yaml.
    helm upgrade operator apigee-operator \
      --install \
      --namespace apigee-system \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    helm upgrade datastore apigee-datastore \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    helm upgrade telemetry apigee-telemetry \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    helm upgrade redis apigee-redis \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    helm upgrade ingress-manager apigee-ingress-manager \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    helm upgrade ORG_NAME apigee-org \
      --install \
      --namespace apigee \
      --atomic \
      -f overrides-DATACENTER_NAME.yaml

    # Repeat the following command for each env mentioned in the overrides:
    helm upgrade apigee-env-ENV_NAME apigee-env/ \
      --install \
      --namespace apigee \
      --atomic \
      --set env=ENV_NAME \
      -f overrides-DATACENTER_NAME.yaml

    # Repeat the following command for each env group mentioned in the overrides:
    helm upgrade apigee-virtualhost-ENV_GROUP_NAME apigee-virtualhost/ \
      --install \
      --namespace apigee \
      --atomic \
      --set envgroup=ENV_GROUP_NAME \
      -f overrides-DATACENTER_NAME.yaml
    Important: Omit the --atomic flag for migrated clusters.
  4. Once all the components are installed, set up Cassandra on all the pods in the new data centers. For instructions, see Configure Apigee hybrid for multi-region, select your platform, scroll to Set up the new region, and then locate step 5.
  5. Once the data replication is complete and verified, update the seed hosts:
    1. Remove multiRegionSeedHost: 10.0.0.11 from overrides-DATACENTER_NAME.yaml.

      The multiRegionSeedHost entry is no longer needed after data replication is established, and pod IPs are expected to change over time.

    2. Reapply the change to update the apigee datastore CR:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace apigee \
        --atomic \
        -f overrides-DATACENTER_NAME.yaml
      Important: Omit the --atomic flag for migrated clusters.
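
After reapplying, you can optionally confirm that the Cassandra nodes in both data centers see each other. This is a sketch only; it assumes the default pod name apigee-cassandra-default-0, and depending on your configuration nodetool may require JMX credentials (-u/-pw):

kubectl -n apigee exec apigee-cassandra-default-0 -- nodetool status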

Hosting images privately

Instead of relying on the public Google Cloud repository, you can optionally host the images privately. Instead of overriding the image for each component, add the hub details to the overrides:

hub: PRIVATE_REPO

For example, if the following hub is provided, the image paths resolve automatically:

hub: private-docker-host.com

as:

## An example of an internal component vs a third-party component
containers:
- name: apigee-udca
  image: private-docker-host.com/apigee-udca:1.10.5
  imagePullPolicy: IfNotPresent

containers:
- name: apigee-ingressgateway
  image: private-docker-host.com/apigee-asm-ingress:1.17.2-asm.8-distroless
  imagePullPolicy: IfNotPresent

List of Apigee images:

apigee:
  gcr.io/apigee-release/hybrid/apigee-mart-server:1.10.3
  gcr.io/apigee-release/hybrid/apigee-synchronizer:1.10.3
  gcr.io/apigee-release/hybrid/apigee-runtime:1.10.3
  gcr.io/apigee-release/hybrid/apigee-hybrid-cassandra-client:1.10.3
  gcr.io/apigee-release/hybrid/apigee-hybrid-cassandra:1.10.3
  gcr.io/apigee-release/hybrid/apigee-cassandra-backup-utility:1.10.3
  gcr.io/apigee-release/hybrid/apigee-udca:1.10.3
  gcr.io/apigee-release/hybrid/apigee-connect-agent:1.10.3
  gcr.io/apigee-release/hybrid/apigee-watcher:1.10.3
  gcr.io/apigee-release/hybrid/apigee-operators:1.10.3
  gcr.io/apigee-release/hybrid/apigee-redis:1.10.3
  gcr.io/apigee-release/hybrid/apigee-mint-task-scheduler:1.10.3
thirdparty:
  gcr.io/apigee-release/hybrid/apigee-stackdriver-logging-agent:1.9.12-2
  gcr.io/apigee-release/hybrid/apigee-prom-prometheus:v2.45.0
  gcr.io/apigee-release/hybrid/apigee-stackdriver-prometheus-sidecar:0.9.0
  gcr.io/apigee-release/hybrid/apigee-kube-rbac-proxy:v0.14.2
  gcr.io/apigee-release/hybrid/apigee-envoy:v1.27.0
  gcr.io/apigee-release/hybrid/apigee-prometheus-adapter:v0.11.0
  gcr.io/apigee-release/hybrid/apigee-asm-ingress:1.17.2-asm.8-distroless
  gcr.io/apigee-release/hybrid/apigee-asm-istiod:1.17.2-asm.8-distroless
  gcr.io/apigee-release/hybrid/apigee-fluent-bit:2.1.8

To display a list of the Apigee images hosted in the Google Cloud repository on the command line:

./apigee-operator/etc/tools/apigee-pull-push.sh --list
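
You can also mirror individual images manually if you prefer. A minimal sketch for one image, assuming docker is installed, you have push access to private-docker-host.com, and you substitute the tags that match your installed version:

docker pull gcr.io/apigee-release/hybrid/apigee-udca:1.10.5
docker tag gcr.io/apigee-release/hybrid/apigee-udca:1.10.5 private-docker-host.com/apigee-udca:1.10.5
docker push private-docker-host.com/apigee-udca:1.10.5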

Tolerations

To use the Taints and Tolerations feature of Kubernetes, you must define the tolerations override property for each Apigee hybrid component. The following components support defining tolerations:

  • ao
  • apigeeIngressGateway
  • cassandra
  • cassandraSchemaSetup
  • cassandraSchemaValidation
  • cassandraUserSetup
  • connectAgent
  • istiod
  • logger
  • mart
  • metrics
  • mintTaskScheduler
  • redis
  • runtime
  • synchronizer
  • udca
  • watcher

See Configuration property reference for more information about these components.

For example, to apply the tolerations to the Apigee operator deployment:

ao:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600

To apply the tolerations to the Cassandra StatefulSet:

cassandra:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600
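
For these tolerations to have any effect, a matching taint must exist on the target nodes. For example (illustrative; NODE_NAME, key1, and value1 are placeholders):

kubectl taint nodes NODE_NAME key1=value1:NoExecute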

Uninstall Apigee hybrid with Helm

To uninstall a specific update or release, you can use the helm uninstall RELEASE-NAME -n NAMESPACE command (helm delete works as an alias).

Use the following steps to completely uninstall Apigee hybrid from the cluster:

  1. Delete the virtualhosts. Run this command for each virtualhost:
    helm -n apigee delete VIRTUALHOST_RELEASE-NAME
  2. Delete the environments. Run this command for each env:
    helm -n apigee delete ENV_RELEASE-NAME
  3. Delete the Apigee org:
    helm -n apigee delete ORG_RELEASE-NAME
  4. Delete telemetry:
    helm -n apigee delete TELEMETRY_RELEASE-NAME
  5. Delete Redis:
    helm -n apigee delete REDIS_RELEASE-NAME
  6. Delete the ingress manager:
    helm -n apigee delete INGRESS_MANAGER_RELEASE-NAME
  7. Delete the datastore:
    helm -n apigee delete DATASTORE_RELEASE-NAME
    Tip: If the state of the apigeeds/default remains deleting but never completes, check the pods to see whether a cleanup job failed.
  8. Delete operator.
    1. Make sure all of the Apigee CRs are deleted first:
      kubectl -n apigee get apigeeds,apigeetelemetry,apigeeorg,apigeeenv,arc,apigeeredis
    2. Delete the Apigee Operator:
      helm -n apigee-system delete OPERATOR_RELEASE-NAME
  9. Delete the Apigee hybrid CRDs:
    kubectl delete -k apigee-operator/etc/crds/default/
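
    To confirm the cleanup, the earlier CRD check should now return no results:

    kubectl get crds | grep apigee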
