Step 11: Install Apigee hybrid Using Helm

You are currently viewing version 1.15 of the Apigee hybrid documentation. For more information, see Supported versions.

Install the Apigee hybrid runtime components

In this step, you will use Helm to install the following Apigee hybrid components:

  • Apigee operator
  • Apigee datastore
  • Apigee telemetry
  • Apigee Redis
  • Apigee ingress manager
  • Apigee organization
  • Your Apigee environment(s)

You will install the charts for each environment one at a time. The sequence in which you install the components matters.
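The required ordering can be captured in a small wrapper script. This is only an illustrative sketch, not part of the official instructions: the namespace value and the release-name/chart pairs are assumptions that mirror the commands later on this page, and the script only prints the commands rather than executing them.

```shell
#!/bin/sh
# Illustrative sketch only: print the helm commands for the shared
# components in the order this page requires. The namespace value is an
# assumption; the org/env/envgroup charts need extra flags and come after.
APIGEE_NAMESPACE="apigee"

# release-name:chart-directory pairs, in install order
CHARTS="operator:apigee-operator datastore:apigee-datastore telemetry:apigee-telemetry redis:apigee-redis ingress-manager:apigee-ingress-manager"

print_install_commands() {
  for pair in $CHARTS; do
    release="${pair%%:*}"
    chart="${pair#*:}"
    echo "helm upgrade $release $chart/ --install --namespace $APIGEE_NAMESPACE --atomic -f overrides.yaml"
  done
}

print_install_commands
```

Running the sketch lists one `helm upgrade` line per component, operator first, so the ordering requirement is visible at a glance.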

Pre-installation Notes

  1. If you have not already installed Helm v3.14.2+, follow the instructions in Installing Helm.
  2. Apigee hybrid uses Helm guardrails to verify the configuration before installing or upgrading a chart. You may see guardrail-specific information in the output of each of the commands in this section, for example:

    # Source: apigee-operator/templates/apigee-operators-guardrails.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: apigee-hybrid-helm-guardrail-operator
      namespace: APIGEE_NAMESPACE
      annotations:
        helm.sh/hook: pre-install,pre-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        app: apigee-hybrid-helm-guardrail

    If any of the helm upgrade commands fail, you can use the guardrails output to help diagnose the cause. See Diagnosing issues with guardrails.

  3. Before executing any of the Helm upgrade/install commands, use the Helm dry-run feature by adding --dry-run=server at the end of the command. See helm install -h to list supported commands, options, and usage.
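The dry-run-first workflow described in note 3 can be wrapped in a small helper. This is a hypothetical sketch, not an official tool; the DRY_ECHO variable is an assumption that keeps the sketch safe to run without a cluster (set it to an empty string to actually invoke helm).

```shell
#!/bin/sh
# Hypothetical helper: run a server-side dry run first, and only run the
# real helm upgrade if the dry run succeeds. DRY_ECHO=echo makes the sketch
# print the commands instead of executing them (an assumption for
# illustration; set DRY_ECHO="" to really call helm).
DRY_ECHO="echo"

helm_upgrade_checked() {
  # every argument is passed straight through to `helm upgrade`
  $DRY_ECHO helm upgrade "$@" --dry-run=server || return 1
  $DRY_ECHO helm upgrade "$@"
}

helm_upgrade_checked operator apigee-operator/ --install --namespace apigee --atomic -f overrides.yaml
```

With DRY_ECHO set, the helper prints the dry-run command followed by the real command, which is a convenient way to review what would run.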

Installation steps

Select the installation instructions for the service account authentication type in your hybrid installation:

Kubernetes Secrets

  1. If you have not already done so, navigate to your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
  2. Install the Apigee Operator/Controller. Note: This step requires elevated cluster permissions. Run helm -h or helm install -h for details.
    1. Dry run:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify the Apigee Operator installation:

      helm ls -n APIGEE_NAMESPACE

      NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
      operator   apigee      3          2025-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.15.1   1.15.1
    4. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager

      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           34s
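Each "verify it is up and running" step can take a little while to settle. A generic polling helper like the hypothetical sketch below (not part of the official charts) can wrap any of the kubectl checks on this page; it is demonstrated here with the check it would typically wrap shown only in a comment, so the sketch itself runs anywhere.

```shell
#!/bin/sh
# Hypothetical helper: retry a command until it succeeds or attempts run
# out. In practice the command would be one of the verification checks on
# this page, e.g.:
#   wait_for 30 kubectl -n apigee get apigeedatastore default
wait_for() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

The helper returns success as soon as the wrapped command does, and failure once the attempt budget is exhausted, so it composes with `&&` in install scripts.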
  3. Install Apigee datastore:

    1. Dry run:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify apigeedatastore is up and running by checking its state before proceeding to the next step:

      kubectl -n APIGEE_NAMESPACE get apigeedatastore default

      NAME      STATE     AGE
      default   running   51s
  4. Install Apigee telemetry:

    1. Dry run:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry

      NAME               STATE     AGE
      apigee-telemetry   running   55s
  5. Install Apigee Redis:

    1. Dry run:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeeredis default

      NAME      STATE     AGE
      default   running   79s
  6. Install Apigee ingress manager:

    1. Dry run:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager

      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           16s
  7. Install the Apigee organization. If you have set the $ORG_NAME environment variable in your shell, you can use it in the following commands:

    1. Dry run:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking the state of the respective org:

      kubectl -n APIGEE_NAMESPACE get apigeeorg

      NAME                 STATE     AGE
      my-project-123abcd   running   4m18s
  8. Install the environment.

    You must install one environment at a time. Specify the environment with --set env=ENV_NAME. If you have set the $ENV_NAME environment variable in your shell, you can use it in the following commands:

    1. Dry run:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml
    3. Verify it is up and running by checking the state of the respective env:

      kubectl -n APIGEE_NAMESPACE get apigeeenv

      NAME                       STATE     AGE    GATEWAYTYPE
      apigee-my-project-my-env   running   3m1s
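Because environments must be installed one at a time, a loop is a natural way to script step 8. The sketch below is hypothetical: the environment names, the release-name convention, and the echo-only behavior are all assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch: install several environments one at a time. ENVS and
# the "<env>-release" naming convention are assumptions; echo keeps the
# sketch safe to run without a cluster.
APIGEE_NAMESPACE="apigee"
ENVS="dev test prod"

install_envs() {
  for ENV_NAME in $ENVS; do
    echo "helm upgrade ${ENV_NAME}-release apigee-env/ --install --namespace $APIGEE_NAMESPACE --atomic --set env=$ENV_NAME -f overrides.yaml"
  done
}

install_envs
```

In a real script you would replace the echo with the actual helm invocation and add a verification check between iterations.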
  9. Install the environment groups (virtualhosts).
    1. You must install one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP. If you have set the $ENV_GROUP environment variable in your shell, you can use it in the following commands. Repeat the following commands for each env group mentioned in your overrides.yaml file:

      Dry run:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml \
        --dry-run=server

      ENV_GROUP_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-virtualhost chart. It must be unique among the other Helm release names in your installation. Usually it is the same as ENV_GROUP. However, if your environment group has the same name as an environment in your installation, you must use different release names for the environment group and environment, for example dev-envgroup-release and dev-env-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

    2. Install the chart:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml

      Note: ENV_GROUP_RELEASE_NAME must be unique within the apigee namespace. For example, if you have a prod env and envgroup, set the release name to prod-envgroup; the env group name itself should still be prod.
    3. Check the state of the ApigeeRoute (AR).

      Installing the virtualhosts chart creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls env-group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

      kubectl -n APIGEE_NAMESPACE get arc

      NAME                     STATE   AGE
      apigee-org1-dev-egroup           2m

      kubectl -n APIGEE_NAMESPACE get ar

      NAME                                                         STATE     AGE
      apigee-ingressgateway-internal-chaining-my-project-123abcd   running   19m
      my-project-myenvgroup-000-321dcba                            running   2m30s
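The release-name rule in step 9 (use the env group name unless it collides with an environment name) can be captured in a tiny helper. This is a hypothetical sketch; the -envgroup-release suffix follows the dev-envgroup-release example in the text but is otherwise an assumption.

```shell
#!/bin/sh
# Hypothetical helper: pick a Helm release name for an env group. Use the
# group name itself unless it collides with an environment name, in which
# case append a suffix (mirroring the dev-envgroup-release example above).
envgroup_release_name() {
  envgroup="$1"; shift      # remaining args: environment names in this install
  for env in "$@"; do
    if [ "$env" = "$envgroup" ]; then
      echo "${envgroup}-envgroup-release"
      return 0
    fi
  done
  echo "$envgroup"
}
```

For example, `envgroup_release_name dev dev test` prints dev-envgroup-release, while `envgroup_release_name prod dev test` prints prod.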

JSON files

The installation steps when authenticating with service account JSON key files are identical to the steps listed above for Kubernetes Secrets: install the Apigee Operator, datastore, telemetry, Redis, ingress manager, organization, environment, and environment group (virtualhost) charts in the same order, using the same commands and verification checks.

Vault

The installation steps when authenticating with Vault are identical to the steps listed above for Kubernetes Secrets: install the Apigee Operator, datastore, telemetry, Redis, ingress manager, organization, environment, and environment group (virtualhost) charts in the same order, using the same commands and verification checks.

WIF for GKE

  1. If you have not already done so, navigate to your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
  2. Install the Apigee Operator/Controller. Note: This step requires elevated cluster permissions. Run helm -h or helm install -h for details.
    1. Dry run:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify the Apigee Operator installation:

      helm ls -n APIGEE_NAMESPACE

      NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
      operator   apigee      3          2025-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.15.1   1.15.1
    4. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager

      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           34s
  3. Install Apigee datastore:

    1. Dry run:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Set up the service account bindings for Cassandra for Workload Identity Federation for GKE:

      The output from the helm upgrade command should have contained commands in the NOTES section. Follow those commands to set up the service account bindings. There should be two commands in the form of:

      Production

      gcloud iam service-accounts add-iam-policy-binding CASSANDRA_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-cassandra-default]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-cassandra-default]" \
        --project PROJECT_ID

      And:

      Production

      kubectl annotate serviceaccount apigee-cassandra-default \
        iam.gke.io/gcp-service-account=CASSANDRA_SERVICE_ACCOUNT_EMAIL \
        --namespace APIGEE_NAMESPACE

      Non-prod

      kubectl annotate serviceaccount apigee-cassandra-default \
        iam.gke.io/gcp-service-account=NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --namespace APIGEE_NAMESPACE

      For example:

      Production

      NOTES:
      For Cassandra backup GKE Workload Identity, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

        gcloud iam service-accounts add-iam-policy-binding apigee-cassandra@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-cassandra-default]" \
          --project my-project

        kubectl annotate serviceaccount apigee-cassandra-default \
          iam.gke.io/gcp-service-account=apigee-cassandra@my-project.iam.gserviceaccount.com \
          --namespace apigee

      Non-prod

      NOTES:
      For Cassandra backup GKE Workload Identity, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

        gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-cassandra-default]" \
          --project my-project

        kubectl annotate serviceaccount apigee-cassandra-default \
          iam.gke.io/gcp-service-account=apigee-non-prod@my-project.iam.gserviceaccount.com \
          --namespace apigee

      Optional: If you do not want to set up Cassandra backup at this time, edit your overrides file to remove or comment out the cassandra.backup stanza before running the helm upgrade command without the --dry-run flag. See Cassandra backup and restore for more information about configuring Cassandra backup.

    3. Install the chart:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    4. Verify apigeedatastore is up and running by checking its state before proceeding to the next step:

      kubectl -n APIGEE_NAMESPACE get apigeedatastore default

      NAME      STATE     AGE
      default   running   51s
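The Workload Identity Federation binding commands in this section all build the same --member string shape from a project ID, a Kubernetes namespace, and a Kubernetes service account (KSA) name. A hypothetical helper (not part of the charts) that assembles it can help keep the three placeholders straight:

```shell
#!/bin/sh
# Hypothetical helper: build the --member value used by the
# add-iam-policy-binding commands in this section from the project id,
# Kubernetes namespace, and Kubernetes service account (KSA) name.
wif_member() {
  project="$1"; namespace="$2"; ksa="$3"
  echo "serviceAccount:${project}.svc.id.goog[${namespace}/${ksa}]"
}

wif_member my-project apigee apigee-cassandra-default
```

For example, the call above prints serviceAccount:my-project.svc.id.goog[apigee/apigee-cassandra-default], matching the Cassandra binding shown in the NOTES output.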
  4. Install Apigee telemetry:

    1. Dry run:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Set up the service account bindings for Logger and Metrics for Workload Identity Federation for GKE:

      The output from the helm upgrade command should have contained commands in the NOTES section. Follow those commands to set up the service account bindings. There should be two commands in the form of:

      Logger KSA: apigee-logger-apigee-telemetry

      gcloud iam service-accounts add-iam-policy-binding LOGGER_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-logger-apigee-telemetry]" \
        --project PROJECT_ID

      Metrics KSA: apigee-metrics-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding METRICS_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-metrics-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-metrics-sa]" \
        --project PROJECT_ID

      For example:

      Production

      NOTES:
      For GKE Workload Identity, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

        Logger KSA: apigee-logger-apigee-telemetry

        gcloud iam service-accounts add-iam-policy-binding apigee-logger@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-logger-apigee-telemetry]" \
          --project my-project

        Metrics KSA: apigee-metrics-sa

        gcloud iam service-accounts add-iam-policy-binding apigee-metrics@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-metrics-sa]" \
          --project my-project

      Non-prod

      NOTES:
      For GKE Workload Identity, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

        Logger KSA: apigee-logger-apigee-telemetry

        gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-logger-apigee-telemetry]" \
          --project my-project

        Metrics KSA: apigee-metrics-sa

        gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-metrics-sa]" \
          --project my-project
    3. Install the chart:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    4. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry

      NAME               STATE     AGE
      apigee-telemetry   running   55s
  5. Install Apigee Redis:

    1. Dry run:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeeredis default

      NAME      STATE     AGE
      default   running   79s
  6. Install Apigee ingress manager:

    1. Dry run:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager

      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           16s
  7. Install Apigee organization. If you have set the$ORG_NAME environment variable in your shell, you can use that in the following commands:

    1. Dry run:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Set up the Workload Identity Federation for GKE service account bindings for the org-scoped components: MART, Apigee Connect, UDCA, and Watcher.

      The output from the helm upgrade command should have contained commands in the NOTES section. Follow those commands to set up the service account bindings. There should be four commands (five if you are using Monetization for Apigee hybrid).

      Note: The KSA names include a unique hash id for your organization, for example, apigee-mart-my-project-123abcd-sa.

      MART KSA: apigee-mart-PROJECT_ID-ORG_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding MART_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-mart-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-mart-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Connect Agent KSA: apigee-connect-agent-PROJECT_ID-ORG_HASH_ID-sa

      Note: Apigee Connect uses the MART Google service account.

      Production

      gcloud iam service-accounts add-iam-policy-binding MART_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-connect-agent-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-connect-agent-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Mint Task Scheduler KSA: (If you are using Monetization for Apigee hybrid) apigee-mint-task-scheduler-PROJECT_ID-ORG_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding MINT_TASK_SCHEDULER_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-mint-task-scheduler-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-mint-task-scheduler-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      UDCA KSA: apigee-udca-PROJECT_ID-ORG_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding UDCA_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-udca-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-udca-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Watcher KSA: apigee-watcher-PROJECT_ID-ORG_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding WATCHER_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-watcher-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-watcher-PROJECT_ID-ORG_HASH_ID-sa]" \
        --project PROJECT_ID

      For example:

      Production

      NOTES:
      For Apigee Organization GKE Workload Identity, my-project, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

      MART KSA: apigee-mart-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-mart@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-mart-my-project-1a2b3c4-sa]" \
        --project my-project

      Connect Agent KSA: apigee-connect-agent-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-mart@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-connect-agent-my-project-1a2b3c4-sa]" \
        --project my-project

      Mint task scheduler KSA: apigee-mint-task-scheduler-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-mint-task-scheduler@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-mint-task-scheduler-my-project-1a2b3c4-sa]" \
        --project my-project

      UDCA KSA: apigee-udca-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-udca@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-udca-my-project-1a2b3c4-sa]" \
        --project my-project

      Watcher KSA: apigee-watcher-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-watcher@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-watcher-my-project-1a2b3c4-sa]" \
        --project my-project

      Non-prod

      NOTES:
      For Apigee Organization GKE Workload Identity, my-project, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

      MART KSA: apigee-mart-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-mart-my-project-1a2b3c4-sa]" \
        --project my-project

      Connect Agent KSA: apigee-connect-agent-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-connect-agent-my-project-1a2b3c4-sa]" \
        --project my-project

      UDCA KSA: apigee-udca-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-udca-my-project-1a2b3c4-sa]" \
        --project my-project

      Watcher KSA: apigee-watcher-my-project-1a2b3c4-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-non-prod@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-watcher-my-project-1a2b3c4-sa]" \
        --project my-project
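      The binding commands above differ only in the component name embedded in the KSA. As a rough sketch (not part of the official procedure; my-project and 1a2b3c4 are the placeholder values from the examples), a small shell helper can generate the member strings to cross-check against your own NOTES output:

```shell
# Sketch only: build the Workload Identity member string for an org-scoped
# Apigee KSA. The project id and org hash below are illustrative placeholders;
# substitute the values shown in the NOTES output of your installation.
wif_member() {
  # $1 = project id, $2 = org hash id, $3 = component (mart, connect-agent, udca, watcher)
  echo "serviceAccount:$1.svc.id.goog[apigee/apigee-$3-$1-$2-sa]"
}

for component in mart connect-agent udca watcher; do
  wif_member my-project 1a2b3c4 "$component"
done
```

      Comparing the generated strings against the NOTES output is a quick way to catch a typo in a hand-edited binding command.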
    3. Install the chart:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    4. Verify it is up and running by checking the state of the respective org:

      kubectl -n APIGEE_NAMESPACE get apigeeorg
      NAME                 STATE     AGE
      my-project-123abcd   running   4m18s
  8. Install the environment.

    You must install one environment at a time. Specify the environment with --set env=ENV_NAME. If you have set the $ENV_NAME environment variable in your shell, you can use that in the following commands:

    1. Dry run:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml \
        --dry-run=server

        ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be distinct from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

    2. Set up the Workload Identity Federation for GKE service account bindings for the env-scoped components: Runtime, Synchronizer, and UDCA.

      The output from the helm upgrade command should have contained commands in the NOTES section. Follow those commands to set up the service account bindings. There should be three commands.

      Note: The KSA names include a unique hash id for your environment, for example, apigee-runtime-my-project-my-env-abc1234-sa.

      Runtime KSA: apigee-runtime-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding RUNTIME_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-runtime-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-runtime-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      Synchronizer KSA: apigee-synchronizer-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding SYNCHRONIZER_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-synchronizer-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-synchronizer-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      UDCA KSA: apigee-udca-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa

      Production

      gcloud iam service-accounts add-iam-policy-binding UDCA_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-udca-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      Non-prod

      gcloud iam service-accounts add-iam-policy-binding NON_PROD_SERVICE_ACCOUNT_EMAIL \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[apigee/apigee-udca-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa]" \
        --project PROJECT_ID

      For example:

      NOTES:
      For Apigee Environment GKE Workload Identity, my-env, please make sure to add the following membership to the IAM policy binding using the respective kubernetes SA (KSA).

      Runtime KSA: apigee-runtime-my-project-my-env-b2c3d4e-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-runtime@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-runtime-my-project-my-env-b2c3d4e-sa]" \
        --project my-project

      Synchronizer KSA: apigee-synchronizer-my-project-my-env-b2c3d4e-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-synchronizer@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-synchronizer-my-project-my-env-b2c3d4e-sa]" \
        --project my-project

      UDCA KSA: apigee-udca-my-project-my-env-b2c3d4e-sa

      gcloud iam service-accounts add-iam-policy-binding apigee-udca@my-project.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-udca-my-project-my-env-b2c3d4e-sa]" \
        --project my-project
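      The env-scoped member strings above follow the same pattern as the org-scoped ones, with the environment name and environment hash added. A sketch (my-project, my-env, and b2c3d4e are the placeholder values from the example) for cross-checking the strings:

```shell
# Sketch only: build the Workload Identity member string for an env-scoped
# Apigee KSA. All values below are illustrative placeholders; use the values
# from the NOTES output of your own installation.
env_wif_member() {
  # $1 = project id, $2 = env name, $3 = env hash id, $4 = component (runtime, synchronizer, udca)
  echo "serviceAccount:$1.svc.id.goog[apigee/apigee-$4-$1-$2-$3-sa]"
}

for component in runtime synchronizer udca; do
  env_wif_member my-project my-env b2c3d4e "$component"
done
```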
    3. Install the chart:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml
    4. Verify it is up and running by checking the state of the respective env:

      kubectl -n APIGEE_NAMESPACE get apigeeenv
      NAME                       STATE     AGE    GATEWAYTYPE
      apigee-my-project-my-env   running   3m1s
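    Because environments must be installed one at a time, installations with several environments repeat this step once per environment. A sketch that prints each install command for review before you run it (the environment list and the ENV_NAME-env-release naming convention are assumptions to adapt to your overrides.yaml):

```shell
# Sketch only: print the per-environment helm install command rather than
# running it. "dev" and "test" are hypothetical environment names.
env_install_cmd() {
  # $1 = environment name; release name follows a <env>-env-release convention
  printf 'helm upgrade %s apigee-env/ --install --namespace APIGEE_NAMESPACE --atomic --set env=%s -f overrides.yaml' "$1-env-release" "$1"
}

for env in dev test; do
  echo "$(env_install_cmd "$env")"
done
```

    Reviewing the printed commands first makes it easy to confirm that each release name is unique before anything touches the cluster.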
  9. Install the environment groups (virtualhosts).
    1. You must install one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP. If you have set the $ENV_GROUP environment variable in your shell, you can use that in the following commands. Repeat the following commands for each env group mentioned in your overrides.yaml file:

      Dry run:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml
      Note: ENV_GROUP_RELEASE_NAME must be unique within the apigee namespace. For example, if you have a prod env and envgroup, set the release name to prod-envgroup. The env group name itself should still be prod.
    3. Check the state of the ApigeeRoute (AR).

      Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls env group related details from the control plane. Therefore, check that the corresponding AR's state is running:

      kubectl -n APIGEE_NAMESPACE get arc
      NAME                     STATE   AGE
      apigee-org1-dev-egroup           2m
      kubectl -n APIGEE_NAMESPACE get ar
      NAME                                                         STATE     AGE
      apigee-ingressgateway-internal-chaining-my-project-123abcd   running   19m
      my-project-myenvgroup-000-321dcba                            running   2m30s
  10. (Optional) You can see the status of your Kubernetes service accounts in the Kubernetes: Workloads Overview page in the Google Cloud console.

    Go to Workloads

    Note: You may see an error status for the apigee-cassandra-backup service account. This is because you are not currently running backup or restore, and these processes have not been fully configured yet. For more information on Cassandra backup and restore, see Cassandra backup overview.

WIF on other platforms

  1. If you have not already done so, navigate into your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
  2. Install Apigee Operator/Controller. Note: This step requires elevated cluster permissions. Run helm -h or helm install -h for details.
    1. Dry run:
      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:
      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify Apigee Operator installation:

      helm ls -n APIGEE_NAMESPACE
      NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
      operator   apigee      3          2025-06-26 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.15.1   1.15.1
    4. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager
      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           34s
  3. Install Apigee datastore:

    1. Dry run:
      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. If you have enabled Cassandra backup or Cassandra restore, grant the Cassandra Kubernetes service accounts access to impersonate the associated apigee-cassandra IAM service account.
      1. List the email addresses of the IAM service account for Cassandra:

        Production

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-cassandra"

        Non-prod

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-non-prod"

        The output should look similar to the following:

        Production

        apigee-cassandra      apigee-cassandra@my-project.iam.gserviceaccount.com      False

        Non-prod

        apigee-non-prod       apigee-non-prod@my-project.iam.gserviceaccount.com      False
      2. List the Cassandra Kubernetes service accounts:
        kubectl get serviceaccount -n APIGEE_NAMESPACE | grep "apigee-cassandra"

        The output should look similar to the following:

        apigee-cassandra-backup-sa                         0   7m37s
        apigee-cassandra-default                           0   7m12s
        apigee-cassandra-guardrails-sa                     0   6m43s
        apigee-cassandra-restore-sa                        0   7m37s
        apigee-cassandra-schema-setup-my-project-1a2b2c4   0   7m30s
        apigee-cassandra-schema-val-my-project-1a2b2c4     0   7m29s
        apigee-cassandra-user-setup-my-project-1a2b2c4     0   7m22s
      3. If you have created the apigee-cassandra-backup-sa or apigee-cassandra-restore-sa Kubernetes service accounts, grant each of them access to impersonate the apigee-cassandra IAM service account with the following command:

        Production

        Template

        gcloud iam service-accounts add-iam-policy-binding \
          CASSANDRA_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-cassandra@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-cassandra-backup-sa" \
          --role=roles/iam.workloadIdentityUser

        Non-prod

        Template

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-cassandra-backup-sa" \
          --role=roles/iam.workloadIdentityUser

        Where:

        • CASSANDRA_IAM_SA_EMAIL: the email address of the Cassandra IAM service account.
        • PROJECT_NUMBER: the project number of the project where you created the workload identity pool. Note: You must use the project number in the member identifier. Using the project ID is not supported. You can find the project number with the following command:
          gcloud projects describe PROJECT_ID --format="value(projectNumber)"
        • POOL_ID: the workload identity pool ID.
        • MAPPED_SUBJECT: the Kubernetes ServiceAccount from the claim in your ID token. In most hybrid installations, this will have the format: system:serviceaccount:APIGEE_NAMESPACE:K8S_SA_NAME.
          • For apigee-cassandra-backup-sa, this will be something similar to system:serviceaccount:apigee:apigee-cassandra-backup-sa.
          • For apigee-cassandra-restore-sa, this will be something similar to system:serviceaccount:apigee:apigee-cassandra-restore-sa.
          Tip: In the step Add the cluster as a workload identity pool provider in Step 5: Set up service account authentication, you mapped google.subject=assertion.sub. Therefore the value of MAPPED_SUBJECT will be the value in the ID token following "sub":. You can find the mapped subject for a Kubernetes service account by searching the ID token for sub followed by the name of the service account.

          For example: pipe the contents of the ID token through grep -E 'sub.*apigee-cassandra-backup-sa'
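          As a worked illustration of the tip above, the following sketch decodes a fabricated ID token payload and extracts the sub claim; with a real token you would pipe in your token instead of the stand-in (the decoding steps are an assumption about a standard JWT-shaped token, not part of the official procedure):

```shell
# Sketch only: TOKEN is a fabricated stand-in with a known payload.
TOKEN="eyJhbGciOiJSUzI1NiJ9.$(printf '{"sub":"system:serviceaccount:apigee:apigee-cassandra-backup-sa"}' | base64 | tr -d '=\n').sig"

# The payload is the second dot-separated segment of the token.
payload=$(echo "$TOKEN" | cut -d. -f2)
# JWT payloads are unpadded base64; restore padding before decoding.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
sub=$(echo "$payload" | base64 -d | grep -oE '"sub":"[^"]*"')
echo "$sub"
```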

    4. Verify apigeedatastore is up and running by checking its state before proceeding to the next step:

      kubectl -n APIGEE_NAMESPACE get apigeedatastore default
      NAME      STATE     AGE
      default   running   51s
  4. Install Apigee telemetry:

    1. Dry run:
      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry
      NAME               STATE     AGE
      apigee-telemetry   running   55s
    4. Grant the telemetry Kubernetes service accounts access to impersonate the associated apigee-metrics IAM service account.
      1. List the email address of the IAM service account for metrics:

        Production

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-metrics"

        The output should look similar to the following:

        apigee-metrics   apigee-metrics@my-project.iam.gserviceaccount.com   False

        Non-prod

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-non-prod"

        The output should look similar to the following:

        apigee-non-prod   apigee-non-prod@my-project.iam.gserviceaccount.com   False
      2. List the telemetry Kubernetes service accounts:
        kubectl get serviceaccount -n APIGEE_NAMESPACE | grep "telemetry"

        The output should look similar to the following:

        apigee-metrics-apigee-telemetry                    0   42m
        apigee-open-telemetry-collector-apigee-telemetry   0   37m
      3. Grant each of the telemetry Kubernetes service accounts access to impersonate theapigee-metrics IAM service account with the following command:

        Production

        Apigee Metrics KSA: apigee-metrics-apigee-telemetry to apigee-metrics Google IAM service account

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          METRICS_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-metrics@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-metrics-apigee-telemetry" \
          --role=roles/iam.workloadIdentityUser

        Apigee OpenTelemetry Collector KSA: apigee-open-telemetry-collector-apigee-telemetry to apigee-metrics Google IAM service account

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          METRICS_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-metrics@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-open-telemetry-collector-apigee-telemetry" \
          --role=roles/iam.workloadIdentityUser

        Non-prod

        Apigee Metrics KSA: apigee-metrics-apigee-telemetry to apigee-non-prod Google IAM service account

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-metrics-apigee-telemetry" \
          --role=roles/iam.workloadIdentityUser

        Apigee OpenTelemetry Collector KSA: apigee-open-telemetry-collector-apigee-telemetry to apigee-non-prod Google IAM service account

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-open-telemetry-collector-apigee-telemetry" \
          --role=roles/iam.workloadIdentityUser
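        All of the bindings in this step use the same principal:// member shape. A sketch (the project number 1234567890 and pool my-pool are the placeholder values from the examples) that builds the member identifier for each telemetry KSA, to compare against the commands you ran:

```shell
# Sketch only: build the principal:// member identifier used in the
# impersonation bindings above. All concrete values are illustrative.
wif_principal() {
  # $1 = project number, $2 = workload identity pool id, $3 = Kubernetes SA name
  echo "principal://iam.googleapis.com/projects/$1/locations/global/workloadIdentityPools/$2/subject/system:serviceaccount:apigee:$3"
}

for ksa in apigee-metrics-apigee-telemetry apigee-open-telemetry-collector-apigee-telemetry; do
  wif_principal 1234567890 my-pool "$ksa"
done
```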
  5. Install Apigee Redis:

    1. Dry run:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeeredis default
      NAME      STATE     AGE
      default   running   79s
  6. Install Apigee ingress manager:

    1. Dry run:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager
      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           16s
  7. Install Apigee organization. If you have set the $ORG_NAME environment variable in your shell, you can use that in the following commands:

    1. Dry run:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade $ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
    3. Verify it is up and running by checking the state of the respective org:

      kubectl -n APIGEE_NAMESPACE get apigeeorg
      NAME                 STATE     AGE
      my-project-123abcd   running   4m18s
    4. Grant the org-scoped Kubernetes service accounts access to impersonate the associated IAM service accounts.
      1. List the email addresses of the IAM service accounts used by the apigee-mart, apigee-udca, and apigee-watcher components:

        Production

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-mart\|apigee-udca\|apigee-watcher"

        The output should look similar to the following:

        apigee-mart      apigee-mart@my-project.iam.gserviceaccount.com      False
        apigee-udca      apigee-udca@my-project.iam.gserviceaccount.com      False
        apigee-watcher   apigee-watcher@my-project.iam.gserviceaccount.com   False

        If you are using Monetization for Apigee hybrid, also get the email address of the apigee-mint-task-scheduler service account.

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-mint-task-scheduler"

        The output should look similar to the following:

        apigee-mint-task-scheduler   apigee-mint-task-scheduler@my-project.iam.gserviceaccount.com   False

        Non-prod

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-non-prod"

        The output should look similar to the following:

        apigee-non-prod     apigee-non-prod@my-project.iam.gserviceaccount.com         False
      2. List the org-scoped Kubernetes service accounts:
        kubectl get serviceaccount -n APIGEE_NAMESPACE | grep "apigee-connect-agent\|apigee-mart\|apigee-udca\|apigee-watcher"

        The output should look similar to the following:

        apigee-connect-agent-my-project-123abcd         0   1h4m
        apigee-mart-my-project-123abcd                  0   1h4m
        apigee-mint-task-scheduler-my-project-123abcd   0   1h3m
        apigee-udca-my-project-123abcd                  0   1h2m
        apigee-watcher-my-project-123abcd               0   1h1m
      3. Use the following commands to grant the org-scoped Kubernetes service accounts access to impersonate the associated IAM service accounts as follows:

        Production

        Connect agent KSA:apigee-connect-agent-ORG_NAME-ORG_HASH_ID Kubernetes service account toapigee-mart IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \APIGEE_MART_SA_EMAIL \    --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \    --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \  apigee-mart@my-project.iam.gserviceaccount.com \    --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-connect-agent-my-org-123abcd" \    --role=roles/iam.workloadIdentityUser

        MART KSA:apigee-mart-ORG_NAME-ORG_HASH_ID Kubernetes service account toapigee-mart IAM service account. MART and Connect agent use the same IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \APIGEE_MART_SA_EMAIL \    --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \    --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \  apigee-mart@my-project.iam.gserviceaccount.com \    --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-mart-my-org-123abcd" \    --role=roles/iam.workloadIdentityUser

        Mint task scheduler KSA: (if usingMonetization for Apigee hybrid)

        apigee-mint-task-scheduler-ORG_NAME-ORG_HASH_ID Kubernetes service account toapigee-mint-task-scheduler IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \APIGEE_MINT_TASK_SCHEDULER_SA_EMAIL \    --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \    --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \  apigee-mint-task-scheduler@my-project.iam.gserviceaccount.com \    --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-mint-task-scheduler-my-org-123abcd" \    --role=roles/iam.workloadIdentityUser

        Org-scoped UDCA KSA: apigee-udca-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-udca IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          APIGEE_UDCA_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-udca@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-udca-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        Watcher KSA: apigee-watcher-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-watcher IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          APIGEE_WATCHER_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-watcher@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-watcher-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        Non-prod

        Connect agent KSA: apigee-connect-agent-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-non-prod IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-connect-agent-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        MART KSA: apigee-mart-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-non-prod IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-mart-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        Mint task scheduler KSA: (if using Monetization for Apigee hybrid)

        apigee-mint-task-scheduler-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-non-prod IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-mint-task-scheduler-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        Org-scoped UDCA KSA: apigee-udca-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-non-prod IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-udca-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser

        Watcher KSA: apigee-watcher-ORG_NAME-ORG_HASH_ID Kubernetes service account to apigee-non-prod IAM service account.

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-watcher-my-org-123abcd" \
          --role=roles/iam.workloadIdentityUser
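Each binding above differs only in the Kubernetes service account name. As a sketch (using hypothetical values; substitute your own project number, pool ID, namespace, and KSA name), the `--member` principal can be composed like this:

```shell
# Hypothetical values -- substitute your own.
PROJECT_NUMBER="1234567890"
POOL_ID="my-pool"
APIGEE_NAMESPACE="apigee"
KSA_NAME="apigee-mart-my-org-123abcd"   # the Kubernetes service account to map

# A Kubernetes service account always maps to the subject
# system:serviceaccount:<namespace>:<ksa-name>.
MAPPED_SUBJECT="system:serviceaccount:${APIGEE_NAMESPACE}:${KSA_NAME}"

# Full principal identifier to pass in the --member flag.
MEMBER="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/subject/${MAPPED_SUBJECT}"

echo "${MEMBER}"
```

Building the string once and reusing it in each `gcloud iam service-accounts add-iam-policy-binding` call reduces the chance of a typo in the long principal URI.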
  8. Install the environment.

    You must install one environment at a time. Specify the environment with --set env=ENV_NAME. If you have set the $ENV_NAME environment variable in your shell, you can use it in the following commands:

    1. Dry run:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml \
        --dry-run=server

        ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique among the Helm release names in your installation. Usually it is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

    2. Install the chart:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set env=$ENV_NAME \
        -f overrides.yaml
    3. Verify that it is up and running by checking the state of the respective environment:

      kubectl -n APIGEE_NAMESPACE get apigeeenv

      NAME                       STATE     AGE    GATEWAYTYPE
      apigee-my-project-my-env   running   3m1s
    4. Grant the environment-scoped Kubernetes service accounts access to impersonate the associated IAM service accounts.
      1. List the email addresses of the IAM service accounts used by the apigee-runtime, apigee-synchronizer, and apigee-udca components:

        Production

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-runtime\|apigee-synchronizer\|apigee-udca"

        Non-prod

        gcloud iam service-accounts list --project PROJECT_ID | grep "apigee-non-prod"

        The output should look similar to the following:

        Production

        apigee-runtime         apigee-runtime@my-project.iam.gserviceaccount.com        False
        apigee-synchronizer    apigee-synchronizer@my-project.iam.gserviceaccount.com   False
        apigee-udca            apigee-udca@my-project.iam.gserviceaccount.com           False

        Non-prod

        apigee-non-prod     apigee-non-prod@my-project.iam.gserviceaccount.com         False
      2. List the environment-scoped Kubernetes service accounts:
        kubectl get serviceaccount -n APIGEE_NAMESPACE | grep "apigee-runtime\|apigee-synchronizer\|apigee-udca"

        The output should look similar to the following:

        apigee-runtime-my-project-my-env-cdef123           0   19m
        apigee-synchronizer-my-project-my-env-cdef123      0   17m
        apigee-udca-my-project-123abcd                     0   1h29m
        apigee-udca-my-project-my-env-cdef123              0   22m

        Note: You may see both org-scoped and environment-scoped apigee-udca Kubernetes service accounts. In this step, grant access to the environment-scoped apigee-udca service account.
      3. Use the following commands to grant the environment-scoped Kubernetes service accounts access to impersonate the associated IAM service accounts:

        Production

        Runtime KSA: apigee-runtime-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-runtime Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          RUNTIME_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-runtime@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-runtime-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser

        Synchronizer KSA: apigee-synchronizer-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-synchronizer Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          SYNCHRONIZER_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-synchronizer@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-synchronizer-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser

        UDCA KSA: apigee-udca-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-udca Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          UDCA_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-udca@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-udca-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser

        Non-prod

        Runtime KSA: apigee-runtime-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-non-prod Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-runtime-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser

        Synchronizer KSA: apigee-synchronizer-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-non-prod Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-synchronizer-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser

        UDCA KSA: apigee-udca-PROJECT_ID-ENV_NAME-ENV_HASH_ID-sa KSA to apigee-non-prod Google IAM SA

        Code

        gcloud iam service-accounts add-iam-policy-binding \
          NON_PROD_IAM_SA_EMAIL \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/MAPPED_SUBJECT" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-non-prod@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-udca-my-project-my-env-cdef123" \
          --role=roles/iam.workloadIdentityUser
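In a non-prod setup, all three environment-scoped KSAs bind to the single apigee-non-prod IAM service account, so the commands above can be generated in one loop. This sketch uses hypothetical values and echoes the commands rather than executing them, so you can review them before running:

```shell
# Hypothetical values -- substitute your own.
PROJECT_NUMBER="1234567890"
POOL_ID="my-pool"
NON_PROD_IAM_SA_EMAIL="apigee-non-prod@my-project.iam.gserviceaccount.com"
APIGEE_NAMESPACE="apigee"

# Generate one add-iam-policy-binding command per env-scoped KSA
# (names as shown by `kubectl get serviceaccount` in the previous step).
gen_bindings() {
  for KSA in \
      apigee-runtime-my-project-my-env-cdef123 \
      apigee-synchronizer-my-project-my-env-cdef123 \
      apigee-udca-my-project-my-env-cdef123; do
    MEMBER="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/subject/system:serviceaccount:${APIGEE_NAMESPACE}:${KSA}"
    echo "gcloud iam service-accounts add-iam-policy-binding ${NON_PROD_IAM_SA_EMAIL} --member=\"${MEMBER}\" --role=roles/iam.workloadIdentityUser"
  done
}

gen_bindings
```

Pipe the output through `sh` once you have verified the generated commands look correct.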
  9. Install the environment groups (virtualhosts).
    1. You must install one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP. If you have set the $ENV_GROUP environment variable in your shell, you can use it in the following commands. Repeat the following commands for each environment group mentioned in your overrides.yaml file:

      Dry run:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml \
        --dry-run=server
    2. Install the chart:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        --set envgroup=$ENV_GROUP \
        -f overrides.yaml
      Note: ENV_GROUP_RELEASE_NAME must be unique within the apigee namespace. For example, if you have an environment and an environment group both named prod, set the release name to something like prod-envgroup. The actual environment group name passed with --set envgroup should still be prod.
    3. Check the state of the ApigeeRoute (AR).

      Installing the virtualhosts chart creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group details from the control plane. Therefore, check that the corresponding AR's state is running:

      kubectl -n APIGEE_NAMESPACE get arc

      NAME                     STATE   AGE
      apigee-org1-dev-egroup           2m

      kubectl -n APIGEE_NAMESPACE get ar

      NAME                                                         STATE     AGE
      apigee-ingressgateway-internal-chaining-my-project-123abcd   running   19m
      my-project-myenvgroup-000-321dcba                            running   2m30s
Congratulations!

You've successfully installed and configured the Apigee hybrid runtime plane.

Next step

In the next step, you will configure the Apigee ingress gateway and deploy a proxy to test your installation.

(NEXT) Step 12: Expose Apigee ingress

Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.