Restoring in multiple regions


This page describes how to restore Cassandra in multiple regions.

In a multi-region deployment, Apigee hybrid is deployed in multiple geographic locations across different datacenters. Note that if you have multiple Apigee organizations in your deployment, the restore process restores data for all the organizations. In a multi-organization setup, restoring only a specific organization is not supported.

Restoring Cassandra

In a multi-region deployment, there are two possible ways to salvage a failed region. This topic describes the following approaches:

  • Recover failed region(s) - Describes the steps to recover failed region(s) based on a healthy region.
  • Restore failed region(s) - Describes the steps to restore failed region(s) from a backup. This approach is only required if all hybrid regions are impacted.

Recover failed region(s)

To recover failed region(s) from a healthy region, perform the following steps:

  1. Redirect the API traffic from the impacted region(s) to the healthy region. Plan capacity accordingly to support the traffic diverted from the failed region(s).
  2. Decommission the impacted region. For each impacted region, follow the steps outlined in Decommission a hybrid region. Wait for decommissioning to complete before moving on to the next step.

  3. Restore the impacted region. To restore, create a new region, as described in Multi-region deployment on GKE, GKE on-prem, and AKS. Once the new region is created, you can verify that it has joined the Cassandra ring, as shown in the sketch after this list.
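
The following is a minimal verification sketch, assuming the default Cassandra pod name apigee-cassandra-default-0 and the standard JMX credential environment variables; adjust both to match your installation:

    # Run nodetool status from an existing Cassandra pod.
    # Every node in every datacenter should report UN (Up/Normal).
    kubectl exec -n apigee apigee-cassandra-default-0 -- \
      nodetool -u ${APIGEE_JMX_USER} -pw ${APIGEE_JMX_PASSWORD} status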

Restoring from a backup

Note: If you want to preserve an existing setup for troubleshooting and root cause analysis (RCA), you should delete all the org and env components from the Kubernetes cluster except the Apigee controller, and retain the cluster. The cluster will contain the existing Apigee datastore (Cassandra), which you can use for troubleshooting. Create a new Kubernetes cluster and then restore Cassandra in the new cluster.

The Cassandra backup can reside either on Cloud Storage or on a remote server, depending on your configuration. The backup source and restore settings are declared in your overrides file; an illustrative sketch follows.
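
Treat the following as a sketch only: the nesting of restore under cassandra and the exact property names, such as snapshotTimestamp and serviceAccountPath, are assumptions to verify against the backup and restore reference for your hybrid version.

    # Illustrative overrides.yaml fragment for a Cloud Storage restore.
    # Property names and nesting are assumptions; confirm them against
    # the restore reference for your hybrid version.
    cassandra:
      restore:
        enabled: true
        snapshotTimestamp: "20210203004559"   # backup snapshot to restore
        serviceAccountPath: "/path/to/service-account.json"

To restore Cassandra from a backup, perform the following steps: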

  1. Delete the Apigee hybrid deployment from all the regions:
    apigeectl delete -f overrides.yaml
  2. Restore the desired region from a backup. For more information, see Restoring a region from a backup.

  3. Remove references to the deleted region(s) and add references to the restored region(s) in the KeySpaces metadata.
  4. Get the region name by using the nodetool status command (sample output is shown after this procedure).
    kubectl exec -n apigee -it apigee-cassandra-default-0 -- bash
    nodetool -u ${APIGEE_JMX_USER} -pw ${APIGEE_JMX_PASSWORD} status | grep -i Datacenter
  5. Update the KeySpaces replication.
    1. Create a client container and connect to the Cassandra cluster through the CQL interface.
    2. Get the list of user keyspaces from CQL interface:
      cqlsh ${CASSANDRA_SEEDS} -u ${CASS_USERNAME} -p ${CASS_PASSWORD} --ssl \
        -e "select keyspace_name from system_schema.keyspaces;" | grep -v system
    3. For each keyspace, run the following command from the CQL interface to update the replication settings:
      ALTER KEYSPACE KEYSPACE_NAME WITH replication = {'class': 'NetworkTopologyStrategy', 'REGION_NAME':3};

      where:

      • KEYSPACE_NAME is the name of the keyspace listed in the previous step's output.
      • REGION_NAME is the region name obtained in Step 4.
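
As a worked example of steps 4 and 5, suppose nodetool reports a restored datacenter named dc-1 and the keyspace listing returns the keyspaces shown below. All names here are hypothetical; your datacenter and keyspace names will differ:

    # Hypothetical output from step 4 (the datacenter name will differ):
    Datacenter: dc-1

    # Hypothetical keyspace listing from step 5 (org-specific names will differ):
    kms_myorg_hybrid
    kvm_myorg_hybrid
    cache_myorg_hybrid
    quota_myorg_hybrid

    # The corresponding ALTER KEYSPACE for the first keyspace would then be:
    ALTER KEYSPACE kms_myorg_hybrid WITH replication = {'class': 'NetworkTopologyStrategy', 'dc-1': 3};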
