Configure dedicated node pools
About node pools
A node pool is a group of nodes within a cluster that all have the same configuration. Typically, you define separate node pools when you have pods with differing resource requirements. For example, the apigee-cassandra pods require persistent storage, while the other Apigee hybrid pods do not.
This topic discusses how to configure dedicated node pools for a hybrid installation.
Using the default nodeSelectors
The best practice is to set up two dedicated node pools: one for the Cassandra pods and one for all the other runtime pods. Using the default nodeSelector configuration, the installer assigns the Cassandra pods to a stateful node pool named apigee-data and all the other pods to a stateless node pool named apigee-runtime. All you have to do is create node pools with these names, and Apigee hybrid handles the pod scheduling details for you:
| Default node pool name | Description |
|---|---|
| apigee-data | A stateful node pool. |
| apigee-runtime | A stateless node pool. |
Following is the default nodeSelector configuration. The apigeeData property specifies a node pool for the Cassandra pods. The apigeeRuntime property specifies the node pool for all the other pods. You can override these default settings in your overrides file, as explained later in this topic:
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
Again, to ensure your pods are scheduled on the correct nodes, all you have to do is create two node pools with the names apigee-data and apigee-runtime.
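For example, on GKE you might create the two pools with gcloud. The following is a minimal sketch only; CLUSTER_NAME, ZONE, machine type, and node counts are placeholders to replace with values sized for your installation:

# Sketch: create the two default node pools on an existing GKE cluster.
# CLUSTER_NAME, ZONE, machine type, and node counts are placeholders.
gcloud container node-pools create apigee-data \
    --cluster=CLUSTER_NAME --zone=ZONE \
    --machine-type=n1-standard-4 --num-nodes=3

gcloud container node-pools create apigee-runtime \
    --cluster=CLUSTER_NAME --zone=ZONE \
    --machine-type=n1-standard-4 --num-nodes=3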
The requiredForScheduling property
The nodeSelector config section has a property called requiredForScheduling:
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
When requiredForScheduling is false (the default), underlying pods will be scheduled whether or not node pools are defined with the required names. This means that if you forget to create node pools, or if you accidentally give a node pool a name other than apigee-runtime or apigee-data, the hybrid runtime installation will succeed. Kubernetes will decide where to run your pods. If you set requiredForScheduling to true, the installation will fail unless there are node pools that match the configured nodeSelector keys and values.
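For example, an overrides file that must schedule strictly onto the default pools could use the following stanza; this is a sketch that is identical to the default configuration except for the flag:

# Sketch: strict scheduling. Installation fails if node pools with
# these labels do not exist in the cluster.
nodeSelector:
  requiredForScheduling: true
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"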
Set requiredForScheduling to true for a production environment.

Using custom node pool names
If you don't want to use node pools with the default names, you can create node pools with custom names and specify those names in the nodeSelector stanza. For example, the following configuration assigns the Cassandra pods to the pool named my-cassandra-pool and all other pods to the pool named my-runtime-pool:
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "my-runtime-pool"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "my-cassandra-pool"
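Before enabling requiredForScheduling against custom pool names, you can confirm that the nodes actually carry the matching label. For example, assuming kubectl access to the cluster:

# Show each node with its GKE node pool label as an extra column.
kubectl get nodes -L cloud.google.com/gke-nodepool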
Overriding the node pool for specific components on GKE
You can also override node pool configurations at the individual component level. For example, the following configuration assigns the node pool with the value apigee-custom to the runtime component:
runtime:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-custom
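After applying an override like this, one way to confirm that the runtime pods landed on the intended nodes is to list the pods along with their node assignments:

# Show each pod in the apigee namespace together with the node it runs on.
kubectl -n apigee get pods -o wide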
You can specify a custom node pool on any of these components:
- istio
- mart
- synchronizer
- runtime
- cassandra
- udca
- logger
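For instance, the same pattern applied to the cassandra component might look like the sketch below, where apigee-cassandra-custom is a hypothetical pool name used only for illustration:

# Sketch: pin only the Cassandra pods to a hypothetical custom pool.
cassandra:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-cassandra-custom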
GKE node pool configuration
In GKE, node pools must have a unique name that you provide when you create the pools, and GKE automatically labels each node with the following:
cloud.google.com/gke-nodepool=the_node_pool_name
As long as you create node pools named apigee-data and apigee-runtime, no further configuration is required. If you want to use custom node names, see Using custom node pool names.
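To double-check that both pools exist with the expected names, you can list the cluster's node pools; CLUSTER_NAME and ZONE are placeholders:

# List node pools; the output should include apigee-data and apigee-runtime.
gcloud container node-pools list --cluster=CLUSTER_NAME --zone=ZONE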
Anthos node pool configuration
Apigee hybrid is currently supported only on Anthos 1.1.1. This version of Anthos does not support the node pool feature; therefore, you must manually label the worker nodes as explained below. Perform the following steps once your hybrid cluster is up and running:
- Run the following command to get a list of the worker nodes in your cluster:
kubectl -n apigee get nodes
Example output:
NAME                   STATUS   ROLES    AGE   VERSION
apigee-092d639a-4hqt   Ready    <none>   7d    v1.14.6-gke.2
apigee-092d639a-ffd0   Ready    <none>   7d    v1.14.6-gke.2
apigee-109b55fc-5tjf   Ready    <none>   7d    v1.14.6-gke.2
apigee-c2a9203a-8h27   Ready    <none>   7d    v1.14.6-gke.2
apigee-c70aedae-t366   Ready    <none>   7d    v1.14.6-gke.2
apigee-d349e89b-hv2b   Ready    <none>   7d    v1.14.6-gke.2
- Label each node to differentiate between runtime nodes and data nodes. Be sure to choose the nodes so that they are equally distributed among availability zones (AZs).
Use this command to label the nodes:
kubectl label node node_name key=value
For example:
$ kubectl label node apigee-092d639a-4hqt apigee.com/apigee-nodepool=apigee-runtime
$ kubectl label node apigee-092d639a-ffd0 apigee.com/apigee-nodepool=apigee-runtime
$ kubectl label node apigee-109b55fc-5tjf apigee.com/apigee-nodepool=apigee-runtime
$ kubectl label node apigee-c2a9203a-8h27 apigee.com/apigee-nodepool=apigee-data
$ kubectl label node apigee-c70aedae-t366 apigee.com/apigee-nodepool=apigee-data
$ kubectl label node apigee-d349e89b-hv2b apigee.com/apigee-nodepool=apigee-data
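You can then verify that every worker node carries the label before proceeding, for example:

# Show the manually applied Apigee node pool label on each node.
kubectl get nodes -L apigee.com/apigee-nodepool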
Overriding the node pool for specific components on Anthos GKE
You can also override node pool configurations at the individual component level for an Anthos GKE installation. For example, the following configuration assigns the node pool with the value apigee-custom to the runtime component:
runtime:
  nodeSelector:
    key: apigee.com/apigee-nodepool
    value: apigee-custom
You can specify a custom node pool on any of these components:
- istio
- mart
- synchronizer
- runtime
- cassandra
- udca
- logger