Assigning Pods to Nodes
You can constrain a Pod so that it is restricted to run on particular node(s), or to prefer to run on particular nodes. There are several ways to do this and the recommended approaches all use label selectors to facilitate the selection. Often, you do not need to set any such constraints; the scheduler will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources). However, there are some circumstances where you may want to control which node the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different services that communicate a lot into the same availability zone.
You can use any of the following methods to choose where Kubernetes schedules specific Pods:
- nodeSelector field matching against node labels
- Affinity and anti-affinity
- nodeName field
- Pod topology spread constraints
Node labels
Like many other Kubernetes objects, nodes have labels. You can attach labels manually. Kubernetes also populates a standard set of labels on all nodes in a cluster.
Note:
The value of these labels is cloud provider specific and is not guaranteed to be reliable. For example, the value of kubernetes.io/hostname may be the same as the node name in some environments and a different value in other environments.

Node isolation/restriction
Adding labels to nodes allows you to target Pods for scheduling on specific nodes or groups of nodes. You can use this functionality to ensure that specific Pods only run on nodes with certain isolation, security, or regulatory properties.
If you use labels for node isolation, choose label keys that the kubelet cannot modify. This prevents a compromised node from setting those labels on itself so that the scheduler schedules workloads onto the compromised node.
The NodeRestriction admission plugin prevents the kubelet from setting or modifying labels with a node-restriction.kubernetes.io/ prefix.
To make use of that label prefix for node isolation:
- Ensure you are using the Node authorizer and have enabled the NodeRestriction admission plugin.
- Add labels with the node-restriction.kubernetes.io/ prefix to your nodes, and use those labels in your node selectors. For example, example.com.node-restriction.kubernetes.io/fips=true or example.com.node-restriction.kubernetes.io/pci-dss=true.
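As an illustration of the label prefix described above, a Pod that should only land on nodes carrying such a kubelet-protected label could reference it in its node selector. This is a minimal sketch, assuming you have already added the example.com.node-restriction.kubernetes.io/fips=true label to the intended nodes; the Pod name and container are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fips-only-workload   # hypothetical name
spec:
  nodeSelector:
    # only schedule onto nodes that carry the kubelet-protected label
    example.com.node-restriction.kubernetes.io/fips: "true"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
```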
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
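For instance, a Pod that should only run on SSD-backed nodes might look like the following sketch, assuming the relevant nodes have been labeled with disktype=ssd (an illustrative label, not one Kubernetes sets for you):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # assumed node label; adjust to match your own labels
```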
See Assign Pods to Nodes for more information.
Affinity and anti-affinity
nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. Some of the benefits of affinity and anti-affinity include:
- The affinity/anti-affinity language is more expressive. nodeSelector only selects nodes with all the specified labels. Affinity/anti-affinity gives you more control over the selection logic.
- You can indicate that a rule is soft or preferred, so that the scheduler still schedules the Pod even if it can't find a matching node.
- You can constrain a Pod using labels on other Pods running on the node (or other topological domain), instead of just node labels, which allows you to define rules for which Pods can be co-located on a node.
The affinity feature consists of two types of affinity:
- Node affinity functions like the nodeSelector field but is more expressive and allows you to specify soft rules.
- Inter-pod affinity/anti-affinity allows you to constrain Pods against labels on other Pods.
Node affinity
Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels. There are two types of node affinity:
- requiredDuringSchedulingIgnoredDuringExecution: The scheduler can't schedule the Pod unless the rule is met. This functions like nodeSelector, but with a more expressive syntax.
- preferredDuringSchedulingIgnoredDuringExecution: The scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.
Note:
In the preceding types, IgnoredDuringExecution means that if the node labels change after Kubernetes schedules the Pod, the Pod continues to run.

You can specify node affinities using the .spec.affinity.nodeAffinity field in your Pod spec.
For example, consider the following Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:3.8
```

In this example, the following rules apply:
- The node must have a label with the key topology.kubernetes.io/zone and the value of that label must be either antarctica-east1 or antarctica-west1.
- The node preferably has a label with the key another-node-label-key and the value another-node-label-value.
You can use the operator field to specify a logical operator for Kubernetes to use when interpreting the rules. You can use In, NotIn, Exists, DoesNotExist, Gt and Lt.
Read Operators to learn more about how these work.
NotIn and DoesNotExist allow you to define node anti-affinity behavior. Alternatively, you can use node taints to repel Pods from specific nodes.
Note:
If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.
If you specify multiple terms in nodeSelectorTerms associated with nodeAffinity types, then the Pod can be scheduled onto a node if one of the specified terms can be satisfied (terms are ORed).
If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed).
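As a sketch of how those two rules combine, the following hypothetical spec can schedule onto a node that either is in zone antarctica-east1, or carries both of the made-up labels label-1=value-1 and label-2=value-2:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: or-and-example   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:                  # first term
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
        - matchExpressions:                  # second term, ORed with the first
          - key: label-1                     # assumed label key
            operator: In
            values:
            - value-1
          - key: label-2                     # ANDed with the expression above
            operator: In
            values:
            - value-2
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```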
See Assign Pods to Nodes using Node Affinity for more information.
Node affinity weight
You can specify a weight between 1 and 100 for each instance of the preferredDuringSchedulingIgnoredDuringExecution affinity type. When the scheduler finds nodes that meet all the other scheduling requirements of the Pod, the scheduler iterates through every preferred rule that the node satisfies and adds the value of the weight for that expression to a sum.
The final sum is added to the score of other priority functions for the node. Nodes with the highest total score are prioritized when the scheduler makes a scheduling decision for the Pod.
For example, consider the following Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-affinity-preferred-weight
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: label-1
            operator: In
            values:
            - key-1
      - weight: 50
        preference:
          matchExpressions:
          - key: label-2
            operator: In
            values:
            - key-2
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:3.8
```

If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers the weight of each node and adds the weight to the other scores for that node, and schedules the Pod onto the node with the highest final score.
Note:
If you want Kubernetes to successfully schedule the Pods in this example, you must have existing nodes with the kubernetes.io/os=linux label.

Node affinity per scheduling profile
Kubernetes v1.20 [beta]

When configuring multiple scheduling profiles, you can associate a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes. To do so, add an addedAffinity to the args field of the NodeAffinity plugin in the scheduler configuration. For example:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: foo-scheduler
  pluginConfig:
  - name: NodeAffinity
    args:
      addedAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: scheduler-profile
              operator: In
              values:
              - foo
```

The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to satisfy addedAffinity and the Pod's .spec.NodeAffinity.
Since the addedAffinity is not visible to end users, its behavior might be unexpected to them. Use node labels that have a clear correlation to the scheduler profile name.
Note:
The DaemonSet controller, which creates Pods for DaemonSets, does not support scheduling profiles. When the DaemonSet controller creates Pods, the default Kubernetes scheduler places those Pods and honors any nodeAffinity rules in the DaemonSet controller.

Inter-pod affinity and anti-affinity
Inter-pod affinity and anti-affinity allow you to constrain which nodes your Pods can be scheduled on based on the labels of Pods already running on that node, instead of the node labels.
Types of Inter-pod Affinity and Anti-affinity
Inter-pod affinity and anti-affinity take the form "this Pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more Pods that meet rule Y", where X is a topology domain like node, rack, cloud provider zone or region, or similar and Y is the rule Kubernetes tries to satisfy.
You express these rules (Y) as label selectors with an optional associated list of namespaces. Pods are namespaced objects in Kubernetes, so Pod labels also implicitly have namespaces. Any label selectors for Pod labels should specify the namespaces in which Kubernetes should look for those labels.
You express the topology domain (X) using a topologyKey, which is the key for the node label that the system uses to denote the domain. For examples, see Well-Known Labels, Annotations and Taints.
Note:
Inter-pod affinity and anti-affinity require substantial amounts of processing which can slow down scheduling in large clusters significantly. We do not recommend using them in clusters larger than several hundred nodes.

Note:
Pod anti-affinity requires nodes to be consistently labeled, in other words, every node in the cluster must have an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior.

Similar to node affinity are two types of Pod affinity and anti-affinity as follows:
- requiredDuringSchedulingIgnoredDuringExecution
- preferredDuringSchedulingIgnoredDuringExecution
For example, you could use requiredDuringSchedulingIgnoredDuringExecution affinity to tell the scheduler to co-locate Pods of two services in the same cloud provider zone because they communicate with each other a lot. Similarly, you could use preferredDuringSchedulingIgnoredDuringExecution anti-affinity to spread Pods from a service across multiple cloud provider zones.
To use inter-pod affinity, use the affinity.podAffinity field in the Pod spec. For inter-pod anti-affinity, use the affinity.podAntiAffinity field in the Pod spec.
Scheduling Behavior
When scheduling a new Pod, the Kubernetes scheduler evaluates the Pod's affinity/anti-affinity rules in the context of the current cluster state:
Hard Constraints (Node Filtering):
- podAffinity.requiredDuringSchedulingIgnoredDuringExecution and podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution:
  - The scheduler ensures the new Pod is assigned to nodes that satisfy these required affinity and anti-affinity rules based on existing Pods.
Soft Constraints (Scoring):
- podAffinity.preferredDuringSchedulingIgnoredDuringExecution and podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution:
  - The scheduler scores nodes based on how well they meet these preferred affinity and anti-affinity rules to optimize Pod placement.
Ignored Fields:
- Existing Pods' podAffinity.preferredDuringSchedulingIgnoredDuringExecution:
  - These preferred affinity rules are not considered during the scheduling decision for new Pods.
- Existing Pods' podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution:
  - Similarly, preferred anti-affinity rules of existing Pods are ignored during scheduling.
Scheduling a Group of Pods with Inter-pod Affinity to Themselves
If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that no other Pod in the cluster matches the namespace and selector of this Pod, that the Pod matches its own terms, and the chosen node matches all requested topologies. This ensures that there will not be a deadlock even if all the Pods have inter-pod affinity specified.
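As a hypothetical illustration of this behavior, consider a Deployment whose replicas declare required affinity to their own app=backend label (an assumed label); the first replica can still be placed because no matching Pod exists yet, and later replicas then gravitate to the same zone:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend      # the Pods match their own affinity rule
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.8
```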
Pod Affinity Example
Consider the following Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.8
```

This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the "hard" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the "soft" preferredDuringSchedulingIgnoredDuringExecution.
The affinity rule specifies that the scheduler is allowed to place the example Pod on a node only if that node belongs to a specific zone where other Pods have been labeled with security=S1. For instance, if we have a cluster with a designated zone, let's call it "Zone V," consisting of nodes labeled with topology.kubernetes.io/zone=V, the scheduler can assign the Pod to any node within Zone V, as long as there is at least one Pod within Zone V already labeled with security=S1. Conversely, if there are no Pods with security=S1 labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod on a node if that node belongs to a specific zone where other Pods have been labeled with security=S2. For instance, if we have a cluster with a designated zone, let's call it "Zone R," consisting of nodes labeled with topology.kubernetes.io/zone=R, the scheduler should avoid assigning the Pod to any node within Zone R, as long as there is at least one Pod within Zone R already labeled with security=S2. Conversely, the anti-affinity rule does not impact scheduling into Zone R if there are no Pods with security=S2 labels.
To familiarize yourself further with examples of Pod affinity and anti-affinity, refer to the design proposal.
You can use the In, NotIn, Exists and DoesNotExist values in the operator field for Pod affinity and anti-affinity.
Read Operators to learn more about how these work.
In principle, the topologyKey can be any allowed label key with the following exceptions for performance and security reasons:
- For Pod affinity and anti-affinity, an empty topologyKey field is not allowed in both requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution.
- For requiredDuringSchedulingIgnoredDuringExecution Pod anti-affinity rules, the admission controller LimitPodHardAntiAffinityTopology limits topologyKey to kubernetes.io/hostname. You can modify or disable the admission controller if you want to allow custom topologies.
In addition to labelSelector and topologyKey, you can optionally specify a list of namespaces which the labelSelector should match against using the namespaces field at the same level as labelSelector and topologyKey. If omitted or empty, namespaces defaults to the namespace of the Pod where the affinity/anti-affinity definition appears.
Namespace Selector
Kubernetes v1.24 [stable]

You can also select matching namespaces using namespaceSelector, which is a label query over the set of namespaces. The affinity term is applied to namespaces selected by both namespaceSelector and the namespaces field. Note that an empty namespaceSelector ({}) matches all namespaces, while a null or empty namespaces list and null namespaceSelector matches the namespace of the Pod where the rule is defined.
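For example, an affinity term that only considers Pods labeled security=S1 in namespaces labeled team=blue might be sketched as follows; both labels are illustrative assumptions rather than standard Kubernetes labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-namespace-selector   # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        namespaceSelector:
          matchLabels:
            team: blue            # assumed namespace label
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```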
matchLabelKeys
Kubernetes v1.33 [stable] (enabled by default)

Note:
The matchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. To disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate.
Kubernetes includes an optional matchLabelKeys field for Pod affinity or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels, when satisfying the Pod (anti)affinity.
The keys are used to look up values from the Pod labels; those key-value labels are combined (using AND) with the match restrictions defined using the labelSelector field. The combined filtering selects the set of existing Pods that will be taken into Pod (anti)affinity calculation.
Caution:
It's not recommended to use matchLabelKeys with labels that might be updated directly on pods. Even if you edit the pod's label specified in matchLabelKeys directly (that is, not via a Deployment), kube-apiserver doesn't reflect the label update onto the merged labelSelector.

A common use case is to use matchLabelKeys with pod-template-hash (set on Pods managed as part of a Deployment, where the value is unique for each revision). Using pod-template-hash in matchLabelKeys allows you to target the Pods that belong to the same revision as the incoming Pod, so that a rolling upgrade won't break affinity.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-server
...
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - database
            topologyKey: topology.kubernetes.io/zone
            # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
            # If you update the Deployment, the replacement Pods follow their own affinity rules
            # (if there are any defined in the new Pod template)
            matchLabelKeys:
            - pod-template-hash
```

mismatchLabelKeys
Kubernetes v1.33 [stable] (enabled by default)

Note:
The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. To disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate.
Kubernetes includes an optional mismatchLabelKeys field for Pod affinity or anti-affinity. The field specifies keys for the labels that should not match with the incoming Pod's labels, when satisfying the Pod (anti)affinity.
Caution:
It's not recommended to use mismatchLabelKeys with labels that might be updated directly on pods. Even if you edit the pod's label specified in mismatchLabelKeys directly (that is, not via a Deployment), kube-apiserver doesn't reflect the label update onto the merged labelSelector.

One example use case is to ensure Pods go to a topology domain (node, zone, etc.) where only Pods from the same tenant or team are scheduled. In other words, you want to avoid running Pods from two different tenants on the same topology domain at the same time.
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    # Assume that all relevant Pods have a "tenant" label set
    tenant: tenant-a
...
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # ensure that Pods associated with this tenant land on the correct node pool
      - matchLabelKeys:
        - tenant
        labelSelector: {}
        topologyKey: node-pool
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant
      - mismatchLabelKeys:
        - tenant # whatever the value of the "tenant" label for this Pod, prevent
                 # scheduling to nodes in any pool where any Pod from a different
                 # tenant is running.
        labelSelector:
          # We have to have the labelSelector which selects only Pods with the tenant label,
          # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,
          # which aren't supposed to have the tenant label.
          matchExpressions:
          - key: tenant
            operator: Exists
        topologyKey: node-pool
```

More practical use-cases
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure that a set of workloads should be co-located in the same defined topology; for example, preferring to place two related Pods onto the same node.
For example: imagine a three-node cluster. You use the cluster to run a web application and also an in-memory cache (such as Redis). For this example, also assume that latency between the web application and the memory cache should be as low as is practical. You could use inter-pod affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
In the following example Deployment for the Redis cache, the replicas get the label app=store. The podAntiAffinity rule tells the scheduler to avoid placing multiple replicas with the app=store label on a single node. This places each cache on a separate node.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
```

The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the label app=store. The Pod anti-affinity rule tells the scheduler never to place multiple app=web-store servers on a single node.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine
```

Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.
| node-1 | node-2 | node-3 |
|---|---|---|
| webserver-1 | webserver-2 | webserver-3 |
| cache-1 | cache-2 | cache-3 |
The overall effect is that each cache instance is likely to be accessed by a single client that is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
You might have other reasons to use Pod anti-affinity. See the ZooKeeper tutorial for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique as this example.
nodeName
nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules.
Some of the limitations of using nodeName to select nodes are:
- If the named node does not exist, the Pod will not run, and in some cases may be automatically deleted.
- If the named node does not have the resources to accommodate the Pod, the Pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu.
- Node names in cloud environments are not always predictable or stable.
Warning:
nodeName is intended for use by custom schedulers or advanced use cases where you need to bypass any configured schedulers. Bypassing the schedulers might lead to failed Pods if the assigned Nodes get oversubscribed. You can use node affinity or the nodeSelector field to assign a Pod to a specific Node without bypassing the schedulers.

Here is an example of a Pod spec using the nodeName field:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01
```

The above Pod will only run on the node kube-01.
nominatedNodeName
Kubernetes v1.34 [alpha] (disabled by default)

nominatedNodeName can be used by external components to nominate a node for a pending pod. This nomination is best effort: it might be ignored if the scheduler determines the pod cannot go to the nominated node.
Also, this field can be (over)written by the scheduler:
- If the scheduler finds a node to nominate via preemption.
- If the scheduler decides where the pod is going and moves it to the binding cycle.
  - Note that, in this case, nominatedNodeName is set only when the pod has to go through the WaitOnPermit or PreBind extension points.
Here is an example of a Pod status using the nominatedNodeName field:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
...
status:
  nominatedNodeName: kube-01
```

Pod topology spread constraints
You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You might do this to improve performance, expected availability, or overall utilization.
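As a brief sketch, a Pod that asks to be spread evenly across zones together with other Pods sharing an assumed app=foo label might declare:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example   # hypothetical name
  labels:
    app: foo             # assumed label shared by the replicas being spread
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: foo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```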
Read Pod topology spread constraints to learn more about how these work.
Operators
The following are all the logical operators that you can use in the operator field for nodeAffinity and podAffinity mentioned above.
| Operator | Behavior |
|---|---|
| In | The label value is present in the supplied set of strings |
| NotIn | The label value is not contained in the supplied set of strings |
| Exists | A label with this key exists on the object |
| DoesNotExist | No label with this key exists on the object |
The following operators can only be used with nodeAffinity.
| Operator | Behavior |
|---|---|
| Gt | The field value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
| Lt | The field value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
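For example, a node affinity term using Gt might require nodes whose hypothetical cpu-count label parses to an integer greater than 8:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-operator-example   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu-count    # assumed numeric node label
            operator: Gt
            values:
            - "8"
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```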
Note:
Gt and Lt operators will not work with non-integer values. If the given value doesn't parse as an integer, the Pod will fail to get scheduled. Also, Gt and Lt are not available for podAffinity.

What's next
- Read more about taints and tolerations.
- Read the design docs for node affinity and for inter-pod affinity/anti-affinity.
- Learn about how the topology manager takes part in node-level resource allocation decisions.
- Learn how to use nodeSelector.
- Learn how to use affinity and anti-affinity.