Chapter 14. Scheduling resources
14.1. Using node selectors to move logging resources
A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods.
For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node.
14.1.1. About node selectors
You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels.
You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes.
For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east, us-central, or us-west. In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west. The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes.
A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label.
If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes (a combined example sketch follows the list):
- If you configure both nodeSelector and nodeAffinity, both conditions must be satisfied for the pod to be scheduled onto a candidate node.
- If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.
- If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
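As a sketch, a pod spec that sets both fields might look like the following; it reuses the type=user-node and region=east labels from the examples in this section, and the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-selector-and-affinity
spec:
  nodeSelector:                      # must be satisfied together with nodeAffinity
    region: east
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:           # satisfying any one term is enough
        - matchExpressions:          # all expressions within a term must be satisfied
          - key: type
            operator: In
            values:
            - user-node
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]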
- Node selectors on specific pods and nodes
You can control which node a specific pod is scheduled on by using node selectors and labels.
To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod.
Note: You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as a deployment config.
For example, the following Node object has the region: east label:

Sample Node object with a label
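A minimal sketch of such a node; the node name and any labels other than type and region are illustrative:

apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-131-14.ec2.internal
  labels:
    node-role.kubernetes.io/worker: ""
    type: user-node   # 1
    region: east      # 1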
1 Labels to match the pod node selector.
A pod has the type: user-node, region: east node selector:

Sample Pod object with node selectors
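A minimal sketch of such a pod; the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: s1
spec:
  nodeSelector:        # 1
    type: user-node
    region: east
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]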
1 Node selectors to match the node label. The node must have a label for each node selector.
When you create the pod using the example pod spec, it can be scheduled on the example node.
- Default cluster-wide node selectors
With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels.
For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors:

Example Scheduler Operator Custom Resource
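A sketch of such a Scheduler resource; the cluster-wide selector is set in the defaultNodeSelector field of the cluster Scheduler object:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: type=user-node,region=east
  mastersSchedulable: false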
A node in that cluster has the type=user-node, region=east labels:
Example Node object
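A minimal sketch of such a node; the node name matches the one in the pod list below, and any other labels are omitted:

apiVersion: v1
kind: Node
metadata:
  name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4
  labels:
    type: user-node
    region: east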
Example Pod object with a node selector

When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node:
Example pod list with the pod on the labeled node
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE                                       NOMINATED NODE   READINESS GATES
pod-s1   1/1     Running   0          20s   10.131.2.6   ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4   <none>           <none>

Note: If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector.
- Project node selectors
With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference.
For example, the following project has the region=east node selector:

Example Namespace object
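A sketch of such a namespace; the project node selector is set with the openshift.io/node-selector annotation, and the project name is illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: east-region
  annotations:
    openshift.io/node-selector: "region=east"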
The following node has the type=user-node, region=east labels:

Example Node object
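A minimal sketch of such a node; the node name matches the one in the pod list below:

apiVersion: v1
kind: Node
metadata:
  name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4
  labels:
    type: user-node
    region: east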
When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node:
Example Pod object
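A minimal sketch of such a pod; it does not set its own node selector, so the project node selector is applied when the pod is created (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-s1
spec:
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]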
Example pod list with the pod on the labeled node
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE                                       NOMINATED NODE   READINESS GATES
pod-s1   1/1     Running   0          20s   10.131.2.6   ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4   <none>           <none>

A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not created:
Example Pod object with an invalid node selector
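A sketch of a pod whose node selector conflicts with the project node selector; the region=west value and pod name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: west-region
spec:
  nodeSelector:
    region: west    # conflicts with the project node selector region=east
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]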
14.1.2. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
Example LokiStack with node selectors
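A sketch of such a LokiStack CR; it assumes a LokiStack named logging-loki in the openshift-logging namespace, and fields outside spec.template are omitted:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
# ...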
In the previous example configuration, all Loki pods are moved to nodes containing the node-role.kubernetes.io/infra: "" label.
Example LokiStack CR with node selectors and tolerations
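A sketch that extends the previous example with tolerations for a matching taint; the reserved value is illustrative, and the same nodeSelector and tolerations would be repeated for each component:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
# ... (repeat for gateway, indexGateway, ingester, querier, queryFrontend, and ruler)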
To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:
$ oc explain lokistack.spec.template
Example output
For more detailed information, you can add a specific field:
$ oc explain lokistack.spec.template.compactor
Example output
14.1.3. Configuring resources and scheduling for logging collectors
Administrators can modify the resources or scheduling of the collector by creating a ClusterLogging custom resource (CR) that is in the same namespace and has the same name as the ClusterLogForwarder CR that it supports.
The applicable stanzas for the ClusterLogging CR when using multiple log forwarders in a deployment are managementState and collection. All other stanzas are ignored.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator version 5.8 or newer.
- You have created a ClusterLogForwarder CR.
Procedure
1. Create a ClusterLogging CR that supports your existing ClusterLogForwarder CR:

Example ClusterLogging CR YAML
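A sketch of such a CR; the metadata must match your ClusterLogForwarder CR, and the toleration, resource, and node selector values shown are illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: <name>            # same name as the ClusterLogForwarder CR
  namespace: <namespace>  # same namespace as the ClusterLogForwarder CR
spec:
  managementState: "Managed"
  collection:
    type: "vector"
    tolerations:
    - key: "logging"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 6000
    resources:
      limits:
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 1Gi
    nodeSelector:
      collector: needed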
2. Apply the ClusterLogging CR by running the following command:

$ oc apply -f <filename>.yaml
14.1.4. Viewing logging collector pods
You can view the logging collector pods and the corresponding nodes that they are running on.
Procedure
Run the following command in a project to view the logging collector pods and their details:
$ oc get pods --selector component=collector -o wide -n <project_name>

Example output
14.2. Using taints and tolerations to control logging pod placement
Taints and tolerations allow nodes to control which pods should (or should not) be scheduled on them.
14.2.1. Understanding taints and tolerations
A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration.
You apply taints to a node through the Node specification (NodeSpec) and apply tolerations to a pod through the Pod specification (PodSpec). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint.
Example taint in a node specification
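A minimal sketch; the key1 and value1 names and the node name are placeholders:

apiVersion: v1
kind: Node
metadata:
  name: my-node
# ...
spec:
  taints:
  - key: key1
    value: value1
    effect: NoExecute
# ...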
Example toleration in a Pod spec
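A minimal sketch that tolerates the taint in the previous example; tolerationSeconds is optional and only applies to the NoExecute effect:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
# ...
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600
# ...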
Taints and tolerations consist of a key, value, and effect.
| Parameter | Description |
| --- | --- |
| key | The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores. |
| value | The value is any string, up to 63 characters. The value must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores. |
| effect | The effect is one of the following. NoSchedule: new pods that do not match the taint are not scheduled onto the node, and existing pods on the node remain. PreferNoSchedule: new pods that do not match the taint might be scheduled onto the node, but the scheduler tries not to, and existing pods on the node remain. NoExecute: new pods that do not match the taint cannot be scheduled onto the node, and existing pods on the node that do not have a matching toleration are removed. |
| operator | Equal: the key, value, and effect parameters must match. This is the default. Exists: the key and effect parameters must match. You must leave a blank value parameter, which matches any. |
If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.

For example:
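A sketch of the relevant part of such a node specification:

spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
# ...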
A toleration matches a taint:
- If the operator parameter is set to Equal:
  - the key parameters are the same;
  - the value parameters are the same;
  - the effect parameters are the same.
- If the operator parameter is set to Exists:
  - the key parameters are the same;
  - the effect parameters are the same.
The following taints are built into OpenShift Container Platform:
- node.kubernetes.io/not-ready: The node is not ready. This corresponds to the node condition Ready=False.
- node.kubernetes.io/unreachable: The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown.
- node.kubernetes.io/memory-pressure: The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True.
- node.kubernetes.io/disk-pressure: The node has disk pressure issues. This corresponds to the node condition DiskPressure=True.
- node.kubernetes.io/network-unavailable: The node network is unavailable.
- node.kubernetes.io/unschedulable: The node is unschedulable.
- node.cloudprovider.kubernetes.io/uninitialized: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
- node.kubernetes.io/pid-pressure: The node has pid pressure. This corresponds to the node condition PIDPressure=True.

  Important: OpenShift Container Platform does not set a default pid.available evictionHard.
14.2.2. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
Example LokiStack with node selectors
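A sketch of such a LokiStack CR; it assumes a LokiStack named logging-loki in the openshift-logging namespace, and fields outside spec.template are omitted:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
# ...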
In the previous example configuration, all Loki pods are moved to nodes containing the node-role.kubernetes.io/infra: "" label.
Example LokiStack CR with node selectors and tolerations
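A sketch that extends the previous example with tolerations for a matching taint; the reserved value is illustrative, and the same nodeSelector and tolerations would be repeated for each component:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
# ... (repeat for gateway, indexGateway, ingester, querier, queryFrontend, and ruler)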
To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:
$ oc explain lokistack.spec.template
Example output
For more detailed information, you can add a specific field:
$ oc explain lokistack.spec.template.compactor
Example output
14.2.3. Using tolerations to control log collector pod placement
By default, log collector pods have the following tolerations configuration:
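The exact default set depends on the logging version and is not reproduced here; as a rough sketch, the collector pods typically tolerate control plane and node-condition taints along these lines:

tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute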
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator and OpenShift CLI (oc).
Procedure
1. Add a taint to a node where you want to schedule the logging collector pods by running the following command:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>

Example command
$ oc adm taint nodes node1 collector=node:NoExecute

This example places a taint on node1 that has key collector, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

2. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods:
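A sketch of such a ClusterLogging CR; the metadata, tolerationSeconds, and resources values are illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
# ...
  collection:
    type: vector
    tolerations:
    - key: collector        # matches the key of the taint added in the previous step
      value: node           # matches the value of the taint
      operator: Equal
      effect: NoExecute
      tolerationSeconds: 6000
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 1Gi
# ...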
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto node1.
14.2.4. Configuring resources and scheduling for logging collectors
Administrators can modify the resources or scheduling of the collector by creating a ClusterLogging custom resource (CR) that is in the same namespace and has the same name as the ClusterLogForwarder CR that it supports.
The applicable stanzas for the ClusterLogging CR when using multiple log forwarders in a deployment are managementState and collection. All other stanzas are ignored.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator version 5.8 or newer.
- You have created a ClusterLogForwarder CR.
Procedure
1. Create a ClusterLogging CR that supports your existing ClusterLogForwarder CR:

Example ClusterLogging CR YAML
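A sketch of such a CR; the metadata must match your ClusterLogForwarder CR, and the toleration, resource, and node selector values shown are illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: <name>            # same name as the ClusterLogForwarder CR
  namespace: <namespace>  # same namespace as the ClusterLogForwarder CR
spec:
  managementState: "Managed"
  collection:
    type: "vector"
    tolerations:
    - key: "logging"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 6000
    resources:
      limits:
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 1Gi
    nodeSelector:
      collector: needed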
2. Apply the ClusterLogging CR by running the following command:

$ oc apply -f <filename>.yaml
14.2.5. Viewing logging collector pods
You can view the logging collector pods and the corresponding nodes that they are running on.
Procedure
Run the following command in a project to view the logging collector pods and their details:
$ oc get pods --selector component=collector -o wide -n <project_name>

Example output