4.2. Working with nodes
As an administrator, you can perform a number of tasks to make your clusters more efficient.
4.2.1. Understanding how to evacuate pods on nodes
Evacuating pods allows you to migrate all or selected pods from a given node or nodes.
You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s).
Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated.
Procedure
Mark the nodes unschedulable before performing the pod evacuation.
Mark the node as unschedulable:
$ oc adm cordon <node1>

Example output

node/<node1> cordoned

Check that the node status is NotReady,SchedulingDisabled:

$ oc get node <node1>

Example output

NAME      STATUS                        ROLES    AGE   VERSION
<node1>   NotReady,SchedulingDisabled   worker   1d    v1.18.3
Evacuate the pods using one of the following methods:
Evacuate all or selected pods on one or more nodes:
$ oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]

Force the deletion of bare pods using the --force option. When set to true, deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set:

$ oc adm drain <node1> <node2> --force=true
Set a period of time in seconds for each pod to terminate gracefully by using the --grace-period option. If negative, the default value specified in the pod will be used:

$ oc adm drain <node1> <node2> --grace-period=-1
Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true:

$ oc adm drain <node1> <node2> --ignore-daemonsets=true
Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time:

$ oc adm drain <node1> <node2> --timeout=5s
Delete pods even if there are pods using emptyDir volumes, by using the --delete-local-data flag set to true. Local data is deleted when the node is drained:

$ oc adm drain <node1> <node2> --delete-local-data=true
List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true:

$ oc adm drain <node1> <node2> --dry-run=true
Instead of specifying specific node names (for example, <node1> <node2>), you can use the --selector=<node_selector> option to evacuate pods on selected nodes.
Mark the node as schedulable when done:

$ oc adm uncordon <node1>
4.2.2. Understanding how to update labels on nodes
You can update any label on a node.
Node labels are not persisted after a node is deleted even if the node is backed up by a Machine.
Any change to a MachineSet
object is not applied to existing machines owned by the machine set. For example, labels edited or added to an existing MachineSet
object are not propagated to existing machines and nodes associated with the machine set.
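For illustration, node labels managed by a machine set live under the machine set's spec.template.spec.metadata.labels. The following is a minimal sketch with placeholder names, not a complete MachineSet definition; the label key unhealthy mirrors the example command below:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          unhealthy: "true"   # picked up only by machines created after this edit

Because of the behavior described above, a label added here reaches only new machines; to label nodes that already exist, use oc label node as shown below.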
The following command adds or updates labels on a node:
$ oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>

For example:

$ oc label nodes webconsole-7f7f6 unhealthy=true

The following command updates all pods in the namespace:

$ oc label pods --all <key_1>=<value_1>

For example:

$ oc label pods --all status=unhealthy
4.2.3. Understanding how to mark nodes as unschedulable or schedulable

By default, healthy nodes with a Ready status are marked as schedulable, meaning that new pods are allowed for placement on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected.
The following command marks a node or nodes as unschedulable:
$ oc adm cordon <node>

For example:

$ oc adm cordon node1.example.com

Example output

node/node1.example.com cordoned

NAME                LABELS                                      STATUS
node1.example.com   kubernetes.io/hostname=node1.example.com   Ready,SchedulingDisabled
The following command marks a currently unschedulable node or nodes as schedulable:

$ oc adm uncordon <node1>

Alternatively, instead of specifying specific node names (for example, <node>), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable.
4.2.4. Configuring master nodes as schedulable
You can configure master nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, master nodes are not schedulable.
You can set the masters to be schedulable, but must retain the worker nodes.
You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the master nodes are marked schedulable by default.
You can allow or disallow master nodes to be schedulable by configuring the mastersSchedulable
field.
Procedure
Edit the schedulers.config.openshift.io resource:

$ oc edit schedulers.config.openshift.io cluster

Configure the mastersSchedulable field.
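A minimal sketch of this resource, trimmed to the field this procedure changes (the object in your cluster contains additional fields):

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false   # change this value as described below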
Set mastersSchedulable to true to allow master nodes to be schedulable, or false to disallow master nodes to be schedulable.

Save the file to apply the changes.
4.2.5. Deleting nodes

4.2.5.1. Deleting nodes from a cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
To delete a node from the OpenShift Container Platform cluster, edit the appropriate MachineSet
object:
If you are running your cluster on bare metal, you cannot delete a node by editing MachineSet objects. Machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it.
View the machine sets that are in the cluster:
$ oc get machinesets -n openshift-machine-api

The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.

Scale the machine set:

$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
For more information on scaling your cluster using a machine set, see Manually scaling a machine set.
4.2.5.2. Deleting nodes from a bare metal cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:
Mark the node as unschedulable:
$ oc adm cordon <node_name>

Drain all pods on your node:

$ oc adm drain <node_name> --force=true

Delete your node from the cluster:

$ oc delete node <node_name>
Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.
4.2.6. Adding kernel arguments to nodes
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set.
Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
- enforcing=0: Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not recommended for production systems, permissive mode can be helpful for debugging.
- nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
See Kernel.org kernel parameters for a list and descriptions of kernel arguments.
In the following procedure, you create a MachineConfig
object that identifies:
- A set of machines to which you want to add the kernel argument. In this case, machines with a worker role.
- Kernel arguments that are appended to the end of the existing kernel arguments.
- A label that indicates where in the list of machine configs the change is applied.
Prerequisites
- Have administrative privilege to a working OpenShift Container Platform cluster.
Procedure
List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config:

$ oc get MachineConfig
Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml).
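A sketch of what this file can contain, assuming the worker role label; the ignition version shown here (2.2.0) is an assumption that varies by release, so match it to your cluster's existing machine configs:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # targets nodes with the worker role
  name: 05-worker-kernelarg-selinuxpermissive
spec:
  config:
    ignition:
      version: 2.2.0   # assumption: use the Ignition version your release expects
  kernelArguments:
    - enforcing=0   # appended to the end of the existing kernel arguments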
Create the new machine config:
oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
Check the machine configs to see that the new one was added:

$ oc get MachineConfig

Check the nodes:
$ oc get nodes

You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):

$ oc debug node/ip-10-0-141-105.ec2.internal

In the debug shell, the host file system is mounted at /host, so the kernel command line can be read from /host/proc/cmdline. You should see the enforcing=0 argument added to the other kernel arguments.
4.2.7. Additional resources
For more information on scaling your cluster using a machine set, see Manually scaling a machine set.