Chapter 4. Understanding OpenShift Container Platform update duration
OpenShift Container Platform update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment.
4.1. Prerequisites
- You are familiar with OpenShift Container Platform architecture and OpenShift Container Platform updates.
4.2. Factors affecting update duration
The following factors can affect your cluster update duration:
- The reboot of compute nodes to the new machine configuration by the Machine Config Operator (MCO)
- The value of maxUnavailable in the machine config pool (MCP)
- The minimum number or percentage of replicas set in the pod disruption budget (PDB)
- The number of nodes in the cluster
- The health of the cluster nodes
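As a sketch of how the maxUnavailable factor is inspected and tuned: the value lives in the MachineConfigPool spec and can be read or patched with the oc client. The pool name worker below is the default for compute nodes; adjust it if your cluster uses custom pools.

```shell
# Read the current maxUnavailable setting on the worker pool
# (assumes the default pool named "worker"; custom pools differ).
oc get machineconfigpool worker -o jsonpath='{.spec.maxUnavailable}'

# Allow two compute nodes to update in parallel.
# Verify that PDB-guarded workloads still have drain headroom first.
oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'
```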
4.3. Cluster update phases
In OpenShift Container Platform, the cluster update happens in two phases:
- Cluster Version Operator (CVO) target update payload deployment
- Machine Config Operator (MCO) node updates
4.3.1. Cluster Version Operator target update payload deployment
The Cluster Version Operator (CVO) retrieves the target update release image and applies it to the cluster. All components that run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes.
The CVO phase of the update does not restart the nodes.
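To observe the CVO phase while it runs, the standard client commands below apply; this is a sketch of typical checks, not an exhaustive procedure.

```shell
# Overall cluster version and whether an update is progressing
oc get clusterversion

# Per-operator versions as the target payload rolls out
oc get clusteroperators

# Current update status and available update paths
oc adm upgrade
```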
4.3.2. Machine Config Operator node updates
The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster:
- Cordon and drain the node
- Update the operating system (OS)
- Reboot the node
- Uncordon the node and schedule workloads on it
When a node is cordoned, workloads cannot be scheduled to it.
The time to complete this process depends on several factors including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node.
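The same per-node sequence can be reproduced manually with standard oc commands; the sketch below is illustrative only (the node name is a placeholder), not a required step during an update:

```shell
# Manual equivalent of the MCO's per-node sequence (node name is illustrative).
NODE=ip-10-0-183-194.us-east-2.compute.internal

oc adm cordon "$NODE"     # mark the node unschedulable
oc adm drain "$NODE" --ignore-daemonsets --delete-emptydir-data   # evict workloads

# ... the MCO applies the new OS configuration and reboots the node here ...

oc adm uncordon "$NODE"   # allow workloads to be scheduled again
```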
In addition to the MCO, you should consider the impact of the following parameters:
- The control plane node update duration is predictable and often shorter than that of the compute nodes, because the control plane workloads are tuned for graceful updates and quick drains.
- You can update the compute nodes in parallel by setting the maxUnavailable field in the Machine Config Pool (MCP) to a value greater than 1. The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update.
- When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high and several nodes are cordoned simultaneously, the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain.

Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because node unavailability affects the maxUnavailable value and the pod disruption budgets.

To check the status of nodes from the terminal, run the following command:

$ oc get node

Example output:

NAME                                         STATUS                     ROLES    AGE   VERSION
ip-10-0-137-31.us-east-2.compute.internal    Ready,SchedulingDisabled   worker   12d   v1.23.5+3afdacb
ip-10-0-151-208.us-east-2.compute.internal   Ready                      master   12d   v1.23.5+3afdacb
ip-10-0-176-138.us-east-2.compute.internal   Ready                      master   12d   v1.23.5+3afdacb
ip-10-0-183-194.us-east-2.compute.internal   Ready                      worker   12d   v1.23.5+3afdacb
ip-10-0-204-102.us-east-2.compute.internal   Ready                      master   12d   v1.23.5+3afdacb
ip-10-0-207-224.us-east-2.compute.internal   Ready                      worker   12d   v1.23.5+3afdacb

If the status of a node is NotReady or SchedulingDisabled, the node is not available and this impacts the update duration.

You can also check the status of nodes from the Administrator perspective in the web console by expanding Compute → Nodes.
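Because PDB-guarded workloads can stall a node drain, it can also help to review the budgets before raising maxUnavailable. A minimal check with the standard oc client:

```shell
# List every pod disruption budget in the cluster.
# An ALLOWED DISRUPTIONS value of 0 indicates a workload that cannot
# currently tolerate a node drain, which can stall the update.
oc get poddisruptionbudget --all-namespaces
```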
4.4. Estimating cluster update time
The historical update duration of similar clusters provides the best estimate for future cluster updates. However, if historical data is not available, you can use the following convention to estimate the cluster update time:
Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)
A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value.
For example, to estimate the update time, consider an OpenShift Container Platform cluster with three control plane nodes and six compute nodes, where each host takes about 5 minutes to reboot.
The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes.
Scenario-1
When you set maxUnavailable to 1, all six compute nodes update one after the other, for six update iterations:
Cluster update time = 60 + (6 x 5) = 90 minutes
Scenario-2
When you set maxUnavailable to 2, two compute nodes update in parallel in each iteration, for three update iterations:
Cluster update time = 60 + (3 x 5) = 75 minutes
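The two scenarios above can be sketched as a small shell function; the input values (60-minute CVO phase, six compute nodes, 5 minutes per node) are the illustrative assumptions from the example, not fixed constants.

```shell
# Estimate cluster update time in minutes:
#   CVO payload time + (node update iterations x per-node time)
# Iterations are nodes divided by maxUnavailable, rounded up.
estimate() {
  local cvo=$1 nodes=$2 max_unavailable=$3 per_node=$4
  local iterations=$(( (nodes + max_unavailable - 1) / max_unavailable ))
  echo $(( cvo + iterations * per_node ))
}

estimate 60 6 1 5   # Scenario-1: prints 90
estimate 60 6 2 5   # Scenario-2: prints 75
```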
The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended that you do not change the maxUnavailable value for the control plane machine config pool.
4.5. Red Hat Enterprise Linux (RHEL) compute nodes
Red Hat Enterprise Linux (RHEL) compute nodes require an additional use of openshift-ansible to update the nodes.