Updating clusters
Updating OpenShift Container Platform clusters
Chapter 1. Updating a cluster between minor versions
You can update, or upgrade, an OpenShift Container Platform cluster between minor versions.
Because of the difficulty of changing update channels by using oc, use the web console to change the update channel. It is recommended to complete the update process within the web console. You can follow the steps in Updating a cluster within a minor version by using the CLI to complete the update after you change to a 4.5 channel.
1.1. Prerequisites
- Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions.
- Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.
- Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid upgrade path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster upgrade. See Upgrading installed Operators for more information.
Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
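For example, a minimal sketch of checking and clearing this setting on an Operator that exposes the standard OperatorSpec, using the etcd cluster Operator here only as an illustration (the resource and object names vary by Operator):
$ oc get etcd cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'
$ oc patch etcd cluster --type=merge -p '{"spec":{"unsupportedConfigOverrides": null}}'
If the first command prints nothing, no override is set and the Operator does not block the upgrade for this reason.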
1.2. About the OpenShift Container Platform update service
The OpenShift Container Platform update service is the hosted service that provides over-the-air updates to both OpenShift Container Platform and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, of component Operators that contains vertices and the edges that connect them. The edges in the graph show which versions you can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform update service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the OpenShift Container Platform CVO uses the release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as container images.
To allow the OpenShift Container Platform update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies you that it is available.
Because the update service displays all valid updates, you must not force an update to a version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform update service to determine if updates are available.
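Because the update graph is served from the upstream endpoint shown in the ClusterVersion examples later in this document, you can also inspect it directly. The following is a minimal sketch, assuming outbound access to api.openshift.com and that the channel is passed as a query parameter:
$ curl -sH 'Accept: application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.5' | jq '.nodes[].version'
The nodes field lists the release versions (vertices) in that channel, and the edges field lists the update paths between them.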
Reverting your cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported. If your upgrade fails, contact Red Hat support.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet on these machines because you must update the OpenShift API on them first. Because the specification for the new version is applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of nodes that are unavailable is set to ensure that normal cluster operations are likely to continue with that number of machines out of service.
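A minimal sketch of reviewing and, if needed, raising this value on the default worker machine config pool (the pool name is an assumption; the field might be empty when the default of 1 is in effect):
$ oc get machineconfigpool worker -o jsonpath='{.spec.maxUnavailable}'
$ oc patch machineconfigpool worker --type=merge -p '{"spec":{"maxUnavailable": 2}}'
A higher maxUnavailable value lets the MCO update more nodes in parallel, at the cost of less spare capacity while the rollout is in progress.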
1.3. OpenShift Container Platform upgrade channels and releases
In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. Upgrade channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.5 upgrade channels recommend upgrades to 4.5 and upgrades within 4.5. They also recommend upgrades within 4.4 and from 4.4 to 4.5, to allow clusters on 4.4 to eventually upgrade to 4.5. They do not recommend upgrades to 4.6 or later releases. This strategy ensures that administrators explicitly decide to upgrade to the next minor version of OpenShift Container Platform.
Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OpenShift Container Platform always installs that version.
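For example, you can confirm which release a given installer binary installs by checking its version output (a minimal sketch):
$ ./openshift-install version
The release image reported in the output is the version that this binary installs, regardless of the upgrade channel that the resulting cluster later subscribes to.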
OpenShift Container Platform 4.5 offers the following upgrade channels:
- candidate-4.5
- fast-4.5
- stable-4.5
- eus-4.6 (only available when running 4.6)
candidate-4.5 channel
The candidate-4.5 channel contains candidate builds for a z-stream (4.5.z) and previous minor version releases. Release candidates contain all the features of the product but are not supported. Use release candidate versions to test feature acceptance and assist in qualifying the next version of OpenShift Container Platform. A release candidate is any build that is available in the candidate channel, including ones that do not contain a pre-release version such as -rc in their names. After a version is available in the candidate channel, it goes through more quality checks. If it meets the quality standard, it is promoted to the fast-4.5 or stable-4.5 channels. Because of this strategy, if a specific release is available in both the candidate-4.5 channel and in the fast-4.5 or stable-4.5 channels, it is a Red Hat-supported version. The candidate-4.5 channel can include release versions from which there are no recommended updates in any channel.
You can use the candidate-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
Release candidates differ from the nightly builds. Nightly builds are available for early access to features, but updating to or from nightly builds is neither recommended nor supported. Nightly builds are not available in any upgrade channel. You can reference the OpenShift Container Platform release statuses for more build information.
fast-4.5 channel
The fast-4.5 channel is updated with new and previous minor versions of 4.5 as soon as Red Hat declares the given version as a general availability release. As such, these releases are fully supported, are production quality, and have performed well while available as a release candidate in the candidate-4.5 channel from where they were promoted. Some time after a release appears in the fast-4.5 channel, it is added to the stable-4.5 channel. Releases never appear in the stable-4.5 channel before they appear in the fast-4.5 channel.
You can use the fast-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
stable-4.5 channel
While the fast-4.5 channel contains releases as soon as their errata are published, releases are added to the stable-4.5 channel after a delay. During this delay, data about the stability of the release is collected from Red Hat SRE teams, Red Hat support services, and pre-production and production environments that participate in the connected customer program.
You can use the stable-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
eus-4.6 channel
In addition to the stable channel, certain minor versions of OpenShift Container Platform offer Extended Update Support (EUS). These EUS versions extend the maintenance phase for customers with Premium Subscriptions to 14 months. OpenShift Container Platform 4.6 is currently the only minor version with EUS.
Although there is no difference between the stable-4.6 and eus-4.6 channels until OpenShift Container Platform 4.6 transitions to the EUS phase, you can switch to the EUS channel as soon as it becomes available. When OpenShift Container Platform 4.6 transitions to the EUS phase of its lifecycle, the stable-4.6 channel will no longer receive subsequent z-stream updates. After you upgrade to a version that is exclusive to the EUS channel, that cluster is no longer eligible for minor version upgrades until upgrades to the next EUS version become available. The next planned EUS version is 4.10, and upgrading to that version will require a serial set of version upgrades, such as from 4.6 to 4.7 to 4.8 to 4.9 to 4.10.
Additionally, you can switch to the EUS channel only when your cluster is running a supported version of OpenShift Container Platform 4.6.
Finally, if you install a 4.6 version that is exclusive to EUS, you will similarly not be able to upgrade to a later minor version until upgrades to 4.10 are provided.
Upgrade version paths
OpenShift Container Platform maintains an upgrade recommendation service that understands the version of OpenShift Container Platform you have installed as well as the path to take within the channel you choose to get you to the next release.
For example, you might see the following releases in the fast-4.5 channel:
- 4.5.0
- 4.5.1
- 4.5.3
- 4.5.4
The service recommends only upgrades that have been tested and have no serious issues. It will not suggest updating to a version of OpenShift Container Platform that contains known vulnerabilities. For example, if your cluster is on 4.5.1 and OpenShift Container Platform suggests 4.5.4, then it is safe for you to update from 4.5.1 to 4.5.4. Do not rely on consecutive patch numbers. In this example, 4.5.2 is not and never was available in the channel.
Update stability depends on your channel. The presence of an update recommendation in the candidate-4.5 channel does not imply that the update is supported. It means that no serious issues have been found with the update yet, but there might not be significant traffic through the update to suggest stability. The presence of an update recommendation in the fast-4.5 or stable-4.5 channels at any point is a declaration that the update is supported. While releases will never be removed from a channel, update recommendations that exhibit serious issues will be removed from all channels. Updates initiated after the update recommendation has been removed are still supported.
Red Hat will eventually provide supported update paths from any supported release in the fast-4.5 or stable-4.5 channels to the latest release in 4.5.z, although there can be delays while safe paths away from troubled releases are constructed and verified.
Fast and stable channel use and strategies
The fast-4.5 and stable-4.5 channels present a choice between receiving general availability releases as soon as they are available or allowing Red Hat to control the rollout of those updates. If issues are detected during rollout or at a later time, upgrades to that version might be blocked in both the fast-4.5 and stable-4.5 channels, and a new version might be introduced that becomes the new preferred upgrade target.
Customers can improve this process by configuring pre-production systems on the fast-4.5 channel, configuring production systems on the stable-4.5 channel, and participating in the Red Hat connected customer program. Red Hat uses this program to observe the impact of updates on your specific hardware and software configurations. Future releases might improve or alter the pace at which updates move from the fast-4.5 to the stable-4.5 channel.
Restricted network clusters
If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact upgrades. During upgrade, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings.
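When you mirror the images yourself, you can inspect a release before approving it for your environment. The following is a minimal sketch, assuming the release tag is reachable from your mirror host (the tag shown is only an example):
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.5.4-x86_64
The output lists the component versions and release metadata that you can compare against the published errata before you mirror and apply the update.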
Switching between channels
Your cluster is still supported if you change from the stable-4.5 channel to the fast-4.5 channel. Although you can switch to the candidate-4.5 channel at any time, some releases in that channel might be unsupported release candidates. You can switch from the candidate-4.5 channel to the fast-4.5 channel if your current release is a general availability release. You can always switch from the fast-4.5 channel to the stable-4.5 channel, although if the current release was recently promoted to fast-4.5, there can be a delay of up to a day for the release to be promoted to stable-4.5. If you change to a channel that does not include your current release, an alert displays and no updates can be recommended, but you can safely change back to your original channel at any point.
1.4. Updating a cluster by using the web console
If updates are available, you can update your cluster from the web console.
You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal.
Prerequisites
- Have access to the web console as a user with admin privileges.
Procedure
- From the web console, click Administration > Cluster Settings and review the contents of the Overview tab.
For production clusters, ensure that the CHANNEL is set to the correct channel for your current minor version, such as stable-4.5.
Important: For production clusters, you must subscribe to a stable-* or fast-* channel.
- If the UPDATE STATUS is not Updates Available, you cannot upgrade your cluster.
- The DESIRED VERSION indicates the cluster version that your cluster is running or is updating to.
- Click Updates Available, select the highest available version, and click Update. The UPDATE STATUS changes to Updating, and you can review the progress of the Operator upgrades on the Cluster Operators tab. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
- If updates are available, continue to perform updates in the current channel until you can no longer update.
- If no updates are available, change the CHANNEL to the stable-* or fast-* channel for the next minor version, and update to the version that you want in that channel.
You might need to perform several intermediate updates until you reach the version that you want.
Chapter 2. Updating a cluster within a minor version from the web console
You can update, or upgrade, an OpenShift Container Platform cluster by using the web console.
2.1. Prerequisites
- Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions.
- Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.
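A minimal sketch of taking that etcd backup from a control plane node, assuming the cluster-backup.sh helper that ships with Red Hat Enterprise Linux CoreOS (RHCOS) on this version:
$ oc debug node/<control_plane_node>
# chroot /host
# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
The script writes an etcd snapshot and the static pod resources to the target directory; copy the backup off the node so that it is available if you must restore the cluster.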
2.2. About the OpenShift Container Platform update service
The OpenShift Container Platform update service is the hosted service that provides over-the-air updates to both OpenShift Container Platform and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, of component Operators that contains vertices and the edges that connect them. The edges in the graph show which versions you can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform update service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the OpenShift Container Platform CVO uses the release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as container images.
To allow the OpenShift Container Platform update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies you that it is available.
Because the update service displays all valid updates, you must not force an update to a version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform update service to determine if updates are available.
Reverting your cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported. If your upgrade fails, contact Red Hat support.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet on these machines because you must update the OpenShift API on them first. Because the specification for the new version is applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of nodes that are unavailable is set to ensure that normal cluster operations are likely to continue with that number of machines out of service.
2.3. OpenShift Container Platform upgrade channels and releases
In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. Upgrade channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.5 upgrade channels recommend upgrades to 4.5 and upgrades within 4.5. They also recommend upgrades within 4.4 and from 4.4 to 4.5, to allow clusters on 4.4 to eventually upgrade to 4.5. They do not recommend upgrades to 4.6 or later releases. This strategy ensures that administrators explicitly decide to upgrade to the next minor version of OpenShift Container Platform.
Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OpenShift Container Platform always installs that version.
OpenShift Container Platform 4.5 offers the following upgrade channels:
- candidate-4.5
- fast-4.5
- stable-4.5
- eus-4.6 (only available when running 4.6)
candidate-4.5 channel
The candidate-4.5 channel contains candidate builds for a z-stream (4.5.z) and previous minor version releases. Release candidates contain all the features of the product but are not supported. Use release candidate versions to test feature acceptance and assist in qualifying the next version of OpenShift Container Platform. A release candidate is any build that is available in the candidate channel, including ones that do not contain a pre-release version such as -rc in their names. After a version is available in the candidate channel, it goes through more quality checks. If it meets the quality standard, it is promoted to the fast-4.5 or stable-4.5 channels. Because of this strategy, if a specific release is available in both the candidate-4.5 channel and in the fast-4.5 or stable-4.5 channels, it is a Red Hat-supported version. The candidate-4.5 channel can include release versions from which there are no recommended updates in any channel.
You can use the candidate-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
Release candidates differ from the nightly builds. Nightly builds are available for early access to features, but updating to or from nightly builds is neither recommended nor supported. Nightly builds are not available in any upgrade channel. You can reference the OpenShift Container Platform release statuses for more build information.
fast-4.5 channel
The fast-4.5 channel is updated with new and previous minor versions of 4.5 as soon as Red Hat declares the given version as a general availability release. As such, these releases are fully supported, are production quality, and have performed well while available as a release candidate in the candidate-4.5 channel from where they were promoted. Some time after a release appears in the fast-4.5 channel, it is added to the stable-4.5 channel. Releases never appear in the stable-4.5 channel before they appear in the fast-4.5 channel.
You can use the fast-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
stable-4.5 channel
While the fast-4.5 channel contains releases as soon as their errata are published, releases are added to the stable-4.5 channel after a delay. During this delay, data about the stability of the release is collected from Red Hat SRE teams, Red Hat support services, and pre-production and production environments that participate in the connected customer program.
You can use the stable-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
eus-4.6 channel
In addition to the stable channel, certain minor versions of OpenShift Container Platform offer Extended Update Support (EUS). These EUS versions extend the maintenance phase for customers with Premium Subscriptions to 14 months. OpenShift Container Platform 4.6 is currently the only minor version with EUS.
Although there is no difference between the stable-4.6 and eus-4.6 channels until OpenShift Container Platform 4.6 transitions to the EUS phase, you can switch to the EUS channel as soon as it becomes available. When OpenShift Container Platform 4.6 transitions to the EUS phase of its lifecycle, the stable-4.6 channel will no longer receive subsequent z-stream updates. After you upgrade to a version that is exclusive to the EUS channel, that cluster is no longer eligible for minor version upgrades until upgrades to the next EUS version become available. The next planned EUS version is 4.10, and upgrading to that version will require a serial set of version upgrades, such as from 4.6 to 4.7 to 4.8 to 4.9 to 4.10.
Additionally, you can switch to the EUS channel only when your cluster is running a supported version of OpenShift Container Platform 4.6.
Finally, if you install a 4.6 version that is exclusive to EUS, you will similarly not be able to upgrade to a later minor version until upgrades to 4.10 are provided.
Upgrade version paths
OpenShift Container Platform maintains an upgrade recommendation service that understands the version of OpenShift Container Platform you have installed as well as the path to take within the channel you choose to get you to the next release.
For example, you might see the following releases in the fast-4.5 channel:
- 4.5.0
- 4.5.1
- 4.5.3
- 4.5.4
The service recommends only upgrades that have been tested and have no serious issues. It will not suggest updating to a version of OpenShift Container Platform that contains known vulnerabilities. For example, if your cluster is on 4.5.1 and OpenShift Container Platform suggests 4.5.4, then it is safe for you to update from 4.5.1 to 4.5.4. Do not rely on consecutive patch numbers. In this example, 4.5.2 is not and never was available in the channel.
Update stability depends on your channel. The presence of an update recommendation in the candidate-4.5 channel does not imply that the update is supported. It means that no serious issues have been found with the update yet, but there might not be significant traffic through the update to suggest stability. The presence of an update recommendation in the fast-4.5 or stable-4.5 channels at any point is a declaration that the update is supported. While releases will never be removed from a channel, update recommendations that exhibit serious issues will be removed from all channels. Updates initiated after the update recommendation has been removed are still supported.
Red Hat will eventually provide supported update paths from any supported release in the fast-4.5 or stable-4.5 channels to the latest release in 4.5.z, although there can be delays while safe paths away from troubled releases are constructed and verified.
Fast and stable channel use and strategies
The fast-4.5 and stable-4.5 channels present a choice between receiving general availability releases as soon as they are available or allowing Red Hat to control the rollout of those updates. If issues are detected during rollout or at a later time, upgrades to that version might be blocked in both the fast-4.5 and stable-4.5 channels, and a new version might be introduced that becomes the new preferred upgrade target.
Customers can improve this process by configuring pre-production systems on the fast-4.5 channel, configuring production systems on the stable-4.5 channel, and participating in the Red Hat connected customer program. Red Hat uses this program to observe the impact of updates on your specific hardware and software configurations. Future releases might improve or alter the pace at which updates move from the fast-4.5 to the stable-4.5 channel.
Restricted network clusters
If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact upgrades. During upgrade, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings.
Switching between channels
Your cluster is still supported if you change from the stable-4.5 channel to the fast-4.5 channel. Although you can switch to the candidate-4.5 channel at any time, some releases in that channel might be unsupported release candidates. You can switch from the candidate-4.5 channel to the fast-4.5 channel if your current release is a general availability release. You can always switch from the fast-4.5 channel to the stable-4.5 channel, although if the current release was recently promoted to fast-4.5, there can be a delay of up to a day for the release to be promoted to stable-4.5. If you change to a channel that does not include your current release, an alert displays and no updates can be recommended, but you can safely change back to your original channel at any point.
2.4. Updating a cluster by using the web console
If updates are available, you can update your cluster from the web console.
You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal.
Prerequisites
- Have access to the web console as a user with admin privileges.
Procedure
- From the web console, click Administration > Cluster Settings and review the contents of the Overview tab.
For production clusters, ensure that the CHANNEL is set to the correct channel for your current minor version, such as stable-4.5.
Important: For production clusters, you must subscribe to a stable-* or fast-* channel.
- If the UPDATE STATUS is not Updates Available, you cannot upgrade your cluster.
- The DESIRED VERSION indicates the cluster version that your cluster is running or is updating to.
- Click Updates Available, select the highest available version, and click Update. The UPDATE STATUS changes to Updating, and you can review the progress of the Operator upgrades on the Cluster Operators tab. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
- If updates are available, continue to perform updates in the current channel until you can no longer update.
- If no updates are available, change the CHANNEL to the stable-* or fast-* channel for the next minor version, and update to the version that you want in that channel.
You might need to perform several intermediate updates until you reach the version that you want.
Chapter 3. Updating a cluster within a minor version by using the CLI
You can update, or upgrade, an OpenShift Container Platform cluster within a minor version by using the OpenShift CLI (oc).
3.1. Prerequisites
- Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions.
- Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.
- Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid upgrade path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster upgrade. See Upgrading installed Operators for more information.
Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
3.2. About the OpenShift Container Platform update service
The OpenShift Container Platform update service is the hosted service that provides over-the-air updates to both OpenShift Container Platform and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, of component Operators that contains vertices and the edges that connect them. The edges in the graph show which versions you can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform update service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the OpenShift Container Platform CVO uses the release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as container images.
To allow the OpenShift Container Platform update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies you that it is available.
Because the update service displays all valid updates, you must not force an update to a version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform update service to determine if updates are available.
Reverting your cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported. If your upgrade fails, contact Red Hat support.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet on these machines because you must update the OpenShift API on them first. Because the specification for the new version is applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of nodes that are unavailable is set to ensure that normal cluster operations are likely to continue with that number of machines out of service.
3.3. OpenShift Container Platform upgrade channels and releases
In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. Upgrade channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.5 upgrade channels recommend upgrades to 4.5 and upgrades within 4.5. They also recommend upgrades within 4.4 and from 4.4 to 4.5, to allow clusters on 4.4 to eventually upgrade to 4.5. They do not recommend upgrades to 4.6 or later releases. This strategy ensures that administrators explicitly decide to upgrade to the next minor version of OpenShift Container Platform.
Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OpenShift Container Platform always installs that version.
OpenShift Container Platform 4.5 offers the following upgrade channels:
- candidate-4.5
- fast-4.5
- stable-4.5
- eus-4.6 (only available when running 4.6)
candidate-4.5 channel
The candidate-4.5 channel contains candidate builds for a z-stream (4.5.z) and previous minor version releases. Release candidates contain all the features of the product but are not supported. Use release candidate versions to test feature acceptance and assist in qualifying the next version of OpenShift Container Platform. A release candidate is any build that is available in the candidate channel, including ones that do not contain a pre-release version such as -rc in their names. After a version is available in the candidate channel, it goes through more quality checks. If it meets the quality standard, it is promoted to the fast-4.5 or stable-4.5 channels. Because of this strategy, if a specific release is available in both the candidate-4.5 channel and in the fast-4.5 or stable-4.5 channels, it is a Red Hat-supported version. The candidate-4.5 channel can include release versions from which there are no recommended updates in any channel.
You can use the candidate-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
Release candidates differ from the nightly builds. Nightly builds are available for early access to features, but updating to or from nightly builds is neither recommended nor supported. Nightly builds are not available in any upgrade channel. You can reference the OpenShift Container Platform release statuses for more build information.
fast-4.5 channel
The fast-4.5 channel is updated with new and previous minor versions of 4.5 as soon as Red Hat declares the given version as a general availability release. As such, these releases are fully supported, are production quality, and have performed well while available as a release candidate in the candidate-4.5 channel from where they were promoted. Some time after a release appears in the fast-4.5 channel, it is added to the stable-4.5 channel. Releases never appear in the stable-4.5 channel before they appear in the fast-4.5 channel.
You can use the fast-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
stable-4.5 channel
While the fast-4.5 channel contains releases as soon as their errata are published, releases are added to the stable-4.5 channel after a delay. During this delay, data about the stability of the release is collected from Red Hat SRE teams, Red Hat support services, and pre-production and production environments that participate in the connected customer program.
You can use the stable-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
eus-4.6 channel
In addition to the stable channel, certain minor versions of OpenShift Container Platform offer Extended Update Support (EUS). These EUS versions extend the maintenance phase for customers with Premium Subscriptions to 14 months. OpenShift Container Platform 4.6 is currently the only minor version with EUS.
Although there is no difference between the stable-4.6 and eus-4.6 channels until OpenShift Container Platform 4.6 transitions to the EUS phase, you can switch to the EUS channel as soon as it becomes available. When OpenShift Container Platform 4.6 transitions to the EUS phase of its lifecycle, the stable-4.6 channel will no longer receive subsequent z-stream updates. After you upgrade to a version that is exclusive to the EUS channel, that cluster is no longer eligible for minor version upgrades until upgrades to the next EUS version become available. The next planned EUS version is 4.10, and upgrading to that version will require a serial set of version upgrades, such as from 4.6 to 4.7 to 4.8 to 4.9 to 4.10.
Additionally, you can switch to the EUS channel only when your cluster is running a supported version of OpenShift Container Platform 4.6.
Finally, if you install a 4.6 version that is exclusive to EUS, you will similarly not be able to upgrade to a later minor version until upgrades to 4.10 are provided.
Upgrade version paths
OpenShift Container Platform maintains an upgrade recommendation service that understands the version of OpenShift Container Platform you have installed as well as the path to take within the channel you choose to get you to the next release.
For example, you might see the following releases in the fast-4.5 channel:
- 4.5.0
- 4.5.1
- 4.5.3
- 4.5.4
The service recommends only upgrades that have been tested and have no serious issues. It will not suggest updating to a version of OpenShift Container Platform that contains known vulnerabilities. For example, if your cluster is on 4.5.1 and OpenShift Container Platform suggests 4.5.4, then it is safe for you to update from 4.5.1 to 4.5.4. Do not rely on consecutive patch numbers. In this example, 4.5.2 is not and never was available in the channel.
Update stability depends on your channel. The presence of an update recommendation in the candidate-4.5 channel does not imply that the update is supported. It means that no serious issues have been found with the update yet, but there might not be significant traffic through the update to suggest stability. The presence of an update recommendation in the fast-4.5 or stable-4.5 channels at any point is a declaration that the update is supported. While releases will never be removed from a channel, update recommendations that exhibit serious issues will be removed from all channels. Updates initiated after the update recommendation has been removed are still supported.
Red Hat will eventually provide supported update paths from any supported release in the fast-4.5 or stable-4.5 channels to the latest release in 4.5.z, although there can be delays while safe paths away from troubled releases are constructed and verified.
Fast and stable channel use and strategies
The fast-4.5 and stable-4.5 channels present a choice between receiving general availability releases as soon as they are available or allowing Red Hat to control the rollout of those updates. If issues are detected during rollout or at a later time, upgrades to that version might be blocked in both the fast-4.5 and stable-4.5 channels, and a new version might be introduced that becomes the new preferred upgrade target.
Customers can improve this process by configuring pre-production systems on the fast-4.5 channel, configuring production systems on the stable-4.5 channel, and participating in the Red Hat connected customer program. Red Hat uses this program to observe the impact of updates on your specific hardware and software configurations. Future releases might improve or alter the pace at which updates move from the fast-4.5 to the stable-4.5 channel.
Restricted network clusters
If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact upgrades. During upgrade, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings.
Switching between channels
Your cluster is still supported if you change from the stable-4.5 channel to the fast-4.5 channel. Although you can switch to the candidate-4.5 channel at any time, some releases in that channel might be unsupported release candidates. You can switch from the candidate-4.5 channel to the fast-4.5 channel if your current release is a general availability release. You can always switch from the fast-4.5 channel to the stable-4.5 channel, although if the current release was recently promoted to fast-4.5, there can be a delay of up to a day for the release to be promoted to stable-4.5. If you change to a channel that does not include your current release, an alert displays and no updates can be recommended, but you can safely change back to your original channel at any point.
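Because this chapter works from the CLI, the following is a minimal sketch of what a channel change looks like when you patch the ClusterVersion resource directly; the web console remains the recommended way to change channels:
$ oc patch clusterversion version --type=merge -p '{"spec":{"channel":"fast-4.5"}}'
$ oc get clusterversion version -o jsonpath='{.spec.channel}'
The second command confirms that the spec.channel value now matches the channel that you selected.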
3.4. Updating a cluster by using the CLI
If updates are available, you can update your cluster by using the OpenShift CLI (oc).
You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal.
Prerequisites
- Install the OpenShift CLI (oc) that matches the version that you are updating to.
- Log in to the cluster as a user with cluster-admin privileges.
- Install the jq package.
Procedure
Ensure that your cluster is available:
$ oc get clusterversion
Example output
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.4     True        False         158m    Cluster version is 4.5.4
Review the current update channel information and confirm that your channel is set to stable-4.5:
$ oc get clusterversion -o json|jq ".items[0].spec"
Example output
{
  "channel": "stable-4.5",
  "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
  "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
}
Important: For production clusters, you must subscribe to a stable-* or fast-* channel.
View the available updates and note the version number of the update that you want to apply:
$ oc adm upgrade
Example output
Cluster version is 4.1.0

Updates:

VERSION   IMAGE
4.1.2     quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b
Apply an update:
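For example, you can update to the latest version available in your channel, or to the specific version that you noted in the previous step (a minimal sketch of the usual oc adm upgrade options):
To update to the latest version:
$ oc adm upgrade --to-latest=true
To update to a specific version:
$ oc adm upgrade --to=<version>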
Review the status of the Cluster Version Operator:
$ oc get clusterversion -o json|jq ".items[0].spec"
Example output
{
  "channel": "stable-4.5",
  "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
  "desiredUpdate": {
    "force": false,
    "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
    "version": "4.5.4" 1
  },
  "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
}
1 - If the version number in the desiredUpdate stanza matches the value that you specified, the update is in progress.
Review the cluster version status history to monitor the status of the update. It might take some time for all the objects to finish updating.
$ oc get clusterversion -o json|jq ".items[0].status.history"
Example output
[
  {
    "completionTime": null,
    "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
    "startedTime": "2019-06-19T20:30:50Z",
    "state": "Partial",
    "verified": true,
    "version": "4.1.2"
  },
  {
    "completionTime": "2019-06-19T20:30:50Z",
    "image": "quay.io/openshift-release-dev/ocp-release@sha256:b8307ac0f3ec4ac86c3f3b52846425205022da52c16f56ec31cbe428501001d6",
    "startedTime": "2019-06-19T17:38:10Z",
    "state": "Completed",
    "verified": false,
    "version": "4.1.0"
  }
]
The history contains a list of the most recent versions applied to the cluster. This value is updated when the CVO applies an update. The list is ordered by date, where the newest update is first in the list. Updates in the history have the state Completed if the rollout completed and Partial if the update failed or did not complete.
Important: If an upgrade fails, the Operator stops and reports the status of the failing component. Rolling your cluster back to a previous version is not supported. If your upgrade fails, contact Red Hat support.
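In addition to the history, you can check the Progressing condition on the ClusterVersion resource for a one-line summary of the current rollout step (a minimal sketch that reuses the jq filtering style from the previous commands):
$ oc get clusterversion -o json|jq '.items[0].status.conditions[] | select(.type=="Progressing")'
The message field of this condition describes which part of the update the Cluster Version Operator is currently working on.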
After the update completes, you can confirm that the cluster version has updated to the new version:
$ oc get clusterversion
Example output
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.4     True        False         2m      Cluster version is 4.5.4
If you are upgrading your cluster to the next minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are upgraded before deploying workloads that rely on a new feature:
$ oc get nodes
Example output
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.18.0
ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.18.0
ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.18.0
ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.18.0
ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.18.0
ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.18.0
Chapter 4. Updating a cluster that includes RHEL compute machines
You can update, or upgrade, an OpenShift Container Platform cluster. If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must perform more steps to update those machines.
4.1. Prerequisites
- Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions.
- Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.
4.2. About the OpenShift Container Platform update service
The OpenShift Container Platform update service is the hosted service that provides over-the-air updates to both OpenShift Container Platform and Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, of component Operators that contains vertices and the edges that connect them. The edges in the graph show which versions you can safely update to, and the vertices are update payloads that specify the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Container Platform update service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the OpenShift Container Platform CVO uses the release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as container images.
To allow the OpenShift Container Platform update service to provide only compatible updates, a release verification pipeline exists to drive automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Container Platform update service notifies you that it is available.
Because the update service displays all valid updates, you must not force an update to a version that the update service does not display.
During continuous update mode, two controllers run. One continuously updates the payload manifests, applies them to the cluster, and outputs the status of the controlled rollout of the Operators, whether they are available, upgrading, or failed. The second controller polls the OpenShift Container Platform update service to determine if updates are available.
Reverting your cluster to a previous version, or a rollback, is not supported. Only upgrading to a newer version is supported. If your upgrade fails, contact Red Hat support.
During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. It cordons the number of nodes that is specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1. It then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet on these machines because you must update the OpenShift API on them first. Because the specification for the new version is applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of nodes that are unavailable is set to ensure that normal cluster operations are likely to continue with that number of machines out of service.
4.3. OpenShift Container Platform upgrade channels and releases
In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. Upgrade channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.5 upgrade channels recommend upgrades to 4.5 and upgrades within 4.5. They also recommend upgrades within 4.4 and from 4.4 to 4.5, to allow clusters on 4.4 to eventually upgrade to 4.5. They do not recommend upgrades to 4.6 or later releases. This strategy ensures that administrators explicitly decide to upgrade to the next minor version of OpenShift Container Platform.
Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OpenShift Container Platform always installs that version.
OpenShift Container Platform 4.5 offers the following upgrade channels:
- candidate-4.5
- fast-4.5
- stable-4.5
- eus-4.6 (only available when running 4.6)
candidate-4.5 channel
The candidate-4.5 channel contains candidate builds for a z-stream (4.5.z) and previous minor version releases. Release candidates contain all the features of the product but are not supported. Use release candidate versions to test feature acceptance and assist in qualifying the next version of OpenShift Container Platform. A release candidate is any build that is available in the candidate channel, including ones that do not contain a pre-release version such as -rc in their names. After a version is available in the candidate channel, it goes through more quality checks. If it meets the quality standard, it is promoted to the fast-4.5 or stable-4.5 channels. Because of this strategy, if a specific release is available in both the candidate-4.5 channel and in the fast-4.5 or stable-4.5 channels, it is a Red Hat-supported version. The candidate-4.5 channel can include release versions from which there are no recommended updates in any channel.
You can use the candidate-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
Release candidates differ from the nightly builds. Nightly builds are available for early access to features, but updating to or from nightly builds is neither recommended nor supported. Nightly builds are not available in any upgrade channel. You can reference the OpenShift Container Platform release statuses for more build information.
fast-4.5 channel
The fast-4.5 channel is updated with new and previous minor versions of 4.5 as soon as Red Hat declares the given version as a general availability release. As such, these releases are fully supported, are production quality, and have performed well while available as a release candidate in the candidate-4.5 channel from where they were promoted. Some time after a release appears in the fast-4.5 channel, it is added to the stable-4.5 channel. Releases never appear in the stable-4.5 channel before they appear in the fast-4.5 channel.
You can use the fast-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
stable-4.5 channel
While the fast-4.5 channel contains releases as soon as their errata are published, releases are added to the stable-4.5 channel after a delay. During this delay, data about the stability of the release is collected from Red Hat SRE teams, Red Hat support services, and pre-production and production environments that participate in the connected customer program.
You can use the stable-4.5 channel to upgrade from a previous minor version of OpenShift Container Platform.
eus-4.6 channel
In addition to the stable channel, certain minor versions of OpenShift Container Platform offer Extended Update Support (EUS). These EUS versions extend the maintenance phase for customers with Premium Subscriptions to 14 months. OpenShift Container Platform 4.6 is currently the only minor version with EUS.
Although there is no difference between the stable-4.6 and eus-4.6 channels until OpenShift Container Platform 4.6 transitions to the EUS phase, you can switch to the EUS channel as soon as it becomes available. When OpenShift Container Platform 4.6 transitions to the EUS phase of its lifecycle, the stable-4.6 channel will no longer receive subsequent z-stream updates. After you upgrade to a version that is exclusive to the EUS channel, that cluster is no longer eligible for minor version upgrades until upgrades to the next EUS version become available. The next planned EUS version is 4.10, and upgrading to that version will require a serial set of version upgrades, such as from 4.6 to 4.7 to 4.8 to 4.9 to 4.10.
Additionally, you can switch to the EUS channel only when your cluster is running a supported version of OpenShift Container Platform 4.6.
Finally, if you install a 4.6 version that is exclusive to EUS, you will similarly not be able to upgrade to a later minor version until upgrades to 4.10 are provided.
Upgrade version paths
OpenShift Container Platform maintains an upgrade recommendation service that understands the version of OpenShift Container Platform you have installed as well as the path to take within the channel you choose to get you to the next release.
For example, you might see the following releases in the fast-4.5 channel:
- 4.5.0
- 4.5.1
- 4.5.3
- 4.5.4
The service recommends only upgrades that have been tested and have no serious issues. It will not suggest updating to a version of OpenShift Container Platform that contains known vulnerabilities. For example, if your cluster is on 4.5.1 and OpenShift Container Platform suggests 4.5.4, then it is safe for you to update from 4.5.1 to 4.5.4. Do not rely on consecutive patch numbers. In this example, 4.5.2 is not and never was available in the channel.
Update stability depends on your channel. The presence of an update recommendation in the candidate-4.5 channel does not imply that the update is supported. It means that no serious issues have been found with the update yet, but there might not be significant traffic through the update to suggest stability. The presence of an update recommendation in the fast-4.5 or stable-4.5 channels at any point is a declaration that the update is supported. While releases will never be removed from a channel, update recommendations that exhibit serious issues will be removed from all channels. Updates initiated after the update recommendation has been removed are still supported.
Red Hat will eventually provide supported update paths from any supported release in the fast-4.5 or stable-4.5 channels to the latest release in 4.5.z, although there can be delays while safe paths away from troubled releases are constructed and verified.
Fast and stable channel use and strategies
The fast-4.5 and stable-4.5 channels present a choice between receiving general availability releases as soon as they are available or allowing Red Hat to control the rollout of those updates. If issues are detected during rollout or at a later time, upgrades to that version might be blocked in both the fast-4.5 and stable-4.5 channels, and a new version might be introduced that becomes the new preferred upgrade target.
Customers can improve this process by configuring pre-production systems on the fast-4.5 channel, configuring production systems on the stable-4.5 channel, and participating in the Red Hat connected customer program. Red Hat uses this program to observe the impact of updates on your specific hardware and software configurations. Future releases might improve or alter the pace at which updates move from the fast-4.5 to the stable-4.5 channel.
Restricted network clusters
If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact upgrades. During an upgrade, the user interface might warn you about switching between these versions, so ensure that you select an appropriate version before you bypass those warnings.
Switching between channels
Your cluster is still supported if you change from the stable-4.5
channel to the fast-4.5
channel. Although you can switch to the candidate-4.5
channel at any time, some releases in that channel might be unsupported release candidates. You can switch from the candidate-4.5
channel to the fast-4.5
channel if your current release is a general availability release. You can always switch from the fast-4.5
channel to the stable-4.5
channel, although if the current release was recently promoted to fast-4.5
there can be a delay of up to a day for the release to be promoted to stable-4.5
. If you change to a channel that does not include your current release, an alert displays and no updates can be recommended, but you can safely change back to your original channel at any point.
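Before and after you switch channels, you can confirm which channel the cluster is subscribed to from the CLI. For example, assuming you are logged in with sufficient privileges (the output value is illustrative):
$ oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'
stable-4.5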
4.4. Updating a cluster by using the web console
If updates are available, you can update your cluster from the web console.
You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal.
Prerequisites
-
Have access to the web console as a user with
admin
privileges.
Procedure
- From the web console, click Administration > Cluster Settings and review the contents of the Overview tab.
For production clusters, ensure that the CHANNEL is set to the correct channel for the version that you want to update to, which is your current minor version, such as
stable-4.5
.
Important: For production clusters, you must subscribe to a stable-* or fast-* channel.
- If the UPDATE STATUS is not Updates Available, you cannot upgrade your cluster.
- The DESIRED VERSION indicates the cluster version that your cluster is running or is updating to.
-
Click Updates Available, select the highest available version to update to, and click Update. The UPDATE STATUS changes to
Updating
, and you can review the progress of the Operator upgrades on the Cluster Operators tab. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
- If updates are available, continue to perform updates in the current channel until you can no longer update.
- If no updates are available, change the CHANNEL to the stable-* or fast-* channel for the next minor version, and update to the version that you want in that channel.
You might need to perform several intermediate updates until you reach the version that you want.
Note: When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the
NotReady
state for the cluster to finish updating.
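Although this procedure uses the web console, you can verify the state of the cluster version from a terminal at any point during or after the update. For example, with illustrative values:
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.4     True        False         2m      Cluster version is 4.5.4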
4.5. Optional: Adding hooks to perform Ansible tasks on RHEL machines
You can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift Container Platform update.
4.5.1. About Ansible hooks for upgrades
When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks. Hooks allow you to provide files that define tasks to run before or after specific update tasks. You can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in your OpenShift Container Platform cluster.
Because the update operation fails when a hook fails, you must design hooks that are idempotent, meaning that they can run multiple times and provide the same results.
Hooks have the following important limitations:
- Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases.
- Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the upgrade again.
4.5.2. Configuring the Ansible inventory file to use hooks
You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, in the hosts
inventory file under the all:vars
section.
Prerequisites
-
You have access to the machine that you used to add the RHEL compute machines to the cluster. You must have access to the
hosts
Ansible inventory file that defines your RHEL machines.
Procedure
After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example:
---
# Trivial example forcing an operator to acknowledge the start of an upgrade
# file=/home/user/openshift-ansible/hooks/pre_compute.yml
- name: note the start of a compute machine update
  debug:
    msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start"
- name: require the user agree to start an upgrade
  pause:
    prompt: "Press Enter to start the compute machine update"
Modify the
hosts
Ansible inventory file to specify the hook files. The hook files are specified as parameter values in the[all:vars]
section, as shown:Example hook definitions in an inventory file
[all:vars]
openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml
To avoid ambiguity in the paths to the hooks, use absolute paths instead of relative paths in their definitions.
4.5.3. Available hooks for RHEL compute machines
You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in your OpenShift Container Platform cluster.
Hook name | Description
---|---
openshift_node_pre_cordon_hook | Runs before each node is cordoned.
openshift_node_pre_upgrade_hook | Runs after each node is cordoned, but before it is updated.
openshift_node_pre_uncordon_hook | Runs after each node is updated, but before it is uncordoned.
openshift_node_post_upgrade_hook | Runs after each node is uncordoned. This is the last node update action.
4.6. Updating RHEL compute machines in your cluster
After you update your cluster, you must update the Red Hat Enterprise Linux (RHEL) compute machines in your cluster.
Prerequisites
You updated your cluster.
Important: Because the RHEL machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the RHEL compute machines in it.
-
You have access to the machine that you used to add the RHEL compute machines to the cluster. You must have access to the
hosts
Ansible inventory file that defines your RHEL machines and theupgrade
playbook.
Procedure
Stop and disable firewalld on the host:
# systemctl disable --now firewalld.service
Note: You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
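To confirm that the service is stopped and will not start again at boot, you can check its state, for example:
# systemctl is-enabled firewalld.service
disabled
# systemctl is-active firewalld.service
inactive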
Enable the repositories that are required for OpenShift Container Platform 4.5:
On the machine that you run the Ansible playbooks, update the required repositories:
# subscription-manager repos --disable=rhel-7-server-ose-4.4-rpms \
    --enable=rhel-7-server-ansible-2.9-rpms \
    --enable=rhel-7-server-ose-4.5-rpms
On the machine that you run the Ansible playbooks, update the required packages, including
openshift-ansible
:
# yum update openshift-ansible openshift-clients
On each RHEL compute node, update the required repositories:
# subscription-manager repos --disable=rhel-7-server-ose-4.4-rpms \
    --enable=rhel-7-server-ose-4.5-rpms
Update a RHEL worker machine:
Review the current node status to determine which RHEL worker to update:
# oc get node
Example output
NAME                        STATUS                        ROLES    AGE    VERSION
mycluster-control-plane-0   Ready                         master   145m   v1.18.3
mycluster-control-plane-1   Ready                         master   145m   v1.18.3
mycluster-control-plane-2   Ready                         master   145m   v1.18.3
mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.14.6+97c81d00e
mycluster-rhel7-1           Ready                         worker   98m    v1.14.6+97c81d00e
mycluster-rhel7-2           Ready                         worker   98m    v1.14.6+97c81d00e
mycluster-rhel7-3           Ready                         worker   98m    v1.14.6+97c81d00e
Note which machine has the
NotReady,SchedulingDisabled
status.Review your Ansible inventory file at
/<path>/inventory/hosts
and update its contents so that only the machine with theNotReady,SchedulingDisabled
status is listed in the[workers]
section, as shown in the following example:[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path="~/.kube/config" [workers] mycluster-rhel7-0.example.com
Change to the
openshift-ansible
directory:$ cd /usr/share/ansible/openshift-ansible
Run the
upgrade
playbook:$ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1
- 1
- For
<path>
, specify the path to the Ansible inventory file that you created.
- Follow the process in the previous step to update each RHEL worker machine in your cluster.
After you update all of the workers, confirm that all of your cluster nodes have updated to the new version:
# oc get node
Example output
NAME                        STATUS                        ROLES    AGE    VERSION
mycluster-control-plane-0   Ready                         master   145m   v1.18.3
mycluster-control-plane-1   Ready                         master   145m   v1.18.3
mycluster-control-plane-2   Ready                         master   145m   v1.18.3
mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.18.3
mycluster-rhel7-1           Ready                         worker   98m    v1.18.3
mycluster-rhel7-2           Ready                         worker   98m    v1.18.3
mycluster-rhel7-3           Ready                         worker   98m    v1.18.3
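If you want to limit the listing to the RHEL workers themselves, you can filter on the operating system node label. This assumes that your nodes carry the node.openshift.io/os_id label that OpenShift Container Platform applies to RHEL and RHCOS machines; if they do not, use oc get node as shown above:
# oc get node -l node.openshift.io/os_id=rhel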
Chapter 5. Updating a restricted network cluster
You can upgrade a restricted network OpenShift Container Platform cluster by using the oc
command-line interface (CLI).
A restricted network environment is one in which your cluster nodes cannot access the internet. For this reason, you must populate a registry with the installation images. If your registry host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment and then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror registry’s host, you can directly push the release images to the local registry.
If multiple clusters are present within the restricted network, mirror the required release images to a single container image registry and use that registry to update all the clusters.
5.1. Prerequisites
- Have access to the internet to obtain the necessary container images.
- Have write access to a container registry in the restricted-network environment to push and pull images. The container registry must be compatible with Docker registry API v2.
-
You must have the
oc
command-line interface (CLI) tool installed. -
Have access to the cluster as a user with
admin
privileges. See Using RBAC to define and apply permissions. - Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.
5.2. Preparing your mirror host
Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location.
5.2.1. Installing the CLI by downloading the binary
You can install the OpenShift CLI (oc
) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc
, you cannot use it to complete all of the commands in OpenShift Container Platform 4.5. Download and install the new version of oc
. If you are upgrading a cluster in a restricted network, install the oc
version that you plan to upgrade to.
5.2.1.1. Installing the CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command line interface section, select Linux from the drop-down menu and click Download command-line tools.
Unpack the archive:
$ tar xvzf <file>
Place the
oc
binary in a directory that is on yourPATH
.To check your
PATH
, execute the following command:$ echo $PATH
After you install the CLI, it is available using the oc
command:
$ oc <command>
5.2.1.2. Installing the CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command line interface section, select Windows from the drop-down menu and click Download command-line tools.
- Unzip the archive with a ZIP program.
Move the
oc
binary to a directory that is on yourPATH
.To check your
PATH
, open the command prompt and execute the following command:C:\> path
After you install the CLI, it is available using the oc
command:
C:\> oc <command>
5.2.1.3. Installing the CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command line interface section, select MacOS from the drop-down menu and click Download command-line tools.
- Unpack and unzip the archive.
Move the
oc
binary to a directory on your PATH.To check your
PATH
, open a terminal and execute the following command:$ echo $PATH
After you install the CLI, it is available using the oc
command:
$ oc <command>
5.3. Configuring credentials that allow images to be mirrored
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry.
This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret.
Prerequisites
- You configured a mirror registry to use in your restricted network.
- You identified an image repository location on your mirror registry to mirror images into.
- You provisioned a mirror registry account that allows images to be uploaded to that image repository.
Procedure
Complete the following steps on the installation host:
-
Download your
registry.redhat.io
pull secret from the Pull Secret page on the Red Hat OpenShift Cluster Manager site and save it to a.json
file. Generate the base64-encoded user name and password or token for your mirror registry:
$ echo -n '<user_name>:<password>' | base64 -w0 1
Example output
BGVtbYk3ZHAtqXs=
- 1
- For
<user_name>
and<password>
, specify the user name and password that you configured for your registry.
Make a copy of your pull secret in JSON format:
$ cat ./pull-secret.text | jq . > <path>/<pull-secret-file> 1
- 1
- Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
The contents of the file resemble the following example:
{ "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "quay.io": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" } } }
Edit the new file and add a section that describes your registry to it:
"auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "you@example.com" },
The file resembles the following example:
{ "auths": { "<mirror_registry>": { "auth": "<credentials>", "email": "you@example.com" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "quay.io": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" } } }
5.4. Mirroring the OpenShift Container Platform image repository
Before you upgrade a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. You can also use this procedure in unrestricted networks to ensure your clusters only use container images that have satisfied your organizational controls on external content.
Procedure
- Use the Red Hat OpenShift Container Platform Upgrade Graph visualizer and update planner to plan an upgrade from one version to another. The OpenShift Upgrade Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions.
Set the required environment variables:
Export the release version:
$ export OCP_RELEASE=<release_version>
For
<release_version>
, specify the tag that corresponds to the version of OpenShift Container Platform to which you want to upgrade, such as 4.5.4.
Export the local registry name and host port:
$ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'
For
<local_registry_host_name>
, specify the registry domain name for your mirror repository, and for <local_registry_host_port>
, specify the port that it serves content on.
Export the local repository name:
$ LOCAL_REPOSITORY='<local_repository_name>'
For
<local_repository_name>
, specify the name of the repository to create in your registry, such as ocp4/openshift4.
Export the name of the repository to mirror:
$ PRODUCT_REPO='openshift-release-dev'
For a production release, you must specify
openshift-release-dev
Export the path to your registry pull secret:
$ LOCAL_SECRET_JSON='<path_to_pull_secret>'
For
<path_to_pull_secret>
, specify the absolute path to and file name of the pull secret for your mirror registry that you created.
Note: If your cluster uses an
ImageContentSourcePolicy
object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project.
Export the release mirror:
$ RELEASE_NAME="ocp-release"
For a production release, you must specify
ocp-release
Export the type of architecture for your server, such as
x86_64
:
$ ARCHITECTURE=<server_architecture>
Export the path to the directory to host the mirrored images:
$ REMOVABLE_MEDIA_PATH=<path> 1
- 1
- Specify the full path, including the initial forward slash (/) character.
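For reference, a complete set of exports might look like the following. The registry host, repository name, pull secret path, and media path shown here are sample values only:
$ export OCP_RELEASE=4.5.4
$ LOCAL_REGISTRY='mirror.example.com:5000'
$ LOCAL_REPOSITORY='ocp4/openshift4'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='/home/user/pull-secret.json'
$ RELEASE_NAME="ocp-release"
$ ARCHITECTURE=x86_64
$ REMOVABLE_MEDIA_PATH=/mnt/media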
Review the images and configuration manifests to mirror:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run
Mirror the version images to the internal container registry.
If your mirror host does not have internet access, take the following actions:
- Connect the removable media to a system that is connected to the internet.
Mirror the images and configuration manifests to a directory on the removable media:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}
Take the media to the restricted network environment and upload the images to the local container registry.
$ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} 1
- 1
- For
REMOVABLE_MEDIA_PATH
, you must use the same path that you specified when you mirrored the images.
If the local container registry and the cluster are connected to the mirror host, directly push the release images to the local registry and apply the config map to the cluster by using the following command:
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \ --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --apply-release-image-signature
Note: If you include the
--apply-release-image-signature
option, do not create the config map for image signature verification.
5.5. Creating the image signature config map
Before you update your cluster, you must manually create a config map that contains the signatures of the release images that you use. This signature allows the Cluster Version Operator (CVO) to verify that the release images have not been modified by comparing the expected and actual image signatures.
If you are upgrading from version 4.4.8 or later, you can use the oc
CLI to create the config map. If you are upgrading from an earlier version, you must use the manual method.
5.5.1. Creating the config map for image signature verification by using the oc
CLI
Before you update your cluster, you must manually create a config map that contains the signatures of the release images that you use. This signature allows the Cluster Version Operator (CVO) to verify that the release images have not been modified by comparing the expected and actual image signatures.
If you are upgrading from a release prior to version 4.4.8, you must use the manual method for creating the config map instead of this procedure. The commands that this procedure uses are not in earlier versions of the oc
command-line interface (CLI).
Prerequisites
-
Install the OpenShift CLI (
oc
), version 4.4.8 or later.
Procedure
- Obtain the image signature for the version that you are upgrading to from either mirror.openshift.com or Google Cloud Storage (GCS).
-
Use
oc
command-line interface (CLI) to log into the cluster that you are upgrading. Apply the mirrored release image signature config map to the connected cluster:
$ oc apply -f <image_signature_file> 1
- 1
- For
<image_signature_file>
, specify the path and name of the file, for example,mirror/config/signature-sha256-81154f5c03294534.yaml
.
5.5.2. Creating an image signature config map manually
Create and apply the image signature config map to the cluster that you want to update.
You must perform the following steps each time that you update a cluster.
Procedure
- Review the OpenShift Container Platform upgrade paths knowledge base article to determine a valid upgrade path for your cluster.
Add the version to the
OCP_RELEASE_NUMBER
environment variable:$ OCP_RELEASE_NUMBER=<release_version> 1
- 1
- For
<release_version>
, specify the tag that corresponds to the version of OpenShift Container Platform you want to update the cluster, such as4.4.0
.
Add the system architecture for your cluster to
ARCHITECTURE
environment variable:$ ARCHITECTURE=<server_architecture> 1
- 1
- For
server_architecture
, specify the architecture of the server, such asx86_64
.
Get the release image digest from Quay:
$ DIGEST="$(oc adm release info quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_NUMBER}-${ARCHITECTURE} | sed -n 's/Pull From: .*@//p')"
Set the digest algorithm:
$ DIGEST_ALGO="${DIGEST%%:*}"
Set the digest signature:
$ DIGEST_ENCODED="${DIGEST#*:}"
Get the image signature from mirror.openshift.com website.
$ SIGNATURE_BASE64=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${DIGEST_ALGO}=${DIGEST_ENCODED}/signature-1" | base64 -w0 && echo)
Create the config map:
$ cat >checksum-${OCP_RELEASE_NUMBER}.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-image-${OCP_RELEASE_NUMBER}
  namespace: openshift-config-managed
  labels:
    release.openshift.io/verification-signatures: ""
binaryData:
  ${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}
EOF
Apply the config map to the cluster to update:
$ oc apply -f checksum-${OCP_RELEASE_NUMBER}.yaml
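Whether you created the config map with the oc CLI or manually, you can confirm that it was applied by listing the config maps that carry the verification-signatures label, for example:
$ oc get configmap -n openshift-config-managed \
    -l release.openshift.io/verification-signatures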
5.6. Upgrading the restricted network cluster
Update the restricted network cluster to the OpenShift Container Platform version that you downloaded the release images for.
Prerequisites
- You mirrored the images for the new release to your registry.
- You applied the release image signature ConfigMap for the new release to your cluster.
- You obtained the sha256 sum value for the release from the image signature ConfigMap.
-
Install the OpenShift CLI (
oc
), version 4.4.8 or later.
Procedure
Update the cluster:
$ oc adm upgrade --allow-explicit-upgrade --to-image ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}<sha256_sum_value> 1
- 1
- The
<sha256_sum_value>
value is the sha256 sum value for the release from the image signature ConfigMap, for example,@sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92
If you use an
ImageContentSourcePolicy
for the mirror registry, you can use the canonical registry name instead ofLOCAL_REGISTRY
Note: You can only configure global pull secrets for clusters that have an
ImageContentSourcePolicy
object. You cannot add a pull secret to a project.
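You can monitor the progress of the restricted network upgrade from the CLI in the same way as for a connected cluster, for example:
$ oc get clusterversion     # overall update status and target version
$ oc get clusteroperators   # per-Operator availability and progress
$ oc adm upgrade            # the update that the cluster is currently applying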
5.7. Configuring image registry repository mirroring
Setting up container registry repository mirroring enables you to do the following:
- Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry.
- Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used.
The attributes of repository mirroring in OpenShift Container Platform include:
- Image pulls are resilient to registry downtimes.
- Clusters in restricted networks can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images.
- A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried.
-
The mirror information you enter is added to the
/etc/containers/registries.conf
file on every node in the OpenShift Container Platform cluster. - When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node.
Setting up repository mirroring can be done in the following ways:
At OpenShift Container Platform installation:
By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company’s firewall, you can install OpenShift Container Platform into a datacenter that is in a restricted network.
After OpenShift Container Platform installation:
Even if you don’t configure mirroring during OpenShift Container Platform installation, you can do so later using the
ImageContentSourcePolicy
object.
The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy
object that identifies:
- The source of the container image repository you want to mirror.
- A separate entry for each mirror repository you want to offer the content requested from the source repository.
You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy
object. You cannot add a pull secret to a project.
Prerequisites
-
Access to the cluster as a user with the
cluster-admin
role.
Procedure
Configure mirrored repositories by either:
- Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring. Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time.
Using a tool such as
skopeo
to copy images manually from the source directory to the mirrored repository.For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the
skopeo
command as shown in this example:
$ skopeo copy \
    docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \
    docker://example.io/example/ubi-minimal
In this example, you have a container image registry that is named
example.io
with an image repository namedexample
to which you want to copy theubi8/ubi-minimal
image fromregistry.access.redhat.com
. After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository.
- Log in to your OpenShift Container Platform cluster.
Create an
ImageContentSourcePolicy
file (for example,registryrepomirror.yaml
), replacing the source and mirrors with your own registry and repository pairs and images:
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ubi8repo
spec:
  repositoryDigestMirrors:
  - mirrors:
    - example.io/example/ubi-minimal
    source: registry.access.redhat.com/ubi8/ubi-minimal
  - mirrors:
    - example.com/example/ubi-minimal
    source: registry.access.redhat.com/ubi8/ubi-minimal
Create the new
ImageContentSourcePolicy
object:$ oc create -f registryrepomirror.yaml
After the
ImageContentSourcePolicy
object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository.
To check that the mirrored configuration settings are applied, do the following on one of the nodes.
List your nodes:
$ oc get node
Example output
NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-137-44.ec2.internal    Ready                      worker   7m    v1.18.3
ip-10-0-138-148.ec2.internal   Ready                      master   11m   v1.18.3
ip-10-0-139-122.ec2.internal   Ready                      master   11m   v1.18.3
ip-10-0-147-35.ec2.internal    Ready,SchedulingDisabled   worker   7m    v1.18.3
ip-10-0-153-12.ec2.internal    Ready                      worker   7m    v1.18.3
ip-10-0-154-10.ec2.internal    Ready                      master   11m   v1.18.3
You can see that scheduling on each worker node is disabled as the change is being applied.
Start the debugging process to access the node:
$ oc debug node/ip-10-0-147-35.ec2.internal
Example output
Starting pod/ip-10-0-147-35ec2internal-debug ...
To use host binaries, run `chroot /host`
Access the node’s files:
sh-4.2# chroot /host
Check the
/etc/containers/registries.conf
file to make sure the changes were made:sh-4.2# cat /etc/containers/registries.conf
Example output
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  location = "registry.access.redhat.com/ubi8/"
  insecure = false
  blocked = false
  mirror-by-digest-only = true
  prefix = ""

  [[registry.mirror]]
    location = "example.io/example/ubi8-minimal"
    insecure = false

  [[registry.mirror]]
    location = "example.com/example/ubi8-minimal"
    insecure = false
Pull an image digest to the node from the source and check if it is resolved by the mirror.
ImageContentSourcePolicy
objects support image digests only, not image tags.sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6
Troubleshooting repository mirroring
If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem.
- The first working mirror is used to supply the pulled image.
- The main registry is only used if no other mirror works.
-
From the system context, the
Insecure
flags are used as fallback. -
The format of the
/etc/containers/registries.conf
file has changed recently. It is now version 2 and in TOML format.
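If you need to confirm which registry actually served an image, you can pull by digest with debug logging on a node and search the output for your mirror host name. In this sketch, example.io stands in for your own mirror and the digest is the one used earlier in this section:
sh-4.2# podman pull --log-level=debug \
    registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 2>&1 \
    | grep example.io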
Legal Notice
Copyright © 2024 Red Hat, Inc.
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.