Chapter 17. Upgrading your cluster
After you create Red Hat OpenShift Container Platform clusters that you want to manage with Red Hat Advanced Cluster Management for Kubernetes, you can use the Red Hat Advanced Cluster Management for Kubernetes console to upgrade those clusters to the latest minor version that is available in the version channel that the managed cluster uses.
In a connected environment, updates are automatically identified, and a notification is displayed in the Red Hat Advanced Cluster Management console for each cluster that requires an upgrade.
The process for upgrading your clusters in a disconnected environment requires some additional steps to configure and mirror the required release images. It uses the operator for Red Hat OpenShift Update Service to identify the upgrades. If you are in a disconnected environment, see Upgrading disconnected clusters for the required steps.
Note: To upgrade to a major version, you must verify that you meet all of the prerequisites for upgrading to that version. You must update the version channel on the managed cluster before you can upgrade the cluster with the console. After you update the version channel on the managed cluster, the Red Hat Advanced Cluster Management for Kubernetes console displays the latest versions that are available for the upgrade.
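If you prefer to update the version channel from the managed cluster command line instead of using the console, a patch similar to the following minimal sketch can be used. The channel name stable-4.8 is only an illustration; substitute the channel that matches your target version.

# Set the version channel on the managed cluster
oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4.8"}}'
# Confirm that the channel was updated
oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'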
Important: You cannot upgrade Red Hat OpenShift Kubernetes Service managed clusters or OpenShift Container Platform managed clusters on Red Hat OpenShift Dedicated by using the Red Hat Advanced Cluster Management for Kubernetes console.
This upgrade method works only for OpenShift Container Platform managed clusters that are in the Ready state.
To upgrade your cluster in a connected environment, complete the following steps:
- From the navigation menu, navigate to Infrastructure > Clusters. If an upgrade is available, it is shown in the Distribution version column.
- Select the clusters that you want to upgrade. Remember: A cluster must be in the Ready state and must be a Red Hat OpenShift Container Platform cluster to be upgraded with the console.
- Select Upgrade.
- Select the new version of each cluster.
- Select Upgrade.
If your cluster upgrade fails, the Operator generally retries the upgrade a few times, stops, and reports the status of the failing component. In some cases, the upgrade process continues to cycle through attempts to complete the process. Rolling your cluster back to a previous version following a failed upgrade is not supported. Contact Red Hat support for assistance if your cluster upgrade fails.
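If you want to follow the progress of an upgrade from the managed cluster command line, the following standard OpenShift CLI calls are one way to do so; this is a sketch, not a required part of the procedure:

# Show the current version, the target version, and the overall upgrade status
oc get clusterversion
# Show detailed conditions, including messages from a failing component
oc describe clusterversion version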
17.1. Selecting a channel
You can use the Red Hat Advanced Cluster Management console to select a channel for your cluster upgrades on OpenShift Container Platform version 4.6, or later. After selecting a channel, you are automatically reminded of cluster upgrades that are available for both z-stream versions (4.8.1 > 4.8.2 > 4.8.3, and so on) and y-stream versions (4.7 > 4.8, and so on).
To select a channel for your cluster, complete the following steps:
- From the Red Hat Advanced Cluster Management navigation, select Infrastructure > Clusters.
- Select the name of the cluster that you want to change to view the Cluster details page. If a different channel is available for the cluster, an edit icon is displayed in the Channel field.
- Click the edit icon to modify the setting in the field.
- Select a channel in the New channel field.
- Click Save to commit your change.
You can find the reminders for the available channel updates in the Cluster details page of the cluster.
17.2. Upgrading disconnected clusters
You can use Red Hat OpenShift Update Service with Red Hat Advanced Cluster Management for Kubernetes to upgrade your clusters in a disconnected environment.
In some cases, security concerns prevent clusters from being connected directly to the Internet. This makes it difficult to know when upgrades are available, and how to process those upgrades. Configuring OpenShift Update Service can help.
OpenShift Update Service is a separate operator and operand that monitors the available versions of your managed clusters in a disconnected environment and makes those versions available so that you can upgrade your clusters. After OpenShift Update Service is configured, it can perform the following actions:
- Monitor when upgrades are available for your disconnected clusters.
- Identify which updates are mirrored to your local site for upgrading by using the graph data file.
- Notify you that an upgrade is available for your cluster by using the Red Hat Advanced Cluster Management console.
- Prerequisites
- Prepare your disconnected mirror registry
- Deploy the operator for OpenShift Update Service
- Build the graph data init container
- Configure certificate for the mirrored registry
- Deploy the OpenShift Update Service instance
- Deploy a policy to override the default registry (optional)
- Deploy a policy to deploy a disconnected catalog source
- Deploy a policy to change the managed cluster parameter
- Viewing available upgrades
- Selecting a channel
- Upgrading the cluster
17.2.1. Prerequisites
You must have the following prerequisites before you can use OpenShift Update Service to upgrade your disconnected clusters:
- A deployed Red Hat Advanced Cluster Management hub cluster that is running on Red Hat OpenShift Container Platform version 4.6 or later with restricted OLM configured. See Using Operator Lifecycle Manager on restricted networks for details about how to configure restricted OLM.
Tip: Make a note of the catalog source image when you configure restricted OLM.
- An OpenShift Container Platform cluster that is managed by the Red Hat Advanced Cluster Management hub cluster
- Access credentials to a local repository where you can mirror the cluster images. See Mirroring images for a disconnected installation for more information about how to create this repository.
Note: The image for the current version of the cluster that you upgrade must always be available as one of the mirrored images. If an upgrade fails, the cluster reverts to the version of the cluster at the time that the upgrade was attempted.
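To confirm which release image must be present in your mirror registry for the current version, you can query the ClusterVersion resource on the managed cluster. The following is a hedged sketch that assumes the default resource name, version:

# Print the release image that the cluster is currently running
oc get clusterversion version -o jsonpath='{.status.desired.image}{"\n"}'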
17.2.2. Prepare your disconnected mirror registry
You must mirror both the image that you want to upgrade to and the current image that you are upgrading from to your local mirror registry. Complete the following steps to mirror the images:
- Create a script file that contains content that resembles the following example:

UPSTREAM_REGISTRY=quay.io
PRODUCT_REPO=openshift-release-dev
RELEASE_NAME=ocp-release
OCP_RELEASE=4.5.2-x86_64
LOCAL_REGISTRY=$(hostname):5000
LOCAL_SECRET_JSON=/path/to/pull/secret

oc adm -a ${LOCAL_SECRET_JSON} release mirror \
  --from=${UPSTREAM_REGISTRY}/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
  --to=${LOCAL_REGISTRY}/ocp4 \
  --to-release-image=${LOCAL_REGISTRY}/ocp4/release:${OCP_RELEASE}

Replace /path/to/pull/secret with the path to your OpenShift Container Platform pull secret.
- Run the script to mirror the images, configure settings, and separate the release images from the release content.
Tip: You can use the output of the last line of this script when you create your ImageContentSourcePolicy.
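The exact ImageContentSourcePolicy content is printed at the end of the oc adm release mirror output. The following sketch only illustrates its general shape; the name ocp4-release-mirror is a placeholder, and <local_registry> stands in for your mirror host and port:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ocp4-release-mirror
spec:
  repositoryDigestMirrors:
  - mirrors:
    - <local_registry>/ocp4
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - <local_registry>/ocp4
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev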
17.2.3. Deploy the operator for OpenShift Update Service
To deploy the operator for OpenShift Update Service in your OpenShift Container Platform environment, complete the following steps:
- On the hub cluster, access the OpenShift Container Platform operator hub.
- Deploy the operator by selecting Red Hat OpenShift Update Service Operator. Update the default values, if necessary. The deployment of the operator creates a new project named openshift-cincinnati. Wait for the installation of the operator to finish.
Tip: You can check the status of the installation by entering the oc get pods command on your OpenShift Container Platform command line. Verify that the operator is in the running state.
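For example, a check similar to the following sketch confirms that the operator pods started in the project that the operator creates:

# List the pods for the OpenShift Update Service operator and verify that they are Running
oc get pods -n openshift-cincinnati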
17.2.4. Build the graph data init container
OpenShift Update Service uses graph data information to determine the available upgrades. In a connected environment, OpenShift Update Service pulls the graph data information for available upgrades directly from the Cincinnati graph data GitHub repository. Because you are configuring a disconnected environment, you must make the graph data available in a local repository by using an init container. Complete the following steps to create a graph data init container:
- Clone the graph data Git repository by entering the following command:
git clone https://github.com/openshift/cincinnati-graph-data
- Create a file that contains the information for your graph data init container. You can find this sample Dockerfile in the cincinnati-operator GitHub repository. The contents of the file are shown in the following sample:

FROM registry.access.redhat.com/ubi8/ubi:8.1

RUN curl -L -o cincinnati-graph-data.tar.gz https://github.com/openshift/cincinnati-graph-data/archive/master.tar.gz

RUN mkdir -p /var/lib/cincinnati/graph-data/

CMD exec /bin/bash -c "tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati/graph-data/ --strip-components=1"

In this example:
- The FROM value is the external registry where OpenShift Update Service finds the images.
- The RUN commands create the directory and package the upgrade files.
- The CMD command copies the package file to the local repository and extracts the files for an upgrade.
- Run the following commands to build the graph data init container:

podman build -f <path_to_Dockerfile> -t ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container:latest
podman push ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container:latest --authfile=/path/to/pull_secret.json
Replace path_to_Dockerfile with the path to the file that you created in the previous step.
Replace ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container with the path to your local graph data init container.
Replace /path/to/pull_secret with the path to your pull secret file.
Note: You can also replace podman in the commands with docker, if you don't have podman installed.
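If you want to confirm that the image reached your disconnected registry after the push, an inspection similar to the following sketch can be used; it assumes that skopeo is installed, and pulling the image with podman is an alternative check:

# Inspect the pushed graph data init container in the disconnected registry
skopeo inspect docker://${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container:latest --authfile=/path/to/pull_secret.json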
17.2.5. Configure certificate for the mirrored registry
If you are using a secure external container registry to store your mirrored OpenShift Container Platform release images, OpenShift Update Service requires access to this registry to build an upgrade graph. Complete the following steps to configure your CA certificate to work with the OpenShift Update Service pod:
- Find the OpenShift Container Platform external registry API, which is located in image.config.openshift.io. This is where the external registry CA certificate is stored. See Configuring additional trust stores for image registry access in the OpenShift Container Platform documentation for more information.
- Create a ConfigMap in the openshift-config namespace. Add your CA certificate under the key cincinnati-registry. OpenShift Update Service uses this setting to locate your certificate:

apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca
data:
  cincinnati-registry: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
- Edit the cluster resource in the image.config.openshift.io API to set the additionalTrustedCA field to the name of the ConfigMap that you created:

oc patch image.config.openshift.io cluster -p '{"spec":{"additionalTrustedCA":{"name":"trusted-ca"}}}' --type merge

Replace trusted-ca with the name of your new ConfigMap.
The OpenShift Update Service Operator watches the image.config.openshift.io API and the ConfigMap that you created in the openshift-config namespace for changes, and restarts the deployment if the CA certificate has changed.
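If you prefer to create the ConfigMap from the command line rather than pasting the certificate into YAML, a command similar to the following sketch can be used; the file name registry-ca.crt is only an example of where your registry CA certificate might be stored:

# Create the ConfigMap in openshift-config with the CA certificate under the cincinnati-registry key
oc create configmap trusted-ca -n openshift-config --from-file=cincinnati-registry=/path/to/registry-ca.crt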
17.2.6. Deploy the OpenShift Update Service instance
Deploy the OpenShift Update Service instance on your hub cluster. The instance is located where the images for the cluster upgrades are mirrored, and it makes information about the mirrored upgrades available to the disconnected managed cluster. Complete the following steps to deploy the instance:
- If you do not want to use the default namespace of the operator, which is openshift-cincinnati, create a namespace for your OpenShift Update Service instance:
- In the OpenShift Container Platform hub cluster console navigation menu, select Administration > Namespaces.
- Select Create Namespace.
- Add the name of your namespace, and any other information for your namespace.
- Select Create to create the namespace.
- In the Installed Operators section of the OpenShift Container Platform console, select Red Hat OpenShift Update Service Operator.
- Select Create Instance in the menu.
- Paste the contents from your OpenShift Update Service instance. Your YAML instance might resemble the following manifest:

apiVersion: cincinnati.openshift.io/v1beta2
kind: Cincinnati
metadata:
  name: openshift-update-service-instance
  namespace: openshift-cincinnati
spec:
  registry: <registry_host_name>:<port>
  replicas: 1
  repository: ${LOCAL_REGISTRY}/ocp4/release
  graphDataImage: '<host_name>:<port>/cincinnati-graph-data-container'
Replace the spec.registry value with the path to your local disconnected registry for your images.
Replace the spec.graphDataImage value with the path to your graph data init container. Tip: This is the same value that you used when you ran the podman push command to push your graph data init container.
- Select Create to create the instance.
- From the hub cluster CLI, enter the oc get pods command to view the status of the instance creation. It might take a while, but the process is complete when the result of the command shows that the instance and the operator are running.
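After the instance is running, you can also note the route that exposes the OpenShift Update Service operand, because you need that URL later when you update the ClusterVersion upstream parameter. The following sketch assumes the default openshift-cincinnati namespace:

# List the routes for the OpenShift Update Service instance; note the HOST/PORT value
oc get routes -n openshift-cincinnati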
17.2.7. Deploy a policy to override the default registry (optional)
Note: The steps in this section only apply if you have mirrored your releases into your mirrored registry.
OpenShift Container Platform has a default image registry value that specifies where it finds the upgrade packages. In a disconnected environment, you can create a policy to replace that value with the path to your local image registry where you mirrored your release images.
For these steps, the policy is named ImageContentSourcePolicy. Complete the following steps to create the policy:
- Log in to the OpenShift Container Platform environment of your hub cluster.
- In the OpenShift Container Platform navigation, select Administration > Custom Resource Definitions.
- Select the Instances tab.
- Select the name of the ImageContentSourcePolicy that you created when you set up your disconnected OLM to view the contents.
- Select the YAML tab to view the content in YAML format.
format. - Copy the entire contents of the ImageContentSourcePolicy.
- From the Red Hat Advanced Cluster Management console, select Governance > Create policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the following YAML content into the window to create a custom policy:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-pod
  namespace: default
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-pod-sample-nginx-pod
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Pod
            metadata:
              name: sample-nginx-pod
              namespace: default
            status:
              phase: Running
        remediationAction: inform
        severity: low
  remediationAction: enforce
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod
  namespace: default
placementRef:
  name: placement-policy-pod
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-pod
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-pod
  namespace: default
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions: []  # selects all clusters if not specified
- Replace the content inside the objectDefinition section of the template with content that is similar to the following content to add the settings for your ImageContentSourcePolicy:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ImageContentSourcePolicy
spec:
  repositoryDigestMirrors:
  - mirrors:
    - <path-to-local-mirror>
    source: registry.redhat.io

- Replace path-to-local-mirror with the path to your local mirror repository.
- Tip: You can find your path to your local mirror by entering the oc adm release mirror command.
- Select the box for Enforce if supported.
- Select Create to create the policy.
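After the policy is enforced, you can verify from the managed cluster command line that the ImageContentSourcePolicy reached the cluster; the following is a sketch of that check:

# Confirm that the mirror settings exist on the managed cluster
oc get imagecontentsourcepolicy -o yaml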
17.2.8. Deploy a policy to deploy a disconnected catalog source
Push the CatalogSource policy to the managed cluster to change the default location from a connected location to your disconnected local registry.
- In the Red Hat Advanced Cluster Management console, select Infrastructure > Clusters.
- In the list of clusters, find the managed cluster that you want to receive the policy.
- Note the value of the name label for the managed cluster. The label format is name=managed-cluster-name. This value is used when pushing the policy.
. This value is used when pushing the policy. - In the Red Hat Advanced Cluster Management console menu, select Governance > Create policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the following YAML content into the window to create a custom policy:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-pod
  namespace: default
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-pod-sample-nginx-pod
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Pod
            metadata:
              name: sample-nginx-pod
              namespace: default
            status:
              phase: Running
        remediationAction: inform
        severity: low
  remediationAction: enforce
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod
  namespace: default
placementRef:
  name: placement-policy-pod
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-pod
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-pod
  namespace: default
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions: []  # selects all clusters if not specified
- Add the following content to the policy:

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
- Add the following content:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <registry_host_name>:<port>/olm/redhat-operators:v1
  displayName: My Operator Catalog
  publisher: grpc
Replace the value of spec.image with the path to your local restricted catalog source image.
- In the Red Hat Advanced Cluster Management console navigation, select Infrastructure > Clusters to check the status of the managed cluster. When the policy is applied, the cluster status is ready.
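You can also confirm from the managed cluster command line that the disconnected catalog source exists and that the default sources are disabled. The following sketch uses the my-operator-catalog name from the example policy content:

# Check the catalog source that the policy created
oc get catalogsource my-operator-catalog -n openshift-marketplace
# Confirm that the default catalog sources are disabled
oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'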
17.2.9. Deploy a policy to change the managed cluster parameter
Push the ClusterVersion policy to the managed cluster to change the default location where it retrieves its upgrades.
- From the managed cluster, confirm that the ClusterVersion upstream parameter is currently the default public OpenShift Update Service operand by entering the following command:

oc get clusterversion -o yaml

The returned content might resemble the following content:

apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: ClusterVersion
  [..]
  spec:
    channel: stable-4.4
    upstream: https://api.openshift.com/api/upgrades_info/v1/graph
- From the hub cluster, identify the route URL to the OpenShift Update Service operand by entering the following command:

oc get routes

Tip: Note this value for later steps.
- In the hub cluster Red Hat Advanced Cluster Management console menu, select Governance > Create a policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the following YAML content into the window to create a custom policy:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-pod
  namespace: default
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-pod-sample-nginx-pod
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Pod
            metadata:
              name: sample-nginx-pod
              namespace: default
            status:
              phase: Running
        remediationAction: inform
        severity: low
  remediationAction: enforce
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod
  namespace: default
placementRef:
  name: placement-policy-pod
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-pod
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-pod
  namespace: default
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions: []  # selects all clusters if not specified
- Add the following content to policy.spec in the policy section:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.4
  upstream: https://example-cincinnati-policy-engine-uri/api/upgrades_info/v1/graph
Replace the value of spec.upstream with the path to your hub cluster OpenShift Update Service operand.
Tip: You can complete the following steps to determine the path to the operand:
- Run the oc get routes -A command on the hub cluster.
- Find the route to cincinnati. The path to the operand is the value in the HOST/PORT field.
- In the managed cluster CLI, confirm that the upstream parameter in the ClusterVersion is updated with the local hub cluster OpenShift Update Service URL by entering:

oc get clusterversion -o yaml

Verify that the results resemble the following content:

apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: ClusterVersion
  [..]
  spec:
    channel: stable-4.4
    upstream: https://<hub-cincinnati-uri>/api/upgrades_info/v1/graph
17.2.10. Viewing available upgrades
You can view a list of available upgrades for your managed cluster by completing the following steps:
- Log in to your Red Hat Advanced Cluster Management console.
- In the navigation menu, select Infrastructure > Clusters.
- Select a cluster that is in the Ready state.
- From the Actions menu, select Upgrade cluster.
- Verify that the optional upgrade paths are available.
Note: No available upgrade versions are shown if the current version is not mirrored into the local image repository.
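You can also list the available upgrades from the managed cluster command line, which is useful for confirming that the mirrored versions are served by your OpenShift Update Service instance. This is a standard OpenShift CLI call, shown here as a sketch:

# Show the current version, the channel, and the list of available updates
oc adm upgrade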
17.2.11. Selecting a channel
You can use the Red Hat Advanced Cluster Management console to select a channel for your cluster upgrades on OpenShift Container Platform version 4.6, or later. Those versions must be available on the mirror registry. Complete the steps in Selecting a channel to specify a channel for your upgrades.
17.2.12. Upgrading the cluster
After configuring the disconnected registry, Red Hat Advanced Cluster Management and OpenShift Update Service use the disconnected registry to determine if upgrades are available. If no available upgrades are displayed, make sure that you have the release image of the current level of the cluster and at least one later level mirrored in the local repository. If the release image for the current version of the cluster is not available, no upgrades are available.
Complete the following steps to upgrade:
- In the Red Hat Advanced Cluster Management console, select Infrastructure > Clusters.
- Find the cluster that you want to upgrade, and check whether an upgrade is available.
- If an upgrade is available, the Distribution version column for the cluster indicates that an upgrade is available.
- Select the Options menu for the cluster, and select Upgrade cluster.
- Select the target version for the upgrade, and select Upgrade.
The managed cluster is updated to the selected version.
If your cluster upgrade fails, the Operator generally retries the upgrade a few times, stops, and reports the status of the failing component. In some cases, the upgrade process continues to cycle through attempts to complete the process. Rolling your cluster back to a previous version following a failed upgrade is not supported. Contact Red Hat support for assistance if your cluster upgrade fails.
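When an upgrade fails, the failing component is usually visible in the cluster Operators of the managed cluster. The following sketch shows standard OpenShift CLI calls that can help you gather that information before you contact Red Hat support:

# Identify cluster Operators that are degraded or not yet upgraded
oc get clusteroperators
# Review the detailed upgrade conditions and history
oc describe clusterversion version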