Chapter 12. Updating managed clusters in a disconnected environment with the Topology Aware Lifecycle Manager
You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OpenShift Container Platform managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters.
12.1. Updating clusters in a disconnected environment
You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps Zero Touch Provisioning (ZTP) and Topology Aware Lifecycle Manager (TALM).
12.1.1. Setting up the environment
TALM can perform both platform and Operator updates.
Before you can use TALM to update your disconnected clusters, you must mirror both the platform image and the Operator images that you want to update to in your mirror registry. Complete the following steps to mirror the images:
For platform updates, you must perform the following steps:
Mirror the desired OpenShift Container Platform image repository. Ensure that the desired platform image is mirrored by following the "Mirroring the OpenShift Container Platform image repository" procedure linked in the Additional resources. Save the contents of the imageContentSources section in the imageContentSources.yaml file:
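The saved file pairs each upstream repository with its mirror. The following is a representative shape, with the mirror registry host as a placeholder for your environment:
imageContentSources:
  - mirrors:
    - <local_registry>/ocp4/openshift-release-dev
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - <local_registry>/ocp4/openshift-release-dev
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev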
Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps:
Specify the desired OpenShift Container Platform tag by running the following command:
$ OCP_RELEASE_NUMBER=<release_version>
Specify the architecture of the cluster by running the following command:
$ ARCHITECTURE=<cluster_architecture> 1
- 1
- Specify the architecture of the cluster, such as x86_64, aarch64, s390x, or ppc64le.
Get the release image digest from Quay by running the following command:
$ DIGEST="$(oc adm release info quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_NUMBER}-${ARCHITECTURE} | sed -n 's/Pull From: .*@//p')"
Set the digest algorithm by running the following command:
$ DIGEST_ALGO="${DIGEST%%:*}"
Set the encoded digest by running the following command:
$ DIGEST_ENCODED="${DIGEST#*:}"
Get the image signature from the mirror.openshift.com website by running the following command:
$ SIGNATURE_BASE64=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${DIGEST_ALGO}=${DIGEST_ENCODED}/signature-1" | base64 -w0 && echo)
Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands:
$ cat >checksum-${OCP_RELEASE_NUMBER}.yaml <<EOF
${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}
EOF
Prepare the update graph. You have two options:
Use the OpenShift Update Service.
For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container.
Make a local copy of the upstream graph. Host the update graph on an http or https server in the disconnected environment that has access to the managed cluster. To download the update graph, use the following command:
$ curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.15 -o ~/upgrade-graph_stable-4.15
For Operator updates, you must perform the following task:
- Mirror the Operator catalogs. Ensure that the desired Operator images are mirrored by following the procedure in the "Mirroring Operator catalogs for use with disconnected clusters" section.
12.1.2. Performing a platform update
You can perform a platform update with the TALM.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update GitOps Zero Touch Provisioning (ZTP) to the latest version.
- Provision one or more managed clusters with GitOps ZTP.
- Mirror the desired image repository.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
Create a PolicyGenTemplate CR for the platform update:
Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file.
Example of PolicyGenTemplate for platform update
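The following is a minimal sketch of the CR. It assumes the standard GitOps ZTP source CRs (ImageSignature.yaml, DisconnectedICSP.yaml, and ClusterVersion.yaml); the registry host, graph server URL, and target version are placeholders that must match your environment. The callout numbers correspond to the notes that follow:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-upgrade"
  namespace: "ztp-group-du-sno"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  remediationAction: inform
  sourceFiles:
    - fileName: ImageSignature.yaml # 1
      policyName: "platform-upgrade-prep"
      binaryData:
        ${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64} # 2
    - fileName: DisconnectedICSP.yaml
      policyName: "platform-upgrade-prep"
      metadata:
        name: disconnected-internal-icsp-for-ocp
      spec:
        repositoryDigestMirrors: # 3
          - mirrors:
            - <local_registry>/ocp4/openshift-release-dev
            source: quay.io/openshift-release-dev/ocp-release
          - mirrors:
            - <local_registry>/ocp4/openshift-release-dev
            source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    - fileName: ClusterVersion.yaml # 4
      policyName: "platform-upgrade"
      metadata:
        name: version
      spec:
        channel: "stable-4.15"
        upstream: http://<graph_server>/upgrade-graph_stable-4.15
        desiredUpdate:
          version: <target_version> # placeholder target release version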
- 1
- The ConfigMap CR contains the signature of the desired release image to update to.
- 2
- Shows the image signature of the desired OpenShift Container Platform release. Get the signature from the checksum-${OCP_RELEASE_NUMBER}.yaml file that you saved when following the procedures in the "Setting up the environment" section.
- 3
- Shows the mirror repository that contains the desired OpenShift Container Platform image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the "Setting up the environment" section.
- 4
- Shows the ClusterVersion CR that triggers the update. The channel, upstream, and desiredVersion fields are all required for image pre-caching.
The PolicyGenTemplate CR generates two policies:
- The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment.
- The du-upgrade-platform-upgrade policy is used to perform the platform upgrade.
Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the GitOps ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository.
ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.
Check the created policies by running the following command:
$ oc get policies -A | grep platform-upgrade
Create the ClusterGroupUpgrade CR for the platform update with the spec.enable field set to false.
Save the content of the platform update ClusterGroupUpgrade CR with the du-upgrade-platform-upgrade-prep and the du-upgrade-platform-upgrade policies and the target clusters to the cgu-platform-upgrade.yml file, as shown in the following example:
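A sketch of the CR, assuming a single target cluster named spoke1 (replace with your managed cluster names):
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-platform-upgrade
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-platform-upgrade-prep
  - du-upgrade-platform-upgrade
  preCaching: false
  clusters:
  - spoke1
  remediationStrategy:
    maxConcurrency: 1
  enable: false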
Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command:
$ oc apply -f cgu-platform-upgrade.yml
Optional: Pre-cache the images for the platform update.
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \
  --patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:
$ oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'
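The command prints a map of cluster name to pre-caching state; the names and states below are illustrative:
{"spoke1":"Active","spoke2":"Succeeded"}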
Start the platform update:
Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \
  --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
12.1.3. Performing an Operator update
You can perform an Operator update with the TALM.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update GitOps Zero Touch Provisioning (ZTP) to the latest version.
- Provision one or more managed clusters with GitOps ZTP.
- Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
Update the PolicyGenTemplate CR for the Operator update.
Update the du-upgrade PolicyGenTemplate CR with the following additional contents in the du-upgrade.yaml file:
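A sketch of the additional content, assuming the standard DefaultCatsrc.yaml source CR; the index image location is a placeholder for your mirror registry, and the callout numbers correspond to the notes that follow:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-upgrade"
  namespace: "ztp-group-du-sno"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  remediationAction: inform
  sourceFiles:
    - fileName: DefaultCatsrc.yaml
      remediationAction: inform
      policyName: "operator-catsrc-policy"
      metadata:
        name: redhat-operators-disconnected
      spec:
        displayName: Red Hat Operators Catalog
        image: <local_registry>/olm/redhat-operators-disconnected:v4.15 # 1
        updateStrategy: # 2
          registryPoll:
            interval: 1h
      status:
        connectionState:
          lastObservedState: READY # 3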
- 1
- The index image URL contains the desired Operator images. If the index image is always pushed to the same image name and tag, this change is not needed.
- 2
- Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. You can set the registryPoll.interval field to a shorter interval to expedite the update, but shorter intervals increase computational load. To counteract this, you can restore registryPoll.interval to the default value after the update completes.
- 3
- Last observed state of the catalog connection. The READY value ensures that the CatalogSource policy is ready, indicating that the index pod is pulled and running. This way, TALM upgrades the Operators based on up-to-date policy compliance states.
This update generates one policy, du-upgrade-operator-catsrc-policy, to update the redhat-operators-disconnected catalog source with the new index image that contains the desired Operator images.
Note: If you want to use image pre-caching for Operators and there are Operators from a catalog source other than redhat-operators-disconnected, you must perform the following tasks:
- Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source.
- Prepare a separate subscription policy for the desired Operators that are from the different catalog source.
For example, the desired SRIOV-FEC Operator is available in the certified-operators catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies, du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy:
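A sketch of the additional sourceFiles entries, assuming the certified-operators index is mirrored to a local registry; the subscription source CR file name depends on your ZTP source CR set (AcceleratorsSubscription.yaml is assumed here):
    - fileName: DefaultCatsrc.yaml
      remediationAction: inform
      policyName: "fec-catsrc-policy"
      metadata:
        name: certified-operators
      spec:
        displayName: Intel SRIOV-FEC Operator
        image: <local_registry>/olm/far-edge-sriov-fec:<version> # placeholder mirrored index image
        updateStrategy:
          registryPoll:
            interval: 10m
    - fileName: AcceleratorsSubscription.yaml # assumed source CR file name
      policyName: "subscriptions-fec-policy"
      spec:
        channel: "stable"
        source: certified-operators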
Remove the specified subscription channels in the common PolicyGenTemplate CR, if they exist. The default subscription channels from the GitOps ZTP image are used for the update.
Note: The default channel for the Operators applied through GitOps ZTP 4.15 is stable, except for the performance-addon-operator. As of OpenShift Container Platform 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator. For the 4.10 release, the default channel for PAO is v4.10. You can also specify the default channels in the common PolicyGenTemplate CR.
Push the PolicyGenTemplate CR updates to the GitOps ZTP Git repository.
ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.
Check the created policies by running the following command:
$ oc get policies -A | grep -E "catsrc-policy|subscription"
Apply the required catalog source updates before starting the Operator update.
Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file:
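A sketch of the CR, with spoke1 as a placeholder cluster name:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-operator-upgrade-prep
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - du-upgrade-operator-catsrc-policy
  remediationStrategy:
    maxConcurrency: 1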
Apply the policy to the hub cluster by running the following command:
$ oc apply -f cgu-operator-upgrade-prep.yml
Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies -A | grep -E "catsrc-policy"
Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false.
Save the content of the Operator update ClusterGroupUpgrade CR with the du-upgrade-operator-catsrc-policy policy and the subscription policies created from the common PolicyGenTemplate and the target clusters to the cgu-operator-upgrade.yml file, as shown in the following example:
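A sketch with a placeholder cluster name; the callout numbers correspond to the notes that follow:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-operator-upgrade
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-operator-catsrc-policy # 1
  - common-subscriptions-policy # 2
  preCaching: false
  clusters:
  - spoke1
  remediationStrategy:
    maxConcurrency: 1
  enable: false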
- 1
- The policy is needed by the image pre-caching feature to retrieve the Operator images from the catalog source.
- 2
- The policy contains the Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates, all Operator subscriptions are grouped into the common-subscriptions-policy policy.
Note: One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, as in the SRIOV-FEC Operator example, another ClusterGroupUpgrade CR must be created with the du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies to pre-cache and update the SRIOV-FEC Operator images.
Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command:
$ oc apply -f cgu-operator-upgrade.yml
Optional: Pre-cache the images for the Operator update.
Before starting image pre-caching, verify that the subscription policy is NonCompliant at this point by running the following command:
$ oc get policy common-subscriptions-policy -n <policy_namespace>
Example output
NAME                          REMEDIATION ACTION   COMPLIANCE STATE   AGE
common-subscriptions-policy   inform               NonCompliant       27d
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
  --patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:
$ oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'
Check if the pre-caching is completed before starting the update by running the following command:
$ oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq
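The output is similar to the following; the timestamps and messages are illustrative, and pre-caching is complete when the PrecachingSucceeded condition reports a true status:
[
  {
    "lastTransitionTime": "2024-01-01T00:00:01Z",
    "message": "All selected clusters are valid",
    "reason": "ClusterSelectionCompleted",
    "status": "True",
    "type": "ClustersSelected"
  },
  {
    "lastTransitionTime": "2024-01-01T00:00:02Z",
    "message": "Precaching is completed",
    "reason": "PrecachingCompleted",
    "status": "True",
    "type": "PrecachingSucceeded"
  }
]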
Start the Operator update.
Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
  --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
12.1.3.1. Troubleshooting missed Operator updates due to out-of-date policy compliance states
In some scenarios, Topology Aware Lifecycle Manager (TALM) might miss Operator updates due to an out-of-date policy compliance state.
After a catalog source update, it takes time for the Operator Lifecycle Manager (OLM) to update the subscription status. The status of the subscription policy might continue to show as compliant while TALM decides whether remediation is needed. As a result, the Operator specified in the subscription policy does not get upgraded.
To avoid this scenario, add another catalog source configuration to the PolicyGenTemplate and specify this configuration in the subscription for any Operators that require an update.
Procedure
Add a catalog source configuration in the PolicyGenTemplate resource:
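A sketch of the additional entry, assuming the DefaultCatsrc.yaml source CR; the new catalog source name and the index image location are placeholders:
    - fileName: DefaultCatsrc.yaml
      remediationAction: inform
      policyName: "operator-catsrc-policy"
      metadata:
        name: redhat-operators-disconnected-v2 # additional catalog source with a new name
      spec:
        displayName: Red Hat Operators Catalog v2
        image: <local_registry>/olm/redhat-operators-disconnected:<version>
        updateStrategy:
          registryPoll:
            interval: 1h
      status:
        connectionState:
          lastObservedState: READY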
Update the Subscription resource to point to the new configuration for Operators that require an update:
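A minimal sketch, with the Subscription name and namespace as placeholders; the callout number corresponds to the note that follows:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: operator-subscription
  namespace: operator-namespace
spec:
  channel: "stable"
  source: redhat-operators-disconnected-v2 # 1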
- 1
- Enter the name of the additional catalog source configuration that you defined in the PolicyGenTemplate resource.
12.1.4. Performing a platform and an Operator update together
You can perform a platform and an Operator update at the same time.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update GitOps Zero Touch Provisioning (ZTP) to the latest version.
- Provision one or more managed clusters with GitOps ZTP.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
- Create the PolicyGenTemplate CR for the updates by following the steps described in the "Performing a platform update" and "Performing an Operator update" sections.
Apply the prep work for the platform and the Operator update.
Save the content of the ClusterGroupUpgrade CR with the policies for the platform update preparation work, the catalog source updates, and the target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example:
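A sketch of the CR, with spoke1 as a placeholder cluster name:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-platform-operator-upgrade-prep
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-platform-upgrade-prep
  - du-upgrade-operator-catsrc-policy
  clusters:
  - spoke1
  remediationStrategy:
    maxConcurrency: 1
  enable: true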
Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command:
$ oc apply -f cgu-platform-operator-upgrade-prep.yml
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
Create the ClusterGroupUpgrade CR for the platform and the Operator update with the spec.enable field set to false.
Save the contents of the platform and Operator update ClusterGroupUpgrade CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example:
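A sketch of the CR; the name cgu-du-upgrade matches the commands used later in this procedure, and spoke1 is a placeholder cluster name:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-du-upgrade
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-platform-upgrade
  - du-upgrade-operator-catsrc-policy
  - common-subscriptions-policy
  preCaching: false
  clusters:
  - spoke1
  remediationStrategy:
    maxConcurrency: 1
  enable: false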
Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command:
$ oc apply -f cgu-platform-operator-upgrade.yml
Optional: Pre-cache the images for the platform and the Operator update.
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
  --patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster:
$ oc get jobs,pods -n openshift-talm-pre-cache
Check if the pre-caching is completed before starting the update by running the following command:
$ oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'
Start the platform and Operator update.
Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
  --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
Note: The CRs for the platform and Operator updates can be created from the beginning with spec.enable set to true. In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR.
Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and a managed cluster view, to help complete the procedures. Setting the afterCompletion.deleteObjects field to true deletes all these resources after the updates complete.
12.1.5. Removing Performance Addon Operator subscriptions from deployed clusters
In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 or later, these functions are part of the Node Tuning Operator.
Do not install the Performance Addon Operator on clusters running OpenShift Container Platform 4.11 or later. If you upgrade to OpenShift Container Platform 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator.
You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator.
The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml. To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml.
If you install Performance Addon Operator 4.10.3-5 or later on OpenShift Container Platform 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OpenShift Container Platform 4.11 clusters.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD.
- Update to OpenShift Container Platform 4.11 or later.
- Log in as a user with cluster-admin privileges.
Procedure
Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file.
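A sketch of the changed sourceFiles entries, assuming the Performance Addon Operator source CR file names from the ZTP container (PaoSubscriptionNS.yaml, PaoSubscriptionOperGroup.yaml, and PaoSubscription.yaml; verify the names against your source CR set):
- fileName: PaoSubscriptionNS.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave
- fileName: PaoSubscriptionOperGroup.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave
- fileName: PaoSubscription.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave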
- Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to Non-Compliant.
- Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the "Additional resources" section.
Monitor the process. When the status of the common-subscriptions-policy policy for a target cluster is Compliant, the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command:
$ oc get policy -n ztp-common common-subscriptions-policy
- Delete the Performance Addon Operator namespace, Operator group, and subscription CRs from .spec.sourceFiles in the common-ranGen.yaml file.
- Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant.
12.1.6. Pre-caching user-specified images with TALM on single-node OpenShift clusters
You can pre-cache application-specific workload images on single-node OpenShift clusters before upgrading your applications.
You can specify the configuration options for the pre-caching jobs by using the following custom resources (CRs):
- PreCachingConfig CR
- ClusterGroupUpgrade CR
All fields in the PreCachingConfig CR are optional.
Example PreCachingConfig CR
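A sketch of the CR with all optional fields populated; the registry paths, package names, and values are placeholders, and the callout numbers correspond to the notes that follow:
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: exampleconfig
  namespace: exampleconfig-ns
spec:
  overrides: # 1
    platformImage: quay.io/openshift-release-dev/ocp-release@sha256:<digest>
    operatorsIndexes:
      - <local_registry>/olm/redhat-operators-disconnected:v4.15
    operatorsPackagesAndChannels:
      - local-storage-operator: stable
      - ptp-operator: stable
      - sriov-network-operator: stable
  spaceRequired: 30 GiB # 2
  excludePrecachePatterns: # 3
    - aws
    - vsphere
  additionalImages: # 4
    - <local_registry>/custom-workload/image1:latest
    - <local_registry>/custom-workload/image2:latest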
- 1
- By default, TALM automatically populates the platformImage, operatorsIndexes, and operatorsPackagesAndChannels fields from the policies of the managed clusters. You can specify values to override the default TALM-derived values for these fields.
- 2
- Specifies the minimum required disk space on the cluster. If unspecified, TALM defines a default value for OpenShift Container Platform images. The disk space field must include an integer value and the storage unit. For example: 40 GiB, 200 MB, 1 TiB.
- 3
- Specifies the images to exclude from pre-caching based on image name matching.
- 4
- Specifies the list of additional images to pre-cache.
Example ClusterGroupUpgrade CR with PreCachingConfig CR reference
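A minimal sketch; the preCachingConfigRef names must match the PreCachingConfig CR that you created:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu
spec:
  preCaching: true # pre-caching must be enabled for the referenced config to take effect
  preCachingConfigRef: # references the PreCachingConfig CR by name and namespace
    name: exampleconfig
    namespace: exampleconfig-ns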
12.1.6.1. Creating the custom resources for pre-caching
You must create the PreCachingConfig CR before or concurrently with the ClusterGroupUpgrade CR.
Create the PreCachingConfig CR with the list of additional images you want to pre-cache.
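A minimal sketch, with placeholder image references:
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: exampleconfig
  namespace: default # assumed namespace; must match the reference in the ClusterGroupUpgrade CR
spec:
  additionalImages:
    - <local_registry>/custom-workload/image1:latest
    - <local_registry>/custom-workload/image2:latest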
Create a ClusterGroupUpgrade CR with the preCaching field set to true and specify the PreCachingConfig CR created in the previous step:
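A sketch, assuming two single-node OpenShift clusters named sno1 and sno2:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu
  namespace: default
spec:
  clusters:
  - sno1
  - sno2
  preCaching: true
  preCachingConfigRef:
    name: exampleconfig # must match the PreCachingConfig CR created in the previous step
    namespace: default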
Warning: After you install the images on the cluster, you cannot change or delete them.
When you want to start pre-caching the images, apply the ClusterGroupUpgrade CR by running the following command:
$ oc apply -f cgu.yaml
TALM verifies the ClusterGroupUpgrade CR.
From this point, you can continue with the TALM pre-caching workflow.
All sites are pre-cached concurrently.
Verification
Check the pre-caching status on the hub cluster where the ClusterGroupUpgrade CR is applied by running the following command:
$ oc get cgu <cgu_name> -n <cgu_namespace> -oyaml
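The status section of the output reports the pre-caching state per cluster; the following excerpt is illustrative, with placeholder cluster names:
precaching:
  spec:
    platformImage: <platform_image>
  status:
    sno1: Starting
    sno2: Starting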
The pre-caching configurations are validated by checking if the managed policies exist. Valid configurations of the ClusterGroupUpgrade and the PreCachingConfig CRs result in the following statuses:
Example output of valid CRs
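The condition types shown below follow TALM status reporting; the timestamps and messages are illustrative:
- lastTransitionTime: "2024-01-01T00:00:01Z"
  message: All selected clusters are valid
  reason: ClusterSelectionCompleted
  status: "True"
  type: ClustersSelected
- lastTransitionTime: "2024-01-01T00:00:02Z"
  message: Completed validation
  reason: ValidationCompleted
  status: "True"
  type: Validated
- lastTransitionTime: "2024-01-01T00:00:03Z"
  message: Precaching spec is valid and consistent
  reason: PrecacheSpecIsWellFormed
  status: "True"
  type: PrecacheSpecValid
- lastTransitionTime: "2024-01-01T00:00:04Z"
  message: Precaching in progress for 1 clusters
  reason: InProgress
  status: "False"
  type: PrecachingSucceeded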
Type: "PrecacheSpecValid" Status: False, Reason: "PrecacheSpecIncomplete" Message: "Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io "<pre-caching_cr_name>" not found"
Type: "PrecacheSpecValid" Status: False, Reason: "PrecacheSpecIncomplete" Message: "Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io "<pre-caching_cr_name>" not found"Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can find the pre-caching job by running the following command on the managed cluster:
$ oc get jobs -n openshift-talo-pre-cache
Example of pre-caching job in progress
NAME        COMPLETIONS   DURATION   AGE
pre-cache   0/1           1s         1s
You can check the status of the pod created for the pre-caching job by running the following command:
$ oc describe pod pre-cache -n openshift-talo-pre-cache
Example of pre-caching job in progress
Type     Reason             Age   From             Message
Normal   SuccessfulCreate   19s   job-controller   Created pod: pre-cache-abcd1
You can get live updates on the status of the job by running the following command:
$ oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache
To verify that the pre-cache job has successfully completed, run the following command:
$ oc describe pod pre-cache -n openshift-talo-pre-cache
Example of completed pre-cache job
Type     Reason             Age     From             Message
Normal   SuccessfulCreate   5m19s   job-controller   Created pod: pre-cache-abcd1
Normal   Completed          19s     job-controller   Job completed
To verify that the images are successfully pre-cached on the single-node OpenShift cluster, do the following:
Enter the node in debug mode:
$ oc debug node/cnfdf00.example.lab
host:chroot /host/
$ chroot /host/Copy to Clipboard Copied! Toggle word wrap Toggle overflow Search for the desired images:
sudo podman images | grep <operator_name>
$ sudo podman images | grep <operator_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
12.2. About the auto-created ClusterGroupUpgrade CR for GitOps ZTP
TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for GitOps Zero Touch Provisioning (ZTP).
For any managed cluster in the Ready state without a ztp-done label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the GitOps ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster.
If there are no policies for the managed cluster at the time when the cluster becomes Ready, a ClusterGroupUpgrade CR with no policies is created. Upon completion of the ClusterGroupUpgrade, the managed cluster is labeled as ztp-done. If there are policies that you want to apply for that managed cluster, manually create a ClusterGroupUpgrade as a day-2 operation.
Example of an auto-created ClusterGroupUpgrade CR for GitOps ZTP
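A sketch of such a CR; the cluster name spoke1 and the policy names are placeholders that vary per deployment:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: spoke1
  namespace: ztp-install
  ownerReferences:
  - apiVersion: cluster.open-cluster-management.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: ManagedCluster
    name: spoke1
spec:
  actions:
    afterCompletion:
      addClusterLabels:
        ztp-done: "" # adds the ztp-done label when the CGU completes
      deleteObjects: true
  clusters:
  - spoke1
  enable: true
  managedPolicies: # the RHACM policies generated for this cluster during GitOps ZTP
  - common-spoke1-config-policy
  - common-spoke1-subscriptions-policy
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240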