Chapter 19. Clusters at the network far edge
19.1. Challenges of the network far edge
Edge computing presents complex challenges when managing many sites in geographically dispersed locations. Use zero touch provisioning (ZTP) and GitOps to provision and manage sites at the far edge of the network.
19.1.1. Overcoming the challenges of the network far edge
Today, service providers want to deploy their infrastructure at the edge of the network. This presents significant challenges:
- How do you handle deployments of many edge sites in parallel?
- What happens when you need to deploy sites in disconnected environments?
- How do you manage the lifecycle of large fleets of clusters?
Zero touch provisioning (ZTP) and GitOps meet these challenges by allowing you to provision remote edge sites at scale with declarative site definitions and configurations for bare-metal equipment. Template or overlay configurations install OpenShift Container Platform features that are required for CNF workloads. The full lifecycle of installation and upgrades is handled through the ZTP pipeline.
ZTP uses GitOps for infrastructure deployments. With GitOps, you use declarative YAML files and other defined patterns stored in Git repositories. Red Hat Advanced Cluster Management (RHACM) uses your Git repositories to drive the deployment of your infrastructure.
GitOps provides traceability, role-based access control (RBAC), and a single source of truth for the desired state of each site. Scalability issues are addressed by Git methodologies and event driven operations through webhooks.
You start the ZTP workflow by creating declarative site definition and configuration custom resources (CRs) that the ZTP pipeline delivers to the edge nodes.
The following diagram shows how ZTP works within the far edge framework.
19.1.2. Using ZTP to provision clusters at the network far edge
Red Hat Advanced Cluster Management (RHACM) manages clusters in a hub-and-spoke architecture, where a single hub cluster manages many spoke clusters. Hub clusters running RHACM provision and deploy the managed clusters by using zero touch provisioning (ZTP) and the assisted service that is deployed when you install RHACM.
The assisted service handles provisioning of OpenShift Container Platform on single node clusters, three-node clusters, or standard clusters running on bare metal.
A high-level overview of using ZTP to provision and maintain bare-metal hosts with OpenShift Container Platform is as follows:
- A hub cluster running RHACM manages an OpenShift image registry that mirrors the OpenShift Container Platform release images. RHACM uses the OpenShift image registry to provision the managed clusters.
- You manage the bare-metal hosts in a YAML format inventory file, versioned in a Git repository.
- You make the hosts ready for provisioning as managed clusters, and use RHACM and the assisted service to install OpenShift Container Platform on the bare-metal hosts on site.
Installing and deploying the clusters is a two-stage process, involving an initial installation phase, and a subsequent configuration phase. The following diagram illustrates this workflow:
19.1.3. Installing managed clusters with SiteConfig resources and RHACM
GitOps ZTP uses SiteConfig custom resources (CRs) to install managed clusters. A SiteConfig CR contains the cluster-specific parameters that are required for installation. The ZTP GitOps plugin processes SiteConfig CRs to generate a collection of installation CRs on the hub cluster.
You can provision single clusters manually or in batches with ZTP:
- Provisioning a single cluster
- Create a single SiteConfig CR and related installation and configuration CRs for the cluster, and apply them in the hub cluster to begin cluster provisioning. This is a good way to test your CRs before deploying on a larger scale.
- Provisioning many clusters
- Install managed clusters in batches of up to 400 by defining SiteConfig and related CRs in a Git repository. ArgoCD uses the SiteConfig CRs to deploy the sites. The RHACM policy generator creates the manifests and applies them to the hub cluster. This starts the cluster provisioning process.
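For illustration, a Git directory that provisions several sites at once can list each SiteConfig CR under generators in its kustomization.yaml so that ArgoCD picks them all up. The file names in this sketch are placeholders, not files shipped with ZTP:

```yaml
# siteconfig/kustomization.yaml -- illustrative sketch only
generators:
- site-east-1.yaml   # one SiteConfig CR per site
- site-east-2.yaml
- site-west-1.yaml
```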
19.1.4. Configuring managed clusters with policies and PolicyGenTemplate resources
Zero touch provisioning (ZTP) uses Red Hat Advanced Cluster Management (RHACM) to configure clusters by using a policy-based governance approach to applying the configuration.
The policy generator, or PolicyGen, is part of the GitOps ZTP tooling. It combines PolicyGenTemplate CRs with the corresponding source CRs to generate the RHACM policies that configure the managed clusters.
For scalability and to reduce the complexity of managing configurations across the fleet of clusters, use configuration CRs with as much commonality as possible.
- Where possible, apply configuration CRs using a fleet-wide common policy.
- The next preference is to create logical groupings of clusters to manage as much of the remaining configurations as possible under a group policy.
- When a configuration is unique to an individual site, use RHACM templating on the hub cluster to inject the site-specific data into a common or group policy. Alternatively, apply an individual site policy for the site.
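As a hedged illustration of the templating option, RHACM hub-side templating can pull site-specific values into a shared policy from a ConfigMap on the hub cluster. The ConfigMap name and key below are hypothetical, and the exact template functions available depend on your RHACM version:

```yaml
# Fragment of a group PolicyGenTemplate sourceFiles entry (illustrative)
- fileName: SriovNetwork.yaml
  policyName: "config-policy"
  metadata:
    name: "sriov-nw-du-fh"
  spec:
    resourceName: du_fh
    # VLAN resolved per cluster from a hub-side ConfigMap named "site-data" (hypothetical)
    vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}'
```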
The following diagram shows how the policy generator interacts with GitOps and RHACM in the configuration phase of cluster deployment.
For large fleets of clusters, it is typical for there to be a high level of consistency in the configuration of those clusters.
The following recommended structuring of policies combines configuration CRs to meet several goals:
- Describe common configurations once and apply to the fleet.
- Minimize the number of maintained and managed policies.
- Support flexibility in common configurations for cluster variants.
| Policy category | Description |
|---|---|
| Common | A policy that exists in the common category is applied to all clusters in the fleet. Use common PolicyGenTemplate CRs to apply common installation settings across all cluster types. |
| Groups | A policy that exists in the groups category is applied to a group of clusters in the fleet. Use group PolicyGenTemplate CRs to manage specific aspects of single-node, three-node, and standard cluster installations. |
| Sites | A policy that exists in the sites category is applied to a specific cluster site. Any cluster can have its own specific policies maintained. |
19.2. Preparing the hub cluster for ZTP
To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts.
19.2.1. Telco RAN 4.12 validated solution software versions
The Red Hat Telco Radio Access Network (RAN) version 4.12 solution has been validated using the following Red Hat software products.
| Product | Software version |
|---|---|
| Hub cluster OpenShift Container Platform version | 4.12 |
| GitOps ZTP plugin | 4.10, 4.11, or 4.12 |
| Red Hat Advanced Cluster Management (RHACM) | 2.6, 2.7 |
| Red Hat OpenShift GitOps | 1.9, 1.10 |
| Topology Aware Lifecycle Manager (TALM) | 4.10, 4.11, or 4.12 |
19.2.2. Installing GitOps ZTP in a disconnected environment
Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters.
Prerequisites
- You have installed the OpenShift Container Platform CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have configured a disconnected mirror registry for use in the cluster.

Note: The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry.
Procedure
- Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment.
- Install GitOps and TALM in the hub cluster.
19.2.3. Adding RHCOS ISO and RootFS images to the disconnected mirror host
Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images.
Prerequisites
- Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type.
Procedure
- Log in to the mirror host.
- Obtain the RHCOS ISO and RootFS images from mirror.openshift.com, for example:

  - Export the required image names and OpenShift Container Platform version as environment variables:

    ```
    $ export ISO_IMAGE_NAME=<iso_image_name>
    $ export ROOTFS_IMAGE_NAME=<rootfs_image_name>
    $ export OCP_VERSION=<ocp_version>
    ```

  - Download the required images:

    ```
    $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/${OCP_VERSION}/${ISO_IMAGE_NAME} -O /var/www/html/${ISO_IMAGE_NAME}
    $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/${OCP_VERSION}/${ROOTFS_IMAGE_NAME} -O /var/www/html/${ROOTFS_IMAGE_NAME}
    ```
Verification steps
Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example:

```
$ wget http://$(hostname)/${ISO_IMAGE_NAME}
```

Example output

```
Saving to: rhcos-4.12.1-x86_64-live.x86_64.iso
rhcos-4.12.1-x86_64-live.x86_64.iso-  11%[====>    ]  10.01M  4.71MB/s
```
19.2.4. Enabling the assisted service
Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on RHACM. After that, you need to configure the Provisioning resource to watch all namespaces and update the AgentServiceConfig custom resource (CR) with references to the mirrored RHCOS images.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have RHACM with MultiClusterHub enabled.
Procedure
- Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the Central Infrastructure Management service.
- Update the AgentServiceConfig CR by running the following command:

  ```
  $ oc edit AgentServiceConfig
  ```

- Add the following entry to the items.spec.osImages field in the CR:

  ```yaml
  - cpuArchitecture: x86_64
    openshiftVersion: "4.12"
    rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img
    url: https://<host>/<path>/rhcos-live.x86_64.iso
  ```

  where:
- <host>
- Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server.
- <path>
- Is the path to the image on the target mirror registry.
Save and quit the editor to apply the changes.
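As an optional sanity check that is not part of the original procedure, you can read the osImages entries back to confirm that the edit was saved:

```
$ oc get agentserviceconfig agent -o jsonpath='{.spec.osImages}'
```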
19.2.5. Configuring the hub cluster to use a disconnected mirror registry
You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment.
Prerequisites
- You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.7 installed.
- You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository.
If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
Procedure
Create a ConfigMap containing the mirror registry configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: assisted-installer-mirror-config
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: <certificate>
  registries.conf: |
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

    [[registry]]
      location = <mirror_registry_url>
      insecure = false
      mirror-by-digest-only = true
```

where:

- The ConfigMap namespace must be set to multicluster-engine.
- <certificate> is the mirror registry's certificate that you used when creating the mirror registry.
- registries.conf is the configuration file for the mirror registry. The mirror registry configuration adds mirror information to /etc/containers/registries.conf in the Discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when passed to the installation program. The Assisted Service pod running on the hub cluster fetches the container images from the configured mirror registry.
- <mirror_registry_url> is the URL of the mirror registry. You must use the URL from the imageContentSources section that is produced by the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section.
This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below:

Example output

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: multicluster-engine
spec:
  databaseStorage:
    volumeName: <db_pv_name>
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <db_storage_size>
  filesystemStorage:
    volumeName: <fs_pv_name>
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <fs_storage_size>
  mirrorRegistryRef:
    name: assisted-installer-mirror-config
  osImages:
  - openshiftVersion: <ocp_version>
    url: <iso_url>
```
A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network.
19.2.6. Configuring the hub cluster to use unauthenticated registries
You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries do not require authentication to access and download images.
Prerequisites
- You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
- You have installed the OpenShift Container Platform CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have configured an unauthenticated registry for use with the hub cluster.
Procedure
- Update the AgentServiceConfig custom resource (CR) by running the following command:

  ```
  $ oc edit AgentServiceConfig agent
  ```

- Add the unauthenticatedRegistries field in the CR:

  ```yaml
  apiVersion: agent-install.openshift.io/v1beta1
  kind: AgentServiceConfig
  metadata:
    name: agent
  spec:
    unauthenticatedRegistries:
    - example.registry.com
    - example.registry2.com
    ...
  ```

  Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation.
Note: Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries. Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default PUBLIC_CONTAINER_REGISTRIES values with the specified value.
Verification
Verify that you can access the newly added registry from the hub cluster by running the following commands:
- Open a debug shell prompt to the hub cluster:

  ```
  $ oc debug node/<node_name>
  ```

- Test access to the unauthenticated registry by running the following command:

  ```
  sh-4.4# podman login -u kubeadmin -p $(oc whoami -t) <unauthenticated_registry>
  ```

  where <unauthenticated_registry> is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000.
Example output
Login Succeeded!
19.2.7. Configuring the hub cluster with ArgoCD
You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps zero touch provisioning (ZTP).
Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the managed cluster installation CRs. The ArgoCD applications that you configure in this procedure apply the SiteConfig and policy CRs from your Git repository to the hub cluster.
Prerequisites
- You have an OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed.
- You have extracted the reference deployment from the ZTP GitOps plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure.
Procedure
Prepare the ArgoCD pipeline configuration:
- Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository".
Configure access to the repository using the ArgoCD UI. Under Settings, configure the following:

- Repositories: Add the connection information. The URL must end in .git, for example, https://repo.example.com/repo.git, and add the credentials.
- Certificates: Add the public certificate for the repository, if needed.
Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml, based on your Git repository:

- Update the URL to point to the Git repository. The URL ends with .git, for example, https://repo.example.com/repo.git.
- The targetRevision indicates which Git repository branch to monitor.
- path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively.
To install the ZTP GitOps plugin, you must patch the ArgoCD instance in the hub cluster by using the patch file previously extracted into the out/argocd/deployment/ directory. Run the following command:

```
$ oc patch argocd openshift-gitops \
  -n openshift-gitops --type=merge \
  --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json
```

Note: For a disconnected environment, amend the out/argocd/deployment/argocd-openshift-gitops-patch.json file with the ztp-site-generate image mirrored in your local registry. Run the following command:

```
$ oc patch argocd openshift-gitops -n openshift-gitops --type='json' \
  -p='[{"op": "replace", "path": "/spec/repo/initContainers/0/image", \
  "value": "<local_registry>/<ztp_site_generate_image_ref>"}]'
```

where:

- <local_registry> is the URL of the disconnected registry, for example, my.local.registry:5000.
- <ztp_site_generate_image_ref> is the path to the mirrored ztp-site-generate image in the local registry, for example, openshift4-ztp-site-generate:custom.

In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. To disable this feature, apply the following patch to disable and remove the relevant hub cluster and managed cluster pods that are responsible for this add-on:

```
$ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json
```

Apply the pipeline configuration to your hub cluster by using the following command:

```
$ oc apply -k out/argocd/deployment
```
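As an optional check that is not part of the original procedure, you can confirm that the ArgoCD applications were created and are syncing. The application names clusters and policies come from the reference deployment, so adjust them if you renamed the applications:

```
$ oc get applications.argoproj.io -n openshift-gitops
$ oc get applications.argoproj.io -n openshift-gitops clusters -o jsonpath='{.status.sync.status}'
```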
19.2.8. Preparing the GitOps ZTP site configuration repository
Before you can use the ZTP GitOps pipeline, you need to prepare the Git repository to host the site configuration data.
Prerequisites
- You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs).
- You have deployed the managed clusters using zero touch provisioning (ZTP).
Procedure
- Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs.
- Export the argocd directory from the ztp-site-generate container image using the following commands:

  ```
  $ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12
  $ mkdir -p ./out
  $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out
  ```

- Check that the out directory contains the following subdirectories:
  - out/extra-manifest contains the source CR files that SiteConfig uses to generate the extra manifest configMap.
  - out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.
  - out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.
  - out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration.
The directory structure under out/argocd/example serves as a reference for the structure and content of your Git repository. The example includes reference SiteConfig and PolicyGenTemplate files for a single-node OpenShift cluster:
```
example
├── policygentemplates
│   ├── common-ranGen.yaml
│   ├── example-sno-site.yaml
│   ├── group-du-sno-ranGen.yaml
│   ├── group-du-sno-validator-ranGen.yaml
│   ├── kustomization.yaml
│   └── ns.yaml
└── siteconfig
    ├── example-sno.yaml
    ├── KlusterletAddonConfigOverride.yaml
    └── kustomization.yaml
```
Keep SiteConfig and PolicyGenTemplate CRs in separate directories. Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory.
This directory structure and the kustomization.yaml files must be committed and pushed to your Git repository. The initial push to Git should include the kustomization.yaml files. The SiteConfig (example-sno.yaml) and PolicyGenTemplate (common-ranGen.yaml, group-du-sno*.yaml, and example-sno-site.yaml) files can be omitted and pushed at a later time as required when deploying a site.
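The following minimal kustomization.yaml sketch shows one way the policygentemplates directory might reference its files. The exact layout of generators and resources entries is illustrative and should be adapted to the files you actually keep in the directory:

```yaml
# policygentemplates/kustomization.yaml -- illustrative sketch
generators:
- common-ranGen.yaml
- group-du-sno-ranGen.yaml
- group-du-sno-validator-ranGen.yaml
- example-sno-site.yaml
resources:
- ns.yaml
```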
The KlusterletAddonConfigOverride.yaml file is only required if one or more SiteConfig CRs use it to override the default KlusterletAddonConfig. See the commented crTemplates field in example-sno.yaml for an example of how it is referenced.
19.3. Installing managed clusters with RHACM and SiteConfig resources
You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The zero touch provisioning (ZTP) pipeline performs the cluster installations. ZTP can be used in a disconnected environment.
19.3.1. GitOps ZTP and Topology Aware Lifecycle Manager
GitOps zero touch provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
- Inform policies
- By default, GitOps ZTP creates all policies with a remediation action of inform. These policies cause RHACM to report on the compliance status of clusters relevant to the policies but do not apply the desired configuration. During the ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed clusters. This applies the configuration to the managed cluster. Outside of the ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM.
- Automatic creation of ClusterGroupUpgrade CRs
To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics:

- The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace.
- The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR.
- The cluster selector includes only the cluster associated with that ManagedCluster CR.
- The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created.
- Pre-caching is disabled.
- Timeout is set to 4 hours (240 minutes).

The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster.
- Waves
Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR.

Note: All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenTemplate. The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime.

The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account.

Note: Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves.

To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image:

```
$ grep -r "ztp-deploy-wave" out/source-crs
```

- Phase labels
The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the ZTP process.

When ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label.

For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial ZTP installation of the cluster.

- Linked CRs
- The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources.
19.3.2. Overview of deploying managed clusters with ZTP
Red Hat Advanced Cluster Management (RHACM) uses zero touch provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes:
- Installing the host operating system (RHCOS) on a blank server
- Deploying OpenShift Container Platform
- Creating cluster policies and site subscriptions
- Making the necessary network configurations to the server operating system
- Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV
Overview of the managed site installation process
After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically:
- A Discovery image ISO file is generated and booted on the target host.
- When the ISO file successfully boots on the target host it reports the host hardware information to RHACM.
- After all hosts are discovered, OpenShift Container Platform is installed.
- When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster.
- The requested add-on services are installed on the target cluster.
The Discovery image ISO process is complete when the Agent CR for the host is created on the hub cluster.
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads.
19.3.3. Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull an OpenShift Container Platform image from the registry.

The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace.
Procedure
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
- Save the following YAML as the file example-sno-secret.yaml:

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-sno-bmc-secret
    namespace: example-sno
  data:
    password: <base64_password>
    username: <base64_username>
  type: Opaque
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: pull-secret
    namespace: example-sno
  data:
    .dockerconfigjson: <pull_secret>
  type: kubernetes.io/dockerconfigjson
  ```

- Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
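As a hedged illustration of producing the base64-encoded values referenced in the secret above, you can encode the BMC credentials on any host with the base64 utility. The values shown are placeholders:

```
$ echo -n '<bmc_username>' | base64
$ echo -n '<bmc_password>' | base64
```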
19.3.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP
The GitOps ZTP workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address over the network before downloading the root file system during installation.
In OpenShift Container Platform 4.12, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments.
- Save the following YAML in an InfraEnv-example.yaml file:

  Note: The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually.

  ```yaml
  apiVersion: agent-install.openshift.io/v1beta1
  kind: InfraEnv
  metadata:
    annotations:
      argocd.argoproj.io/sync-wave: "1"
    name: "{{ .Cluster.ClusterName }}"
    namespace: "{{ .Cluster.ClusterName }}"
  spec:
    clusterRef:
      name: "{{ .Cluster.ClusterName }}"
      namespace: "{{ .Cluster.ClusterName }}"
    kernelArguments:
      - operation: append
        value: audit=0
      - operation: append
        value: trace=1
    sshAuthorizedKey: "{{ .Site.SshPublicKey }}"
    proxy: "{{ .Cluster.ProxySettings }}"
    pullSecretRef:
      name: "{{ .Site.PullSecretRef.Name }}"
    ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}"
    nmStateConfigLabelSelector:
      matchLabels:
        nmstate-label: "{{ .Cluster.ClusterName }}"
    additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}"
  ```

- Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure:

  ```
  ~/example-ztp/install
            └── site-install
                 ├── siteconfig-example.yaml
                 ├── InfraEnv-example.yaml
                 ...
  ```

- Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository:

  ```yaml
  clusters:
    crTemplates:
      InfraEnv: "InfraEnv-example.yaml"
  ```

  When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments.
Verification
To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.

- Begin an SSH session with the target host:

  ```
  $ ssh -i /path/to/privatekey core@<host_name>
  ```

- View the system's kernel arguments by using the following command:

  ```
  $ cat /proc/cmdline
  ```
19.3.5. Deploying a managed cluster with SiteConfig and ZTP
Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps zero touch provisioning (ZTP) cluster deployment.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You configured the hub cluster for generating the required installation and policy CRs.
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information.
Note: When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD".

To be ready for provisioning managed clusters, you require the following for each bare-metal host:
- Network connectivity
- Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host.
- Baseboard Management Controller (BMC) details
- ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually.
Procedure
- Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml, the cluster name and namespace is example-sno.

  - Export the cluster namespace by running the following command:

    ```
    $ export CLUSTERNS=example-sno
    ```

  - Create the namespace:

    ```
    $ oc create namespace $CLUSTERNS
    ```
- Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information.

  Note: The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace.

- Create a SiteConfig CR for your cluster in your local clone of the Git repository:

  - Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters:
    - example-sno.yaml
    - example-3node.yaml
    - example-standard.yaml
  - Change the cluster and host details in the example file to match the type of cluster you want. For example:
```yaml
# example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno
---
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.10"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
  - clusterName: "example-sno"
    networkType: "OVNKubernetes"
    # installConfigOverrides is a generic way of passing install-config
    # parameters through the siteConfig. The 'capabilities' field configures
    # the composable openshift feature. In this 'capabilities' setting, we
    # remove all but the marketplace component from the optional set of
    # components.
    # Notes:
    # - OperatorLifecycleManager is needed for 4.15 and later
    # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
    installConfigOverrides: |
      {
        "capabilities": {
          "baselineCapabilitySet": "None",
          "additionalEnabledCapabilities": [
            "NodeTuning",
            "OperatorLifecycleManager"
          ]
        }
      }
    # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+.
    # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest.
    # extraManifestPath: sno-extra-manifest
    clusterLabels:
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
      du-profile: "latest"
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
      # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
      common: true
      # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
      group-du-sno: ""
      # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
      # Normally this should match or contain the cluster name so it only applies to a single cluster
      sites: "example-sno"
    clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
    machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
    serviceNetwork:
    - 1001:2::/112
    additionalNTPSources:
    - 1111:2222:3333:4444::2
    # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
    # please see Workload Partitioning Feature for a complete guide.
    cpuPartitioningMode: AllNodes
    # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster:
    crTemplates:
    #   KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
    nodes:
    - hostName: "example-node1.example.com"
      role: "master"
      # Optionally; This can be used to configure desired BIOS setting on a host:
      #biosConfigRef:
      #  filePath: "example-hw.profile"
      bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "example-node1-bmh-secret"
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      # Use UEFISecureBoot to enable secure boot
      bootMode: "UEFI"
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
      # disk partition at /var/lib/containers with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details
      ignitionConfigOverride: |
        {
          "ignition": {
            "version": "3.2.0"
          },
          "storage": {
            "disks": [
              {
                "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0",
                "partitions": [
                  {
                    "label": "var-lib-containers",
                    "sizeMiB": 0,
                    "startMiB": 250000
                  }
                ],
                "wipeTable": false
              }
            ],
            "filesystems": [
              {
                "device": "/dev/disk/by-partlabel/var-lib-containers",
                "format": "xfs",
                "mountOptions": [
                  "defaults",
                  "prjquota"
                ],
                "path": "/var/lib/containers",
                "wipeFilesystem": true
              }
            ]
          },
          "systemd": {
            "units": [
              {
                "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                "enabled": true,
                "name": "var-lib-containers.mount"
              }
            ]
          }
        }
      nodeNetwork:
        interfaces:
        - name: eno1
          macAddress: "AA:BB:CC:DD:EE:11"
        config:
          interfaces:
          - name: eno1
            type: ethernet
            state: up
            ipv4:
              enabled: false
            ipv6:
              enabled: true
              address:
              # For SNO sites with static IP addresses, the node-specific,
              # API and Ingress IPs should all be the same and configured on
              # the interface
              - ip: 1111:2222:3333:4444::aaaa:1
                prefix-length: 64
          dns-resolver:
            config:
              search:
              - example.com
              server:
              - 1111:2222:3333:4444::2
          routes:
            config:
            - destination: ::/0
              next-hop-interface: eno1
              next-hop-address: 1111:2222:3333:4444::1
              table-id: 254
```
Note: For more information about BMC addressing, see the "Additional resources" section. The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability.
You can inspect the default set of extra-manifest CRs in
MachineConfig. It is automatically applied to the cluster when it is installed.out/argocd/extra-manifest -
Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, , and add your custom manifest CRs to this directory. If your
sno-extra-manifest/refers to this directory in theSiteConfig.yamlfield, any CRs in this referenced directory are appended to the default set of extra manifests.extraManifestPath
- Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml.
- Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment.
19.3.5.1. Single-node OpenShift SiteConfig CR installation reference
| SiteConfig CR field | Description |
|---|---|
| metadata.name | Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. |
| spec.clusterImageSetNameRef | Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets. |
| installConfigOverrides | Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important: Use the reference configuration as specified in the example SiteConfig CR. |
| spec.clusters.clusterLabels | Configure cluster labels to correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. |
| spec.clusters.crTemplates.KlusterletAddonConfig | Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig that is created for the cluster. |
| spec.clusters.nodes.hostName | For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker. |
| spec.clusters.nodes.bmcAddress | BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. |
| spec.clusters.nodes.bmcAddress | BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note: In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. |
| spec.clusters.nodes.bmcCredentialsName | Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. |
| spec.clusters.nodes.bootMode | Set the boot mode for the host to UEFI. The default value is UEFI. Use UEFISecureBoot to enable secure boot on the host. |
| spec.clusters.nodes.rootDeviceHints | Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path>. |
| spec.clusters.nodes.cpuset | Configure cpuset to match the value set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning. |
| spec.clusters.nodes.nodeNetwork | Configure the network settings for the node. |
| spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 | Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. |
19.3.6. Monitoring managed cluster installation progress
The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs them with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
When the synchronization is complete, the installation generally proceeds as follows:
The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands:
- Export the cluster name:

  ```
  $ export CLUSTER=<clusterName>
  ```

- Query the AgentClusterInstall CR for the managed cluster:

  ```
  $ oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq
  ```

- Get the installation events for the cluster:

  ```
  $ curl -sk $(oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'
  ```
19.3.7. Troubleshooting GitOps ZTP by validating the installation CRs
The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and RHACM policies. Use the following steps to troubleshoot issues that might occur during this process.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Check that the installation CRs were created by using the following command:

  ```
  $ oc get AgentClusterInstall -n <cluster_name>
  ```

  If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs.

- Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster:

  ```
  $ oc get managedcluster
  ```

- If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster:

  ```
  $ oc describe -n openshift-gitops application clusters
  ```

- Check the Status.Conditions field to view the error logs for the managed cluster. For example, setting an invalid value for extraManifestPath in the SiteConfig CR raises the following error:

  ```
  Status:
    Conditions:
      Last Transition Time:  2021-11-26T17:21:39Z
      Message:               rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1
      Type:                  ComparisonError
  ```

- Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error:

  ```
  Status:
    Sync:
      Compared To:
        Destination:
          Namespace:  clusters-sub
          Server:     https://kubernetes.default.svc
        Source:
          Path:             sites-config
          Repo URL:         https://git.com/ran-sites/siteconfigs/.git
          Target Revision:  master
      Status:  Unknown
  ```
19.3.8. Troubleshooting GitOps ZTP virtual media booting on Supermicro servers
SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Disable TLS in the Provisioning resource by running the following command:

  ```
  $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}'
  ```

- Continue the steps to deploy your single-node OpenShift cluster.
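If you want to confirm that the change was applied before continuing, the following optional check, which is not part of the original procedure, reads the field back:

```
$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.disableVirtualMediaTLS}'
```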
19.3.9. Removing a managed cluster site from the ZTP pipeline
You can remove a managed site and the associated installation and configuration policy CRs from the ZTP pipeline.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenTemplate files from the kustomization.yaml file. When you run the ZTP pipeline again, the generated CRs are removed.
- Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenTemplate files from the Git repository.
- Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenTemplate CRs in the Git repository.
19.3.10. Removing obsolete content from the ZTP pipeline
If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Remove the affected PolicyGenTemplate files from the Git repository, commit and push to the remote repository.
- Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster.
- Add the updated PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository.

  Note: Removing zero touch provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and CRs managed by that policy remain in place on the managed cluster.

- Optional: As an alternative, after making changes to PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command:

  ```
  $ oc delete policy -n <namespace> <policy_name>
  ```
19.3.11. Tearing down the ZTP pipeline
You can remove the ArgoCD pipeline and all generated ZTP artifacts.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
- Delete the kustomization.yaml file in the deployment directory using the following command:

  ```
  $ oc delete -k out/argocd/deployment
  ```

- Commit and push your changes to the site repository.
19.4. Configuring managed clusters with policies and PolicyGenTemplate resources
Applied policy custom resources (CRs) configure the managed clusters that you provision. You can customize how Red Hat Advanced Cluster Management (RHACM) uses PolicyGenTemplate CRs to generate the applied policy CRs.
19.4.1. About the PolicyGenTemplate CRD
The PolicyGenTemplate custom resource definition (CRD) tells the PolicyGen policy generator what custom resources (CRs) to include in the cluster configuration, how to combine the CRs into the generated policies, and what items in those CRs need to be updated with overlay content.

The following example shows a PolicyGenTemplate CR (common-du-ranGen.yaml) extracted from the ztp-site-generate reference container. The common-du-ranGen.yaml file defines two Red Hat Advanced Cluster Management (RHACM) policies. The policies manage a collection of configuration CRs, one for each unique value of policyName in the CR. common-du-ranGen.yaml creates a single placement binding and a placement rule to bind the policies to clusters based on the labels listed in the bindingRules section.
Example PolicyGenTemplate CR - common-du-ranGen.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
name: "common"
namespace: "ztp-common"
spec:
bindingRules:
common: "true"
sourceFiles:
- fileName: SriovSubscription.yaml
policyName: "subscriptions-policy"
- fileName: SriovSubscriptionNS.yaml
policyName: "subscriptions-policy"
- fileName: SriovSubscriptionOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: SriovOperatorStatus.yaml
policyName: "subscriptions-policy"
- fileName: PtpSubscription.yaml
policyName: "subscriptions-policy"
- fileName: PtpSubscriptionNS.yaml
policyName: "subscriptions-policy"
- fileName: PtpSubscriptionOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: PtpOperatorStatus.yaml
policyName: "subscriptions-policy"
- fileName: ClusterLogNS.yaml
policyName: "subscriptions-policy"
- fileName: ClusterLogOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: ClusterLogSubscription.yaml
policyName: "subscriptions-policy"
- fileName: ClusterLogOperatorStatus.yaml
policyName: "subscriptions-policy"
- fileName: StorageNS.yaml
policyName: "subscriptions-policy"
- fileName: StorageOperGroup.yaml
policyName: "subscriptions-policy"
- fileName: StorageSubscription.yaml
policyName: "subscriptions-policy"
- fileName: StorageOperatorStatus.yaml
policyName: "subscriptions-policy"
- fileName: ReduceMonitoringFootprint.yaml
policyName: "config-policy"
- fileName: OperatorHub.yaml
policyName: "config-policy"
- fileName: DefaultCatsrc.yaml
policyName: "config-policy"
metadata:
name: redhat-operators
spec:
displayName: disconnected-redhat-operators
image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9
- fileName: DisconnectedICSP.yaml
policyName: "config-policy"
spec:
repositoryDigestMirrors:
- mirrors:
- registry.example.com:5000
source: registry.redhat.io
1. common: "true" applies the policies to all clusters with this label.
2. Files listed under sourceFiles create the Operator policies for installed clusters.
3. OperatorHub.yaml configures the OperatorHub for the disconnected registry.
4. DefaultCatsrc.yaml configures the catalog source for the disconnected registry.
5. policyName: "config-policy" configures Operator subscriptions. The OperatorHub CR disables the default and this CR replaces redhat-operators with a CatalogSource CR that points to the disconnected registry.
A PolicyGenTemplate CR can be constructed with any number of included CRs. Apply the following example CR in the hub cluster to generate a policy containing a single CR:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
name: "group-du-sno"
namespace: "ztp-group"
spec:
bindingRules:
group-du-sno: ""
mcp: "master"
sourceFiles:
- fileName: PtpConfigSlave.yaml
policyName: "config-policy"
metadata:
name: "du-ptp-slave"
spec:
profile:
- name: "slave"
interface: "ens5f0"
ptp4lOpts: "-2 -s --summary_interval -4"
phc2sysOpts: "-a -r -n 24"
Using the source file PtpConfigSlave.yaml as an example, the file defines a PtpConfig CR. The generated policy for the PtpConfigSlave example is named group-du-sno-config-policy. The PtpConfig CR defined in the generated group-du-sno-config-policy is named du-ptp-slave. The spec defined in PtpConfigSlave.yaml is placed under du-ptp-slave along with the other spec items defined under the source file.

The following example shows the group-du-sno-config-policy CR:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: group-du-ptp-config-policy
namespace: groups-sub
annotations:
policy.open-cluster-management.io/categories: CM Configuration Management
policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
policy.open-cluster-management.io/standards: NIST SP 800-53
spec:
remediationAction: inform
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: group-du-ptp-config-policy-config
spec:
remediationAction: inform
severity: low
namespaceselector:
exclude:
- kube-*
include:
- '*'
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: du-ptp-slave
namespace: openshift-ptp
spec:
recommend:
- match:
- nodeLabel: node-role.kubernetes.io/worker-du
priority: 4
profile: slave
profile:
- interface: ens5f0
name: slave
phc2sysOpts: -a -r -n 24
ptp4lConf: |
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 0
priority1 128
priority2 128
domainNumber 24
.....
19.4.2. Recommendations when customizing PolicyGenTemplate CRs
Consider the following best practices when customizing site configuration PolicyGenTemplate custom resources (CRs):
- Use as few policies as are necessary. Using fewer policies requires less resources. Each additional policy creates overhead for the hub cluster and the deployed managed cluster. CRs are combined into policies based on the policyName field in the PolicyGenTemplate CR. CRs in the same PolicyGenTemplate which have the same value for policyName are managed under a single policy.
- In disconnected environments, use a single catalog source for all Operators by configuring the registry as a single index containing all Operators. Each additional CatalogSource CR on the managed clusters increases CPU usage.
- MachineConfig CRs should be included as extraManifests in the SiteConfig CR so that they are applied during installation. This can reduce the overall time taken until the cluster is ready to deploy applications.
- PolicyGenTemplates should override the channel field to explicitly identify the desired version. This ensures that changes in the source CR during upgrades do not update the generated subscription.
When managing large numbers of spoke clusters on the hub cluster, minimize the number of policies to reduce resource consumption.
Grouping multiple configuration CRs into a single or limited number of policies is one way to reduce the overall number of policies on the hub cluster. When using the common, group, and site hierarchy of policies for managing site configuration, it is especially important to combine site-specific configuration into a single policy.
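One of the recommendations above is to pin the Subscription channel in the PolicyGenTemplate overlay. A minimal sketch of that overlay follows; the channel value shown is an assumption and should be replaced with the channel that matches the Operator version you intend to deploy:

```yaml
# Fragment of a PolicyGenTemplate sourceFiles entry (illustrative)
- fileName: SriovSubscription.yaml
  policyName: "subscriptions-policy"
  spec:
    channel: "stable"   # assumed channel name; pin explicitly to the desired version
```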
19.4.3. PolicyGenTemplate CRs for RAN deployments
Use PolicyGenTemplate CRs to customize the configuration applied to the cluster by using the GitOps zero touch provisioning (ZTP) pipeline.

The reference configuration, obtained from the GitOps ZTP container, is designed to provide a set of critical features and node tuning settings that ensure the cluster can support the stringent performance and resource utilization constraints typical of RAN (Radio Access Network) Distributed Unit (DU) applications. Changes or omissions from the baseline configuration can affect feature availability, performance, and resource utilization. Use the reference PolicyGenTemplate CRs as the basis for creating a hierarchy of configuration files tailored to your specific site requirements.
The baseline PolicyGenTemplate CRs and the associated source CRs can be extracted from the GitOps ZTP ztp-site-generate container.

The extracted PolicyGenTemplate CRs can be found in the ./out/argocd/example/policygentemplates folder. Each PolicyGenTemplate CR refers to source CRs that can be found in the ./out/source-crs folder.

The PolicyGenTemplate CRs relevant to RAN cluster configuration are described below. Variants are provided for the PolicyGenTemplate CRs to account for differences in single-node, three-node compact, and standard cluster configurations.
| PolicyGenTemplate CR | Description |
|---|---|
| example-multinode-site.yaml | Contains a set of CRs that get applied to multi-node clusters. These CRs configure SR-IOV features typical for RAN installations. |
| example-sno-site.yaml | Contains a set of CRs that get applied to single-node OpenShift clusters. These CRs configure SR-IOV features typical for RAN installations. |
| common-ranGen.yaml | Contains a set of common RAN CRs that get applied to all clusters. These CRs subscribe to a set of operators providing cluster features typical for RAN as well as baseline cluster tuning. |
| group-du-3node-ranGen.yaml | Contains the RAN policies for three-node clusters only. |
| group-du-sno-ranGen.yaml | Contains the RAN policies for single-node clusters only. |
| group-du-standard-ranGen.yaml | Contains the RAN policies for standard three control-plane clusters. |
| group-du-3node-validator-ranGen.yaml | Contains the CRs used to generate the validator inform policy for three-node clusters, which signals when ZTP installation and configuration is complete. |
| group-du-sno-validator-ranGen.yaml | Contains the CRs used to generate the validator inform policy for single-node clusters, which signals when ZTP installation and configuration is complete. |
| group-du-standard-validator-ranGen.yaml | Contains the CRs used to generate the validator inform policy for standard clusters, which signals when ZTP installation and configuration is complete. |
19.4.4. Customizing a managed cluster with PolicyGenTemplate CRs
Use the following procedure to customize the policies that get applied to the managed cluster that you provision using the zero touch provisioning (ZTP) pipeline.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You configured the hub cluster for generating the required installation and policy CRs.
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
1. Create a PolicyGenTemplate CR for site-specific configuration CRs.
   - Choose the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, example-sno-site.yaml or example-multinode-site.yaml.
   - Change the bindingRules field in the example file to match the site-specific label included in the SiteConfig CR. In the example SiteConfig file, the site-specific label is sites: example-sno.
     Note: Ensure that the labels defined in your PolicyGenTemplate bindingRules field correspond to the labels that are defined in the related managed clusters SiteConfig CR.
   - Change the content in the example file to match the desired configuration.
2. Optional: Create a PolicyGenTemplate CR for any common configuration CRs that apply to the entire fleet of clusters.
   - Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, common-ranGen.yaml.
   - Change the content in the example file to match the desired configuration.
3. Optional: Create a PolicyGenTemplate CR for any group configuration CRs that apply to certain groups of clusters in the fleet.
   Ensure that the content of the overlaid spec files matches your desired end state. As a reference, the out/source-crs directory contains the full list of source-crs available to be included and overlaid by your PolicyGenTemplate templates.
   Note: Depending on the specific requirements of your clusters, you might need more than a single group policy per cluster type, especially considering that the example group policies each have a single PerformancePolicy.yaml file that can only be shared across a set of clusters if those clusters consist of identical hardware configurations.
   - Select the appropriate example for your CR from the out/argocd/example/policygentemplates folder, for example, group-du-sno-ranGen.yaml.
   - Change the content in the example file to match the desired configuration.
4. Optional: Create a validator inform policy PolicyGenTemplate CR to signal when the ZTP installation and configuration of the deployed cluster is complete. For more information, see "Creating a validator inform policy".
5. Define all the policy namespaces in a YAML file similar to the example out/argocd/example/policygentemplates/ns.yaml file.
   Important: Do not include the Namespace CR in the same file with the PolicyGenTemplate CR.
6. Add the PolicyGenTemplate CRs and Namespace CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/policygentemplates/kustomization.yaml. A minimal sketch of this file follows this procedure.
7. Commit the PolicyGenTemplate CRs, Namespace CR, and associated kustomization.yaml file in your Git repository and push the changes.
   The ArgoCD pipeline detects the changes and begins the managed cluster deployment. You can push the changes to the SiteConfig CR and the PolicyGenTemplate CR simultaneously.
19.4.5. Monitoring managed cluster policy deployment progress
The ArgoCD pipeline uses PolicyGenTemplate CRs in Git to generate the RHACM policies and then syncs them to the hub cluster. You can monitor the progress of the managed cluster policy deployment while this process takes place.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
The Topology Aware Lifecycle Manager (TALM) applies the configuration policies that are bound to the cluster.
After the cluster installation is complete and the cluster becomes Ready, a ClusterGroupUpgrade CR corresponding to this cluster, with a list of ordered policies defined by the ran.openshift.io/ztp-deploy-wave annotations, is automatically created by the TALM. The cluster's policies are applied in the order listed in the ClusterGroupUpgrade CR.

You can monitor the high-level progress of configuration policy reconciliation by using the following commands:
$ export CLUSTER=<clusterName>

$ oc get clustergroupupgrades -n ztp-install $CLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq

Example output

{
  "lastTransitionTime": "2022-11-09T07:28:09Z",
  "message": "Remediating non-compliant policies",
  "reason": "InProgress",
  "status": "True",
  "type": "Progressing"
}

You can monitor the detailed cluster policy compliance status by using the RHACM dashboard or the command line.

To check policy compliance by using oc, run the following command:

$ oc get policies -n $CLUSTER

Example output

NAME                                                     REMEDIATION ACTION   COMPLIANCE STATE   AGE
ztp-common.common-config-policy                          inform               Compliant          3h42m
ztp-common.common-subscriptions-policy                   inform               NonCompliant       3h42m
ztp-group.group-du-sno-config-policy                     inform               NonCompliant       3h42m
ztp-group.group-du-sno-validator-du-policy               inform               NonCompliant       3h42m
ztp-install.example1-common-config-policy-pjz9s          enforce              Compliant          167m
ztp-install.example1-common-subscriptions-policy-zzd9k   enforce              NonCompliant       164m
ztp-site.example1-config-policy                          inform               NonCompliant       3h42m
ztp-site.example1-perf-policy                            inform               NonCompliant       3h42m

To check policy status from the RHACM web console, perform the following actions:
- Click Governance → Find policies.
- Click on a cluster policy to check its status.
When all of the cluster policies become compliant, ZTP installation and configuration for the cluster is complete. The ztp-done label is added to the cluster.

In the reference configuration, the final policy that becomes compliant is the one defined in the *-du-validator-policy policy. This policy, when compliant on a cluster, ensures that all cluster configuration, Operator installation, and Operator configuration is complete.
19.4.6. Validating the generation of configuration policy CRs
Policy custom resources (CRs) are generated in the same namespace as the PolicyGenTemplate from which they are created. The same troubleshooting flow applies to all policy CRs generated from a PolicyGenTemplate regardless of whether they are ztp-common, ztp-group, or ztp-site based. Check them with the following commands:
$ export NS=<namespace>
$ oc get policy -n $NS
The expected set of policy-wrapped CRs should be displayed.
If the policies failed synchronization, use the following troubleshooting steps.
Procedure
To display detailed information about the policies, run the following command:
$ oc describe -n openshift-gitops application policies

Check for Status: Conditions: to show the error logs. For example, setting an invalid sourceFile → fileName: generates the error shown below:

Status:
  Conditions:
    Last Transition Time:  2021-11-26T17:21:39Z
    Message:               rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1
    Type:                  ComparisonError

Check for Status: Sync:. If there are log errors at Status: Conditions:, the Status: Sync: shows Unknown or Error:

Status:
  Sync:
    Compared To:
      Destination:
        Namespace:  policies-sub
        Server:     https://kubernetes.default.svc
      Source:
        Path:             policies
        Repo URL:         https://git.com/ran-sites/policies/.git
        Target Revision:  master
    Status:  Error

When Red Hat Advanced Cluster Management (RHACM) recognizes that policies apply to a ManagedCluster object, the policy CR objects are applied to the cluster namespace. Check to see if the policies were copied to the cluster namespace:

$ oc get policy -n $CLUSTER

Example output:

NAME                                         REMEDIATION ACTION   COMPLIANCE STATE   AGE
ztp-common.common-config-policy              inform               Compliant          13d
ztp-common.common-subscriptions-policy       inform               Compliant          13d
ztp-group.group-du-sno-config-policy         inform               Compliant          13d
ztp-group.group-du-sno-validator-du-policy   inform               Compliant          13d
ztp-site.example-sno-config-policy           inform               Compliant          13d

RHACM copies all applicable policies into the cluster namespace. The copied policy names have the format: <policyGenTemplate.Namespace>.<policyGenTemplate.Name>-<policyName>.

Check the placement rule for any policies not copied to the cluster namespace. The matchSelector in the PlacementRule for those policies should match labels on the ManagedCluster object:

$ oc get placementrule -n $NS

Note the PlacementRule name appropriate for the missing policy, common, group, or site, using the following command:

$ oc get placementrule -n $NS <placementRuleName> -o yaml
-
The key-value pair of the in the spec must match the labels on your managed cluster.
matchSelector
Check the labels on the ManagedCluster object using the following command:

$ oc get ManagedCluster $CLUSTER -o jsonpath='{.metadata.labels}' | jq

Check to see which policies are compliant using the following command:

$ oc get policy -n $CLUSTER

If the Namespace, OperatorGroup, and Subscription policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install on the managed cluster. This causes the Operator configuration policies to fail to apply because the CRD is not yet applied to the spoke.
19.4.7. Restarting policy reconciliation
You can restart policy reconciliation when unexpected compliance issues occur, for example, when the ClusterGroupUpgrade CR times out.
Procedure
A ClusterGroupUpgrade CR is generated in the ztp-install namespace by the Topology Aware Lifecycle Manager after the managed cluster becomes Ready:

$ export CLUSTER=<clusterName>

$ oc get clustergroupupgrades -n ztp-install $CLUSTER

If there are unexpected issues and the policies fail to become compliant within the configured timeout (the default is 4 hours), the status of the ClusterGroupUpgrade CR shows UpgradeTimedOut:

$ oc get clustergroupupgrades -n ztp-install $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'

A ClusterGroupUpgrade CR in the UpgradeTimedOut state automatically restarts its policy reconciliation every hour. If you have changed your policies, you can start a retry immediately by deleting the existing ClusterGroupUpgrade CR. This triggers the automatic creation of a new ClusterGroupUpgrade CR that begins reconciling the policies immediately:

$ oc delete clustergroupupgrades -n ztp-install $CLUSTER
Note that when the ClusterGroupUpgrade CR completes with status UpgradeCompleted and the managed cluster has the ztp-done label applied, you can make additional configuration changes by using PolicyGenTemplate. Deleting the existing ClusterGroupUpgrade CR will not make the TALM generate a new CR.

At this point, ZTP has completed its interaction with the cluster and any further interactions should be treated as an update, with a new ClusterGroupUpgrade CR created for remediation of the policies.
19.4.8. Changing applied managed cluster CRs using policies
You can remove content from a custom resource (CR) that is deployed in a managed cluster through a policy.
By default, all Policy CRs produced from a PolicyGenTemplate CR have the complianceType field set to musthave. A musthave policy without the removed content is still compliant because the CR on the managed cluster has all the specified content. With the complianceType field set to mustonlyhave, the policy ensures that the CR on the cluster is an exact match of what is specified in the policy.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have deployed a managed cluster from a hub cluster running RHACM.
- You have installed Topology Aware Lifecycle Manager on the hub cluster.
Procedure
Remove the content that you no longer need from the affected CRs. In this example, the disableDrain: false line was removed from the SriovOperatorConfig CR.

Example CR

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    "node-role.kubernetes.io/$mcp": ""
  disableDrain: true
  enableInjector: true
  enableOperatorWebhook: true

Change the complianceType of the affected policies to mustonlyhave in the group-du-sno-ranGen.yaml file.

Example YAML

# ...
- fileName: SriovOperatorConfig.yaml
  policyName: "config-policy"
  complianceType: mustonlyhave
# ...

Create a ClusterGroupUpgrade CR and specify the clusters that must receive the CR changes:

Example ClusterGroupUpgrade CR

apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-remove
  namespace: default
spec:
  managedPolicies:
    - ztp-group.group-du-sno-config-policy
  enable: false
  clusters:
    - spoke1
    - spoke2
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
  batchTimeoutAction:

Create the ClusterGroupUpgrade CR by running the following command:

$ oc create -f cgu-remove.yaml

When you are ready to apply the changes, for example, during an appropriate maintenance window, change the value of the spec.enable field to true by running the following command:

$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove \
  --patch '{"spec":{"enable":true}}' --type=merge
Verification
Check the status of the policies by running the following command:
$ oc get <kind> <changed_cr_name>

Example output

NAMESPACE   NAME                                        REMEDIATION ACTION   COMPLIANCE STATE   AGE
default     cgu-ztp-group.group-du-sno-config-policy    enforce                                 17m
default     ztp-group.group-du-sno-config-policy        inform               NonCompliant       15h

When the COMPLIANCE STATE of the policy is Compliant, it means that the CR is updated and the unwanted content is removed.

Check that the policies are removed from the targeted clusters by running the following command on the managed clusters:

$ oc get <kind> <changed_cr_name>

If there are no results, the CR is removed from the managed cluster.
19.4.9. Indication of done for ZTP installations
Zero touch provisioning (ZTP) simplifies the process of checking the ZTP installation status for a cluster. The ZTP status moves through three phases: cluster installation, cluster configuration, and ZTP done.
- Cluster installation phase
  The cluster installation phase is shown by the ManagedClusterJoined and ManagedClusterAvailable conditions in the ManagedCluster CR. If the ManagedCluster CR does not have these conditions, or the condition is set to False, the cluster is still in the installation phase. Additional details about installation are available from the AgentClusterInstall and ClusterDeployment CRs. For more information, see "Troubleshooting GitOps ZTP".
- Cluster configuration phase
  The cluster configuration phase is shown by a ztp-running label applied to the ManagedCluster CR for the cluster.
- ZTP done
  Cluster installation and configuration is complete in the ZTP done phase. This is shown by the removal of the ztp-running label and addition of the ztp-done label to the ManagedCluster CR. The ztp-done label shows that the configuration has been applied and the baseline DU configuration has completed cluster tuning.
  The transition to the ZTP done state is conditional on the compliant state of a Red Hat Advanced Cluster Management (RHACM) validator inform policy. This policy captures the existing criteria for a completed installation and validates that it moves to a compliant state only when ZTP provisioning of the managed cluster is complete.
The validator inform policy ensures the configuration of the cluster is fully applied and Operators have completed their initialization. The policy validates the following:

- The target MachineConfigPool contains the expected entries and has finished updating. All nodes are available and not degraded.
- The SR-IOV Operator has completed initialization as indicated by at least one SriovNetworkNodeState with syncStatus: Succeeded.
- The PTP Operator daemon set exists.
19.5. Manually installing a single-node OpenShift cluster with ZTP
You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, use the SiteConfig method described in "Deploying far edge sites".
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads.
19.5.1. Generating ZTP installation and configuration CRs manually
Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resources (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
Create an output folder by running the following command:
$ mkdir -p ./out

Export the argocd directory from the ztp-site-generate container image:

$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out

The ./out directory has the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder.

Example output

out
└── argocd
    └── example
        ├── policygentemplates
        │   ├── common-ranGen.yaml
        │   ├── example-sno-site.yaml
        │   ├── group-du-sno-ranGen.yaml
        │   ├── group-du-sno-validator-ranGen.yaml
        │   ├── kustomization.yaml
        │   └── ns.yaml
        └── siteconfig
            ├── example-sno.yaml
            ├── KlusterletAddonConfigOverride.yaml
            └── kustomization.yaml

Create an output folder for the site installation CRs:

$ mkdir -p ./site-install

Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example:

Example single-node OpenShift cluster SiteConfig CR
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "<site_name>"
  namespace: "<site_name>"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret" # 1
  clusterImageSetNameRef: "openshift-4.12" # 2
  sshPublicKey: "ssh-rsa AAAA..." # 3
  clusters:
    - clusterName: "<site_name>"
      networkType: "OVNKubernetes"
      clusterLabels: # 4
        common: true
        group-du-sno: ""
        sites: "<site_name>"
      clusterNetwork:
        - cidr: 1001:1::/48
          hostPrefix: 64
      machineNetwork:
        - cidr: 1111:2222:3333:4444::/64
      serviceNetwork:
        - 1001:2::/112
      additionalNTPSources:
        - 1111:2222:3333:4444::2
      #crTemplates:
      #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" # 5
      nodes:
        - hostName: "example-node.example.com" # 6
          role: "master"
          bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ # 7
          bmcCredentialsName:
            name: "bmh-secret" # 8
          bootMACAddress: "AA:BB:CC:DD:EE:11"
          bootMode: "UEFI" # 9
          rootDeviceHints:
            wwn: "0x11111000000asd123"
          cpuset: "0-1,52-53" # 10
          nodeNetwork: # 11
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:11"
            config:
              interfaces:
                - name: eno1
                  type: ethernet
                  state: up
                  ipv4:
                    enabled: false
                  ipv6: # 12
                    enabled: true
                    address:
                      - ip: 1111:2222:3333:4444::aaaa:1
                        prefix-length: 64
              dns-resolver:
                config:
                  search:
                    - example.com
                  server:
                    - 1111:2222:3333:4444::2
              routes:
                config:
                  - destination: ::/0
                    next-hop-interface: eno1
                    next-hop-address: 1111:2222:3333:4444::1
                    table-id: 254

- 1 Create the assisted-deployment-pull-secret CR with the same namespace as the SiteConfig CR.
- 2 clusterImageSetNameRef defines an image set available on the hub cluster. To see the list of supported versions on your hub cluster, run oc get clusterimagesets.
- 3 Configure the SSH public key used to access the cluster.
- 4 Cluster labels must correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, and policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set.
- 5 Optional. The CR specified under KlusterletAddonConfig is used to override the default KlusterletAddonConfig that is created for the cluster.
- 6 For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker.
- 7 BMC address that you use to access the host. Applies to all cluster types.
- 8 Name of the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host.
- 9 Configures the boot mode for the host. The default value is UEFI. Use UEFISecureBoot to enable secure boot on the host.
- 10 cpuset must match the value set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning.
- 11 Specifies the network settings for the node.
- 12 Configures the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same.
Generate the day-0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml by running the following command:

$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install site-1-sno.yaml /output

Example output

site-install
└── site-1-sno
    ├── site-1_agentclusterinstall_example-sno.yaml
    ├── site-1-sno_baremetalhost_example-node1.example.com.yaml
    ├── site-1-sno_clusterdeployment_example-sno.yaml
    ├── site-1-sno_configmap_example-sno.yaml
    ├── site-1-sno_infraenv_example-sno.yaml
    ├── site-1-sno_klusterletaddonconfig_example-sno.yaml
    ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
    ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
    ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
    ├── site-1-sno_managedcluster_example-sno.yaml
    ├── site-1-sno_namespace_example-sno.yaml
    └── site-1-sno_nmstateconfig_example-node1.example.com.yaml

Optional: Generate just the day-0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option. For example, run the following commands:

Create an output folder for the MachineConfig CRs:

$ mkdir -p ./site-machineconfig

Generate the MachineConfig installation CRs:

$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator install -E site-1-sno.yaml /output

Example output

site-machineconfig
└── site-1-sno
    ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
    ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
    └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml

Generate and export the day-2 configuration CRs using the reference PolicyGenTemplate CRs from the previous step. Run the following commands:

Create an output folder for the day-2 CRs:

$ mkdir -p ./ref

Generate and export the day-2 configuration CRs:

$ podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 generator config -N . /output

The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder.

Example output

ref
└── customResource
    ├── common
    ├── example-multinode-site
    ├── example-sno
    ├── group-du-3node
    ├── group-du-3node-validator
    │   └── Multiple-validatorCRs
    ├── group-du-sno
    ├── group-du-sno-validator
    ├── group-du-standard
    └── group-du-standard-validator
        └── Multiple-validatorCRs
- Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete.
19.5.2. Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host. You need a secret to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.

The secrets are referenced from the SiteConfig CR by name. You must create the secrets in the same namespace as the SiteConfig CR.
Procedure
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno
data:
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno
data:
  .dockerconfigjson: <pull_secret>
type: kubernetes.io/dockerconfigjson

- Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
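The <base64_password> and <base64_username> values must be base64-encoded. A sketch with placeholder credentials, which you replace with the real BMC values:

$ echo -n 'myuser' | base64

$ echo -n 'mypassword' | base64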
19.5.3. Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP
The GitOps ZTP workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.

In OpenShift Container Platform 4.12, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have manually generated the installation and configuration custom resources (CRs).
Procedure
- Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <cluster_name>
  namespace: <cluster_name>
spec:
  kernelArguments:
    - operation: append
      value: audit=0
    - operation: append
      value: trace=1
  clusterRef:
    name: <cluster_name>
    namespace: <cluster_name>
  pullSecretRef:
    name: pull-secret
The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs.
Verification
To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
Begin an SSH session with the target host:
$ ssh -i /path/to/privatekey core@<host_name>

View the system's kernel arguments by using the following command:
$ cat /proc/cmdline
19.5.4. Installing a single managed cluster
You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM).
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details.
Procedure
Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.12.yaml. A ClusterImageSet has the following format:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-4.12.0
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64

Apply the clusterImageSet CR:

$ oc apply -f clusterImageSet-4.12.yaml

Create the Namespace CR in the cluster-namespace.yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: <cluster_name>
  labels:
    name: <cluster_name>

Apply the Namespace CR by running the following command:

$ oc apply -f cluster-namespace.yaml

Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements:

$ oc apply -R ./site-install/site-sno-1
19.5.5. Monitoring the managed cluster installation status
Ensure that cluster provisioning was successful by checking the cluster status.
Prerequisites
- All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster.
Procedure
Check the status of the managed cluster:
$ oc get managedcluster

True indicates the managed cluster is ready.

Check the agent status:

$ oc get agent -n <cluster_name>

Use the describe command to provide an in-depth description of the agent's condition. Statuses to be aware of include BackendError, InputError, ValidationsFailing, InstallationFailed, and AgentIsConnected. These statuses are relevant to the Agent and AgentClusterInstall custom resources.

$ oc describe agent -n <cluster_name>

Check the cluster provisioning status:

$ oc get agentclusterinstall -n <cluster_name>

Use the describe command to provide an in-depth description of the cluster provisioning status:

$ oc describe agentclusterinstall -n <cluster_name>

Check the status of the managed cluster's add-on services:

$ oc get managedclusteraddon -n <cluster_name>

Retrieve the authentication information of the kubeconfig file for the managed cluster:

$ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
19.5.6. Troubleshooting the managed cluster
Use this procedure to diagnose any installation issues that might occur with the managed cluster.
Procedure
Check the status of the managed cluster:
$ oc get managedcluster

Example output

NAME          HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
SNO-cluster   true                                  True     True        2d19h

If the status in the AVAILABLE column is True, the managed cluster is being managed by the hub.

If the status in the AVAILABLE column is Unknown, the managed cluster is not being managed by the hub. Use the following steps to continue checking to get more information.

Check the AgentClusterInstall install status:

$ oc get clusterdeployment -n <cluster_name>

Example output

NAME      PLATFORM          REGION   CLUSTERTYPE   INSTALLED   INFRAID      VERSION   POWERSTATE   AGE
Sno0026   agent-baremetal                          false       Initialized                         2d14h

If the status in the INSTALLED column is false, the installation was unsuccessful.

If the installation failed, enter the following command to review the status of the AgentClusterInstall resource:

$ oc describe agentclusterinstall -n <cluster_name> <cluster_name>

Resolve the errors and reset the cluster:

Remove the cluster's managed cluster resource:

$ oc delete managedcluster <cluster_name>

Remove the cluster's namespace:

$ oc delete namespace <cluster_name>

This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding.

- Recreate the custom resources for the managed cluster.
19.5.7. RHACM generated cluster installation CRs reference
Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate by using SiteConfig CRs for each site.

Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.

The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters by using the SiteConfig CRs that you configure.
| CR | Description | Usage |
|---|---|---|
| BareMetalHost | Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. | Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. |
| InfraEnv | Contains information for installing OpenShift Container Platform on the target bare-metal host. | Used with ClusterDeployment to generate the discovery ISO for the managed cluster. |
| AgentClusterInstall | Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. | Specifies the managed cluster configuration information and provides status during the installation of the cluster. |
| ClusterDeployment | References the AgentClusterInstall CR to use. | Used with InfraEnv to generate the discovery ISO for the managed cluster. |
| NMStateConfig | Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. | Sets up a static IP address for the managed cluster's Kube API server. |
| Agent | Contains hardware information about the target bare-metal host. | Created automatically on the hub when the target machine's discovery image boots. |
| ManagedCluster | When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. | The hub uses this resource to manage and show the status of managed clusters. |
| KlusterletAddonConfig | Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. | Tells the hub which addon services to deploy to the ManagedCluster resource. |
| Namespace | Logical space for ManagedCluster resources existing on the hub. Unique per site. | Propagates resources to the ManagedCluster. |
| Secret | Two CRs are created: BMC Secret and Image Pull Secret. | BMC Secret authenticates into the target bare-metal host by using its username and password. Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. |
| ClusterImageSet | Contains OpenShift Container Platform image information such as the repository and image name. | Passed into resources to provide OpenShift Container Platform images. |
19.6. Recommended single-node OpenShift cluster configuration for vDU application workloads
Use the following reference information to understand the single-node OpenShift configurations required to deploy virtual distributed unit (vDU) applications in the cluster. Configurations include cluster optimizations for high performance workloads, enabling workload partitioning, and minimizing the number of reboots required postinstallation.
19.6.1. Running low latency applications on OpenShift Container Platform
OpenShift Container Platform enables low latency processing for applications running on commercial off-the-shelf (COTS) hardware by using several technologies and specialized hardware devices:
- Real-time kernel for RHCOS
- Ensures workloads are handled with a high degree of process determinism.
- CPU isolation
- Avoids CPU scheduling delays and ensures CPU capacity is available consistently.
- NUMA-aware topology management
- Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the non-uniform memory access (NUMA) node. Pod resources for all Quality of Service (QoS) classes stay on the same NUMA node. This decreases latency and improves performance of the node.
- Huge pages memory management
- Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables.
- Precision timing synchronization using PTP
- Allows synchronization between nodes in the network with sub-microsecond accuracy.
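To make the huge pages concept concrete, the following is a minimal pod spec sketch that requests 1 GiB huge pages alongside guaranteed CPUs. The values and the image name are illustrative placeholders, not part of the reference DU configuration:

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
    - name: app
      image: registry.example.com/du-app:latest # placeholder image
      resources:
        requests:
          memory: "128Mi"
          cpu: "4"
          hugepages-1Gi: 2Gi
        limits:
          memory: "128Mi"
          cpu: "4"
          hugepages-1Gi: 2Gi
      volumeMounts:
        - mountPath: /dev/hugepages
          name: hugepage
  volumes:
    - name: hugepage
      emptyDir:
        medium: HugePages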
19.6.2. Recommended cluster host requirements for vDU application workloads
Running vDU application workloads requires a bare-metal host with sufficient resources to run OpenShift Container Platform services and production workloads.
| Profile | vCPU | Memory | Storage |
|---|---|---|---|
| Minimum | 4 to 8 vCPU | 32GB of RAM | 120GB |
One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core:
- (threads per core × cores) × sockets = vCPUs
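For example, a dual-socket server with 26 physical cores per socket and SMT enabled provides (2 threads per core × 26 cores) × 2 sockets = 104 vCPUs.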
The server must have a Baseboard Management Controller (BMC) when booting with virtual media.
19.6.3. Configuring host firmware for low latency and high performance
Bare-metal hosts require the firmware to be configured before the host can be provisioned. The firmware configuration is dependent on the specific hardware and the particular requirements of your installation.
Procedure
- Set the UEFI/BIOS Boot Mode to UEFI.
- In the host boot sequence order, set Hard drive first.
Apply the specific firmware configuration for your hardware. The following table describes a representative firmware configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design.
ImportantThe exact firmware configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only.
Table 19.7. Sample firmware configuration for an Intel Xeon Skylake or Cascade Lake server

| Firmware setting | Configuration |
|---|---|
| CPU Power and Performance Policy | Performance |
| Uncore Frequency Scaling | Disabled |
| Performance P-limit | Disabled |
| Enhanced Intel SpeedStep ® Tech | Enabled |
| Intel Configurable TDP | Enabled |
| Configurable TDP Level | Level 2 |
| Intel® Turbo Boost Technology | Enabled |
| Energy Efficient Turbo | Disabled |
| Hardware P-States | Disabled |
| Package C-State | C0/C1 state |
| C1E | Disabled |
| Processor C6 | Disabled |
Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments.
19.6.4. Connectivity prerequisites for managed cluster networks
Before you can install and provision a managed cluster with the zero touch provisioning (ZTP) GitOps pipeline, the managed cluster host must meet the following networking prerequisites:
- There must be bi-directional connectivity between the ZTP GitOps container in the hub cluster and the Baseboard Management Controller (BMC) of the target bare-metal host.
The managed cluster must be able to resolve and reach the API hostname of the hub and the *.apps hostname of the hub. Here is an example of the API hostname of the hub and the *.apps hostname:
- api.hub-cluster.internal.domain.com
- console-openshift-console.apps.hub-cluster.internal.domain.com

The hub cluster must be able to resolve and reach the API and *.apps hostname of the managed cluster. Here is an example of the API hostname of the managed cluster and the *.apps hostname:
- api.sno-managed-cluster-1.internal.domain.com
- console-openshift-console.apps.sno-managed-cluster-1.internal.domain.com
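A quick way to sanity-check these prerequisites from a host on the managed cluster network is sketched below, reusing the example hostnames above; substitute your own domains:

$ dig +short api.hub-cluster.internal.domain.com

$ curl -k -o /dev/null -w '%{http_code}\n' https://console-openshift-console.apps.hub-cluster.internal.domain.com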
19.6.5. Workload partitioning in single-node OpenShift with GitOps ZTP
Workload partitioning configures OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved number of host CPUs.
To configure workload partitioning with GitOps ZTP, you specify cluster management CPU resources with the cpuset field of the SiteConfig CR and the reserved field of the group PolicyGenTemplate CR. The GitOps ZTP pipeline uses these values to populate the required fields in the workload partitioning MachineConfig CR (cpuset) and in the PerformanceProfile CR (reserved) that configure the single-node OpenShift cluster.

For maximum performance, ensure that the reserved and isolated CPU sets do not share CPU cores across NUMA zones.

- The workload partitioning MachineConfig CR pins the OpenShift Container Platform infrastructure pods to a defined cpuset configuration.
- The PerformanceProfile CR pins the systemd services to the reserved CPUs.

Important: The value for the reserved field specified in the PerformanceProfile CR must match the cpuset field in the workload partitioning MachineConfig CR.
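The following fragments sketch how the two values pair up, using the example CPU IDs from this chapter; the isolated range shown is an assumption for a 104-vCPU host and must be adapted to your hardware:

# SiteConfig (day-0) node entry:
nodes:
  - hostName: "example-node.example.com"
    cpuset: "0-1,52-53"

# PerformanceProfile source CR overlaid by the group PolicyGenTemplate (day-2):
spec:
  cpu:
    reserved: "0-1,52-53"   # must match the SiteConfig cpuset
    isolated: "2-51,54-103" # example isolated set for the remaining cores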
19.6.6. Recommended installation-time cluster configurations
The ZTP pipeline applies the following custom resources (CRs) during cluster installation. These configuration CRs ensure that the cluster meets the feature and performance requirements necessary for running a vDU application.
When using the ZTP GitOps plugin and SiteConfig CRs, the following MachineConfig CRs are included automatically in the generated installation manifests.

Use the SiteConfig extraManifests filter to alter the CRs that are included by default.
19.6.6.1. Workload partitioning
Single-node OpenShift clusters that run DU workloads require workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU core for application payloads.
Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the MachineConfig CR.
The base64-encoded CR that enables workload partitioning contains the CPU set that the management workloads are constrained to. Encode host-specific values for crio.conf and kubelet.conf in base64. Adjust the content to match the CPU set that is specified in the cluster performance profile. It must match the number of cores in the cluster host.

Recommended workload partitioning configuration
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo=
          mode: 420
          overwrite: true
          path: /etc/crio/crio.conf.d/01-workload-partitioning
          user:
            name: root
        - contents:
            source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg==
          mode: 420
          overwrite: true
          path: /etc/kubernetes/openshift-workload-pinning
          user:
            name: root
When configured in the cluster host, the contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this:

[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }

The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads for each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile.
When configured in the cluster, the contents of /etc/kubernetes/openshift-workload-pinning should look like this:

{
  "management": {
    "cpuset": "0-1,52-53"
  }
}

The cpuset must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning.
Verification
Check that the applications and cluster system CPU pinning is correct. Run the following commands:
Open a remote shell connection to the managed cluster:
$ oc debug node/example-sno-1

Check that the OpenShift infrastructure applications CPU pinning is correct:

sh-4.4# pgrep ovn | while read i; do taskset -cp $i; done

Example output

pid 8481's current affinity list: 0-1,52-53
pid 8726's current affinity list: 0-1,52-53
pid 9088's current affinity list: 0-1,52-53
pid 9945's current affinity list: 0-1,52-53
pid 10387's current affinity list: 0-1,52-53
pid 12123's current affinity list: 0-1,52-53
pid 13313's current affinity list: 0-1,52-53

Check that the system applications CPU pinning is correct:

sh-4.4# pgrep systemd | while read i; do taskset -cp $i; done

Example output

pid 1's current affinity list: 0-1,52-53
pid 938's current affinity list: 0-1,52-53
pid 962's current affinity list: 0-1,52-53
pid 1197's current affinity list: 0-1,52-53
19.6.6.2. Reduced platform management footprint
To reduce the overall management footprint of the platform, a MachineConfig custom resource (CR) is required that places all Kubernetes-specific mount points in a new namespace separate from the host operating system. The following base64-encoded example MachineConfig CR illustrates this configuration.
Recommended container mount namespace configuration
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: container-mount-namespace-and-kubelet-conf-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
          mode: 493
          path: /usr/local/bin/extractExecStart
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
          mode: 493
          path: /usr/local/bin/nsenterCmns
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            RuntimeDirectory=container-mount-namespace
            Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
            Environment=BIND_POINT=%t/container-mount-namespace/mnt
            ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
            ExecStartPre=touch ${BIND_POINT}
            ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
            ExecStop=umount -R ${RUNTIME_DIRECTORY}
          enabled: true
          name: container-mount-namespace.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service
                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt ${ORIG_EXECSTART}"
              name: 90-container-mount-namespace.conf
          name: crio.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service
                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt ${ORIG_EXECSTART} --housekeeping-interval=30s"
              name: 90-container-mount-namespace.conf
            - contents: |
                [Service]
                Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
                Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
              name: 30-kubelet-interval-tuning.conf
          name: kubelet.service
19.6.6.3. SCTP
Stream Control Transmission Protocol (SCTP) is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable the protocol.
Recommended SCTP configuration
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8,sctp
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf
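After the cluster is installed, you can verify that the module loads. A sketch, assuming the example node name used elsewhere in this document:

$ oc debug node/example-sno-1 -- chroot /host bash -c 'modprobe sctp && lsmod | grep sctp'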
19.6.6.4. Accelerated container startup
The following MachineConfig CR configures core OpenShift processes and containers to use all available CPU cores during system startup and shutdown. This accelerates system recovery during initial boot and reboots.
Recommended accelerated container startup configuration
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 04-accelerated-container-startup-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes's CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#
# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"crio kubelet NetworkManager conmon dbus"}
# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}
# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}
# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time has
# expires
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}
# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}
#######################################################
KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
KUBELET_CONF=/etc/kubernetes/kubelet.conf
unrestrictedCpuset() {
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
    cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
    if [[ -n "${cpus}" && -e ${KUBELET_CONF} ]]; then
      reserved_cpus=$(jq -r '.reservedSystemCPUs' </etc/kubernetes/kubelet.conf)
      if [[ -n "${reserved_cpus}" ]]; then
        # Use taskset to merge the two cpusets
        cpus=$(taskset -c "${reserved_cpus},${cpus}" grep -i Cpus_allowed_list /proc/self/status | awk '{print $2}')
      fi
    fi
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}
restrictedCpuset() {
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}
resetAffinity() {
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done
  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}
setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}
setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}
currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}
within() {
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}
steadystate() {
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}
waitForReady() {
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi
    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}
main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi
  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi
  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT
  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}
if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
mode: 493
path: /usr/local/bin/accelerated-container-startup.sh
systemd:
units:
- contents: |
[Unit]
Description=Unlocks more CPUs for critical system processes during container startup
[Service]
Type=simple
ExecStart=/usr/local/bin/accelerated-container-startup.sh
# Maximum wait time is 600s = 10m:
Environment=MAXIMUM_WAIT_TIME=600
# Steady-state threshold = 2%
# Allowed values:
# 4 - absolute pod count (+/-)
# 4% - percent change (+/-)
# -1 - disable the steady-state check
# Note: '%' must be escaped as '%%' in systemd unit files
Environment=STEADY_STATE_THRESHOLD=2%%
# Steady-state window = 120s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time has
# expires
Environment=STEADY_STATE_WINDOW=120
# Steady-state minimum = 40
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
Environment=STEADY_STATE_MINIMUM=40
[Install]
WantedBy=multi-user.target
enabled: true
name: accelerated-container-startup.service
- contents: |
[Unit]
Description=Unlocks more CPUs for critical system processes during container shutdown
DefaultDependencies=no
[Service]
Type=simple
ExecStart=/usr/local/bin/accelerated-container-startup.sh
# Maximum wait time is 600s = 10m:
Environment=MAXIMUM_WAIT_TIME=600
# Steady-state threshold
# Allowed values:
# 4 - absolute pod count (+/-)
# 4% - percent change (+/-)
# -1 - disable the steady-state check
# Note: '%' must be escaped as '%%' in systemd unit files
Environment=STEADY_STATE_THRESHOLD=-1
# Steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time
# expires
Environment=STEADY_STATE_WINDOW=60
[Install]
WantedBy=shutdown.target reboot.target halt.target
enabled: true
name: accelerated-container-shutdown.service
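A quick way to confirm that both units are present and enabled on a deployed node is to query systemd from a debug pod. This check is a suggestion and is not part of the reference configuration; <node_name> is a placeholder:
$ oc debug node/<node_name> -- chroot /host systemctl is-enabled accelerated-container-startup.service accelerated-container-shutdown.service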
19.6.6.5. Automatic kernel crash dumps with kdump
The kdump Linux kernel feature creates a memory dump of the kernel when the kernel crashes. kdump is enabled with the following MachineConfig CRs.
Recommended MachineConfig CR to remove ice driver from control plane kdump logs (05-kdump-config-master.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: 05-kdump-config-master
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- enabled: true
name: kdump-remove-ice-module.service
contents: |
[Unit]
Description=Remove ice module when doing kdump
Before=kdump.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/local/bin/kdump-remove-ice-module.sh
[Install]
WantedBy=multi-user.target
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo=
mode: 448
path: /usr/local/bin/kdump-remove-ice-module.sh
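The contents field in this CR is base64-encoded. For readability, it decodes to a small bash script along the following lines; the encoded source in the CR remains authoritative:
#!/usr/bin/env bash

# This script removes the ice module from kdump to prevent kdump failures on certain servers.
# This is a temporary workaround for RHELPLAN-138236 and can be removed when that issue is
# fixed.

set -x

SED="/usr/bin/sed"
GREP="/usr/bin/grep"

# override for testing purposes
KDUMP_CONF="${1:-/etc/sysconfig/kdump}"
REMOVE_ICE_STR="module_blacklist=ice"

# exit if file doesn't exist
[ ! -f ${KDUMP_CONF} ] && exit 0

# exit if file already updated
${GREP} -Fq ${REMOVE_ICE_STR} ${KDUMP_CONF} && exit 0

# Target line looks something like this:
# KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 ... hest_disable"
# Use sed to match everything between the quotes and append the REMOVE_ICE_STR to it
${SED} -i 's/^KDUMP_COMMANDLINE_APPEND="[^"]*/& '${REMOVE_ICE_STR}'/' ${KDUMP_CONF} || exit 0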
Recommended control plane node kdump configuration (06-kdump-master.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: 06-kdump-enable-master
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- enabled: true
name: kdump.service
kernelArguments:
- crashkernel=512M
Recommended MachineConfig CR to remove ice driver from worker node kdump logs (05-kdump-config-worker.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 05-kdump-config-worker
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- enabled: true
name: kdump-remove-ice-module.service
contents: |
[Unit]
Description=Remove ice module when doing kdump
Before=kdump.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/local/bin/kdump-remove-ice-module.sh
[Install]
WantedBy=multi-user.target
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo=
mode: 448
path: /usr/local/bin/kdump-remove-ice-module.sh
Recommended kdump worker node configuration (06-kdump-worker.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 06-kdump-enable-worker
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- enabled: true
name: kdump.service
kernelArguments:
- crashkernel=512M
19.6.7. Recommended postinstallation cluster configurations
When the cluster installation is complete, the ZTP pipeline applies the following custom resources (CRs) that are required to run DU workloads.
In GitOps ZTP v4.10 and earlier, you configure UEFI secure boot with a MachineConfig CR. In GitOps ZTP v4.11 and later, you configure UEFI secure boot with the spec.clusters.nodes.bootMode field in the SiteConfig CR.
19.6.7.1. Operator namespaces and Operator groups
Single-node OpenShift clusters that run DU workloads require OperatorGroup and Namespace custom resources (CRs) for the following Operators:
- Local Storage Operator
- Logging Operator
- PTP Operator
- SR-IOV Network Operator
The following YAML summarizes these CRs:
Recommended Operator Namespace and OperatorGroup configuration
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-local-storage
namespace: openshift-local-storage
spec:
targetNamespaces:
- openshift-local-storage
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: cluster-logging
namespace: openshift-logging
spec:
targetNamespaces:
- openshift-logging
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
labels:
openshift.io/cluster-monitoring: "true"
name: openshift-ptp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: ptp-operators
namespace: openshift-ptp
spec:
targetNamespaces:
- openshift-ptp
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: sriov-network-operators
namespace: openshift-sriov-network-operator
spec:
targetNamespaces:
- openshift-sriov-network-operator
19.6.7.2. Operator subscriptions
Single-node OpenShift clusters that run DU workloads require Subscription CRs for the following Operators:
- Local Storage Operator
- Logging Operator
- PTP Operator
- SR-IOV Network Operator
Recommended Operator subscriptions
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: cluster-logging
namespace: openshift-logging
spec:
channel: "stable"
name: cluster-logging
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: local-storage-operator
namespace: openshift-local-storage
spec:
channel: "stable"
name: local-storage-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ptp-operator-subscription
namespace: openshift-ptp
spec:
channel: "stable"
name: ptp-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: sriov-network-operator-subscription
namespace: openshift-sriov-network-operator
spec:
channel: "stable"
name: sriov-network-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
- 1: Specify the channel to get the Operator from. stable is the recommended channel.
- 2: Specify Manual or Automatic. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved.
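When installPlanApproval is set to Manual, Operator updates wait until an administrator approves the pending install plan in the relevant namespace. The following commands are a generic OLM sketch, not part of the reference configuration; the namespace is one of the namespaces above and the install plan name is a placeholder:
$ oc get installplan -n openshift-ptp
$ oc patch installplan <install_plan_name> -n openshift-ptp --type merge --patch '{"spec":{"approved":true}}'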
19.6.7.3. Cluster logging and log forwarding
Single-node OpenShift clusters that run DU workloads require logging and log forwarding for debugging. The following example YAML illustrates the required ClusterLogging and ClusterLogForwarder CRs.
Recommended cluster logging and log forwarding configuration
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
collection:
logs:
fluentd: {}
type: fluentd
curation:
type: "curator"
curator:
schedule: "30 3 * * *"
managementState: Managed
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
inputs:
- infrastructure: {}
name: infra-logs
outputs:
- name: kafka-open
type: kafka
url: tcp://10.46.55.190:9092/test
pipelines:
- inputRefs:
- audit
name: audit-logs
outputRefs:
- kafka-open
- inputRefs:
- infrastructure
name: infrastructure-logs
outputRefs:
- kafka-open
19.6.7.4. Performance profile
Single-node OpenShift clusters that run DU workloads require a Node Tuning Operator performance profile to use real-time host capabilities and services.
In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
The following example PerformanceProfile CR illustrates the required cluster configuration.
Recommended performance profile configuration
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  additionalKernelArgs:
    - "rcupdate.rcu_normal_after_boot=0"
    - "efi=runtime"
  cpu:
    isolated: 2-51,54-103
    reserved: 0-1,52-53
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 32
        size: 1G
        node: 0
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: true
- 1: Ensure that the value for name matches the value specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml.
- 2: Configures UEFI secure boot for the cluster host.
- 3: Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.
  Important: The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behavior in the system.
- 4: Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.
- 5: Set the number of huge pages.
- 6: Set the huge page size.
- 7: Set node to the NUMA node where the hugepages are allocated.
- 8: Set enabled to true to install the real-time Linux kernel.
19.6.7.5. Configuring cluster time synchronization
Run a one-time system time synchronization job for control plane or worker nodes.
Recommended one time time-sync for control plane nodes (99-sync-time-once-master.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: 99-sync-time-once-master
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- contents: |
[Unit]
Description=Sync time once
After=network.service
[Service]
Type=oneshot
TimeoutStartSec=300
ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0'
ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
enabled: true
name: sync-time-once.service
Recommended one time time-sync for worker nodes (99-sync-time-once-worker.yaml)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 99-sync-time-once-worker
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- contents: |
[Unit]
Description=Sync time once
After=network.service
[Service]
Type=oneshot
TimeoutStartSec=300
ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0'
ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
enabled: true
name: sync-time-once.service
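To confirm that the one-time synchronization ran on a node, you can read the unit journal from a debug shell. This check is a suggestion and is not part of the reference configuration; <node_name> is a placeholder:
$ oc debug node/<node_name> -- chroot /host journalctl -u sync-time-once.service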
19.6.7.6. PTP
Single-node OpenShift clusters use Precision Time Protocol (PTP) for network time synchronization. The following example PtpConfig CR illustrates the recommended PTP configuration.
Recommended PTP configuration
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: ordinary
namespace: openshift-ptp
annotations:
ran.openshift.io/ztp-deploy-wave: "10"
spec:
profile:
- name: "ordinary"
# The interface name is hardware-specific
interface: $interface
ptp4lOpts: "-2 -s"
phc2sysOpts: "-a -r -n 24"
ptpSchedulingPolicy: SCHED_FIFO
ptpSchedulingPriority: 10
ptp4lConf: |
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 1
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 255
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval -4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 50
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval 0
kernel_leap 1
check_fup_sync 0
clock_class_threshold 7
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 2.0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type OC
network_transport L2
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 0
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
recommend:
- profile: "ordinary"
priority: 4
match:
- nodeLabel: "node-role.kubernetes.io/$mcp"
19.6.7.7. Extended Tuned profile
Single-node OpenShift clusters that run DU workloads require additional performance tuning configurations necessary for high-performance workloads. The following example Tuned CR extends the Tuned profile.
Recommended extended Tuned profile configuration
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: performance-patch
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - data: |
        [main]
        summary=Configuration changes profile inherited from performance created tuned
        include=openshift-node-performance-openshift-node-performance-profile
        [bootloader]
        cmdline_crash=nohz_full=2-51,54-103
        [sysctl]
        kernel.timer_migration=1
        [scheduler]
        group.ice-ptp=0:f:10:*:ice-ptp.*
        [service]
        service.stalld=start,enable
        service.chronyd=stop,disable
      name: performance-patch
  recommend:
    - machineConfigLabels:
        machineconfiguration.openshift.io/role: master
      priority: 19
      profile: performance-patch
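One way to confirm that the Node Tuning Operator has picked up the performance-patch profile is to list the per-node Profile objects that the Operator maintains and check which Tuned profile each node applies. This is a suggested spot check, not part of the reference configuration:
$ oc get profile -n openshift-cluster-node-tuning-operator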
19.6.7.8. SR-IOV
Single root I/O virtualization (SR-IOV) is commonly used to enable the fronthaul and the midhaul networks. The following YAML example configures SR-IOV for a single-node OpenShift cluster.
Recommended SR-IOV configuration
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
name: default
namespace: openshift-sriov-network-operator
spec:
configDaemonNodeSelector:
node-role.kubernetes.io/master: ""
disableDrain: true
enableInjector: true
enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
name: sriov-nw-du-mh
namespace: openshift-sriov-network-operator
spec:
networkNamespace: openshift-sriov-network-operator
resourceName: du_mh
vlan: 150
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
name: sriov-nnp-du-mh
namespace: openshift-sriov-network-operator
spec:
deviceType: vfio-pci
isRdma: false
nicSelector:
pfNames:
- ens7f0
nodeSelector:
node-role.kubernetes.io/master: ""
numVfs: 8
priority: 10
resourceName: du_mh
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
name: sriov-nw-du-fh
namespace: openshift-sriov-network-operator
spec:
networkNamespace: openshift-sriov-network-operator
resourceName: du_fh
vlan: 140
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
name: sriov-nnp-du-fh
namespace: openshift-sriov-network-operator
spec:
deviceType: netdevice
isRdma: true
nicSelector:
pfNames:
- ens5f0
nodeSelector:
node-role.kubernetes.io/master: ""
numVfs: 8
priority: 10
resourceName: du_fh
- 1: Specifies the VLAN for the midhaul network.
- 2: Select either vfio-pci or netdevice, as needed.
- 3: Specifies the interface connected to the midhaul network.
- 4: Specifies the number of VFs for the midhaul network.
- 5: The VLAN for the fronthaul network.
- 6: Select either vfio-pci or netdevice, as needed.
- 7: Specifies the interface connected to the fronthaul network.
- 8: Specifies the number of VFs for the fronthaul network.
19.6.7.9. Console Operator
The console-operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and removing it frees resources for application workloads. The following Console CR disables the console:
Recommended console configuration
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "false"
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal
19.6.7.10. Alertmanager
Single-node OpenShift clusters that run DU workloads require reduced CPU resources consumed by the OpenShift Container Platform monitoring components. The following ConfigMap CR disables Alertmanager and sets the Prometheus retention period to 24 hours:
Recommended cluster monitoring configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
    prometheusK8s:
      retention: 24h
19.6.7.11. Operator Lifecycle Manager
Single-node OpenShift clusters that run distributed unit workloads require consistent access to CPU resources. Operator Lifecycle Manager (OLM) collects performance data from Operators at regular intervals, resulting in an increase in CPU utilization. The following ConfigMap CR disables the collection of Operator performance data by OLM:
Recommended cluster OLM configuration (ReduceOLMFootprint.yaml)
apiVersion: v1
kind: ConfigMap
metadata:
  name: collect-profiles-config
  namespace: openshift-operator-lifecycle-manager
data:
  pprof-config.yaml: |
    disabled: True
19.6.7.12. Network diagnostics
Single-node OpenShift clusters that run DU workloads require fewer inter-pod network connectivity checks to reduce the additional load created by these pods. The following custom resource (CR) disables these checks.
Recommended network diagnostics configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  disableNetworkDiagnostics: true
19.7. Validating single-node OpenShift cluster tuning for vDU application workloads
Before you can deploy virtual distributed unit (vDU) applications, you need to tune and configure the cluster host firmware and various other cluster configuration settings. Use the following information to validate the cluster configuration to support vDU workloads.
19.7.1. Recommended firmware configuration for vDU cluster hosts
Use the following table as the basis to configure the cluster host firmware for vDU applications running on OpenShift Container Platform 4.12.
The following table is a general recommendation for vDU cluster host firmware configuration. Exact firmware settings will depend on your requirements and specific hardware platform. Automatic setting of firmware is not handled by the zero touch provisioning pipeline.
| Firmware setting | Configuration | Description |
|---|---|---|
| HyperTransport (HT) | Enabled | HyperTransport (HT) bus is a bus technology developed by AMD. HT provides a high-speed link between the components in the host memory and other system peripherals. |
| UEFI | Enabled | Enable booting from UEFI for the vDU host. |
| CPU Power and Performance Policy | Performance | Set CPU Power and Performance Policy to optimize the system for performance over energy efficiency. |
| Uncore Frequency Scaling | Disabled | Disable Uncore Frequency Scaling to prevent the voltage and frequency of non-core parts of the CPU from being set independently. |
| Uncore Frequency | Maximum | Sets the non-core parts of the CPU such as cache and memory controller to their maximum possible frequency of operation. |
| Performance P-limit | Disabled | Disable Performance P-limit to prevent the Uncore frequency coordination of processors. |
| Enhanced Intel® SpeedStep Tech | Enabled | Enable Enhanced Intel SpeedStep to allow the system to dynamically adjust processor voltage and core frequency that decreases power consumption and heat production in the host. |
| Intel® Turbo Boost Technology | Enabled | Enable Turbo Boost Technology for Intel-based CPUs to automatically allow processor cores to run faster than the rated operating frequency if they are operating below power, current, and temperature specification limits. |
| Intel Configurable TDP | Enabled | Enables Thermal Design Power (TDP) for the CPU. |
| Configurable TDP Level | Level 2 | TDP level sets the CPU power consumption required for a particular performance rating. TDP level 2 sets the CPU to the most stable performance level at the cost of power consumption. |
| Energy Efficient Turbo | Disabled | Disable Energy Efficient Turbo to prevent the processor from using an energy-efficiency based policy. |
| Hardware P-States | Enabled or Disabled | Enable OS-controlled P-States to allow power saving configurations. Disable P-States to optimize the host for performance over power savings. |
| Package C-State | C0/C1 state | Use C0 or C1 states to set the processor to a fully active state (C0) or to stop CPU internal clocks running in software (C1). |
| C1E | Disabled | CPU Enhanced Halt (C1E) is a power saving feature in Intel chips. Disabling C1E prevents the operating system from sending a halt command to the CPU when inactive. |
| Processor C6 | Disabled | C6 power-saving is a CPU feature that automatically disables idle CPU cores and cache. Disabling C6 improves system performance. |
| Sub-NUMA Clustering | Disabled | Sub-NUMA clustering divides the processor cores, cache, and memory into multiple NUMA domains. Disabling this option can increase performance for latency-sensitive workloads. |
Enable global SR-IOV and VT-d settings in the firmware for the host. These settings are relevant to bare-metal environments.
Enable both C-states and OS-controlled P-States to allow per pod power management.
19.7.2. Recommended cluster configurations to run vDU applications
Clusters running virtualized distributed unit (vDU) applications require a highly tuned and optimized configuration. The following information describes the various elements that you require to support vDU workloads in OpenShift Container Platform 4.12 clusters.
19.7.2.1. Recommended cluster MachineConfig CRs
Check that the MachineConfig custom resources (CRs) that you extract from the ztp-site-generate container are applied in the cluster. The CRs can be found in the extracted out/source-crs/extra-manifest/ folder.
The following MachineConfig CRs from the ztp-site-generate container configure the cluster host:
| CR filename | Description |
|---|---|
| | Configures workload partitioning for the cluster. Apply this MachineConfig CR when you install the cluster. |
| | Loads the SCTP kernel module. These MachineConfig CRs are optional and can be omitted if you do not require this kernel module. |
| | Configures the container mount namespace and Kubelet configuration. |
| | Configures accelerated startup for the cluster. |
| | Configures kdump for the cluster. |
19.7.2.2. Recommended cluster Operators
The following Operators are required for clusters running virtualized distributed unit (vDU) applications and are a part of the baseline reference configuration:
- Node Tuning Operator (NTO). NTO packages functionality that was previously delivered with the Performance Addon Operator, which is now a part of NTO.
- PTP Operator
- SR-IOV Network Operator
- Red Hat OpenShift Logging Operator
- Local Storage Operator
19.7.2.3. Recommended cluster kernel configuration
Always use the latest supported real-time kernel version in your cluster. Ensure that you apply the following configurations in the cluster:
Ensure that the following additionalKernelArgs are set in the cluster performance profile:
spec:
  additionalKernelArgs:
    - "rcupdate.rcu_normal_after_boot=0"
    - "efi=runtime"
Ensure that the performance-patch profile in the Tuned CR configures the correct CPU isolation set that matches the isolated CPU set in the related PerformanceProfile CR, for example:
spec:
  profile:
    - name: performance-patch
      # The 'include' line must match the associated PerformanceProfile name
      # And the cmdline_crash CPU set must match the 'isolated' set in the associated PerformanceProfile
      data: |
        [main]
        summary=Configuration changes profile inherited from performance created tuned
        include=openshift-node-performance-openshift-node-performance-profile
        [bootloader]
        cmdline_crash=nohz_full=2-51,54-103
        [sysctl]
        kernel.timer_migration=1
        [scheduler]
        group.ice-ptp=0:f:10:*:ice-ptp.*
        [service]
        service.stalld=start,enable
        service.chronyd=stop,disable
The listed CPUs depend on the host hardware configuration, specifically the number of available CPUs in the system and the CPU topology.
19.7.2.4. Checking the realtime kernel version
Always use the latest version of the realtime kernel in your OpenShift Container Platform clusters. If you are unsure about the kernel version that is in use in the cluster, you can compare the current realtime kernel version to the release version with the following procedure.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
- You have installed podman.
Procedure
Run the following command to get the cluster version:
$ OCP_VERSION=$(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}')Get the release image SHA number:
$ DTK_IMAGE=$(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:$OCP_VERSION-x86_64)Run the release image container and extract the kernel version that is packaged with cluster’s current release:
$ podman run --rm $DTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'Example output
4.18.0-305.49.1.rt7.121.el8_4.x86_64This is the default realtime kernel version that ships with the release.
NoteThe realtime kernel is denoted by the string
in the kernel version..rt
Verification
Check that the kernel version listed for the cluster’s current release matches actual realtime kernel that is running in the cluster. Run the following commands to check the running realtime kernel version:
Open a remote shell connection to the cluster node:
$ oc debug node/<node_name>Check the realtime kernel version:
sh-4.4# uname -rExample output
4.18.0-305.49.1.rt7.121.el8_4.x86_64
19.7.3. Checking that the recommended cluster configurations are applied
You can check that clusters are running the correct configuration. The following procedure describes how to check the various configurations that you require to deploy a DU application in OpenShift Container Platform 4.12 clusters.
Prerequisites
- You have deployed a cluster and tuned it for vDU workloads.
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
Procedure
Check that the default OperatorHub sources are disabled. Run the following command:
$ oc get operatorhub cluster -o yamlExample output
spec: disableAllDefaultSources: trueCheck that all required
resources are annotated for workload partitioning (CatalogSource) by running the following command:PreferredDuringScheduling$ oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.target\.workload\.openshift\.io/management}{"\n"}{end}'Example output
certified-operators -- {"effect": "PreferredDuringScheduling"} community-operators -- {"effect": "PreferredDuringScheduling"} ran-operators1 redhat-marketplace -- {"effect": "PreferredDuringScheduling"} redhat-operators -- {"effect": "PreferredDuringScheduling"}- 1
CatalogSourceresources that are not annotated are also returned. In this example, theran-operatorsCatalogSourceresource is not annotated and does not have thePreferredDuringSchedulingannotation.
NoteIn a properly configured vDU cluster, only a single annotated catalog source is listed.
Check that all applicable OpenShift Container Platform Operator namespaces are annotated for workload partitioning. This includes all Operators installed with core OpenShift Container Platform and the set of additional Operators included in the reference DU tuning configuration. Run the following command:
$ oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{" -- "}{.metadata.annotations.workload\.openshift\.io/allowed}{"\n"}{end}'Example output
default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- managementImportantAdditional Operators must not be annotated for workload partitioning. In the output from the previous command, additional Operators should be listed without any value on the right side of the
separator.--Check that the
configuration is correct. Run the following commands:ClusterLoggingValidate that the appropriate input and output logs are configured:
$ oc get -n openshift-logging ClusterLogForwarder instance -o yamlExample output
apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: "2022-07-19T21:51:41Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "1030342" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open ...Check that the curation schedule is appropriate for your application:
$ oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yamlExample output
apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: "2022-07-07T18:22:56Z" generation: 1 name: instance namespace: openshift-logging resourceVersion: "235796" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed ...
Check that the web console is disabled (
) by running the following command:managementState: Removed$ oc get consoles.operator.openshift.io cluster -o jsonpath="{ .spec.managementState }"Example output
RemovedCheck that
is disabled on the cluster node by running the following commands:chronyd$ oc debug node/<node_name>Check the status of
on the node:chronydsh-4.4# chroot /hostsh-4.4# systemctl status chronydExample output
● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)Check that the PTP interface is successfully synchronized to the primary clock using a remote shell connection to the
container and the PTP Management Client (linuxptp-daemon) tool:pmcSet the
variable with the name of the$PTP_POD_NAMEpod by running the following command:linuxptp-daemon$ PTP_POD_NAME=$(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)Run the following command to check the sync status of the PTP device:
$ oc -n openshift-ptp rsh -c linuxptp-daemon-container ${PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'Example output
sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2Run the following
command to check the PTP clock status:pmc$ oc -n openshift-ptp rsh -c linuxptp-daemon-container ${PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'Example output
sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 101 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true2 gmIdentity 3c2c30.ffff.670e00Check that the expected
value corresponding to the value inmaster offsetis found in the/var/run/ptp4l.0.configlog:linuxptp-daemon-container$ oc logs $PTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-containerExample output
phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533
Check that the SR-IOV configuration is correct by running the following commands:
Check that the
value in thedisableDrainresource is set toSriovOperatorConfig:true$ oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath="{.spec.disableDrain}{'\n'}"Example output
trueCheck that the
sync status isSriovNetworkNodeStateby running the following command:Succeeded$ oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath="{.items[*].status.syncStatus}{'\n'}"Example output
SucceededVerify that the expected number and configuration of virtual functions (
) under each interface configured for SR-IOV is present and correct in theVfsfield. For example:.status.interfaces$ oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yamlExample output
apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState ... status: interfaces: ... - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: "8086" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: "8086" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: "8086" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: "8086" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: "8086" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: "8086" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: "8086" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: "8086" vfID: 7
Check that the cluster performance profile is correct. The
andcpusections will vary depending on your hardware configuration. Run the following command:hugepages$ oc get PerformanceProfile openshift-node-performance-profile -o yamlExample output
apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: "2022-07-19T21:51:31Z" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: "33558" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Available - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "True" type: Upgradeable - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Progressing - lastHeartbeatTime: "2022-07-19T21:51:31Z" lastTransitionTime: "2022-07-19T21:51:31Z" status: "False" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profileNoteCPU settings are dependent on the number of cores available on the server and should align with workload partitioning settings.
configuration is server and application dependent.hugepagesCheck that the
was successfully applied to the cluster by running the following command:PerformanceProfile$ oc get performanceprofile openshift-node-performance-profile -o jsonpath="{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\n'}{end}"Example output
Available -- True Upgradeable -- True Progressing -- False Degraded -- FalseCheck the
performance patch settings by running the following command:Tuned$ oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yamlExample output
apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: "2022-07-18T10:33:52Z" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: "34024" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-471 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch- 1
- The cpu list in
cmdline=nohz_full=will vary based on your hardware configuration.
Check that cluster networking diagnostics are disabled by running the following command:
$ oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'Example output
trueCheck that the
housekeeping interval is tuned to slower rate. This is set in theKubeletmachine config. Run the following command:containerMountNS$ oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATIONExample output
Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"Check that Grafana and
are disabled and that the Prometheus retention period is set to 24h by running the following command:alertManagerMain$ oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath="{ .data.config\.yaml }"Example output
grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24hUse the following commands to verify that Grafana and
routes are not found in the cluster:alertManagerMain$ oc get route -n openshift-monitoring alertmanager-main$ oc get route -n openshift-monitoring grafanaBoth queries should return
messages.Error from server (NotFound)
Check that a minimum of 4 CPUs are allocated as reserved for each of the PerformanceProfile, the Tuned performance-patch profile, workload partitioning, and the kernel command-line arguments by running the following command:
$ oc get performanceprofile -o jsonpath="{ .items[0].spec.cpu.reserved }"
Example output
0-3
Note: Depending on your workload requirements, you might require additional reserved CPUs to be allocated.
19.8. Advanced managed cluster configuration with SiteConfig resources
You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time.
19.8.1. Customizing extra installation manifests in the ZTP GitOps pipeline
You can define a set of extra manifests for inclusion in the installation phase of the zero touch provisioning (ZTP) GitOps pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
- Create a set of extra manifest CRs that the ZTP pipeline uses to customize the cluster installs.
In your custom /siteconfig directory, create an /extra-manifest folder for your extra manifests. The following example illustrates a sample /siteconfig with an /extra-manifest folder:
siteconfig
├── site1-sno-du.yaml
├── site2-standard-du.yaml
└── extra-manifest
    └── 01-example-machine-config.yaml
Add your custom extra manifest CRs to the directory.
siteconfig/extra-manifest In your
CR, enter the directory name in theSiteConfigfield, for example:extraManifestPathclusters: - clusterName: "example-sno" networkType: "OVNKubernetes" extraManifestPath: extra-manifest-
Save the CRs and
SiteConfigCRs and push them to the site configuration repo./extra-manifest
The ZTP pipeline appends the CRs in the
/extra-manifest
19.8.2. Filtering custom resources using SiteConfig filters
By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the zero touch provisioning (ZTP) GitOps pipeline.
You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, as well as a list of the specific extraManifest RAN CRs that you want to include or exclude. Setting inclusionDefault to include makes the ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite.
You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR.
Some additional optional filtering scenarios are also described.
Prerequisites
- You configured the hub cluster for generating the required installation and policy CRs.
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
To prevent the ZTP pipeline from applying the
CR file, apply the following YAML in the03-sctp-machine-config-worker.yamlCR:SiteConfigapiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site1-sno-du" namespace: "site1-sno-du" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.12" sshPublicKey: "<ssh_public_key>" clusters: - clusterName: "site1-sno-du" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yamlThe ZTP pipeline skips the
CR during installation. All other CRs in03-sctp-machine-config-worker.yamlare applied./source-crs/extra-manifestSave the
CR and push the changes to the site configuration repository.SiteConfigThe ZTP pipeline monitors and adjusts what CRs it applies based on the
filter instructions.SiteConfigOptional: To prevent the ZTP pipeline from applying all the
CRs during cluster installation, apply the following YAML in the/source-crs/extra-manifestCR:SiteConfig- clusterName: "site1-sno-du" extraManifests: filter: inclusionDefault: excludeOptional: To exclude all the
RAN CRs and instead include a custom CR file during installation, edit the custom/source-crs/extra-manifestCR to set the custom manifests folder and theSiteConfigfile, for example:includeclusters: - clusterName: "site1-sno-du" extraManifestPath: "<custom_manifest_folder>"1 extraManifests: filter: inclusionDefault: exclude2 include: - custom-sctp-machine-config-worker.yamlThe following example illustrates the custom folder structure:
siteconfig
├── site1-sno-du.yaml
└── user-custom-manifest
    └── custom-sctp-machine-config-worker.yaml
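For illustration, a user-provided file such as custom-sctp-machine-config-worker.yaml in the structure above could be a minimal MachineConfig CR that loads the SCTP kernel module on worker nodes. The following content is a hypothetical sketch, not a file shipped in the ZTP container:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: custom-load-sctp-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/sctp.conf
          mode: 420
          overwrite: true
          contents:
            source: data:,sctp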
19.9. Advanced managed cluster configuration with PolicyGenTemplate resources
You can use PolicyGenTemplate CRs to deploy custom functionality and configurations in your managed clusters.
19.9.1. Deploying additional changes to clusters
If you require cluster configuration changes outside of the base GitOps ZTP pipeline configuration, there are three options:
- Apply the additional configuration after the ZTP pipeline is complete
- When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget.
- Add content to the ZTP library
- The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required.
- Create extra manifests for the cluster installation
- Extra manifests are applied during installation and make the installation process more efficient.
Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform.
19.9.2. Using PolicyGenTemplate CRs to override source CRs content
PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or to overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD.
Procedure
Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the zero touch provisioning (ZTP) container.
Create an /out folder:
$ mkdir -p ./out
Extract the source CRs:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.1 extract /home/ztp --tar | tar x -C ./out
Review the baseline
CR inPerformanceProfile:./out/source-crs/PerformanceProfile.yamlapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: $name annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: $isolated reserved: $reserved hugepages: defaultHugepagesSize: $defaultHugepagesSize pages: - size: $size count: $count node: $node machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/$mcp: "" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/$mcp: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: trueNoteAny fields in the source CR which contain
are removed from the generated CR if they are not provided in the$…CR.PolicyGenTemplateUpdate the
entry forPolicyGenTemplatein thePerformanceProfilereference file. The following examplegroup-du-sno-ranGen.yamlCR stanza supplies appropriate CPU specifications, sets thePolicyGenTemplateconfiguration, and adds a new field that setshugepagesto false.globallyDisableIrqLoadBalancing- fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: "2-19,22-39" reserved: "0-1,20-21" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false-
Commit the change in Git, and then push to the Git repository being monitored by the GitOps ZTP argo CD application.
PolicyGenTemplate
Example output
The ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents of the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content:
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  additionalKernelArgs:
    - idle=poll
    - rcupdate.rcu_normal_after_boot=0
  cpu:
    isolated: 2-19,22-39
    reserved: 0-1,20-21
  globallyDisableIrqLoadBalancing: false
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 10
        size: 1G
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  net:
    userLevelNetworking: true
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: restricted
  realTimeKernel:
    enabled: true
In the /source-crs folder in the ztp-site-generate container, the $ prefix indicates fields that the policyGen tool substitutes at build time. Any fields in the source CR that use the $ prefix and are not supplied in the PolicyGenTemplate CR are removed from the generated CR.
An exception to this is the $mcp variable in /source-crs files, which is substituted with the specified value of mcp from the PolicyGenTemplate CR. For example, example/policygentemplates/group-du-standard-ranGen.yaml contains the following entry for mcp worker:
spec:
  bindingRules:
    group-du-standard: ""
  mcp: "worker"
The policyGen tool replaces instances of $mcp with worker in the output CRs.
19.9.3. Adding custom content to the GitOps ZTP pipeline
Perform the following procedure to add new content to the ZTP pipeline.
Procedure
-
Create a subdirectory named in the directory that contains the
source-crsfile for thekustomization.yamlcustom resource (CR).PolicyGenTemplate Add your custom CRs to the
subdirectory, as shown in the following example:source-crsexample └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml- 1
- The
source-crssubdirectory must be in the same directory as thekustomization.yamlfile.
ImportantTo use your own resources, ensure that the custom CR names differ from the default source CRs provided in the ZTP container.
Update the required
CRs to include references to the content you added in thePolicyGenTemplateandsource-crs/custom-crsdirectories. For example:source-crs/elasticsearchapiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-dev" namespace: "ztp-clusters" spec: bindingRules: dev: "true" mcp: "master" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: "group-dev-cluster-log-ns" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: "group-dev-cluster-log-operator-group" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: "group-dev-cluster-log-sub" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: "group-dev-lso-ns" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: "group-dev-lso-operator-group" - fileName: StorageSubscription.yaml remediationAction: inform policyName: "group-dev-lso-sub" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: "group-dev-pao-ns" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: "group-dev-pao-cat-source" spec: image: <image_URL_here> - fileName: PaoSubscription.yaml remediationAction: inform policyName: "group-dev-pao-sub" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml1 remediationAction: inform policyName: "group-dev-elasticsearch-ns" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: "group-dev-elasticsearch-operator-group" #Custom Resources - fileName: custom-crs/apiserver-config.yaml2 remediationAction: inform policyName: "group-dev-apiserver-config" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: "group-dev-disable-nic-lldp"-
Commit the change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application.
PolicyGenTemplate Update the
CR to include the changedClusterGroupUpgradeand save it asPolicyGenTemplate. The following example shows a generatedcgu-test.yamlfile.cgu-test.yamlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240Apply the updated
CR by running the following command:ClusterGroupUpgrade$ oc apply -f cgu-test.yaml
Verification
Check that the updates have succeeded by running the following command:
$ oc get cgu -AExample output
NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies
19.9.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs
Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances.
You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure the duration and frequency of how RHACM re-evaluates applied cluster policies in the evaluationInterval field of the ConfigurationPolicy CR.
The zero touch provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with predefined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never.
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in to the hub cluster as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data.
Procedure
To configure the evaluation interval for all policies in a
CR, addPolicyGenTemplateto theevaluationIntervalfield, and then set the appropriatespecandcompliantvalues. For example:noncompliantspec: evaluationInterval: compliant: 30m noncompliant: 20sTo configure the evaluation interval for the
object in aspec.sourceFilesCR, addPolicyGenTemplateto theevaluationIntervalfield, for example:sourceFilesspec: sourceFiles: - fileName: SriovSubscription.yaml policyName: "sriov-sub-policy" evaluationInterval: compliant: never noncompliant: 10s-
Commit the CRs files in the Git repository and push your changes.
PolicyGenTemplate
Verification
Check that the managed spoke cluster policies are monitored at the expected intervals.
-
Log in as a user with privileges on the managed cluster.
cluster-admin Get the pods that are running in the
namespace. Run the following command:open-cluster-management-agent-addon$ oc get pods -n open-cluster-management-agent-addonExample output
NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10dCheck the applied policies are being evaluated at the expected interval in the logs for the
pod:config-policy-controller$ oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdbExample output
2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"}
19.9.5. Signalling ZTP cluster deployment completion with validator inform policies Link kopierenLink in die Zwischenablage kopiert!
Create a validator inform policy that signals when the zero touch provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters.
Procedure
Create a standalone
custom resource (CR) that contains the source filePolicyGenTemplate. You only need one standalonevalidatorCRs/informDuValidator.yamlCR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters:PolicyGenTemplateExample single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml)
apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "group-du-sno-validator"1 namespace: "ztp-group"2 spec: bindingRules: group-du-sno: ""3 bindingExcludedRules: ztp-done: ""4 mcp: "master"5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform6 policyName: "du-policy"7 - 1
- The name of
PolicyGenTemplatesobject. This name is also used as part of the names for theplacementBinding,placementRule, andpolicythat are created in the requestednamespace. - 2
- This value should match the
namespaceused in the groupPolicyGenTemplates. - 3
- The
group-du-*label defined inbindingRulesmust exist in theSiteConfigfiles. - 4
- The label defined in
bindingExcludedRulesmust be`ztp-done:`. Theztp-donelabel is used in coordination with the Topology Aware Lifecycle Manager. - 5
mcpdefines theMachineConfigPoolobject that is used in the source filevalidatorCRs/informDuValidator.yaml. It should bemasterfor single node and three-node cluster deployments andworkerfor standard cluster deployments.- 6
- Optional. The default value is
inform. - 7
- This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single node example is
group-du-sno-validator-du-policy.
-
Commit the CR file in your Git repository and push the changes.
PolicyGenTemplate
19.9.6. Configuring PTP events with PolicyGenTemplate CRs Link kopierenLink in die Zwischenablage kopiert!
You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status.
19.9.6.1. Configuring PTP events that use HTTP transport Link kopierenLink in die Zwischenablage kopiert!
You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data.
Procedure
Apply the following
changes toPolicyGenTemplate,group-du-3node-ranGen.yaml, orgroup-du-sno-ranGen.yamlfiles according to your requirements:group-du-standard-ranGen.yamlIn
, add the.sourceFilesCR file that configures the transport host:PtpOperatorConfig- fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043NoteIn OpenShift Container Platform 4.12 or later, you do not need to set the
field in thetransportHostresource when you use HTTP transport with PTP events.PtpOperatorConfigConfigure the
andlinuxptpfor the PTP clock type and interface. For example, add the following stanza intophc2sys:.sourceFiles- fileName: PtpConfigSlave.yaml1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1"2 ptp4lOpts: "-2 -s --summary_interval -4"3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"4 ptpClockThreshold:5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs- 1
- Can be one of
PtpConfigMaster.yaml,PtpConfigSlave.yaml, orPtpConfigSlaveCvl.yamldepending on your requirements.PtpConfigSlaveCvl.yamlconfigureslinuxptpservices for an Intel E810 Columbiaville NIC. For configurations based ongroup-du-sno-ranGen.yamlorgroup-du-3node-ranGen.yaml, usePtpConfigSlave.yaml. - 2
- Device specific interface name.
- 3
- You must append the
--summary_interval -4value toptp4lOptsin.spec.sourceFiles.spec.profileto enable PTP fast events. - 4
- Required
phc2sysOptsvalues.-mprints messages tostdout. Thelinuxptp-daemonDaemonSetparses the logs and generates Prometheus metrics. - 5
- Optional. If the
ptpClockThresholdstanza is not present, default values are used for theptpClockThresholdfields. The stanza shows defaultptpClockThresholdvalues. TheptpClockThresholdvalues configure how long after the PTP master clock is disconnected before PTP events are triggered.holdOverTimeoutis the time value in seconds before the PTP clock event state changes toFREERUNwhen the PTP master clock is disconnected. ThemaxOffsetThresholdandminOffsetThresholdsettings configure offset values in nanoseconds that compare against the values forCLOCK_REALTIME(phc2sys) or master offset (ptp4l). When theptp4lorphc2sysoffset value is outside this range, the PTP clock state is set toFREERUN. When the offset value is within this range, the PTP clock state is set toLOCKED.
- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
19.9.6.2. Configuring PTP events that use AMQP transport Link kopierenLink in die Zwischenablage kopiert!
You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status.
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data.
Procedure
Add the following YAML into
in the.spec.sourceFilesfile to configure the AMQP Operator:common-ranGen.yaml#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy"Apply the following
changes toPolicyGenTemplate,group-du-3node-ranGen.yaml, orgroup-du-sno-ranGen.yamlfiles according to your requirements:group-du-standard-ranGen.yamlIn
, add the.sourceFilesCR file that configures the AMQ transport host to thePtpOperatorConfig:config-policy- fileName: PtpOperatorConfigForEvent.yaml policyName: "config-policy" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: "amqp://amq-router.amq-router.svc.cluster.local"Configure the
andlinuxptpfor the PTP clock type and interface. For example, add the following stanza intophc2sys:.sourceFiles- fileName: PtpConfigSlave.yaml1 policyName: "config-policy" metadata: name: "du-ptp-slave" spec: profile: - name: "slave" interface: "ens5f1"2 ptp4lOpts: "-2 -s --summary_interval -4"3 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"4 ptpClockThreshold:5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs- 1
- Can be one
PtpConfigMaster.yaml,PtpConfigSlave.yaml, orPtpConfigSlaveCvl.yamldepending on your requirements.PtpConfigSlaveCvl.yamlconfigureslinuxptpservices for an Intel E810 Columbiaville NIC. For configurations based ongroup-du-sno-ranGen.yamlorgroup-du-3node-ranGen.yaml, usePtpConfigSlave.yaml. - 2
- Device specific interface name.
- 3
- You must append the
--summary_interval -4value toptp4lOptsin.spec.sourceFiles.spec.profileto enable PTP fast events. - 4
- Required
phc2sysOptsvalues.-mprints messages tostdout. Thelinuxptp-daemonDaemonSetparses the logs and generates Prometheus metrics. - 5
- Optional. If the
ptpClockThresholdstanza is not present, default values are used for theptpClockThresholdfields. The stanza shows defaultptpClockThresholdvalues. TheptpClockThresholdvalues configure how long after the PTP master clock is disconnected before PTP events are triggered.holdOverTimeoutis the time value in seconds before the PTP clock event state changes toFREERUNwhen the PTP master clock is disconnected. ThemaxOffsetThresholdandminOffsetThresholdsettings configure offset values in nanoseconds that compare against the values forCLOCK_REALTIME(phc2sys) or master offset (ptp4l). When theptp4lorphc2sysoffset value is outside this range, the PTP clock state is set toFREERUN. When the offset value is within this range, the PTP clock state is set toLOCKED.
Apply the following
changes to your specific site YAML files, for example,PolicyGenTemplate:example-sno-site.yamlIn
, add the.sourceFilesCR file that configures the AMQ router to theInterconnect:config-policy- fileName: AmqInstance.yaml policyName: "config-policy"
- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
19.9.7. Configuring bare-metal events with PolicyGenTemplate CRs Link kopierenLink in die Zwischenablage kopiert!
You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status.
19.9.7.1. Configuring bare-metal events that use HTTP transport Link kopierenLink in die Zwischenablage kopiert!
You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data.
Procedure
Configure the Bare Metal Event Relay Operator by adding the following YAML to
in thespec.sourceFilesfile:common-ranGen.yaml# Bare Metal Event Relay operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy"Add the
CR toHardwareEventin your specific group configuration file, for example, in thespec.sourceFilesfile:group-du-sno-ranGen.yaml- fileName: HardwareEvent.yaml1 policyName: "config-policy" spec: nodeSelector: {} transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043" logLevel: "info"- 1
- Each baseboard management controller (BMC) requires a single
HardwareEventCR only.
NoteIn OpenShift Container Platform 4.12 or later, you do not need to set the
field in thetransportHostcustom resource (CR) when you use HTTP transport with bare-metal events.HardwareEvent- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP.
Create the Redfish Secret by running the following command:
$ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>"
19.9.7.2. Configuring bare-metal events that use AMQP transport Link kopierenLink in die Zwischenablage kopiert!
You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status.
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data.
Procedure
To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to
in thespec.sourceFilesfile:common-ranGen.yaml# AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: AmqSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: AmqSubscription.yaml policyName: "subscriptions-policy" # Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: BareMetalEventRelaySubscription.yaml policyName: "subscriptions-policy"Add the
CR toInterconnectin the site configuration file, for example, the.spec.sourceFilesfile:example-sno-site.yaml- fileName: AmqInstance.yaml policyName: "config-policy"Add the
CR toHardwareEventin your specific group configuration file, for example, in thespec.sourceFilesfile:group-du-sno-ranGen.yaml- fileName: HardwareEvent.yaml policyName: "config-policy" spec: nodeSelector: {} transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local"1 logLevel: "info"- 1
- The
transportHostURL is composed of the existing AMQ Interconnect CRnameandnamespace. For example, intransportHost: "amqp://amq-router.amq-router.svc.cluster.local", the AMQ Interconnectnameandnamespaceare both set toamq-router.
NoteEach baseboard management controller (BMC) requires a single
resource only.HardwareEvent-
Commit the change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP.
PolicyGenTemplate Create the Redfish Secret by running the following command:
$ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \ --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \ --from-literal=hostaddr="<bmc_host_ip_addr>"
19.9.8. Configuring the Image Registry Operator for local caching of images Link kopierenLink in die Zwischenablage kopiert!
OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times.
Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the
/var/lib/containers/storage
Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the
SiteConfig
PolicyGenTemplate
imageregistry
The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images.
19.9.8.1. Configuring disk partitioning with SiteConfig Link kopierenLink in die Zwischenablage kopiert!
Configure disk partitioning for a managed cluster using a
SiteConfig
SiteConfig
You must complete this procedure at installation time.
Prerequisites
- Install Butane.
Procedure
Create the
file by using the following example YAML file:storage.buvariant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:01 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition>2 size_mib: <partition_size>3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquotaConvert the
file to an Ignition file by running the following command:storage.bu$ butane storage.buExample output
{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}- Use a tool such as JSON Pretty Print to convert the output into JSON format.
Copy the output into the
field in the.spec.clusters.nodes.ignitionConfigOverrideCR:SiteConfig[...] spec: clusters: - nodes: - ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } [...]NoteIf the
field does not exist, create it..spec.clusters.nodes.ignitionConfigOverride
Verification
During or after installation, verify on the hub cluster that the
object shows the annotation by running the following command:BareMetalHost$ oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"]Example output
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}"After installation, check the single-node OpenShift disk status:
Enter into a debug session on the single-node OpenShift node by running the following command.
This step instantiates a debug pod called
:<node_name>-debug$ oc debug node/my-sno-nodeSet
as the root directory within the debug shell by running the following command./hostThe debug pod mounts the host’s root file system in
within the pod. By changing the root directory to/host, you can run binaries contained in the host’s executable paths:/host# chroot /hostList information about all available block devices by running the following command:
# lsblkExample output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containersDisplay information about the file system disk space usage by running the following command:
# df -hExample output
Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000
19.9.8.2. Configuring the image registry using PolicyGenTemplate CRs Link kopierenLink in die Zwischenablage kopiert!
Use
PolicyGenTemplate
imageregistry
Prerequisites
- You have configured a disk partition in the managed cluster.
-
You have installed the OpenShift CLI ().
oc -
You have logged in to the hub cluster as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP).
Procedure
Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate
CR. For example, to configure an individual site, add the following YAML to the filePolicyGenTemplate:example-sno-site.yamlsourceFiles: # storage class - fileName: StorageClass.yaml policyName: "sc-for-image-registry" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: "100"1 # persistent volume claim - fileName: StoragePVC.yaml policyName: "pvc-for-image-registry" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml2 policyName: "pv-for-image-registry" metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" - fileName: ImageRegistryConfig.yaml policyName: "config-for-image-registry" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: storage: pvc: claim: "image-registry-pvc"- 1
- Set the appropriate value for
ztp-deploy-wavedepending on whether you are configuring image registries at the site, common, or group level.ztp-deploy-wave: "100"is suitable for development or testing because it allows you to group the referenced source files together. - 2
- In
ImageRegistryPV.yaml, ensure that thespec.local.pathfield is set to/var/imageregistryto match the value set for themount_pointfield in theSiteConfigCR.
ImportantDo not set
for thecomplianceType: mustonlyhaveconfiguration. This can cause the registry pod deployment to fail.- fileName: ImageRegistryConfig.yaml-
Commit the change in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
PolicyGenTemplate
Verification
Use the following steps to troubleshoot errors with the local image registry on the managed clusters:
Verify successful login to the registry while logged in to the managed cluster. Run the following commands:
Export the managed cluster name:
$ cluster=<managed_cluster_name>Get the managed cluster
details:kubeconfig$ oc get secret -n $cluster $cluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-$clusterDownload and export the cluster
:kubeconfig$ oc get secret -n $cluster $cluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-$cluster && export KUBECONFIG=./kubeconfig-$cluster- Verify access to the image registry from the managed cluster. See "Accessing the registry".
Check that the
CRD in theConfiggroup instance is not reporting errors. Run the following command while logged in to the managed cluster:imageregistry.operator.openshift.io$ oc get image.config.openshift.io cluster -o yamlExample output
apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2021-10-08T19:02:39Z" generation: 5 name: cluster resourceVersion: "688678648" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-iceCheck that the
on the managed cluster is populated with data. Run the following command while logged in to the managed cluster:PersistentVolumeClaim$ oc get pv image-registry-scCheck that the
pod is running and is located under theregistry*namespace.openshift-image-registry$ oc get pods -n openshift-image-registry | grep registry*Example output
cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8dCheck that the disk partition on the managed cluster is correct:
Open a debug shell to the managed cluster:
$ oc debug node/sno-1.example.comRun
to check the host disk partitions:lsblksh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom- 1
/var/imageregistryindicates that the disk is correctly partitioned.
19.9.9. Using hub templates in PolicyGenTemplate CRs Link kopierenLink in die Zwischenablage kopiert!
Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps ZTP.
Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similiar configurations but with different values.
Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created.
The following supported hub template functions are available for use in GitOps ZTP with TALM:
fromConfigmapreturns the value of the provided data key in the namedresource.ConfigMapNoteThere is a 1 MiB size limit for
CRs. The effective size forConfigMapCRs is further limited by theConfigMapannotation. To avoid thelast-applied-configurationlimitation, add the following annotation to the templatelast-applied-configuration:ConfigMapargocd.argoproj.io/sync-options: Replace=true-
base64encreturns the base64-encoded value of the input string -
base64decreturns the decoded value of the base64-encoded input string -
indentreturns the input string with added indent spaces -
autoindentreturns the input string with added indent spaces based on the spacing used in the parent template -
toIntcasts and returns the integer value of the input value -
toBoolconverts the input string into a boolean value, and returns the boolean
Various Open source community functions are also available for use with GitOps ZTP.
19.9.9.1. Example hub templates Link kopierenLink in die Zwischenablage kopiert!
The following code examples are valid hub templates. Each of these templates return values from the
ConfigMap
test-config
default
Returns the value with the key
:common-key{{hub fromConfigMap "default" "test-config" "common-key" hub}}Returns a string by using the concatenated value of the
field and the string.ManagedClusterName:-name{{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}}Casts and returns a boolean value from the concatenated value of the
field and the string.ManagedClusterName:-name{{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}}Casts and returns an integer value from the concatenated value of the
field and the string.ManagedClusterName:-name{{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}}
19.9.9.2. Specifying host NICs in site PolicyGenTemplate CRs with hub cluster templates Link kopierenLink in die Zwischenablage kopiert!
You can manage host NICs in a single
ConfigMap
PolicyGenTemplate
The following example shows you how to use a single
ConfigMap
PolicyGenTemplate
When you use the
fromConfigmap
printf
data
name
namespace
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in to the hub cluster as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application.
Procedure
Create a
resource that describes the NICs for a group of hosts. For example:ConfigMapapiVersion: v1 kind: ConfigMap metadata: name: sriovdata namespace: ztp-site annotations: argocd.argoproj.io/sync-options: Replace=true1 data: example-sno-du_fh-numVfs: "8" example-sno-du_fh-pf: ens1f0 example-sno-du_fh-priority: "10" example-sno-du_fh-vlan: "140" example-sno-du_mh-numVfs: "8" example-sno-du_mh-pf: ens3f0 example-sno-du_mh-priority: "10" example-sno-du_mh-vlan: "150"- 1
- The
argocd.argoproj.io/sync-optionsannotation is required only if theConfigMapis larger than 1 MiB in size.
NoteThe
must be in the same namespace with the policy that has the hub template substitution.ConfigMap-
Commit the CR in Git, and then push to the Git repository being monitored by the Argo CD application.
ConfigMap Create a site PGT CR that uses templates to pull the required data from the
object. For example:ConfigMapapiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "site" namespace: "ztp-site" spec: remediationAction: inform bindingRules: group-du-sno: "" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" spec: resourceName: du_fh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" spec: deviceType: netdevice isRdma: true nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-pf" .ManagedClusterName) | autoindent hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_fh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-vlan" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: - '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-pf" .ManagedClusterName) hub}}' numVfs: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-numVfs" .ManagedClusterName) | toInt hub}}' priority: '{{hub fromConfigMap "ztp-site" "sriovdata" (printf "%s-du_mh-priority" .ManagedClusterName) | toInt hub}}' resourceName: du_mhCommit the site
CR in Git and push to the Git repository that is monitored by the ArgoCD application.PolicyGenTemplateNoteSubsequent changes to the referenced
CR are not automatically synced to the applied policies. You need to manually sync the newConfigMapchanges to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs".ConfigMap
19.9.9.3. Specifying VLAN IDs in group PolicyGenTemplate CRs with hub cluster templates Link kopierenLink in die Zwischenablage kopiert!
You can manage VLAN IDs for managed clusters in a single
ConfigMap
The following example shows how you how manage VLAN IDs in single
ConfigMap
PolicyGenTemplate
When using the
fromConfigmap
printf
data
name
namespace
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in to the hub cluster as a user with privileges.
cluster-admin - You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
Create a
CR that describes the VLAN IDs for a group of cluster hosts. For example:ConfigMapapiVersion: v1 kind: ConfigMap metadata: name: site-data namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true1 data: site-1-vlan: "101" site-2-vlan: "234"- 1
- The
argocd.argoproj.io/sync-optionsannotation is required only if theConfigMapis larger than 1 MiB in size.
NoteThe
must be in the same namespace with the policy that has the hub template substitution.ConfigMap-
Commit the CR in Git, and then push to the Git repository being monitored by the Argo CD application.
ConfigMap Create a group PGT CR that uses a hub template to pull the required VLAN IDs from the
object. For example, add the following YAML snippet to the group PGT CR:ConfigMap- fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) | toInt hub}}'Commit the group
CR in Git, and then push to the Git repository being monitored by the Argo CD application.PolicyGenTemplateNoteSubsequent changes to the referenced
CR are not automatically synced to the applied policies. You need to manually sync the newConfigMapchanges to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs".ConfigMap
19.9.9.4. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs Link kopierenLink in die Zwischenablage kopiert!
Prerequisites
-
You have installed the OpenShift CLI ().
oc -
You have logged in to the hub cluster as a user with privileges.
cluster-admin -
You have created a CR that pulls information from a
PolicyGenTemplateCR using hub cluster templates.ConfigMap
Procedure
-
Update the contents of your CR, and apply the changes in the hub cluster.
ConfigMap To sync the contents of the updated
CR to the deployed policy, do either of the following:ConfigMapOption 1: Delete the existing policy. ArgoCD uses the
CR to immediately recreate the deleted policy. For example, run the following command:PolicyGenTemplate$ oc delete policy <policy_name> -n <policy_namespace>Option 2: Apply a special annotation
to the policy with a different value every time when you update thepolicy.open-cluster-management.io/trigger-update. For example:ConfigMap$ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1"NoteYou must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing.
Optional: If it exists, delete the
CR that contains the policy. For example:ClusterGroupUpdate$ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>Create a new
CR that includes the policy to apply with the updatedClusterGroupUpdatechanges. For example, add the following YAML to the fileConfigMap:cgr-example.yamlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240Apply the updated policy:
$ oc apply -f cgr-example.yaml
19.10. Updating managed clusters with the Topology Aware Lifecycle Manager Link kopierenLink in die Zwischenablage kopiert!
You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters.
19.10.1. About the Topology Aware Lifecycle Manager configuration Link kopierenLink in die Zwischenablage kopiert!
The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches. This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions:
- The timing of the update
- The number of RHACM-managed clusters
- The subset of managed clusters to apply the policies to
- The update order of the clusters
- The set of policies remediated to the cluster
- The order of policies remediated to the cluster
- The assignment of a canary cluster
For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) offers the following features:
- Create a backup of a deployment before an upgrade
- Pre-caching images for clusters with limited bandwidth
TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams.
19.10.2. About managed policies used with Topology Aware Lifecycle Manager Link kopierenLink in die Zwischenablage kopiert!
The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates.
TALM can be used to manage the rollout of any policy CR where the
remediationAction
inform
- Manual user creation of policy CRs
-
Automatically generated policies from the custom resource definition (CRD)
PolicyGenTemplate
For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator.
For more information about managed policies, see Policy Overview in the RHACM documentation.
For more information about the
PolicyGenTemplate
19.10.3. Installing the Topology Aware Lifecycle Manager by using the web console Link kopierenLink in die Zwischenablage kopiert!
You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager.
Prerequisites
- Install the latest version of the RHACM Operator.
- Set up a hub cluster with disconnected regitry.
-
Log in as a user with privileges.
cluster-admin
Procedure
-
In the OpenShift Container Platform web console, navigate to Operators
OperatorHub. - Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install.
- Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly.
- Click Install.
Verification
To confirm that the installation is successful:
-
Navigate to the Operators
Installed Operators page. -
Check that the Operator is installed in the namespace and its status is
All Namespaces.Succeeded
If the Operator is not installed successfully:
-
Navigate to the Operators
Installed Operators page and inspect the column for any errors or failures.Status -
Navigate to the Workloads
Pods page and check the logs in any containers in the pod that are reporting issues.cluster-group-upgrades-controller-manager
19.10.4. Installing the Topology Aware Lifecycle Manager by using the CLI Link kopierenLink in die Zwischenablage kopiert!
You can use the OpenShift CLI (
oc
Prerequisites
-
Install the OpenShift CLI ().
oc - Install the latest version of the RHACM Operator.
- Set up a hub cluster with disconnected registry.
-
Log in as a user with privileges.
cluster-admin
Procedure
Create a
CR:SubscriptionDefine the
CR and save the YAML file, for example,Subscription:talm-subscription.yamlapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplaceCreate the
CR by running the following command:Subscription$ oc create -f talm-subscription.yaml
Verification
Verify that the installation succeeded by inspecting the CSV resource:
$ oc get csv -n openshift-operatorsExample output
NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.12.x Topology Aware Lifecycle Manager 4.12.x SucceededVerify that the TALM is up and running:
$ oc get deploy -n openshift-operatorsExample output
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s
19.10.5. About the ClusterGroupUpgrade CR Link kopierenLink in die Zwischenablage kopiert!
The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the
ClusterGroupUpgrade
ClusterGroupUpgrade
- Clusters in the group
-
Blocking CRs
ClusterGroupUpgrade - Applicable list of managed policies
- Number of concurrent updates
- Applicable canary updates
- Actions to perform before and after the update
- Update timing
You can control the start time of an update using the
enable
ClusterGroupUpgrade
ClusterGroupUpgrade
enable
false
You can set the timeout by configuring the
spec.remediationStrategy.timeout
spec
remediationStrategy:
maxConcurrency: 1
timeout: 240
You can use the
batchTimeoutAction
continue
abort
enforce
To apply the changes, you set the
enabled
true
For more information see the "Applying update policies to managed clusters" section.
As TALM works through remediation of the policies to the specified clusters, the
ClusterGroupUpgrade
After TALM completes a cluster update, the cluster does not update again under the control of the same
ClusterGroupUpgrade
ClusterGroupUpgrade
- When you need to update the cluster again
-
When the cluster changes to non-compliant with the policy after being updated
inform
19.10.5.1. Selecting clusters Link kopierenLink in die Zwischenablage kopiert!
TALM builds a remediation plan and selects clusters based on the following fields:
-
The field specifies the labels of the clusters that you want to update. This consists of a list of the standard label selectors from
clusterLabelSelector. Each selector in the list uses either label value pairs or label expressions. Matches from each selector are added to the final list of clusters along with the matches from thek8s.io/apimachinery/pkg/apis/meta/v1field and theclusterSelectorfield.cluster -
The field specifies a list of clusters to update.
clusters -
The field specifies the clusters for canary updates.
canaries -
The field specifies the number of clusters to update in a batch.
maxConcurrency
You can use the
clusters
clusterLabelSelector
clusterSelector
The remediation plan starts with the clusters listed in the
canaries
Sample ClusterGroupUpgrade CR with the enabled field set to false
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
creationTimestamp: '2022-11-18T16:27:15Z'
finalizers:
- ran.openshift.io/cleanup-finalizer
generation: 1
name: talm-cgu
namespace: talm-namespace
resourceVersion: '40451823'
uid: cca245a5-4bca-45fa-89c0-aa6af81a596c
Spec:
actions:
afterCompletion:
deleteObjects: true
beforeEnable: {}
backup: false
clusters:
- spoke1
enable: false
managedPolicies:
- talm-policy
preCaching: false
remediationStrategy:
canaries:
- spoke1
maxConcurrency: 2
timeout: 240
clusterLabelSelectors:
- matchExpressions:
- key: label1
operator: In
values:
- value1a
- value1b
batchTimeoutAction:
status:
computedMaxConcurrency: 2
conditions:
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: All selected clusters are valid
reason: ClusterSelectionCompleted
status: 'True'
type: ClustersSelected
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: Completed validation
reason: ValidationCompleted
status: 'True'
type: Validated
- lastTransitionTime: '2022-11-18T16:37:16Z'
message: Not enabled
reason: NotEnabled
status: 'False'
type: Progressing
managedPoliciesForUpgrade:
- name: talm-policy
namespace: talm-namespace
managedPoliciesNs:
talm-policy: talm-namespace
remediationPlan:
- - spoke1
- - spoke2
- spoke3
status:
- 1
- Defines the list of clusters to update.
- 2
- The
enablefield is set tofalse. - 3
- Lists the user-defined set of policies to remediate.
- 4
- Defines the specifics of the cluster updates.
- 5
- Defines the clusters for canary updates.
- 6
- Defines the maximum number of concurrent updates in a batch. The number of remediation batches is the number of canary clusters, plus the number of clusters, except the canary clusters, divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan.
- 7
- Displays the parameters for selecting clusters.
- 8
- Controls what happens if a batch times out. Possible values are
abortorcontinue. If unspecified, the default iscontinue. - 9
- Displays information about the status of the updates.
- 10
- The
ClustersSelectedcondition shows that all selected clusters are valid. - 11
- The
Validatedcondition shows that all selected clusters have been validated.
Any failures during the update of a canary cluster stops the update process.
When the remediation plan is successfully created, you can you set the
enable
true
You can only make changes to the
spec
enable
ClusterGroupUpgrade
false
19.10.5.2. Validating Link kopierenLink in die Zwischenablage kopiert!
TALM checks that all specified managed policies are available and correct, and uses the
Validated
trueValidation is completed.
falsePolicies are missing or invalid, or an invalid platform image has been specified.
19.10.5.3. Pre-caching Link kopierenLink in die Zwischenablage kopiert!
Clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. On single-node OpenShift clusters, you can use pre-caching to avoid this. The container image pre-caching starts when you create a
ClusterGroupUpgrade
preCaching
true
TALM uses the
PrecacheSpecValid
trueThe pre-caching spec is valid and consistent.
falseThe pre-caching spec is incomplete.
TALM uses the
PrecachingSucceeded
trueTALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters.
falsePre-caching is still in progress for one or more clusters or has failed for all clusters.
For more information see the "Using the container image pre-cache feature" section.
19.10.5.4. Creating a backup Link kopierenLink in die Zwischenablage kopiert!
For single-node OpenShift, TALM can create a backup of a deployment before an update. If the update fails, you can recover the previous version and restore a cluster to a working state without requiring a reprovision of applications. To use the backup feature you first create a
ClusterGroupUpgrade
backup
true
enable
ClusterGroupUpgrade
true
TALM uses the
BackupSucceeded
trueBackup is completed for all clusters or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update fails for that cluster but proceeds for all other clusters.
falseBackup is still in progress for one or more clusters or has failed for all clusters.
For more information, see the "Creating a backup of cluster resources before upgrade" section.
19.10.5.5. Updating clusters Link kopierenLink in die Zwischenablage kopiert!
TALM enforces the policies following the remediation plan. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the
spec.timeout
TALM uses the
Progressing
trueTALM is remediating non-compliant policies.
falseThe update is not in progress. Possible reasons for this are:
- All clusters are compliant with all the managed policies.
- The update has timed out as policy remediation took too long.
- Blocking CRs are missing from the system or have not yet completed.
-
The CR is not enabled.
ClusterGroupUpgrade - Backup is still in progress.
The managed policies apply in the order that they are listed in the
managedPolicies
ClusterGroupUpgrade
Sample ClusterGroupUpgrade CR in the Progressing state
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
creationTimestamp: '2022-11-18T16:27:15Z'
finalizers:
- ran.openshift.io/cleanup-finalizer
generation: 1
name: talm-cgu
namespace: talm-namespace
resourceVersion: '40451823'
uid: cca245a5-4bca-45fa-89c0-aa6af81a596c
Spec:
actions:
afterCompletion:
deleteObjects: true
beforeEnable: {}
backup: false
clusters:
- spoke1
enable: true
managedPolicies:
- talm-policy
preCaching: true
remediationStrategy:
canaries:
- spoke1
maxConcurrency: 2
timeout: 240
clusterLabelSelectors:
- matchExpressions:
- key: label1
operator: In
values:
- value1a
- value1b
batchTimeoutAction:
status:
clusters:
- name: spoke1
state: complete
computedMaxConcurrency: 2
conditions:
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: All selected clusters are valid
reason: ClusterSelectionCompleted
status: 'True'
type: ClustersSelected
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: Completed validation
reason: ValidationCompleted
status: 'True'
type: Validated
- lastTransitionTime: '2022-11-18T16:37:16Z'
message: Remediating non-compliant policies
reason: InProgress
status: 'True'
type: Progressing
managedPoliciesForUpgrade:
- name: talm-policy
namespace: talm-namespace
managedPoliciesNs:
talm-policy: talm-namespace
remediationPlan:
- - spoke1
- - spoke2
- spoke3
status:
currentBatch: 2
currentBatchRemediationProgress:
spoke2:
state: Completed
spoke3:
policyIndex: 0
state: InProgress
currentBatchStartedAt: '2022-11-18T16:27:16Z'
startedAt: '2022-11-18T16:27:15Z'
- 1
- The
Progressingfields show that TALM is in the process of remediating policies.
19.10.5.6. Update status Link kopierenLink in die Zwischenablage kopiert!
TALM uses the
Succeeded
trueAll clusters are compliant with the specified managed policies.
falsePolicy remediation failed as there were no clusters available for remediation, or because policy remediation took too long for one of the following reasons:
- The current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout.
-
Clusters did not comply with the managed policies within the value specified in the
timeoutfield.remediationStrategy
Sample ClusterGroupUpgrade CR in the Succeeded state
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
name: cgu-upgrade-complete
namespace: default
spec:
clusters:
- spoke1
- spoke4
enable: true
managedPolicies:
- policy1-common-cluster-version-policy
- policy2-common-pao-sub-policy
remediationStrategy:
maxConcurrency: 1
timeout: 240
status:
clusters:
- name: spoke1
state: complete
- name: spoke4
state: complete
conditions:
- message: All selected clusters are valid
reason: ClusterSelectionCompleted
status: "True"
type: ClustersSelected
- message: Completed validation
reason: ValidationCompleted
status: "True"
type: Validated
- message: All clusters are compliant with all the managed policies
reason: Completed
status: "False"
type: Progressing
- message: All clusters are compliant with all the managed policies
reason: Completed
status: "True"
type: Succeeded
managedPoliciesForUpgrade:
- name: policy1-common-cluster-version-policy
namespace: default
- name: policy2-common-pao-sub-policy
namespace: default
remediationPlan:
- - spoke1
- - spoke4
status:
completedAt: '2022-11-18T16:27:16Z'
startedAt: '2022-11-18T16:27:15Z'
- 2
- In the
Progressingfields, the status isfalseas the update has completed; clusters are compliant with all the managed policies. - 3
- The
Succeededfields show that the validations completed successfully. - 1
- The
statusfield includes a list of clusters and their respective statuses. The status of a cluster can becompleteortimedout.
Sample ClusterGroupUpgrade CR in the timedout state
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
creationTimestamp: '2022-11-18T16:27:15Z'
finalizers:
- ran.openshift.io/cleanup-finalizer
generation: 1
name: talm-cgu
namespace: talm-namespace
resourceVersion: '40451823'
uid: cca245a5-4bca-45fa-89c0-aa6af81a596c
spec:
actions:
afterCompletion:
deleteObjects: true
beforeEnable: {}
backup: false
clusters:
- spoke1
- spoke2
enable: true
managedPolicies:
- talm-policy
preCaching: false
remediationStrategy:
maxConcurrency: 2
timeout: 240
status:
clusters:
- name: spoke1
state: complete
- currentPolicy:
name: talm-policy
status: NonCompliant
name: spoke2
state: timedout
computedMaxConcurrency: 2
conditions:
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: All selected clusters are valid
reason: ClusterSelectionCompleted
status: 'True'
type: ClustersSelected
- lastTransitionTime: '2022-11-18T16:27:15Z'
message: Completed validation
reason: ValidationCompleted
status: 'True'
type: Validated
- lastTransitionTime: '2022-11-18T16:37:16Z'
message: Policy remediation took too long
reason: TimedOut
status: 'False'
type: Progressing
- lastTransitionTime: '2022-11-18T16:37:16Z'
message: Policy remediation took too long
reason: TimedOut
status: 'False'
type: Succeeded
managedPoliciesForUpgrade:
- name: talm-policy
namespace: talm-namespace
managedPoliciesNs:
talm-policy: talm-namespace
remediationPlan:
- - spoke1
- spoke2
status:
startedAt: '2022-11-18T16:27:15Z'
completedAt: '2022-11-18T20:27:15Z'
19.10.5.7. Blocking ClusterGroupUpgrade CRs Link kopierenLink in die Zwischenablage kopiert!
You can create multiple
ClusterGroupUpgrade
For example, if you create
ClusterGroupUpgrade
ClusterGroupUpgrade
ClusterGroupUpgrade
ClusterGroupUpgrade
UpgradeComplete
One
ClusterGroupUpgrade
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Provision one or more managed clusters.
-
Log in as a user with privileges.
cluster-admin - Create RHACM policies in the hub cluster.
Procedure
Save the content of the
CRs in theClusterGroupUpgrade,cgu-a.yaml, andcgu-b.yamlfiles.cgu-c.yamlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs:1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2- 1
- Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs:1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}- 1
- The cgu-b update cannot start until cgu-a is complete.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec:1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}- 1
- The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true.
Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR:
$ oc apply -f <name>.yaml
Start the update process by running the following command for each relevant CR:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}'
The following examples show ClusterGroupUpgrade CRs where the enable field is set to true:
Example for
cgu-awith blocking CRsapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]'1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}- 1
- Shows the list of blocking CRs.
Example for
cgu-bwith blocking CRsapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]'1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}- 1
- Shows the list of blocking CRs.
Example for
cgu-cwith blocking CRsapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant1 reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0- 1
- The cgu-c update does not have any blocking CRs.
19.10.6. Update policies on managed clusters
The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade CR. TALM remediates inform policies by making enforce copies of the managed RHACM policies.
One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the next policy to the non-compliant cluster. After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the copied policies. Then, the update of the next batch starts.
If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways:
- If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field.
- If a policy's status.status is missing, TALM produces an error.
- If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy.
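As an illustrative sketch with placeholder names, not output from a live hub cluster, these are the per-cluster compliance fields in an RHACM policy that TALM inspects:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: example-policy          # hypothetical policy name
  namespace: default
status:
  compliant: NonCompliant       # the status.compliant field
  status:                       # the status.status field with per-cluster compliance
  - clustername: spoke1
    clusternamespace: spoke1
    compliant: Compliant
  - clustername: spoke2
    clusternamespace: spoke2
    compliant: NonCompliant     # a missing entry for a cluster is treated as non-compliant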
The ClusterGroupUpgrade CR batchTimeoutAction value determines what happens if an update fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or abort to stop policy remediation for all clusters.
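A minimal sketch with placeholder names showing where the batchTimeoutAction value can be set in a ClusterGroupUpgrade CR, following the field layout of the example later in this section:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-timeout-example     # hypothetical name
  namespace: default
spec:
  clusters:
  - spoke1
  - spoke2
  enable: false
  managedPolicies:
  - example-policy              # hypothetical policy name
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240                # update timeout in minutes
  batchTimeoutAction: abort     # continue (default) skips the failing cluster; abort stops remediation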
Example upgrade policy
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: ocp-4.12.4
namespace: platform-upgrade
spec:
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: upgrade
spec:
namespaceSelector:
exclude:
- kube-*
include:
- '*'
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
name: version
spec:
channel: stable-4.12
desiredUpdate:
version: 4.12.4
upstream: https://api.openshift.com/api/upgrades_info/v1/graph
status:
history:
- state: Completed
version: 4.12.4
remediationAction: inform
severity: low
remediationAction: inform
For more information about RHACM policies, see Policy overview.
19.10.6.1. Configuring Operator subscriptions for managed clusters that you install with TALM
Topology Aware Lifecycle Manager (TALM) can only approve the install plan for an Operator if the Subscription custom resource (CR) of the Operator contains the status.state.AtLatestKnown field.
Procedure
Add the status.state.AtLatestKnown field to the Subscription CR of the Operator:
Example Subscription CR
apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown1 - 1
- The status.state: AtLatestKnown field is used for the latest Operator version available from the Operator catalog.
Note
When a new version of the Operator is available in the registry, the associated policy becomes non-compliant.
- Apply the changed Subscription policy to your managed clusters with a ClusterGroupUpgrade CR.
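For example, a minimal sketch with placeholder names of a ClusterGroupUpgrade CR that remediates a policy wrapping the changed Subscription CR on a managed cluster:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-subscription-example      # hypothetical name
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - example-subscription-policy       # hypothetical policy containing the Subscription CR
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240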
19.10.6.2. Applying update policies to managed clusters
You can update your managed clusters by applying your policies.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Provision one or more managed clusters.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
Save the contents of the
CR in theClusterGroupUpgradefile.cgu-1.yamlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies:1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters:2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 23 timeout: 2404 batchTimeoutAction:5 - 1
- The name of the policies to apply.
- 2
- The list of clusters to update.
- 3
- The maxConcurrency field signifies the number of clusters updated at the same time.
- 4
- The update timeout in minutes.
- 5
- Controls what happens if a batch times out. Possible values are abort or continue. If unspecified, the default is continue.
Create the ClusterGroupUpgrade CR by running the following command:
$ oc create -f cgu-1.yaml
Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command:
$ oc get cgu --all-namespaces
Example output
NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled
Check the status of the update by running the following command:
$ oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq
Example output
{ "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Not enabled",1 "reason": "NotEnabled", "status": "False", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} }- 1
- The spec.enable field in the ClusterGroupUpgrade CR is set to false.
Check the status of the policies by running the following command:
$ oc get policies -A
Example output
NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m- 1
- The spec.remediationAction field of policies currently applied on the clusters is set to enforce. The managed policies in inform mode from the ClusterGroupUpgrade CR remain in inform mode during the update.
Change the value of the spec.enable field to true by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge
Verification
Check the status of the update again by running the following command:
$ oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq
Example output
{ "computedMaxConcurrency": 2, "conditions": [1 { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "All selected clusters are valid", "reason": "ClusterSelectionCompleted", "status": "True", "type": "ClustersSelected", "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "Completed validation", "reason": "ValidationCompleted", "status": "True", "type": "Validated", "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchStartedAt": "2022-02-25T15:54:16Z", "remediationPlanForBatch": { "spoke1": 0, "spoke2": 1 }, "startedAt": "2022-02-25T15:54:16Z" } }- 1
- Reflects the update progress of the current batch. Run this command again to receive updated information about the progress.
If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster.
Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command:
$ export KUBECONFIG=<cluster_kubeconfig_absolute_path>
Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command:
$ oc get subs -A | grep -i <subscription_name>
Example output for cluster-logging policy
NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable
If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster:
$ oc get clusterversion
Example output
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.5 True True 43s Working towards 4.12.7: 71 of 735 done (9% complete)
Check the Operator subscription by running the following command:
$ oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}"
Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command:
$ oc get installplan -n <subscription_namespace>
Example output for cluster-logging Operator
NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true1
- 1
- The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan.
Note
When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version.
Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command:
$ oc get csv -n <operator_namespace>
Example output for OpenShift Logging Operator
NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded
19.10.7. Creating a backup of cluster resources before upgrade
For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) can create a backup of a deployment before an upgrade. If the upgrade fails, you can recover the previous version and restore a cluster to a working state without requiring a reprovision of applications.
To use the backup feature you first create a ClusterGroupUpgrade CR with the backup field set to true. To ensure that the contents of the backup are up to date, the backup is not taken until you set the enable field in the ClusterGroupUpgrade CR to true.
TALM uses the BackupSucceeded condition to report the status and reasons as follows:
- true: Backup is completed for all clusters, or the backup run has completed but failed for one or more clusters. If backup fails for any cluster, the update does not proceed for that cluster.
- false: Backup is still in progress for one or more clusters, or has failed for all clusters.
The backup process running in the spoke clusters can have the following statuses:
- PreparingToStart: The first reconciliation pass is in progress. The TALM deletes any spoke backup namespace and hub view resources that have been created in a failed upgrade attempt.
- Starting: The backup prerequisites and backup job are being created.
- Active: The backup is in progress.
- Succeeded: The backup succeeded.
- BackupTimeout: Artifact backup is partially done.
- UnrecoverableError: The backup has ended with a non-zero exit code.
If the backup of a cluster fails and enters the BackupTimeout or UnrecoverableError state, the cluster update does not proceed for that cluster.
19.10.7.1. Creating a ClusterGroupUpgrade CR with backup
You can create a backup of a deployment before an upgrade on single-node OpenShift clusters. If the upgrade fails you can use the upgrade-recovery.sh script to return the cluster to its preupgrade state. The backup includes the following items:
- Cluster backup: A snapshot of etcd and static pod manifests.
- Content backup: Backups of folders, for example, /etc, /usr/local, /var/lib/kubelet.
- Changed files backup: Any files managed by machine-config that have been changed.
- Deployment: A pinned ostree deployment.
- Images (Optional): Any container images that are in use.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Provision one or more managed clusters.
- Log in as a user with cluster-admin privileges.
- Install Red Hat Advanced Cluster Management (RHACM).
It is highly recommended that you create a recovery partition. The following is an example SiteConfig custom resource (CR) that describes how to create a recovery partition:
nodes:
- hostName: "node-1.example.com"
role: "master"
rootDeviceHints:
hctl: "0:2:0:0"
deviceName: /dev/sda
........
........
#Disk /dev/sda: 893.3 GiB, 959119884288 bytes, 1873281024 sectors
diskPartition:
- device: /dev/sda
partitions:
- mount_point: /var/recovery
size: 51200
start: 800000
Procedure
Save the contents of the ClusterGroupUpgrade CR with the backup and enable fields set to true in the clustergroupupgrades-group-du.yaml file:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: du-upgrade-4918
  namespace: ztp-group-du-sno
spec:
  preCaching: true
  backup: true
  clusters:
  - cnfdb1
  - cnfdb2
  enable: true
  managedPolicies:
  - du-upgrade-platform-upgrade
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
To start the update, apply the ClusterGroupUpgrade CR by running the following command:
$ oc apply -f clustergroupupgrades-group-du.yaml
Verification
Check the status of the upgrade in the hub cluster by running the following command:
$ oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'
Example output
{ "backup": { "clusters": [ "cnfdb2", "cnfdb1" ], "status": { "cnfdb1": "Succeeded", "cnfdb2": "Failed"1 } }, "computedMaxConcurrency": 1, "conditions": [ { "lastTransitionTime": "2022-04-05T10:37:19Z", "message": "Backup failed for 1 cluster",2 "reason": "PartiallyDone",3 "status": "True",4 "type": "Succeeded" } ], "precaching": { "spec": {} }, "status": {}
19.10.7.2. Recovering a cluster after a failed upgrade
If an upgrade of a cluster fails, you can manually log in to the cluster and use the backup to return the cluster to its preupgrade state. There are two stages:
- Rollback
- If the attempted upgrade included a change to the platform OS deployment, you must roll back to the previous version before running the recovery script.
A rollback is only applicable to upgrades from TALM and single-node OpenShift. This process does not apply to rollbacks from any other upgrade type.
- Recovery
- The recovery shuts down containers and uses files from the backup partition to relaunch containers and restore clusters.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Provision one or more managed clusters.
- Install Red Hat Advanced Cluster Management (RHACM).
- Log in as a user with cluster-admin privileges.
- Run an upgrade that is configured for backup.
Procedure
Delete the previously created ClusterGroupUpgrade custom resource (CR) by running the following command:
$ oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno
- Log in to the cluster that you want to recover.
Check the status of the platform OS deployment by running the following command:
$ ostree admin status
Example outputs
[root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9- 1
- The current deployment is pinned. A platform OS deployment rollback is not necessary.
[root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback)1 Version: 410.84.202203290245-0 Pinned: yes2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52caTo trigger a rollback of the platform OS deployment, run the following command:
$ rpm-ostree rollback -r
The first phase of the recovery shuts down containers and restores files from the backup partition to the targeted directories. To begin the recovery, run the following command:
$ /var/recovery/upgrade-recovery.sh
When prompted, reboot the cluster by running the following command:
$ systemctl reboot
After the reboot, restart the recovery by running the following command:
$ /var/recovery/upgrade-recovery.sh --resume
If the recovery utility fails, you can retry with the --restart option:
$ /var/recovery/upgrade-recovery.sh --restart
Verification
To check the status of the recovery, run the following command:
$ oc get clusterversion,nodes,clusteroperator
Example output
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.12.23 True False 86d Cluster version is 4.12.231
NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd352
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.12.23 True False False 2d7h3 clusteroperator.config.openshift.io/baremetal 4.12.23 True False False 86d ..............
19.10.8. Using the container image pre-cache feature
Single-node OpenShift clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed.
The time of the update is not set by TALM. You can apply the ClusterGroupUpgrade CR at the beginning of the update by manual application or by external automation.
The container image pre-caching starts when the preCaching field is set to true in the ClusterGroupUpgrade CR.
TALM uses the PrecacheSpecValid condition to report status information as follows:
- true: The pre-caching spec is valid and consistent.
- false: The pre-caching spec is incomplete.
TALM uses the PrecachingSucceeded condition to report status information as follows:
- true: TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters.
- false: Pre-caching is still in progress for one or more clusters or has failed for all clusters.
After a successful pre-caching process, you can start remediating policies. The remediation actions start when the enable field is set to true.
The pre-caching process can be in the following statuses:
- NotStarted: This is the initial state all clusters are automatically assigned to on the first reconciliation pass of the ClusterGroupUpgrade CR. In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from previous incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state.
- PreparingToStart: Cleaning up any remaining resources from previous incomplete updates is in progress.
- Starting: Pre-caching job prerequisites and the job are created.
- Active: The job is in "Active" state.
- Succeeded: The pre-cache job succeeded.
- PrecacheTimeout: The artifact pre-caching is partially done.
- UnrecoverableError: The job ends with a non-zero exit code.
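These per-cluster statuses surface in the ClusterGroupUpgrade CR under status.precaching, as in this sketch that mirrors the example output later in this section:
status:
  precaching:
    clusters:
    - cnfdb1
    - cnfdb2
    spec:
      platformImage: image.example.io
    status:
      cnfdb1: Active       # pre-caching job still running on this cluster
      cnfdb2: Succeeded    # pre-caching completed on this cluster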
19.10.8.1. Creating a ClusterGroupUpgrade CR with pre-caching
For single-node OpenShift, the pre-cache feature allows the required container images to be present on the spoke cluster before the update starts.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Provision one or more managed clusters.
- Log in as a user with cluster-admin privileges.
Procedure
Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: du-upgrade-4918
  namespace: ztp-group-du-sno
spec:
  preCaching: true1
  clusters:
  - cnfdb1
  - cnfdb2
  enable: false
  managedPolicies:
  - du-upgrade-platform-upgrade
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
- 1
- The preCaching field is set to true, which enables TALM to pull the container images before starting the update.
When you want to start pre-caching, apply the ClusterGroupUpgrade CR by running the following command:
$ oc apply -f clustergroupupgrades-group-du.yaml
Verification
Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command:
$ oc get cgu -A
Example output
NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done1 - 1
- The CR is created.
Check the status of the pre-caching task by running the following command:
$ oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'
Example output
{ "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "InProgress", "status": "False", "type": "PrecachingSucceeded" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1"1 "cnfdb2" ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active" "cnfdb2": "Succeeded"} } }- 1
- Displays the list of identified clusters.
Check the status of the pre-caching job by running the following command on the spoke cluster:
$ oc get jobs,pods -n openshift-talo-pre-cache
Example output
NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s
NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s
Check the status of the ClusterGroupUpgrade CR by running the following command:
$ oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'
Example output
"conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingSucceeded"1 }- 1
- The pre-cache tasks are done.
19.10.9. Troubleshooting the Topology Aware Lifecycle Manager
The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs.
For more information about related topics, see the following documentation:
- Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix
- Red Hat Advanced Cluster Management Troubleshooting
- The "Troubleshooting Operator issues" section
19.10.9.1. General troubleshooting
You can determine the cause of the problem by reviewing the following questions:
Is the configuration that you are applying supported?
- Are the RHACM and the OpenShift Container Platform versions compatible?
- Are the TALM and RHACM versions compatible?
Which of the following components is causing the problem?
To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following:
- Create the ClusterGroupUpgrade CR with the spec.enable field set to false.
- Wait for the status to be updated and go through the troubleshooting questions.
- If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR.
After you set the spec.enable field to true in the ClusterUpgradeGroup CR, the update starts and you can no longer edit the spec fields.
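As described in the steps above, a minimal sketch with placeholder names of a ClusterGroupUpgrade CR created with enable set to false, so that you can review the computed status before starting the update:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-dry-run-example     # hypothetical name
  namespace: default
spec:
  clusters:
  - spoke1
  enable: false                 # review status.conditions and status.remediationPlan before enabling
  managedPolicies:
  - example-policy              # hypothetical policy name
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240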
19.10.9.2. Cannot modify the ClusterUpgradeGroup CR
- Issue
- You cannot edit the ClusterUpgradeGroup CR after enabling the update.
- Resolution
Restart the procedure by performing the following steps:
Remove the old ClusterGroupUpgrade CR by running the following command:
$ oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>
Check and fix the existing issues with the managed clusters and policies.
- Ensure that all the clusters are managed clusters and available.
- Ensure that all the policies exist and have the spec.remediationAction field set to inform.
Create a new ClusterGroupUpgrade CR with the correct configurations:
$ oc apply -f <ClusterGroupUpgradeCR_YAML>
19.10.9.3. Managed policies
Checking managed policies on the system
- Issue
- You want to check if you have the correct managed policies on the system.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'
Example output
["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"]
Checking remediationAction mode
- Issue
- You want to check if the remediationAction field is set to inform in the spec of the managed policies.
- Resolution
Run the following command:
$ oc get policies --all-namespaces
Example output
NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h
Checking policy compliance state
- Issue
- You want to check the compliance state of policies.
- Resolution
Run the following command:
$ oc get policies --all-namespaces
Example output
NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h
19.10.9.4. Clusters
Checking if managed clusters are present
- Issue
- You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters.
- Resolution
Run the following command:
$ oc get managedclusters
Example output
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h
Alternatively, check the TALM manager logs:
Get the name of the TALM manager by running the following command:
$ oc get pod -n openshift-operators
Example output
NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m
Check the TALM manager logs by running the following command:
$ oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager
Example output
ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"}1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem- 1
- The error message shows that the cluster is not a managed cluster.
Checking if managed clusters are available
- Issue
- You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available.
- Resolution
Run the following command:
$ oc get managedclusters
Example output
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h2
Checking clusterLabelSelector
- Issue
- You want to check if the clusterLabelSelector field specified in the ClusterGroupUpgrade CR matches at least one of the managed clusters.
- Resolution
Run the following command:
$ oc get managedcluster --selector=upgrade=true1 - 1
- The label for the clusters you want to update is upgrade: true.
Example output
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h
Checking if canary clusters are present
- Issue
You want to check if the canary clusters are present in the list of clusters.
Example ClusterGroupUpgrade CR
spec:
  remediationStrategy:
    canaries:
    - spoke3
    maxConcurrency: 2
    timeout: 240
  clusterLabelSelectors:
  - matchLabels:
      upgrade: true
- Resolution
Run the following commands:
$ oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'
Example output
["spoke1", "spoke3"]
Check if the canary clusters are present in the list of clusters that match the clusterLabelSelector labels by running the following command:
$ oc get managedcluster --selector=upgrade=true
Example output
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h
A cluster can be present in spec.clusters and also match the spec.clusterLabelSelector label.
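As an illustration with placeholder names, not the reference configuration, a ClusterGroupUpgrade spec can name clusters explicitly and also select them by label; a cluster may be selected by both:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-selection-example   # hypothetical name
  namespace: default
spec:
  clusters:                     # explicitly listed clusters
  - spoke1
  clusterLabelSelectors:        # clusters selected by label; may overlap with the list above
  - matchLabels:
      upgrade: "true"
  enable: false
  managedPolicies:
  - example-policy              # hypothetical policy name
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240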
Checking the pre-caching status on spoke clusters
Check the status of pre-caching by running the following command on the spoke cluster:
$ oc get jobs,pods -n openshift-talo-pre-cache
19.10.9.5. Remediation Strategy
Checking if remediationStrategy is present in the ClusterGroupUpgrade CR
- Issue
- You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'
Example output
{"maxConcurrency":2, "timeout":240}
Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR
- Issue
- You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'
Example output
2
19.10.9.6. Topology Aware Lifecycle Manager
Checking condition message and status in the ClusterGroupUpgrade CR
- Issue
- You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'
Example output
{"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"Missing managed policies:[policyList]", "reason":"NotAllManagedPoliciesExist", "status":"False", "type":"Validated"}
Checking corresponding copied policies
- Issue
- You want to check if every policy from status.managedPoliciesForUpgrade has a corresponding policy in status.copiedPolicies.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -oyaml
Example output
status: … copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default
Checking if status.remediationPlan was computed
- Issue
- You want to check if status.remediationPlan is computed.
- Resolution
Run the following command:
$ oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'
Example output
[["spoke2", "spoke3"]]
Errors in the TALM manager container
- Issue
- You want to check the logs of the manager container of TALM.
- Resolution
Run the following command:
$ oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager
Example output
ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"}1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem- 1
- Displays the error.
Clusters are not compliant with some policies after a ClusterGroupUpgrade CR has completed
- Issue
The policy compliance status that TALM uses to decide if remediation is needed has not yet fully updated for all clusters. This may be because:
- The CGU was run too soon after a policy was created or updated.
- The remediation of a policy affects the compliance of subsequent policies in the ClusterGroupUpgrade CR.
- Resolution
- Create and apply a new ClusterGroupUpgrade CR with the same specification.
19.11. Updating managed clusters in a disconnected environment with the Topology Aware Lifecycle Manager
You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OpenShift Container Platform managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters.
19.11.1. Updating clusters in a disconnected environment
You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps ZTP and Topology Aware Lifecycle Manager (TALM).
19.11.1.1. Setting up the environment
TALM can perform both platform and Operator updates.
You must mirror both the platform image and Operator images that you want to update to in your mirror registry before you can use TALM to update your disconnected clusters. Complete the following steps to mirror the images:
For platform updates, you must perform the following steps:
Mirror the desired OpenShift Container Platform image repository. Ensure that the desired platform image is mirrored by following the "Mirroring the OpenShift Container Platform image repository" procedure linked in the Additional resources. Save the contents of the imageContentSources section in the imageContentSources.yaml file:
Example output
imageContentSources:
- mirrors:
  - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps:
Specify the desired OpenShift Container Platform tag by running the following command:
$ OCP_RELEASE_NUMBER=<release_version>
Specify the architecture of the server by running the following command:
$ ARCHITECTURE=<server_architecture>
Get the release image digest from Quay by running the following command:
$ DIGEST="$(oc adm release info quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_NUMBER}-${ARCHITECTURE} | sed -n 's/Pull From: .*@//p')"
Set the digest algorithm by running the following command:
$ DIGEST_ALGO="${DIGEST%%:*}"
Set the digest signature by running the following command:
$ DIGEST_ENCODED="${DIGEST#*:}"
Get the image signature from the mirror.openshift.com website by running the following command:
$ SIGNATURE_BASE64=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${DIGEST_ALGO}=${DIGEST_ENCODED}/signature-1" | base64 -w0 && echo)
Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands:
$ cat >checksum-${OCP_RELEASE_NUMBER}.yaml <<EOF
${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}
EOF
Prepare the update graph. You have two options to prepare the update graph:
Use the OpenShift Update Service.
For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container.
Make a local copy of the upstream graph. Host the update graph on an http or https server in the disconnected environment that has access to the managed cluster. To download the update graph, use the following command:
$ curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.12 -o ~/upgrade-graph_stable-4.12
For Operator updates, you must perform the following task:
- Mirror the Operator catalogs. Ensure that the desired operator images are mirrored by following the procedure in the "Mirroring Operator catalogs for use with disconnected clusters" section.
19.11.1.2. Performing a platform update
You can perform a platform update with the TALM.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update ZTP to the latest version.
- Provision one or more managed clusters with ZTP.
- Mirror the desired image repository.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
Create a PolicyGenTemplate CR for the platform update:
Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file.
Example of
PolicyGenTemplatefor platform updateapiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml1 policyName: "platform-upgrade-prep" binaryData: ${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}2 - fileName: DisconnectedICSP.yaml policyName: "platform-upgrade-prep" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors:3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml4 policyName: "platform-upgrade" metadata: name: version spec: channel: "stable-4.12" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.12 desiredUpdate: version: 4.12.4 status: history: - version: 4.12.4 state: "Completed"- 1
- The ConfigMap CR contains the signature of the desired release image to update to.
- 2
- Shows the image signature of the desired OpenShift Container Platform release. Get the signature from the checksum-${OCP_RELEASE_NUMBER}.yaml file you saved when following the procedures in the "Setting up the environment" section.
- 3
- Shows the mirror repository that contains the desired OpenShift Container Platform image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the "Setting up the environment" section.
- 4
- Shows the ClusterVersion CR to trigger the update. The channel, upstream, and desiredVersion fields are all required for image pre-caching.
The PolicyGenTemplate CR generates two policies:
- The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment.
- The du-upgrade-platform-upgrade policy is used to perform platform upgrade.
Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository.
ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.
Check the created policies by running the following command:
$ oc get policies -A | grep platform-upgrade
Create the ClusterGroupUpgrade CR for the platform update with the spec.enable field set to false.
Save the content of the platform update
CR with theClusterGroupUpdateand thedu-upgrade-platform-upgrade-preppolicies and the target clusters to thedu-upgrade-platform-upgradefile, as shown in the following example:cgu-platform-upgrade.ymlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: falseApply the
ClusterGroupUpgrade CR to the hub cluster by running the following command:
$ oc apply -f cgu-platform-upgrade.yml
Optional: Pre-cache the images for the platform update.
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:
$ oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'
Start the platform update:
Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \ --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
19.11.1.3. Performing an Operator update
You can perform an Operator update with the TALM.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update ZTP to the latest version.
- Provision one or more managed clusters with ZTP.
- Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
Update the PolicyGenTemplate CR for the Operator update:
Update the
du-upgradeCR with the following additional contents in thePolicyGenTemplatefile:du-upgrade.yamlapiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "operator-catsrc-policy" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.121 updateStrategy:2 registryPoll: interval: 1h- 1
- The index image URL contains the desired Operator images. If the index images are always pushed to the same image name and tag, this change is not needed.
- 2
- Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. The registryPoll.interval field can be set to a shorter interval to expedite the update, however shorter intervals increase computational load. To counteract this, you can restore registryPoll.interval to the default value once the update is complete.
This update generates one policy, du-upgrade-operator-catsrc-policy, to update the redhat-operators-disconnected catalog source with the new index images that contain the desired Operators images.
Note
If you want to use the image pre-caching for Operators and there are Operators from a different catalog source other than redhat-operators-disconnected, you must perform the following tasks:
- Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source.
- Prepare a separate subscription policy for the desired Operators that are from the different catalog source.
For example, the desired SRIOV-FEC Operator is available in the
catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies,certified-operatorsanddu-upgrade-fec-catsrc-policy:du-upgrade-subscriptions-fec-policyapiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "du-upgrade" namespace: "ztp-group-du-sno" spec: bindingRules: group-du-sno: "" mcp: "master" remediationAction: inform sourceFiles: … - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: "fec-catsrc-policy" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: "subscriptions-fec-policy" spec: channel: "stable" source: certified-operatorsRemove the specified subscriptions channels in the common
PolicyGenTemplate CR, if they exist. The default subscriptions channels from the ZTP image are used for the update.
Note
The default channel for the Operators applied through ZTP 4.12 is stable, except for the performance-addon-operator. As of OpenShift Container Platform 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator. For the 4.10 release, the default channel for PAO is v4.10. You can also specify the default channels in the common PolicyGenTemplate CR.
Push the PolicyGenTemplate CRs updates to the ZTP Git repository.
ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.
Check the created policies by running the following command:
$ oc get policies -A | grep -E "catsrc-policy|subscription"
Apply the required catalog source updates before starting the Operator update.
Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-operator-upgrade-prep
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - du-upgrade-operator-catsrc-policy
  remediationStrategy:
    maxConcurrency: 1
Apply the policy to the hub cluster by running the following command:
$ oc apply -f cgu-operator-upgrade-prep.yml
Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies -A | grep -E "catsrc-policy"
Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false.
Save the content of the Operator update
CR with theClusterGroupUpgradepolicy and the subscription policies created from the commondu-upgrade-operator-catsrc-policyand the target clusters to thePolicyGenTemplatefile, as shown in the following example:cgu-operator-upgrade.ymlapiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy1 - common-subscriptions-policy2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false- 1
- The policy is needed by the image pre-caching feature to retrieve the operator images from the catalog source.
- 2
- The policy contains Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates, all Operator subscriptions are grouped into the common-subscriptions-policy policy.
Note
One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, such as in the example of the SRIOV-FEC Operator, another ClusterGroupUpgrade CR must be created with the du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies for the SRIOV-FEC Operator images pre-caching and update.
Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command:
$ oc apply -f cgu-operator-upgrade.yml
Optional: Pre-cache the images for the Operator update.
Before starting image pre-caching, verify that the subscription policy is NonCompliant at this point by running the following command:
$ oc get policy common-subscriptions-policy -n <policy_namespace>
Example output
NAME                          REMEDIATION ACTION   COMPLIANCE STATE   AGE
common-subscriptions-policy   inform               NonCompliant       27d
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
--patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:
$ oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'
Check if the pre-caching is completed before starting the update by running the following command:
$ oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq
Example output
[
  {
    "lastTransitionTime": "2022-03-08T20:49:08.000Z",
    "message": "The ClusterGroupUpgrade CR is not enabled",
    "reason": "UpgradeNotStarted",
    "status": "False",
    "type": "Ready"
  },
  {
    "lastTransitionTime": "2022-03-08T20:55:30.000Z",
    "message": "Precaching is completed",
    "reason": "PrecachingCompleted",
    "status": "True",
    "type": "PrecachingDone"
  }
]
Start the Operator update.
Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
--patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
19.11.1.3.1. Troubleshooting missed Operator updates due to out-of-date policy compliance states
In some scenarios, Topology Aware Lifecycle Manager (TALM) might miss Operator updates due to an out-of-date policy compliance state.
After a catalog source update, it takes time for the Operator Lifecycle Manager (OLM) to update the subscription status. The status of the subscription policy might continue to show as compliant while TALM decides whether remediation is needed. As a result, the Operator specified in the subscription policy does not get upgraded.
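To see whether OLM has caught up after a catalog source change, you can inspect the subscription state directly on the managed cluster. The following is a minimal sketch only; the subscription name and namespace are placeholders for whichever Operator you are updating:
$ oc get subscription <subscription_name> -n <operator_namespace> \
  -o jsonpath='{.status.state}{"\n"}{.status.currentCSV}{"\n"}'
A state other than AtLatestKnown, or a currentCSV that still points at the old version, indicates that OLM has not yet processed the new catalog content.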
To avoid this scenario, add another catalog source configuration to the PolicyGenTemplate resource and update the subscription for any Operators that require an update to point to this new configuration.
Procedure
Add a catalog source configuration in the PolicyGenTemplate resource:
- fileName: DefaultCatsrc.yaml
  remediationAction: inform
  policyName: "operator-catsrc-policy"
  metadata:
    name: redhat-operators-disconnected
  spec:
    displayName: Red Hat Operators Catalog
    image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.12
    updateStrategy:
      registryPoll:
        interval: 1h
  status:
    connectionState:
      lastObservedState: READY
- fileName: DefaultCatsrc.yaml
  remediationAction: inform
  policyName: "operator-catsrc-policy"
  metadata:
    name: redhat-operators-disconnected-v2
  spec:
    displayName: Red Hat Operators Catalog v2
    image: registry.example.com:5000/olm/redhat-operators-disconnected:<version>
    updateStrategy:
      registryPoll:
        interval: 1h
  status:
    connectionState:
      lastObservedState: READY
Update the Subscription resource to point to the new configuration for Operators that require an update:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: operator-subscription
  namespace: operator-namespace
# ...
spec:
  source: redhat-operators-disconnected-v2 1
# ...
- 1
- Enter the name of the additional catalog source configuration that you defined in the PolicyGenTemplate resource.
19.11.1.4. Performing a platform and an Operator update together
You can perform a platform and an Operator update at the same time.
Prerequisites
- Install the Topology Aware Lifecycle Manager (TALM).
- Update ZTP to the latest version.
- Provision one or more managed clusters with ZTP.
- Log in as a user with cluster-admin privileges.
- Create RHACM policies in the hub cluster.
Procedure
- Create the PolicyGenTemplate CR for the updates by following the steps described in the "Performing a platform update" and "Performing an Operator update" sections.
Apply the prep work for the platform and the Operator update.
Save the content of the ClusterGroupUpgrade CR with the policies for platform update preparation work, catalog source updates, and target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-platform-operator-upgrade-prep
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-platform-upgrade-prep
  - du-upgrade-operator-catsrc-policy
  clusterSelector:
  - group-du-sno
  remediationStrategy:
    maxConcurrency: 10
  enable: true
Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command:
$ oc apply -f cgu-platform-operator-upgrade-prep.yml
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
Create the ClusterGroupUpgrade CR for the platform and the Operator update with the spec.enable field set to false.
Save the contents of the platform and Operator update ClusterGroupUpgrade CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-du-upgrade
  namespace: default
spec:
  managedPolicies:
  - du-upgrade-platform-upgrade
  - du-upgrade-operator-catsrc-policy
  - common-subscriptions-policy
  preCaching: true
  clusterSelector:
  - group-du-sno
  remediationStrategy:
    maxConcurrency: 1
  enable: false
Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command:
$ oc apply -f cgu-platform-operator-upgrade.yml
Optional: Pre-cache the images for the platform and the Operator update.
Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
--patch '{"spec":{"preCaching": true}}' --type=merge
Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster:
$ oc get jobs,pods -n openshift-talm-pre-cache
Check if the pre-caching is completed before starting the update by running the following command:
$ oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'
Start the platform and Operator update.
Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command:
$ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
--patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:
$ oc get policies --all-namespaces
Note
The CRs for the platform and Operator updates can be created from the beginning by configuring the spec.enable setting to true. In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR.
Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and managed cluster view, to help complete the procedures. Setting the afterCompletion.deleteObjects field to true deletes all these resources after the updates complete.
19.11.1.5. Removing Performance Addon Operator subscriptions from deployed clusters
In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 or later, these functions are part of the Node Tuning Operator.
Do not install the Performance Addon Operator on clusters running OpenShift Container Platform 4.11 or later. If you upgrade to OpenShift Container Platform 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator.
You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator.
The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml. To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml.
If you install Performance Addon Operator 4.10.3-5 or later on OpenShift Container Platform 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OpenShift Container Platform 4.11 clusters.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD.
- Update to OpenShift Container Platform 4.11 or later.
- Log in as a user with cluster-admin privileges.
Procedure
Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file:
- fileName: PaoSubscriptionNS.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave
- fileName: PaoSubscriptionOperGroup.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave
- fileName: PaoSubscription.yaml
  policyName: "subscriptions-policy"
  complianceType: mustnothave
- Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to Non-Compliant.
- Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the "Additional resources" section.
Monitor the process. When the status of the common-subscriptions-policy policy for a target cluster is Compliant, the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command:
$ oc get policy -n ztp-common common-subscriptions-policy
Delete the Performance Addon Operator namespace, Operator group and subscription CRs from in the
.spec.sourceFilesfile.common-ranGen.yaml - Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant.
19.11.2. About the auto-created ClusterGroupUpgrade CR for ZTP
TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for ZTP (zero touch provisioning).
For any managed cluster in the Ready state without a ztp-done label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster.
If the managed cluster has no bound policies when the cluster becomes Ready, no ClusterGroupUpgrade CR is created.
Example of an auto-created ClusterGroupUpgrade CR for ZTP
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
generation: 1
name: spoke1
namespace: ztp-install
ownerReferences:
- apiVersion: cluster.open-cluster-management.io/v1
blockOwnerDeletion: true
controller: true
kind: ManagedCluster
name: spoke1
uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5
resourceVersion: "46666836"
uid: b8be9cd2-764f-4a62-87d6-6b767852c7da
spec:
actions:
afterCompletion:
addClusterLabels:
ztp-done: ""
deleteClusterLabels:
ztp-running: ""
deleteObjects: true
beforeEnable:
addClusterLabels:
ztp-running: ""
clusters:
- spoke1
enable: true
managedPolicies:
- common-spoke1-config-policy
- common-spoke1-subscriptions-policy
- group-spoke1-config-policy
- spoke1-config-policy
- group-spoke1-validator-du-policy
preCaching: false
remediationStrategy:
maxConcurrency: 1
timeout: 240
19.12. Updating GitOps ZTP
You can update the GitOps zero touch provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters.
You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements.
19.12.1. Overview of the GitOps ZTP update process
You can update GitOps zero touch provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters.
Any changes to policy settings, including adding recommended content, result in updated policies that must be rolled out to the managed clusters and reconciled.
At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows:
- Label all existing clusters with the ztp-done label.
- Stop the ArgoCD applications.
- Install the new GitOps ZTP tools.
- Update required content and optional changes in the Git repository.
- Update and restart the application configuration.
19.12.2. Preparing for the upgrade
Use the following procedure to prepare your site for the GitOps zero touch provisioning (ZTP) upgrade.
Procedure
- Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP.
Extract the argocd/deployment directory by using the following commands:
$ mkdir -p ./update
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./update
The /update directory contains the following subdirectories:
- update/extra-manifest: contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap.
- update/source-crs: contains the source CR files that the PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.
- update/argocd/deployment: contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.
- update/argocd/example: contains example SiteConfig and PolicyGenTemplate files that represent the recommended configuration.
Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository.
If the upgrade includes changes that result in obsolete policies, the obsolete policies should be removed prior to performing the upgrade.
Diff the changes between the configuration and deployment source CRs in the /update folder and the Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository.
Important
When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files.
19.12.3. Labeling the existing clusters
To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label.
This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done.
Procedure
Find a label selector that lists the managed clusters that were deployed with zero touch provisioning (ZTP), such as local-cluster!=true:
$ oc get managedcluster -l 'local-cluster!=true'
Ensure that the resulting list contains all the managed clusters that were deployed with ZTP, and then use that selector to add the ztp-done label:
$ oc label managedcluster -l 'local-cluster!=true' ztp-done=
19.12.4. Stopping the existing GitOps ZTP applications
Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available.
Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first.
Procedure
Perform a non-cascaded delete on the clusters application to leave all generated resources in place:
$ oc delete -f update/argocd/deployment/clusters-app.yaml
Perform a cascaded delete on the policies application to remove all previous policies:
$ oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge
$ oc delete -f update/argocd/deployment/policies-app.yaml
19.12.5. Required changes to the Git repository
When upgrading the ztp-site-generate container to a newer version, there are additional requirements for the contents of your Git repository. You must update existing content in the repository to reflect these changes.
Make required changes to PolicyGenTemplate files:
All PolicyGenTemplate files must be created in a Namespace prefixed with ztp. This ensures that the GitOps zero touch provisioning (ZTP) application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally.
Add the kustomization.yaml file to the repository:
All SiteConfig and PolicyGenTemplate CRs must be included in a kustomization.yaml file under their respective directory trees. For example:
├── policygentemplates
│   ├── site1-ns.yaml
│   ├── site1.yaml
│   ├── site2-ns.yaml
│   ├── site2.yaml
│   ├── common-ns.yaml
│   ├── common-ranGen.yaml
│   ├── group-du-sno-ranGen-ns.yaml
│   ├── group-du-sno-ranGen.yaml
│   └── kustomization.yaml
└── siteconfig
    ├── site1.yaml
    ├── site2.yaml
    └── kustomization.yaml
Note
The files listed in the generator sections must contain either SiteConfig or PolicyGenTemplate CRs only. If your existing YAML files contain other CRs, for example, Namespace, these other CRs must be pulled out into separate files and listed in the resources section.
The PolicyGenTemplate kustomization file must contain all PolicyGenTemplate YAML files in the generators section and Namespace CRs in the resources section. For example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- common-ranGen.yaml
- group-du-sno-ranGen.yaml
- site1.yaml
- site2.yaml
resources:
- common-ns.yaml
- group-du-sno-ranGen-ns.yaml
- site1-ns.yaml
- site2-ns.yaml
The SiteConfig kustomization file must contain all SiteConfig YAML files in the generators section and any other CRs in the resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- site1.yaml
- site2.yaml
Remove the pre-sync.yaml and post-sync.yaml files.
In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster.
Note
There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenTemplate trees.
Review and incorporate recommended changes
Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform.
Review the reference SiteConfig and PolicyGenTemplate CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container.
19.12.6. Installing the new GitOps ZTP applications
Use the extracted argocd/deployment directory to install the new GitOps ZTP applications on the hub cluster.
Procedure
To patch the ArgoCD instance in the hub cluster by using the patch file that you previously extracted into the update/argocd/deployment/ directory, enter the following command:
$ oc patch argocd openshift-gitops \
-n openshift-gitops --type=merge \
--patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json
To apply the contents of the argocd/deployment directory, enter the following command:
$ oc apply -k update/argocd/deployment
19.12.7. Rolling out the GitOps ZTP configuration changes
If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps ZTP version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently.
To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update.
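As an illustration only, a ClusterGroupUpgrade CR that pushes updated policies to a small batch of clusters might look like the following sketch; the policy names, cluster names, and timeout value are placeholders that you must replace with the Non-Compliant policies and clusters in your own environment:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: update-config-rollout
  namespace: default
spec:
  managedPolicies:
  - common-config-policy          # replace with your updated policies
  - group-du-sno-config-policy
  clusters:
  - spoke1
  - spoke2
  enable: true
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
Limiting maxConcurrency lets you stage the rollout, for example during a maintenance window, instead of updating the whole fleet at once.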
19.13. Expanding single-node OpenShift clusters with GitOps ZTP
You can expand single-node OpenShift clusters with GitOps ZTP. When you add worker nodes to single-node OpenShift clusters, the original single-node OpenShift cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing single-node OpenShift cluster.
Although there is no specified limit on the number of worker nodes that you can add to a single-node OpenShift cluster, you must re-evaluate the reserved CPU allocation on the control plane node for the additional worker nodes.
If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning MachineConfig objects are rendered and associated with the worker MachineConfig pool before the MachineConfig ignition file is downloaded by the installing worker node.
It is recommended that you first remediate the policies, and then install the worker node. If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process.
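For illustration, the manual drain described above could look like the following sketch; the node name is a placeholder and the exact pods you must delete depend on which daemon sets manage workloads on that node:
# Cordon and drain the worker node (pods owned by daemon sets are not evicted by drain)
$ oc adm drain example-node2.example.com --ignore-daemonsets --delete-emptydir-data
# List the remaining pods on the node, then delete the daemon set managed pods
# so that they are recreated and undergo the workload partitioning process
$ oc get pods --all-namespaces --field-selector spec.nodeName=example-node2.example.com
$ oc delete pod <daemonset_managed_pod> -n <namespace>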
Adding worker nodes to single-node OpenShift clusters with GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
19.13.1. Applying profiles to the worker node
You can configure the additional worker node with a DU profile.
You can apply a RAN distributed unit (DU) profile to the worker node cluster using the ZTP GitOps common, group, and site-specific PolicyGenTemplate resources. The GitOps ZTP pipeline that is linked to the ArgoCD policies application includes the following CRs that you can find in the out/argocd/example/policygentemplates folder when you extract the ztp-site-generate container:
-
common-ranGen.yaml -
group-du-sno-ranGen.yaml -
example-sno-site.yaml -
ns.yaml -
kustomization.yaml
Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a ClusterGroupUpgrade CR to reconcile the policies in the group of clusters.
19.13.2. (Optional) Ensuring PTP and SR-IOV daemon selector compatibility
If the DU profile was deployed using the GitOps ZTP plugin version 4.11 or earlier, the PTP and SR-IOV Operators might be configured to place the daemons only on nodes labelled as master. This configuration prevents the PTP and SR-IOV daemons from operating on the worker node. If the PTP and SR-IOV daemon node selectors are incorrectly configured on your system, you must change the daemons before proceeding with the worker DU profile configuration.
Procedure
Check the daemon node selector settings of the PTP Operator on one of the spoke clusters:
$ oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jqExample output for PTP Operator
{"daemonNodeSelector":{"node-role.kubernetes.io/master":""}}1 - 1
- If the node selector is set to master, the spoke was deployed with the version of the ZTP plugin that requires changes.
Check the daemon node selector settings of the SR-IOV Operator on one of the spoke clusters:
$ oc get sriovoperatorconfig/default -n \ openshift-sriov-network-operator -ojsonpath='{.spec}' | jqExample output for SR-IOV Operator
{"configDaemonNodeSelector":{"node-role.kubernetes.io/worker":""},"disableDrain":false,"enableInjector":true,"enableOperatorWebhook":true}1 - 1
- If the node selector is set to master, the spoke was deployed with the version of the ZTP plugin that requires changes.
In the group policy, add the following complianceType and spec entries:
spec:
    - fileName: PtpOperatorConfig.yaml
      policyName: "config-policy"
      complianceType: mustonlyhave
      spec:
        daemonNodeSelector:
          node-role.kubernetes.io/worker: ""
    - fileName: SriovOperatorConfig.yaml
      policyName: "config-policy"
      complianceType: mustonlyhave
      spec:
        configDaemonNodeSelector:
          node-role.kubernetes.io/worker: ""
Important
Changing the daemonNodeSelector field causes temporary PTP synchronization loss and SR-IOV connectivity loss.
- Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
19.13.3. PTP and SR-IOV node selector compatibility
The PTP configuration resources and SR-IOV network node policies use node-role.kubernetes.io/master: "" as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. The node selector must be changed to select both node types with the "node-role.kubernetes.io/worker" label.
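As a minimal sketch of what that change looks like in practice, an SR-IOV network node policy that previously selected only the control plane node can be switched to the worker label, which in a single-node OpenShift cluster plus worker topology matches both node types because the control plane node also carries the worker label; the resource name and the omitted device settings are illustrative assumptions:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-policy
  namespace: openshift-sriov-network-operator
spec:
  # Previously: node-role.kubernetes.io/master: ""
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  # ... remaining device configuration unchanged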
19.13.4. Using PolicyGenTemplate CRs to apply worker node policies to worker nodes
You can create policies for worker nodes.
Procedure
Create the following policy template:
apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-sno-workers" namespace: "example-sno" spec: bindingRules: sites: "example-sno"1 mcp: "worker"2 sourceFiles: - fileName: MachineConfigGeneric.yaml3 policyName: "config-policy" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-worker-node-performance-profile spec: cpu:4 isolated: "4-47" reserved: "0-3" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: "config-policy" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-475 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker- 1
- The policies are applied to all clusters with this label.
- 2
- The
MCPfield must be set toworker. - 3
- This generic
MachineConfigCR is used to configure workload partitioning on the worker node. - 4
- The
cpu.isolatedandcpu.reservedfields must be configured for each particular hardware platform. - 5
- The
cmdline_crashCPU set must match thecpu.isolatedset in thePerformanceProfilesection.
A generic MachineConfig CR is used to configure workload partitioning on the worker node. You can generate the content of the crio and kubelet configuration files.
- Add the created policy template to the Git repository monitored by the ArgoCD policies application.
- Add the policy in the kustomization.yaml file.
- Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
To remediate the new policies to your spoke cluster, create a TALM custom resource:
$ cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF
19.13.5. Adding worker nodes to single-node OpenShift clusters with GitOps ZTP
You can add one or more worker nodes to existing single-node OpenShift clusters to increase available CPU resources in the cluster.
Prerequisites
- Install and configure RHACM 2.6 or later in an OpenShift Container Platform 4.11 or later bare-metal hub cluster
- Install Topology Aware Lifecycle Manager in the hub cluster
- Install Red Hat OpenShift GitOps in the hub cluster
- Use the ztp-site-generate GitOps ZTP container image version 4.12 or later
- Deploy a managed single-node OpenShift cluster with GitOps ZTP
- Configure the Central Infrastructure Management as described in the RHACM documentation
- Configure the DNS serving the cluster to resolve the internal API endpoint api-int.<cluster_name>.<base_domain> (an illustrative record is sketched after this list)
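The following is an illustrative zone-file fragment only; the cluster name, domain, and IPv6 address are assumptions based on the example network used later in this section, and you must substitute the values for your own environment:
; Example BIND zone entries resolving the API endpoints
; for a cluster named example-sno in the example.com domain
api-int.example-sno.example.com.  IN  AAAA  1111:2222:3333:4444::aaaa
api.example-sno.example.com.      IN  AAAA  1111:2222:3333:4444::aaaa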
Procedure
If you deployed your cluster by using the example-sno.yaml SiteConfig manifest, add your new worker node to the spec.clusters['example-sno'].nodes list:
nodes:
- hostName: "example-node2.example.com"
  role: "worker"
  bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
  bmcCredentialsName:
    name: "example-node2-bmh-secret"
  bootMACAddress: "AA:BB:CC:DD:EE:11"
  bootMode: "UEFI"
  nodeNetwork:
    interfaces:
      - name: eno1
        macAddress: "AA:BB:CC:DD:EE:11"
    config:
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          macAddress: "AA:BB:CC:DD:EE:11"
          ipv4:
            enabled: false
          ipv6:
            enabled: true
            address:
            - ip: 1111:2222:3333:4444::1
              prefix-length: 64
      dns-resolver:
        config:
          search:
          - example.com
          server:
          - 1111:2222:3333:4444::2
      routes:
        config:
        - destination: ::/0
          next-hop-interface: eno1
          next-hop-address: 1111:2222:3333:4444::1
          table-id: 254
Create a BMC authentication secret for the new host, as referenced by the bmcCredentialsName field in the spec.nodes section of your SiteConfig file:
apiVersion: v1
data:
  password: "password"
  username: "username"
kind: Secret
metadata:
  name: "example-node2-bmh-secret"
  namespace: example-sno
type: Opaque
Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application.
When the ArgoCD cluster application synchronizes, two new manifests appear on the hub cluster generated by the ZTP plugin:
- BareMetalHost
- NMStateConfig
Important
The cpuset field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete.
Verification
You can monitor the installation process in several ways.
Check if the preprovisioning images are created by running the following command:
$ oc get ppimg -n example-snoExample output
NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreatedCheck the state of the bare-metal hosts:
$ oc get bmh -n example-snoExample output
NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s1 - 1
- The provisioning state indicates that node booting from the installation media is in progress.
Continuously monitor the installation process:
Watch the agent install process by running the following command:
$ oc get agent -n example-sno --watchExample output
NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker DoneWhen the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the
ManagedClusterInfo status. Run the following command to see the status:
$ oc get managedclusterinfo/example-sno -n example-sno -o \
jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}'
Example output
example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""} example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""}
19.14. Pre-caching images for single-node OpenShift deployments
In environments with limited bandwidth where you use the GitOps zero touch provisioning (ZTP) solution to deploy a large number of clusters, you want to avoid downloading all the images that are required for bootstrapping and installing OpenShift Container Platform. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning.
The factory-precaching-cli tool does the following:
- Downloads the RHCOS rootfs image that is required by the minimal ISO to boot.
- Creates a partition from the installation disk labelled as data.
data - Formats the disk in xfs.
- Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool.
- Copies the container images required to install OpenShift Container Platform.
- Copies the container images required by ZTP to install OpenShift Container Platform.
- Optional: Copies Day-2 Operators to the partition.
The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
19.14.1. Getting the factory-precaching-cli tool
The factory-precaching-cli tool Go binary is publicly available in the Telco RAN tools container image. The factory-precaching-cli tool Go binary in the container image is executed on the server running an RHCOS live image using podman.
Procedure
Pull the factory-precaching-cli tool image by running the following command:
# podman pull quay.io/openshift-kni/telco-ran-tools:latest
Verification
To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary:
# podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -vExample output
factory-precaching-cli version 20221018.120852+main.feecf17
19.14.2. Booting from a live operating system image
You can use the factory-precaching-cli tool to boot servers where only one disk is available and an external disk drive cannot be attached to the server.
RHCOS requires the disk to not be in use when the disk is about to be written with an RHCOS image.
Depending on the server hardware, you can mount the RHCOS live ISO on the blank server using one of the following methods:
- Using the Dell RACADM tool on a Dell server.
- Using the HPONCFG tool on an HP server.
- Using the Redfish BMC API.
It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server.
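As an illustration of hosting the live ISO for the automated mounting, you could serve it from any basic HTTP server; the directory, file name, and port below are assumptions rather than values required by the tool:
# Serve the RHCOS live ISO over HTTP so the BMC can mount it as virtual media
$ mkdir -p /var/www/iso && cp RHCOS-live.iso /var/www/iso/
$ cd /var/www/iso && python3 -m http.server 8080
The URL passed to the virtual media API in the next steps must then point at this server, for example http://<HTTPd_IP>:8080/RHCOS-live.iso.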
Prerequisites
- You powered up the host.
- You have network connectivity to the host.
This example procedure uses the Redfish BMC API to mount the RHCOS live ISO.
Mount the RHCOS live ISO:
Check virtual media status:
$ curl --globoff -H "Content-Type: application/json" -H \ "Accept: application/json" -k -X GET --user ${username_password} \ https://$BMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.toolMount the ISO file as a virtual media:
$ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[$HTTPd_IP]/RHCOS-live.iso"}' -X POST https://$BMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMediaSet the boot order to boot from the virtual media once:
$ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://$BMC_ADDRESS/redfish/v1/Systems/Self
- Reboot and ensure that the server is booting from virtual media.
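To trigger that reboot over the same Redfish API, a minimal sketch might look like the following; the Systems/Self path and the credentials variable mirror the earlier examples and are assumptions about your particular BMC:
$ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d '{"ResetType": "ForceRestart"}' \
  -X POST https://$BMC_ADDRESS/redfish/v1/Systems/Self/Actions/ComputerSystem.Reset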
19.14.3. Partitioning the disk
To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required.
A live ISO or RHCOS live ISO is required because the disk must not be in use when the operating system (RHCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure.
Prerequisites
- You have a disk that is not partitioned.
- You have access to the quay.io/openshift-kni/telco-ran-tools:latest image.
- You have enough storage to install OpenShift Container Platform and pre-cache the required images.
Procedure
Verify that the disk is cleared:
# lsblkExample output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 diskErase any file system, RAID or partition table signatures from the device:
# wipefs -a /dev/nvme0n1Example output
/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts.
19.14.3.1. Creating the partition
Once the device is ready, you create a single partition and a GPT partition table. The partition is automatically labelled as data and created at the end of the device. Otherwise, the partition will be overridden by the coreos-installer.
Important
The coreos-installer requires the partition to be created at the end of the device and to be labelled as data. Both requirements are necessary to save the partition when writing the RHCOS image to the disk.
Prerequisites
- The container must run as privileged due to formatting host devices.
- You have to mount the /dev folder so that the process can be executed inside the container.
Procedure
In the following example, the size of the partition is 250 GiB to allow pre-caching the DU profile, including Day 2 Operators.
Run the container as privileged and partition the disk:
# podman run -v /dev:/dev --privileged \
  --rm quay.io/openshift-kni/telco-ran-tools:latest -- \
  factory-precaching-cli partition \
  -d /dev/nvme0n1 \
  -s 250
Check the storage information:
# lsblkExample output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part
Verification
You must verify that the following requirements are met:
- The device has a GPT partition table
- The partition uses the latest sectors of the device.
- The partition is correctly labeled as data.
Query the disk status to verify that the disk is partitioned as expected:
# gdisk -l /dev/nvme0n1
Example output
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB
Model: Dell Express Flash PM1725b 1.6TB SFF
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3125627534
Partitions will be aligned on 2048-sector boundaries
Total free space is 2601338846 sectors (1.2 TiB)
Number Start (sector) End (sector) Size Code Name
1 2601338880 3125627534 250.0 GiB 8300 data
19.14.3.2. Mounting the partition
After verifying that the disk is partitioned correctly, you can mount the device into /mnt.
Important
It is recommended to mount the device into /mnt because that mounting point is used during GitOps ZTP preparation.
Verify that the partition is formatted as xfs:
# lsblk -f /dev/nvme0n1
Example output
NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071Mount the partition:
# mount /dev/nvme0n1p1 /mnt/
Verification
Check that the partition is mounted:
# lsblkExample output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt1 - 1
- The mount point is /var/mnt because the /mnt folder in RHCOS is a link to /var/mnt.
19.14.4. Downloading the images
The factory-precaching-cli tool allows you to download the following images to your partitioned server:
- OpenShift Container Platform images
- Operator images that are included in the distributed unit (DU) profile for 5G RAN sites
- Operator images from disconnected registries
The list of available Operator images can vary in different OpenShift Container Platform releases.
19.14.4.1. Downloading with parallel workers
The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. By default, the number of workers is set to 80% of the CPUs available to the tool.
Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff, for example:
# taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help
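For example, combining the CPU mask with an explicit worker count might look like the following sketch; the worker count of 40 is an arbitrary assumption for illustration and the remaining arguments mirror the download examples later in this section:
# taskset 0xffffffff podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm \
   quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \
   -r 4.12.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --parallel 40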
19.14.4.2. Preparing to download the OpenShift Container Platform images
To download OpenShift Container Platform container images, you need to know the multicluster engine version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that provisions the single-node OpenShift.
Prerequisites
- You have RHACM and the multicluster engine Operator installed.
- You partitioned the storage device.
- You have enough space for the images on the partitioned device.
- You connected the bare-metal server to the Internet.
- You have a valid pull secret.
Procedure
Check the RHACM version and the multicluster engine version by running the following commands in the hub cluster:
$ oc get csv -A | grep -i advanced-cluster-managementExample output
open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded$ oc get csv -A | grep -i multicluster-engineExample output
multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 SucceededTo access the container registry, copy a valid pull secret on the server to be installed:
Create the
folder:.docker$ mkdir /root/.dockerCopy the valid pull in the
file to the previously createdconfig.jsonfolder:.docker/$ cp config.json /root/.docker/config.json1 - 1
/root/.docker/config.jsonis the default path wherepodmanchecks for the login credentials for the registry.
If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well.
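As an illustration of combining your existing pull secret with credentials for a local registry, you could merge the two JSON files before copying the result into place; the local-registry-auth.json file name is an assumption for this sketch:
# Deep-merge the default pull secret with the auth entry for the local registry
$ jq -s '.[0] * .[1]' config.json local-registry-auth.json > /root/.docker/config.json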
19.14.4.3. Downloading the OpenShift Container Platform images
The factory-precaching-cli tool allows you to pre-cache all the container images required to provision a specific OpenShift Container Platform release.
Procedure
Pre-cache the release by running the following command:
# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- \ factory-precaching-cli download \1 -r 4.12.0 \2 --acm-version 2.6.3 \3 --mce-version 2.1.4 \4 -f /mnt \5 --img quay.io/custom/repository6 - 1
- Specifies the downloading function of the factory-precaching-cli tool.
- 2
- Defines the OpenShift Container Platform release version.
- 3
- Defines the RHACM version.
- 4
- Defines the multicluster engine version.
- 5
- Defines the folder where you want to download the images on the disk.
- 6
- Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
Example output
Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf ... Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f ... Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83
Verification
Check that all the images are compressed in the target folder of the server:
$ ls -l /mnt1 - 1
- It is recommended that you pre-cache the images in the /mnt folder.
Example output
-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz
19.14.4.4. Downloading the Operator images
You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OpenShift Container Platform version.
You need to include the RHACM hub and multicluster engine Operator versions by using the --acm-version and --mce-version flags respectively, so that the factory-precaching-cli tool pre-caches the appropriate container images for RHACM and the multicluster engine Operator.
Procedure
Pre-cache the Operator images:
# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \1 -r 4.12.0 \2 --acm-version 2.6.3 \3 --mce-version 2.1.4 \4 -f /mnt \5 --img quay.io/custom/repository6 --du-profile -s7 - 1
- Specifies the downloading function of the factory-precaching-cli tool.
- 2
- Defines the OpenShift Container Platform release version.
- 3
- Defines the RHACM version.
- 4
- Defines the multicluster engine version.
- 5
- Defines the folder where you want to download the images on the disk.
- 6
- Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
- 7
- Specifies pre-caching the Operators included in the DU configuration.
Example output
Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 ... Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 ... Summary: Release: 4.12.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83
19.14.4.5. Pre-caching custom images in disconnected environments
The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you have customized the CR, you can use the --skip-imageset argument to download the images that you specified in the customized ImageSetConfiguration CR.
You can customize the ImageSetConfiguration CR in the following ways:
- Add Operators and additional images
- Remove Operators and additional images
- Change Operator and catalog sources to local or disconnected registries
Procedure
Pre-cache the images:
# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \1 -r 4.12.0 \2 --acm-version 2.6.3 \3 --mce-version 2.1.4 \4 -f /mnt \5 --img quay.io/custom/repository6 --du-profile -s \7 --generate-imageset8 - 1
- Specifies the downloading function of the factory-precaching-cli tool.
- 2
- Defines the OpenShift Container Platform release version.
- 3
- Defines the RHACM version.
- 4
- Defines the multicluster engine version.
- 5
- Defines the folder where you want to download the images on the disk.
- 6
- Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
- 7
- Specifies pre-caching the Operators included in the DU configuration.
- 8
- The
--generate-imagesetargument generates theImageSetConfigurationCR only, which allows you to customize the CR.
Example output
Generated /mnt/imageset.yamlExample ImageSetConfiguration CR
apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.01 maxVersion: 4.12.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: advanced-cluster-management2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator4 channels: - name: 'stable' - name: ptp-operator5 channels: - name: 'stable' - name: sriov-network-operator6 channels: - name: 'stable' - name: cluster-logging7 channels: - name: 'stable' - name: lvms-operator8 channels: - name: 'stable-4.12' - name: amq7-interconnect-operator9 channels: - name: '1.10.x' - name: bare-metal-event-relay10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec11 channels: - name: 'stable'Customize the catalog resource in the CR:
apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.12 packages: - name: sriov-fec channels: - name: 'stable'When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from.
To avoid any errors, copy the registry certificate into your server:
# cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.Then, update the certificates trust store:
# update-ca-trustMount the host
folder into the factory-cli image:/etc/pki# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download \1 -r 4.12.0 \2 --acm-version 2.6.3 \3 --mce-version 2.1.4 \4 -f /mnt \5 --img quay.io/custom/repository6 --du-profile -s \7 --skip-imageset8 - 1
- Specifies the downloading function of the factory-precaching-cli tool.
- 2
- Defines the OpenShift Container Platform release version.
- 3
- Defines the RHACM version.
- 4
- Defines the multicluster engine version.
- 5
- Defines the folder where you want to download the images on the disk.
- 6
- Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
- 7
- Specifies pre-caching the Operators included in the DU configuration.
- 8
- The
--skip-imagesetargument allows you to download the images that you specified in your customizedImageSetConfigurationCR.
Download the images without generating a new
CR:imageSetConfiguration# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.12.0 \ --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \ --img quay.io/custom/repository \ --du-profile -s \ --skip-imageset
19.14.5. Pre-caching images in ZTP
The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the GitOps ZTP provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest:
- clusters.ignitionConfigOverride
- nodes.installerArgs
- nodes.ignitionConfigOverride
Example SiteConfig with additional fields
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
name: "example-5g-lab"
namespace: "example-5g-lab"
spec:
baseDomain: "example.domain.redhat.com"
pullSecretRef:
name: "assisted-deployment-pull-secret"
clusterImageSetNameRef: "img4.9.10-x86-64-appsub"
sshPublicKey: "ssh-rsa ..."
clusters:
- clusterName: "sno-worker-0"
clusterLabels:
group-du-sno: ""
common-411: true
sites : "example-5g-lab"
vendor: "OpenShift"
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.19.32.192/26
serviceNetwork:
- 172.30.0.0/16
networkType: "OVNKubernetes"
additionalNTPSources:
- clock.corp.redhat.com
ignitionConfigOverride:
'{
"ignition": {
"version": "3.1.0"
},
"systemd": {
"units": [
{
"name": "var-mnt.mount",
"enabled": true,
"contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service"
},
{
"name": "precache-images.service",
"enabled": true,
"contents": "[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service"
}
]
},
"storage": {
"files": [
{
"overwrite": true,
"path": "/usr/local/bin/extract-ai.sh",
"mode": 755,
"user": {
"name": "root"
},
"contents": {
"source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200"
}
},
{
"overwrite": true,
"path": "/usr/local/bin/agent-fix-bz1964591",
"mode": 755,
"user": {
"name": "root"
},
"contents": {
"source": "data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true"
}
}
]
}
}'
nodes:
- hostName: "snonode.sno-worker-0.example.domain.redhat.com"
role: "master"
bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1"
bmcCredentialsName:
name: "worker0-bmh-secret"
bootMACAddress: "e4:43:4b:bd:90:46"
bootMode: "UEFI"
rootDeviceHints:
deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0
installerArgs: '["--save-partlabel", "data"]'
ignitionConfigOverride: |
{
"ignition": {
"version": "3.1.0"
},
"systemd": {
"units": [
{
"name": "var-mnt.mount",
"enabled": true,
"contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service"
},
{
"name": "precache-ocp-images.service",
"enabled": true,
"contents": "[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target"
}
]
},
"storage": {
"files": [
{
"overwrite": true,
"path": "/usr/local/bin/extract-ocp.sh",
"mode": 755,
"user": {
"name": "root"
},
"contents": {
"source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200"
}
}
]
}
}
nodeNetwork:
config:
interfaces:
- name: ens1f0
type: ethernet
state: up
macAddress: "AA:BB:CC:11:22:33"
ipv4:
enabled: true
dhcp: true
ipv6:
enabled: false
interfaces:
- name: "ens1f0"
macAddress: "AA:BB:CC:11:22:33"
19.14.5.1. Understanding the clusters.ignitionConfigOverride field
The clusters.ignitionConfigOverride field adds a configuration in Ignition format that is applied during the ZTP discovery stage. The configuration adds the following systemd services and scripts to the discovery ISO:

systemd services
- The systemd services are var-mnt.mount and precache-images.service. The precache-images.service depends on the disk partition being mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh.

extract-ai.sh
- The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally.

agent-fix-bz1964591
- The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks whether the requested container images exist.
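Because the scripts are embedded in the Ignition override as URL-encoded data: URIs, they are hard to review in place. The following is a minimal sketch for decoding the first storage file (extract-ai.sh in the example above) for inspection; it assumes that jq and python3 are available and that you have saved the value of clusters.ignitionConfigOverride to a local file named ignition-override.json, which is an illustrative file name only:

$ jq -r '.storage.files[0].contents.source' ignition-override.json \
    | sed 's/^data:,//' \
    | python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.stdin.read()))' > extract-ai.sh
$ less extract-ai.sh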
19.14.5.2. Understanding the nodes.installerArgs field
The nodes.installerArgs field allows you to configure how the coreos-installer utility writes the RHCOS live ISO to disk. You need to indicate that the disk partition labeled data must be saved, because the artifacts stored in the data partition are needed during the installation stage.

The extra parameters are passed directly to the coreos-installer utility that writes the live RHCOS to disk.

You can pass several options to the coreos-installer utility:
OPTIONS:
...
-u, --image-url <URL>
Manually specify the image URL
-f, --image-file <path>
Manually specify a local image file
-i, --ignition-file <path>
Embed an Ignition config from a file
-I, --ignition-url <URL>
Embed an Ignition config from a URL
...
--save-partlabel <lx>...
Save partitions with this label glob
--save-partindex <id>...
Save partitions with this number or range
...
--insecure-ignition
Allow Ignition URL without HTTPS or hash
19.14.5.3. Understanding the nodes.ignitionConfigOverride field
Similarly to clusters.ignitionConfigOverride, the nodes.ignitionConfigOverride field adds a configuration in Ignition format, but it is applied at the OpenShift Container Platform installation stage, after coreos-installer writes RHCOS to disk.

At this stage, the number of container images extracted and loaded is bigger than in the discovery stage. Depending on the OpenShift Container Platform release and whether you install the Day-2 Operators, the installation time can vary.

At the installation stage, the var-mnt.mount and precache-ocp-images.service systemd services are used.

precache-ocp-images.service
- The precache-ocp-images.service depends on the disk partition being mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ocp.sh.

  Important

  To extract all the images before the OpenShift Container Platform installation, you must execute precache-ocp-images.service before executing the machine-config-daemon-pull.service and nodeip-configuration.service services.

extract-ocp.sh
- The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally.
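After the node reboots into the installed system, you can check that the pre-cached OpenShift Container Platform images were loaded into the local container storage, for example:

# podman images | head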
When you upload the SiteConfig CR and the optional PolicyGenTemplates CRs to the Git repository that GitOps ZTP monitors, the ZTP workflow provisions the cluster by using the pre-cached images.
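For example, assuming a typical GitOps ZTP repository layout, committing the updated SiteConfig might look like the following; the file path, branch name, and commit message are illustrative only:

$ git add site-configs/example-5g-lab/example-5g-lab.yaml
$ git commit -m "Enable factory pre-caching for example-5g-lab"
$ git push origin main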
19.14.6. Troubleshooting
19.14.6.1. Rendered catalog is invalid
When you download images by using a local or disconnected registry, you might see the error The rendered catalog is invalid. This typically means that the certificate of the registry that you want to pull content from is not trusted.
The factory-precaching-cli tool image is built on a UBI RHEL image. Certificate paths and locations are the same on RHCOS.
Example error
Generating list of pre-cached artifacts...
error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish
Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2
Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts
Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures
backend is not configured in /mnt/imageset.yaml, using stateless mode
backend is not configured in /mnt/imageset.yaml, using stateless mode
No metadata detected, creating new workspace
level=info msg=trying next host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443
The rendered catalog is invalid.
Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information.
error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority
Procedure
Copy the registry certificate into your server:
# cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.

Update the certificates trust store:

# update-ca-trust

Mount the host /etc/pki folder into the factory-cli image:

# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \
   factory-precaching-cli download -r 4.11.5 --acm-version 2.5.4 \
   --mce-version 2.0.4 -f /mnt \
   --img quay.io/custom/repository \
   --du-profile -s \
   --skip-imageset
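Before rerunning the download, you can confirm that the registry certificate is now trusted by checking the TLS verification result against the registry endpoint from the error message, for example:

# echo | openssl s_client -connect eko4.cloud.lab.eng.bos.redhat.com:8443 2>/dev/null | grep 'Verify return code'

A result of Verify return code: 0 (ok) indicates that the certificate chain is trusted.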