Chapter 2. Preparing the hub cluster for GitOps ZTP
To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts.
2.1. Telco RAN DU 4.19 validated software components
The Red Hat telco RAN DU 4.19 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters.
| Component | Software version |
|---|---|
| Managed cluster version | 4.19 |
| Cluster Logging Operator | 6.2 [1] |
| Local Storage Operator | 4.19 |
| OpenShift API for Data Protection (OADP) | 1.5 |
| PTP Operator | 4.19 |
| SR-IOV Operator | 4.19 |
| SRIOV-FEC Operator | 2.11 |
| Lifecycle Agent | 4.19 |
[1] This table will be updated when the aligned Cluster Logging Operator version 6.3 is released.
2.2. Recommended hub cluster specifications and managed cluster limits for GitOps ZTP
With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment.
In real-world situations, the scaling limit for the number of clusters that you can manage varies depending on several factors affecting the hub cluster. For example:
- Hub cluster resources
- Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate.
- Hub cluster storage
- The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage.
- Network bandwidth and latency
- Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters.
- Managed cluster size and complexity
- The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster.
- Number of managed policies
- The number of policies managed by the hub cluster scaled over the number of managed clusters bound to those policies is an important factor that determines how many clusters can be managed.
- Monitoring and management workloads
- RHACM continuously monitors and manages the managed clusters. The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters.
- RHACM version and configuration
- Different versions of RHACM can have varying performance characteristics and resource requirements. Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster.
Use the following representative configuration and network specifications as a basis for developing your own hub cluster and network requirements.
The following guidelines are based on internal lab benchmark testing only and do not represent complete bare-metal host specifications.
| Requirement | Description |
|---|---|
| Server hardware | 3 x Dell PowerEdge R650 rack servers |
| NVMe hard disks | 50 GB disk for `/var/lib/etcd`; 2.9 TB disk for `/var/lib/containers` |
| SSD hard disks | 1 SSD split into 15 200 GB thin-provisioned logical volumes provisioned as `PV` CRs; 1 SSD serving as an extra large `PV` resource |
| Number of applied DU profile policies | 5 |
The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing.
| Specification | Description |
|---|---|
| Round-trip time (RTT) latency | 50 ms |
| Packet loss | 0.02% packet loss |
| Network bandwidth limit | 20 Mbps |
2.3. Installing GitOps ZTP in a disconnected environment
Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters.
Prerequisites
- You have installed the OpenShift Container Platform CLI (`oc`).
- You have logged in as a user with `cluster-admin` privileges.
- You have configured a disconnected mirror registry for use in the cluster.
Note: The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry.
Procedure
- Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment.
- Install GitOps and TALM in the hub cluster.
2.4. Adding RHCOS ISO and RootFS images to the disconnected mirror host
Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images.
Prerequisites
- Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type.
Procedure
- Log in to the mirror host.
- Obtain the RHCOS ISO and RootFS images from mirror.openshift.com, for example:

  1. Export the required image names and OpenShift Container Platform version as environment variables:

     ```bash
     $ export ISO_IMAGE_NAME=<iso_image_name>
     $ export ROOTFS_IMAGE_NAME=<rootfs_image_name>
     $ export OCP_VERSION=<ocp_version>
     ```

  2. Download the required images:

     ```bash
     $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.19/${OCP_VERSION}/${ISO_IMAGE_NAME} -O /var/www/html/${ISO_IMAGE_NAME}
     $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.19/${OCP_VERSION}/${ROOTFS_IMAGE_NAME} -O /var/www/html/${ROOTFS_IMAGE_NAME}
     ```
Verification steps
Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example:

```bash
$ wget http://$(hostname)/${ISO_IMAGE_NAME}
```

Example output

```text
Saving to: rhcos-4.19.1-x86_64-live.x86_64.iso
rhcos-4.19.1-x86_64-live.x86_64.iso-  11%[====>    ]  10.01M  4.71MB/s
```
2.5. Enabling the assisted service
Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the `MultiClusterHub` Operator on RHACM. After that, you must configure the `Provisioning` resource to watch all namespaces and update the `AgentServiceConfig` custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
- You have RHACM with `MultiClusterHub` enabled.
Procedure
1. Enable the `Provisioning` resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the central infrastructure management service.

2. Open the `AgentServiceConfig` CR to update the `spec.osImages` field by running the following command:

   ```bash
   $ oc edit AgentServiceConfig
   ```

3. Update the `spec.osImages` field in the `AgentServiceConfig` CR, as in the sketch that follows these definitions, where:

   - `<host>`: Specifies the fully qualified domain name (FQDN) for the target mirror registry HTTP server.
   - `<path>`: Specifies the path to the image on the target mirror registry.
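   A minimal sketch of the field, assuming x86_64 hosts and the ISO and RootFS file names that you downloaded earlier; all values are placeholders to adapt to your environment:

   ```yaml
   spec:
     osImages:
       # One entry per OpenShift Container Platform version you deploy
       - openshiftVersion: "4.19"
         version: "<coreos_build_version>"
         cpuArchitecture: "x86_64"
         rootFSUrl: "https://<host>/<path>/<rootfs_image_name>"
         url: "https://<host>/<path>/<iso_image_name>"
   ```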
- Save and quit the editor to apply the changes.
2.6. Configuring the hub cluster to use a disconnected mirror registry
You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment.
Prerequisites
- You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.13 installed.
- You have hosted the `rootfs` and `iso` images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository.
If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
Procedure
1. Create a `ConfigMap` containing the mirror registry config, noting the following points. A sketch of the `ConfigMap` appears after this list.

   - The `ConfigMap` namespace must be set to `multicluster-engine`.
   - Include the mirror registry's certificate that is used when creating the mirror registry.
   - Include the configuration file for the mirror registry. The mirror registry configuration adds mirror information to the `/etc/containers/registries.conf` file in the discovery image. The mirror information is stored in the `imageContentSources` section of the `install-config.yaml` file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry.
   - For the URL of the mirror registry, you must use the URL from the `imageContentSources` section generated by running the `oc adm release mirror` command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section.
   - The registries defined in the `registries.conf` file must be scoped by repository, not by registry. In the following sketch, both the `quay.io/example-repository` and the `mirror1.registry.corp.com:5000/example-repository` repositories are scoped by the `example-repository` repository.
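   A representative sketch of the `ConfigMap`; the certificate contents, registry host names, and repository paths are placeholders for your environment:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: mirror-registry-config
     namespace: multicluster-engine
     labels:
       app: assisted-service
   data:
     # The mirror registry's CA certificate
     ca-bundle.crt: |
       -----BEGIN CERTIFICATE-----
       <certificate_contents>
       -----END CERTIFICATE-----
     # Mirror configuration added to /etc/containers/registries.conf in the discovery image
     registries.conf: |
       unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

       [[registry]]
         prefix = ""
         location = "quay.io/example-repository"
         mirror-by-digest-only = true

         [[registry.mirror]]
           url = "mirror1.registry.corp.com:5000/example-repository"
   ```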
2. Update the `AgentServiceConfig` custom resource to reference the `ConfigMap` in `mirrorRegistryRef`, noting the following points. A sketch of the resulting CR appears after this list.

   - Set the `AgentServiceConfig` namespace to `multicluster-engine` to match the `ConfigMap` namespace.
   - Set `mirrorRegistryRef.name` to match the definition specified in the related `ConfigMap` CR.
   - Set the OpenShift Container Platform version to either the x.y or x.y.z format.
   - Set the URL for the ISO hosted on the `httpd` server.
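   A minimal sketch of the resulting CR; storage fields from your existing `AgentServiceConfig` are unchanged and omitted here, and the version and URL values are placeholders:

   ```yaml
   apiVersion: agent-install.openshift.io/v1beta1
   kind: AgentServiceConfig
   metadata:
     name: agent
     namespace: multicluster-engine
   spec:
     # References the ConfigMap created in the previous step
     mirrorRegistryRef:
       name: mirror-registry-config
     osImages:
       - openshiftVersion: "<ocp_version>"
         rootFSUrl: "https://<host>/<path>/rhcos-live-rootfs.x86_64.img"
         url: "https://<host>/<path>/rhcos-live.x86_64.iso"
         cpuArchitecture: "x86_64"
         version: "<coreos_build_version>"
   ```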
A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network.
2.7. Configuring the hub cluster to use unauthenticated registries
You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries do not require authentication to access and download images.
Prerequisites
- You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
- You have installed the OpenShift Container Platform CLI (`oc`).
- You have logged in as a user with `cluster-admin` privileges.
- You have configured an unauthenticated registry for use with the hub cluster.
Procedure
1. Update the `AgentServiceConfig` custom resource (CR) by running the following command:

   ```bash
   $ oc edit AgentServiceConfig agent
   ```

2. Add the `unauthenticatedRegistries` field in the CR, as in the sketch that follows:
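   A minimal sketch; the registry host names are placeholders for your own unauthenticated registries:

   ```yaml
   apiVersion: agent-install.openshift.io/v1beta1
   kind: AgentServiceConfig
   metadata:
     name: agent
   spec:
     unauthenticatedRegistries:
       - example.registry.com
       - example.registry2.com
   ```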
   Unauthenticated registries are listed under `spec.unauthenticatedRegistries` in the `AgentServiceConfig` resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. `assisted-service` validates the pull secret by making sure it contains the authentication information for every image registry used for installation.
Mirror registries are automatically added to the ignore list and do not need to be added under `spec.unauthenticatedRegistries`. Specifying the `PUBLIC_CONTAINER_REGISTRIES` environment variable in the `ConfigMap` overrides the default values with the specified value. The `PUBLIC_CONTAINER_REGISTRIES` defaults are `quay.io` and `registry.svc.ci.openshift.org`.
Verification
Verify that you can access the newly added registry from the hub cluster by running the following commands:

1. Open a debug shell prompt to the hub cluster:

   ```bash
   $ oc debug node/<node_name>
   ```

2. Test access to the unauthenticated registry by running the following command:

   ```bash
   sh-4.4# podman login -u kubeadmin -p $(oc whoami -t) <unauthenticated_registry>
   ```

   where:

   - `<unauthenticated_registry>`: Is the new registry, for example, `unauthenticated-image-registry.openshift-image-registry.svc:5000`.

   Example output

   ```text
   Login Succeeded!
   ```
2.8. Configuring the hub cluster with ArgoCD
You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP).
Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs.
Prerequisites
- You have an OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed.
- You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the `out/argocd/deployment` directory referenced in the following procedure.
Procedure
1. Prepare the ArgoCD pipeline configuration:

   a. Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository".

   b. Configure access to the repository using the ArgoCD UI. Under Settings, configure the following:

      - Repositories - Add the connection information. The URL must end in `.git`, for example, `https://repo.example.com/repo.git`, and credentials.
      - Certificates - Add the public certificate for the repository, if needed.

   c. Modify the two ArgoCD applications, `out/argocd/deployment/clusters-app.yaml` and `out/argocd/deployment/policies-app.yaml`, based on your Git repository:

      - Update the URL to point to the Git repository. The URL ends with `.git`, for example, `https://repo.example.com/repo.git`.
      - The `targetRevision` indicates which Git repository branch to monitor.
      - `path` specifies the path to the `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` CRs, respectively.
2. To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the `out/argocd/deployment/` directory for your environment.

   a. Select the `multicluster-operators-subscription` image that matches your RHACM version:

      - For RHACM 2.8 and 2.9, use the `registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version>` image.
      - For RHACM 2.10 and later, use the `registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version>` image.

      Important: The version of the `multicluster-operators-subscription` image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for `multicluster-operators-subscription` images.

      Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform.
   b. Modify the `out/argocd/deployment/argocd-openshift-gitops-patch.json` file with the `multicluster-operators-subscription` image that matches your RHACM version, noting the following points. An abbreviated sketch of the patch file appears after this list.

      - Optional: For RHEL 9 images, copy the required universal executable in the `/policy-generator/PolicyGenerator-not-fips-compliant` folder for the ArgoCD version.
      - Match the `multicluster-operators-subscription` image to the RHACM version.
      - In disconnected environments, replace the URL for the `multicluster-operators-subscription` image with the disconnected registry equivalent for your environment.
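      The extracted file in `out/argocd/deployment/` is the authoritative version; the following abbreviated sketch only illustrates where the image reference and the policy generator binary copy appear, with the `v2.13` tag as an assumed example:

      ```json
      {
        "repo": {
          "initContainers": [
            {
              "name": "policy-generator-install",
              "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.13",
              "command": ["/bin/bash", "-c"],
              "args": ["cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator"]
            }
          ]
        }
      }
      ```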
   c. Patch the ArgoCD instance. Run the following command:

      ```bash
      $ oc patch argocd openshift-gitops \
        -n openshift-gitops --type=merge \
        --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json
      ```

3. In RHACM 2.7 and later, the multicluster engine enables the `cluster-proxy-addon` feature by default. Apply the following patch to disable the `cluster-proxy-addon` feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. Run the following command:

   ```bash
   $ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json
   ```

4. Apply the pipeline configuration to your hub cluster by running the following command:

   ```bash
   $ oc apply -k out/argocd/deployment
   ```

5. Optional: If you have existing ArgoCD applications, verify that the `PrunePropagationPolicy=background` policy is set in the `Application` resource by running the following command:

   ```bash
   $ oc -n openshift-gitops get applications.argoproj.io \
     clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' | jq
   ```

   Example output for an existing policy

   ```json
   [
     "CreateNamespace=true",
     "PrunePropagationPolicy=background",
     "RespectIgnoreDifferences=true"
   ]
   ```

   If the `spec.syncPolicy.syncOptions` field does not contain a `PrunePropagationPolicy` parameter or `PrunePropagationPolicy` is set to the `foreground` value, set the policy to `background` in the `Application` resource. See the following example:

   ```yaml
   kind: Application
   spec:
     syncPolicy:
       syncOptions:
         - PrunePropagationPolicy=background
   ```

   Setting the `background` deletion policy ensures that the `ManagedCluster` CR and all its associated resources are deleted.
2.9. Preparing the GitOps ZTP site configuration repository
Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data.
Prerequisites
- You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs).
- You have deployed the managed clusters using GitOps ZTP.
Procedure
1. Create a directory structure with separate paths for the `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` CRs.

   Note: Keep `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` CRs in separate directories. Both the `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` directories must contain a `kustomization.yaml` file that explicitly includes the files in that directory.

2. Export the `argocd` directory from the `ztp-site-generate` container image using the following commands:

   ```bash
   $ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.19
   $ mkdir -p ./out
   $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.19 extract /home/ztp --tar | tar x -C ./out
   ```
3. Check that the `out` directory contains the following subdirectories:

   - `out/extra-manifest` contains the source CR files that `SiteConfig` uses to generate the extra manifest `configMap`.
   - `out/source-crs` contains the source CR files that `PolicyGenerator` uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.
   - `out/argocd/deployment` contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.
   - `out/argocd/example` contains the examples for `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` files that represent the recommended configuration.
4. Copy the `out/source-crs` folder and contents to the `PolicyGenerator` or `PolicyGentemplate` directory.

5. The `out/extra-manifests` directory contains the reference manifests for a RAN DU cluster. Copy the `out/extra-manifests` directory into the `SiteConfig` folder. This directory should contain CRs from the `ztp-site-generate` container only. Do not add user-provided CRs here. If you want to work with user-provided CRs, you must create another directory for that content. See the example layout after this note.

   Note: Using `PolicyGenTemplate` CRs to manage and deploy policies to managed clusters will be deprecated in a future OpenShift Container Platform release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and `PolicyGenerator` CRs.
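   A representative layout, assuming the `acmpolicygenerator` and `siteconfig` directory names used in the reference examples; the directory names are illustrative:

   ```text
   example/
   ├── acmpolicygenerator
   │   ├── kustomization.yaml
   │   └── source-crs/
   └── siteconfig
       ├── extra-manifests/
       └── kustomization.yaml
   ```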
6. Commit the directory structure and the `kustomization.yaml` files and push to your Git repository. The initial push to Git should include the `kustomization.yaml` files.
You can use the directory structure under `out/argocd/example` as a reference for the structure and content of your Git repository. That structure includes `SiteConfig` and `PolicyGenerator` or `PolicyGentemplate` reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using.

For all cluster types, you must:

- Add the `source-crs` subdirectory to the `acmpolicygenerator` or `policygentemplates` directory.
- Add the `extra-manifests` directory to the `siteconfig` directory.
The following example describes a set of CRs for a network of single-node clusters:
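An illustrative sketch based on the reference file names shipped in `out/argocd/example`; your file names can differ:

```text
example/
├── acmpolicygenerator
│   ├── acm-common-ranGen.yaml
│   ├── acm-example-sno-site.yaml
│   ├── acm-group-du-sno-ranGen.yaml
│   ├── kustomization.yaml
│   ├── ns.yaml
│   └── source-crs/
└── siteconfig
    ├── example-sno.yaml
    ├── extra-manifests/
    └── kustomization.yaml
```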
Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs.
For more information about PolicyGenerator resources, see the RHACM Integrating Policy Generator documentation.
2.10. Preparing the GitOps ZTP site configuration repository for version independence
You can use GitOps ZTP to manage source custom resources (CRs) for managed clusters that are running different versions of OpenShift Container Platform. This means that the version of OpenShift Container Platform running on the hub cluster can be independent of the version running on the managed clusters.
The following procedure assumes you are using PolicyGenerator resources rather than PolicyGentemplate resources to manage cluster policies.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in as a user with `cluster-admin` privileges.
Procedure
1. Create a directory structure with separate paths for the `SiteConfig` and `PolicyGenerator` CRs.

2. Within the `PolicyGenerator` directory, create a directory for each OpenShift Container Platform version you want to make available. For each version, create the following resources:

   - A `kustomization.yaml` file that explicitly includes the files in that directory
   - A `source-crs` directory to contain reference CR configuration files from the `ztp-site-generate` container

   If you want to work with user-provided CRs, you must create a separate directory for them.

3. In the `/siteconfig` directory, create a subdirectory for each OpenShift Container Platform version you want to make available. For each version, create at least one directory for reference CRs to be copied from the container. There is no restriction on the naming of directories or on the number of reference directories. If you want to work with custom manifests, you must create a separate directory for them.

   The following example describes a structure using user-provided manifests and CRs for different versions of OpenShift Container Platform:
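   An illustrative sketch of such a layout; file names such as `common-ranGen.yaml` are examples from the reference configuration, and the bracketed numbers correspond to the annotations that follow:

   ```text
   ├── kustomization.yaml [1]
   ├── acmpolicygenerator
   │   ├── version_4.13 [2]
   │   │   ├── common-ranGen.yaml
   │   │   ├── group-du-sno-ranGen.yaml
   │   │   ├── ns.yaml
   │   │   ├── kustomization.yaml [3]
   │   │   └── source-crs/ [4]
   │   │       ├── reference-crs/ [5]
   │   │       └── custom-crs/ [6]
   │   └── version_4.14 [7]
   │       ├── common-ranGen.yaml
   │       ├── group-du-sno-ranGen.yaml
   │       ├── ns.yaml
   │       ├── kustomization.yaml [8]
   │       └── source-crs/ [9]
   │           ├── reference-crs/ [10]
   │           └── custom-crs/ [11]
   └── siteconfig
       ├── version_4.13
       │   ├── kustomization.yaml
       │   ├── extra-manifest/ [12]
       │   └── custom-manifest/ [13]
       └── version_4.14
           ├── kustomization.yaml
           ├── extra-manifest/ [14]
           └── custom-manifest/ [15]
   ```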
   where:

   - [1] Create a top-level `kustomization.yaml` file.
   - [2] [7] Create the version-specific directories within the custom `/acmpolicygenerator` directory.
   - [3] [8] Create a `kustomization.yaml` file for each version.
   - [4] [9] Create a `source-crs` directory for each version to contain reference CRs from the `ztp-site-generate` container.
   - [5] [10] Create the `reference-crs` directory for policy CRs that are extracted from the ZTP container.
   - [6] [11] Optional: Create a `custom-crs` directory for user-provided CRs.
   - [12] [14] Create a directory within the custom `/siteconfig` directory to contain extra manifests from the `ztp-site-generate` container.
   - [13] [15] Create a folder to hold user-provided manifests.
   Note: In the previous example, each version subdirectory in the custom `/siteconfig` directory contains two further subdirectories, one containing the reference manifests copied from the container, the other for custom manifests that you provide. The names assigned to those directories are examples. If you use user-provided CRs, the last directory listed under `extraManifests.searchPaths` in the `SiteConfig` CR must be the directory containing user-provided CRs.

4. Edit the `SiteConfig` CR to include the search paths of any directories you have created. The first directory that is listed under `extraManifests.searchPaths` must be the directory containing the reference manifests. Consider the order in which the directories are listed. In cases where directories contain files with the same name, the file in the final directory takes precedence.

   Example SiteConfig CR

   ```yaml
   extraManifests:
     searchPaths:
       - extra-manifest/
       - custom-manifest/
   ```

5. Edit the top-level `kustomization.yaml` file to control which OpenShift Container Platform versions are active. The following is an example of a `kustomization.yaml` file at the top level:

   ```yaml
   resources:
   - version_4.13
   #- version_4.14
   ```
2.11. Configuring the hub cluster for backup and restore
You can use GitOps ZTP to configure a set of policies to back up BareMetalHost resources. This allows you to recover data from a failed hub cluster and deploy a replacement cluster using Red Hat Advanced Cluster Management (RHACM).
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in as a user with `cluster-admin` privileges.
Procedure
1. Create a policy to add the `cluster.open-cluster-management.io/backup=cluster-activation` label to all `BareMetalHost` resources that have the `infraenvs.agent-install.openshift.io` label. Save the policy as `BareMetalHostBackupPolicy.yaml`. A sketch of such a policy follows this note.

   Note: If you apply the `cluster.open-cluster-management.io/backup: cluster-activation` label to `BareMetalHost` resources, the RHACM cluster backs up those resources. You can restore the `BareMetalHost` resources if the active cluster becomes unavailable, when restoring the hub activation resources.
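   A sketch of such a policy, assuming an RHACM `ConfigurationPolicy` that templates over all `BareMetalHost` resources carrying the `infraenvs.agent-install.openshift.io` label; the policy name and namespace are placeholders, and the `Placement` and `PlacementBinding` resources that bind the policy to the hub's local cluster are required in addition but omitted here:

   ```yaml
   apiVersion: policy.open-cluster-management.io/v1
   kind: Policy
   metadata:
     name: bmh-cluster-activation-label
     namespace: default
   spec:
     disabled: false
     policy-templates:
       - objectDefinition:
           apiVersion: policy.open-cluster-management.io/v1
           kind: ConfigurationPolicy
           metadata:
             name: bmh-cluster-activation-label-cp
           spec:
             remediationAction: enforce
             severity: high
             object-templates-raw: |
               {{- /* Add the backup label to every BareMetalHost with the infraenvs label */ -}}
               {{- range (lookup "metal3.io/v1alpha1" "BareMetalHost" "" "" "infraenvs.agent-install.openshift.io").items }}
               - complianceType: musthave
                 objectDefinition:
                   apiVersion: metal3.io/v1alpha1
                   kind: BareMetalHost
                   metadata:
                     name: {{ .metadata.name }}
                     namespace: {{ .metadata.namespace }}
                     labels:
                       cluster.open-cluster-management.io/backup: cluster-activation
               {{- end }}
   ```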
2. Apply the policy by running the following command:

   ```bash
   $ oc apply -f BareMetalHostBackupPolicy.yaml
   ```
Verification
1. Find all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io` by running the following command:

   ```bash
   $ oc get BareMetalHost -A -l infraenvs.agent-install.openshift.io
   ```

   Example output

   ```text
   NAMESPACE      NAME             STATE   CONSUMER   ONLINE   ERROR   AGE
   baremetal-ns   baremetal-name                      false            50s
   ```

2. Verify that the policy has applied the label `cluster.open-cluster-management.io/backup=cluster-activation` to all these resources, by running the following command:

   ```bash
   $ oc get BareMetalHost -A -l infraenvs.agent-install.openshift.io,cluster.open-cluster-management.io/backup=cluster-activation
   ```

   Example output

   ```text
   NAMESPACE      NAME             STATE   CONSUMER   ONLINE   ERROR   AGE
   baremetal-ns   baremetal-name                      false            50s
   ```

   The output must show the same list as in the previous step, which listed all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io`. This confirms that all the `BareMetalHost` resources with the `infraenvs.agent-install.openshift.io` label also have the `cluster.open-cluster-management.io/backup: cluster-activation` label.

   The following example shows a `BareMetalHost` resource with the `infraenvs.agent-install.openshift.io` label. The resource must also have the `cluster.open-cluster-management.io/backup: cluster-activation` label, which was added by the policy created in step 1.
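   A minimal sketch of such a resource; only the metadata relevant to the labels is shown, and `<infraenv_name>` is a placeholder:

   ```yaml
   apiVersion: metal3.io/v1alpha1
   kind: BareMetalHost
   metadata:
     name: baremetal-name
     namespace: baremetal-ns
     labels:
       # Added by the policy created in step 1
       cluster.open-cluster-management.io/backup: cluster-activation
       infraenvs.agent-install.openshift.io: <infraenv_name>
   ```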
You can now use Red Hat Advanced Cluster Management to restore a managed cluster.
When you restore `BareMetalHost` resources as part of restoring the cluster activation data, you must also restore the `BareMetalHost` status. The following RHACM `Restore` resource example restores activation resources, including `BareMetalHost` resources, and also restores the status for those resources:
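A sketch of such a `Restore` resource, assuming the RHACM cluster backup and restore operator's `open-cluster-management-backup` namespace and latest-backup semantics; the resource name is a placeholder:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Restore
metadata:
  name: restore-acm-bmh
  namespace: open-cluster-management-backup
spec:
  cleanupBeforeRestore: CleanupRestored
  veleroManagedClustersBackupName: latest
  veleroCredentialsBackupName: latest
  veleroResourcesBackupName: latest
  # Also restore the status subresource for BareMetalHost resources
  restoreStatus:
    includedResources:
      - BareMetalHosts
```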