Chapter 4. Installing managed clusters with RHACM and SiteConfig resources
You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment.
4.1. GitOps ZTP and Topology Aware Lifecycle Manager
GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
- Inform policies

  By default, GitOps ZTP creates all policies with a remediation action of `inform`. These policies cause RHACM to report on the compliance status of clusters relevant to the policies but do not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created `inform` policies and enforces them on the target managed clusters. This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM.

- Automatic creation of ClusterGroupUpgrade CRs
  To automate the initial configuration of newly deployed clusters, TALM monitors the state of all `ManagedCluster` CRs on the hub cluster. Any `ManagedCluster` CR that does not have a `ztp-done` label applied, including newly created `ManagedCluster` CRs, causes the TALM to automatically create a `ClusterGroupUpgrade` CR with the following characteristics:

  - The `ClusterGroupUpgrade` CR is created and enabled in the `ztp-install` namespace.
  - The `ClusterGroupUpgrade` CR has the same name as the `ManagedCluster` CR.
  - The cluster selector includes only the cluster associated with that `ManagedCluster` CR.
  - The set of managed policies includes all policies that RHACM has bound to the cluster at the time the `ClusterGroupUpgrade` is created.
  - Pre-caching is disabled.
  - The timeout is set to 4 hours (240 minutes).

  The automatic creation of an enabled `ClusterGroupUpgrade` ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a `ClusterGroupUpgrade` CR for any `ManagedCluster` without the `ztp-done` label allows a failed GitOps ZTP installation to be restarted by simply deleting the `ClusterGroupUpgrade` CR for the cluster.
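  For reference, an automatically created `ClusterGroupUpgrade` CR has a shape similar to the following sketch, which assumes a managed cluster named example-sno and a bound policy named example-sno-config-policy:

```yaml
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-sno               # same name as the ManagedCluster CR
  namespace: ztp-install          # always created in the ztp-install namespace
spec:
  clusters:
  - example-sno                   # cluster selector includes only this cluster
  enable: true                    # created in the enabled state
  managedPolicies:
  - example-sno-config-policy     # all policies bound to the cluster at creation time
  preCaching: false               # pre-caching is disabled
  remediationStrategy:
    timeout: 240                  # 4 hour timeout, in minutes
```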
- Waves
  Each policy generated from a `PolicyGenTemplate` CR includes a `ztp-deploy-wave` annotation. This annotation is based on the same annotation from each CR that is included in that policy. The wave annotation is used to order the policies in the auto-generated `ClusterGroupUpgrade` CR. The wave annotation is not used other than for the auto-generated `ClusterGroupUpgrade` CR.

  Note: All CRs in the same policy must have the same setting for the `ztp-deploy-wave` annotation. The default value of this annotation for each CR can be overridden in the `PolicyGenTemplate`. The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR that is included in the generated policy at runtime.

  The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the `CatalogSource` for an Operator must be installed in a wave before or concurrently with the Operator `Subscription`. The default wave value for each CR takes these prerequisites into account.

  Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves.
  To check the default wave value in each source CR, run the following command against the `out/source-crs` directory that is extracted from the `ztp-site-generate` container image:

  $ grep -r "ztp-deploy-wave" out/source-crs
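  For example, one way to override the default wave for a CR that is included in a policy is to overlay the annotation in the corresponding `sourceFiles` entry of the `PolicyGenTemplate` CR. The following is a sketch only; the `PtpConfigSlave.yaml` source file and the wave value of "10" are chosen for illustration:

```yaml
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-du-sno"
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  sourceFiles:
    - fileName: PtpConfigSlave.yaml
      policyName: "config-policy"
      metadata:
        annotations:
          ran.openshift.io/ztp-deploy-wave: "10"  # overrides the default wave from the source CR
```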
- Phase labels
  The `ClusterGroupUpgrade` CR is automatically created and includes directives to annotate the `ManagedCluster` CR with labels at the start and end of the GitOps ZTP process.

  When GitOps ZTP configuration postinstallation commences, the `ManagedCluster` has the `ztp-running` label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the `ztp-running` label and apply the `ztp-done` label.

  For deployments that make use of the `informDuValidator` policy, the `ztp-done` label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The `ztp-done` label affects automatic `ClusterGroupUpgrade` CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. See the example command after this list for checking phase labels.

- Linked CRs
  The automatically created `ClusterGroupUpgrade` CR has the owner reference set as the `ManagedCluster` from which it was derived. This reference ensures that deleting the `ManagedCluster` CR causes the instance of the `ClusterGroupUpgrade` to be deleted along with any supporting resources.
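For example, to list the managed clusters that have completed the GitOps ZTP process, you can select on the `ztp-done` phase label. This is a sketch that assumes the labels described above:

$ oc get managedcluster -l ztp-done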
4.2. Overview of deploying managed clusters with GitOps ZTP
Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes:
- Installing the host operating system (RHCOS) on a blank server
- Deploying OpenShift Container Platform
- Creating cluster policies and site subscriptions
- Making the necessary network configurations to the server operating system
- Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV
4.2.1. Overview of the managed site installation process Copy linkLink copied to clipboard!
After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically:
- A Discovery image ISO file is generated and booted on the target host.
- When the ISO file successfully boots on the target host, it reports the host hardware information to RHACM.
- After all hosts are discovered, OpenShift Container Platform is installed.
- When OpenShift Container Platform finishes installing, the hub installs the `klusterlet` service on the target cluster.
- The requested add-on services are installed on the target cluster.
The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster.
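To confirm that the host was discovered, you can check for Agent CRs on the hub cluster. This is a sketch; the namespace matches the managed cluster name:

$ oc get agent -n <cluster_name>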
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads.
4.3. Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace.
Procedure
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file `example-sno-secret.yaml`:
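A minimal sketch of the secrets file, assuming the cluster name and namespace example-sno; the secret names are referenced later from the `SiteConfig` CR, and the base64-encoded values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret      # referenced from the SiteConfig bmcCredentialsName field
  namespace: example-sno            # must match the SiteConfig namespace
type: Opaque
data:
  username: <base64_bmc_username>
  password: <base64_bmc_password>
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret                 # referenced from the SiteConfig pullSecretRef field
  namespace: example-sno
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64_pull_secret>
```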
- Add the relative path to `example-sno-secret.yaml` to the `kustomization.yaml` file that you use to install the cluster.
4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OpenShift Container Platform 4.15, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
Create the `InfraEnv` CR and edit the `spec.kernelArguments` specification to configure kernel arguments.

Save the following YAML in an `InfraEnv-example.yaml` file:

Note: The `InfraEnv` CR in this example uses template syntax such as `{{ .Cluster.ClusterName }}` that is populated based on values in the `SiteConfig` CR. The `SiteConfig` CR automatically populates values for these templates during deployment. Do not edit the templates manually.
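The following sketch shows the template-driven `InfraEnv` CR; the `kernelArguments` values `audit=0` and `trace=1` are illustrative examples:

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: "{{ .Cluster.ClusterName }}"
  namespace: "{{ .Cluster.ClusterName }}"
spec:
  clusterRef:
    name: "{{ .Cluster.ClusterName }}"
    namespace: "{{ .Cluster.ClusterName }}"
  kernelArguments:
    - operation: append    # only append is supported in OpenShift Container Platform 4.15
      value: audit=0
    - operation: append
      value: trace=1
  sshAuthorizedKey: "{{ .Site.SshPublicKey }}"
  proxy: "{{ .Cluster.ProxySettings }}"
  pullSecretRef:
    name: "{{ .Site.PullSecretRef.Name }}"
  ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}"
  nmStateConfigLabelSelector:
    matchLabels:
      nmstate-label: "{{ .Cluster.ClusterName }}"
  additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}"
```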
Commit the `InfraEnv-example.yaml` CR to the same location in your Git repository that has the `SiteConfig` CR and push your changes. The following example shows a sample Git repository structure:

  ~/example-ztp/install
            └── site-install
                 ├── siteconfig-example.yaml
                 ├── InfraEnv-example.yaml
                 ...
Edit the `spec.clusters.crTemplates` specification in the `SiteConfig` CR to reference the `InfraEnv-example.yaml` CR in your Git repository:

  clusters:
    crTemplates:
      InfraEnv: "InfraEnv-example.yaml"
When you are ready to deploy your cluster by committing and pushing the `SiteConfig` CR, the build pipeline uses the custom `InfraEnv-example` CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments.
Verification
After the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH in to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
Begin an SSH session with the target host:

$ ssh -i /path/to/privatekey core@<host_name>

View the system's kernel arguments by using the following command:

$ cat /proc/cmdline
4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP
Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
- You configured the hub cluster for generating the required installation and policy CRs.
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information.

  Note: When you create the source repository, ensure that you patch the ArgoCD application with the `argocd/deployment/argocd-openshift-gitops-patch.json` patch file that you extract from the `ztp-site-generate` container. See "Configuring the hub cluster with ArgoCD".

- To be ready for provisioning managed clusters, you require the following for each bare-metal host:
  - Network connectivity: Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host.
  - Baseboard Management Controller (BMC) details: GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the `ManagedCluster` CRs on the hub cluster based on the `SiteConfig` CR in your site Git repository. You create individual `BMCSecret` CRs for each host manually.
Procedure
Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in `out/argocd/example/siteconfig/example-sno.yaml`, the cluster name and namespace is `example-sno`.

Export the cluster namespace by running the following command:

$ export CLUSTERNS=example-sno

Create the namespace:

$ oc create namespace $CLUSTERNS
Create pull secret and BMC `Secret` CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information.

Note: The secrets are referenced from the `SiteConfig` custom resource (CR) by name. The namespace must match the `SiteConfig` namespace.

Create a `SiteConfig` CR for your cluster in your local clone of the Git repository:

- Choose the appropriate example for your CR from the `out/argocd/example/siteconfig/` folder. The folder includes example files for single node, three-node, and standard clusters:
  - `example-sno.yaml`
  - `example-3node.yaml`
  - `example-standard.yaml`
- Change the cluster and host details in the example file to match the type of cluster you want. For example:

  Example single-node OpenShift SiteConfig CR

  Note: For more information about BMC addressing, see the "Additional resources" section. The `installConfigOverrides` and `ignitionConfigOverride` fields are expanded in the example for ease of readability.
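The full example is available in the `out/argocd/example/siteconfig/` folder. The following trimmed sketch shows the overall structure of a single-node `SiteConfig` CR; the domain, addresses, MAC addresses, device paths, and image set name are placeholders:

```yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "pull-secret"                      # pull secret created in the cluster namespace
  clusterImageSetNameRef: "openshift-4.15"   # image set available on the hub cluster
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
    - clusterName: "example-sno"
      networkType: "OVNKubernetes"
      clusterLabels:                         # labels that match PolicyGenTemplate bindingRules
        common: true
        group-du-sno: ""
        sites: "example-sno"
      clusterNetwork:
        - cidr: "1001:1::/48"
          hostPrefix: 64
      machineNetwork:
        - cidr: "1111:2222:3333:4444::/64"
      serviceNetwork:
        - "1001:2::/112"
      nodes:
        - hostName: "example-node1.example.com"
          role: "master"
          bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
          bmcCredentialsName:
            name: "example-sno-bmc-secret"   # BMC secret created in the cluster namespace
          bootMACAddress: "AA:BB:CC:DD:EE:11"
          bootMode: "UEFI"
          rootDeviceHints:
            deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
          nodeNetwork:
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:11"
            config:
              interfaces:
                - name: eno1
                  type: ethernet
                  state: up
                  ipv6:
                    enabled: true
                    address:
                      - ip: "1111:2222:3333:4444::aaaa:1"
                        prefix-length: 64
```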
installConfigOverridesandignitionConfigOverridefields are expanded in the example for ease of readability.-
You can inspect the default set of extra-manifest
MachineConfigCRs inout/argocd/extra-manifest. It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example,
sno-extra-manifest/, and add your custom manifest CRs to this directory. If yourSiteConfig.yamlrefers to this directory in theextraManifestPathfield, any CRs in this referenced directory are appended to the default set of extra manifests.Enabling the crun OCI container runtimeFor optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters.
Enable crun in a
ContainerRuntimeConfigCR as an additional Day 0 install-time manifest to avoid the cluster having to reboot.The
enable-crun-master.yamlandenable-crun-worker.yamlCR files are in theout/source-crs/optional-extra-manifest/folder that you can extract from theztp-site-generatecontainer. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline".
- Add the `SiteConfig` CR to the `kustomization.yaml` file in the `generators` section, similar to the example shown in `out/argocd/example/siteconfig/kustomization.yaml` and in the sketch after this list.
- Commit the `SiteConfig` CR and associated `kustomization.yaml` changes in your Git repository and push the changes.

  The ArgoCD pipeline detects the changes and begins the managed cluster deployment.
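A minimal sketch of the `kustomization.yaml` file, assuming a single site file named example-sno.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - example-sno.yaml    # SiteConfig CRs are listed under generators
```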
Verification
Verify that the custom roles and labels are applied after the node is deployed:

$ oc describe node example-node.example.com

Example output
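The following output is illustrative and assumes a custom `example-label` role was defined in the `nodeLabels` field of the `SiteConfig` CR; the exact labels depend on your configuration:

```text
Name:   example-node.example.com
Roles:  control-plane,example-label,master,worker
Labels: kubernetes.io/arch=amd64
        kubernetes.io/hostname=example-node.example.com
        kubernetes.io/os=linux
        node-role.kubernetes.io/control-plane=
        node-role.kubernetes.io/example-label=
        node-role.kubernetes.io/master=
        node-role.kubernetes.io/worker=
        node.openshift.io/os_id=rhcos
```

The `node-role.kubernetes.io/example-label` entry is the custom label applied to the node.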
4.5.1. Single-node OpenShift SiteConfig CR installation reference
| SiteConfig CR field | Description |
|---|---|
| `spec.cpuPartitioningMode` | Configure workload partitioning by setting the value for `cpuPartitioningMode` to `AllNodes`. To complete the configuration, specify the `isolated` and `reserved` CPUs in the `PerformanceProfile` CR. Note: Configuring workload partitioning by using the `cpuPartitioningMode` field in the `SiteConfig` CR requires OpenShift Container Platform 4.13 or later. |
| `metadata.name` | Set `name` to `assisted-deployment-pull-secret` and create the `assisted-deployment-pull-secret` CR in the same namespace as the `SiteConfig` CR. |
| `spec.clusterImageSetNameRef` | Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run `oc get clusterimagesets`. |
| `installConfigOverrides` | Set the `installConfigOverrides` field to enable or disable optional components before installation. Important: Use the reference configuration as specified in the example `SiteConfig` CR. Adding additional components back into the system might require additional reserved CPU capacity. |
| `clusterImageSetNameRef` | Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the `spec.clusterImageSetNameRef` at site level. |
| `clusterLabels` | Configure cluster labels to correspond to the `bindingRules` in the `PolicyGenTemplate` CRs that you define. For example, `policygentemplates/common-ranGen.yaml` applies to all clusters with `common: true` set, and `policygentemplates/group-du-sno-ranGen.yaml` applies to all clusters with `group-du-sno: ""` set. |
| `crTemplates.KlusterletAddonConfig` | Optional. Set `KlusterletAddonConfig` to `KlusterletAddonConfigOverride.yaml` to override the default `KlusterletAddonConfig` that is created for the cluster. |
| `nodes.hostName` | For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts defined with `role: worker`. |
| `nodes.nodeLabels` | Specify custom roles for your nodes in your managed clusters. These are additional roles that are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. |
| `nodes.automatedCleaningMode` | Optional. Uncomment and set the value to `metadata` to enable the removal of the disk's partitioning table only, without fully wiping the disk, during disk cleaning. |
| `nodes.bmcAddress` | BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. |
| `nodes.bmcAddress` | BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note: In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. |
| `nodes.bmcCredentialsName` | Configure the `bmh-secret` CR that you separately create with the host BMC credentials. When creating the `bmh-secret` CR, use the same namespace as the `SiteConfig` CR that provisions the host. |
| `nodes.bootMode` | Set the boot mode for the host to `UEFI`. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host. |
| `nodes.rootDeviceHints` | Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, `wwn: <disk_wwn>` or `deviceName: /dev/disk/by-path/<device_path>`. `by-path` values are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. |
| `nodes.ignitionConfigOverride` | Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. |
| `nodes.nodeNetwork` | Configure the network settings for the node. |
| `nodes.nodeNetwork.config.interfaces.ipv6` | Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. |
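For example, the `nodes.ignitionConfigOverride` field accepts an Ignition configuration as a JSON string. The following trimmed sketch creates a `var-lib-containers` partition for persistent container storage; the disk path, start offset, and size are placeholders that you must adjust for your hardware, and a persistent mount typically also requires a corresponding systemd mount unit, which is omitted here for brevity:

```yaml
nodes:
  - hostName: "example-node1.example.com"
    ignitionConfigOverride: |
      {
        "ignition": { "version": "3.2.0" },
        "storage": {
          "disks": [
            {
              "device": "/dev/disk/by-id/wwn-<disk_wwn>",
              "partitions": [
                { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 }
              ],
              "wipeTable": false
            }
          ],
          "filesystems": [
            {
              "device": "/dev/disk/by-partlabel/var-lib-containers",
              "format": "xfs",
              "mountOptions": ["defaults", "prjquota"],
              "path": "/var/lib/containers",
              "wipeFilesystem": true
            }
          ]
        }
      }
```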
4.6. Monitoring managed cluster installation progress
The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs them with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
When the synchronization is complete, the installation generally proceeds as follows:
The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands:
Export the cluster name:

$ export CLUSTER=<clusterName>

Query the `AgentClusterInstall` CR for the managed cluster:

$ oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq

Get the installation events for the cluster:

$ curl -sk $(oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'
4.7. Troubleshooting GitOps ZTP by validating the installation CRs
The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
Check that the installation CRs were created by using the following command:

$ oc get AgentClusterInstall -n <cluster_name>

If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from `SiteConfig` files to the installation CRs.

Verify that the `ManagedCluster` CR was generated using the `SiteConfig` CR on the hub cluster:

$ oc get managedcluster

If the `ManagedCluster` is missing, check if the `clusters` application failed to synchronize the files from the Git repository to the hub cluster:

$ oc get applications.argoproj.io -n openshift-gitops clusters -o yaml

To identify error logs for the managed cluster, inspect the `status.operationState.syncResult.resources` field. For example, if an invalid value is assigned to the `extraManifestPath` in the `SiteConfig` CR, an error similar to the following is generated:
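The following sketch shows the shape of such an error in the application status; the exact message and temporary file path vary:

```yaml
syncResult:
  resources:
  - group: ran.openshift.io
    kind: SiteConfig
    message: 'Error: could not build the entire SiteConfig defined by
      /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file
      or directory'
    status: SyncFailed
```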
To see a more detailed `SiteConfig` error, complete the following steps:

- In the Argo CD dashboard, click the SiteConfig resource that Argo CD is trying to sync.
- Check the DESIRED MANIFEST tab to find the `siteConfigError` field:

  siteConfigError: >-
    Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory
Check the `Status.Sync` field. If there are log errors, the `Status.Sync` field could indicate an `Unknown` error:
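A sketch of the kind of status to expect, with the repository details as placeholders:

```yaml
Status:
  Sync:
    Compared To:
      Destination:
        Namespace: clusters-sub
        Server: https://kubernetes.default.svc
      Source:
        Path: sites-config
        Repo URL: https://git.example.com/ran-sites/siteconfigs.git
        Target Revision: master
    Status: Unknown
```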
4.8. Troubleshooting GitOps ZTP virtual media booting on Supermicro servers
Supermicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the `Provisioning` resource. This ensures that the image is not served with TLS even though the image address uses the https scheme.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
Disable TLS in the `Provisioning` resource by running the following command:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}'

- Continue the steps to deploy your single-node OpenShift cluster.
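To confirm the change, you can read the field back with a standard jsonpath query; this sketch prints true when virtual media TLS is disabled:

$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.disableVirtualMediaTLS}'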
4.9. Removing a managed cluster site from the GitOps ZTP pipeline
You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
- Remove a site and the associated CRs by removing the associated `SiteConfig` and `PolicyGenTemplate` files from the `kustomization.yaml` file.
- Add the following `syncOptions` field to your `SiteConfig` application:

    kind: Application
    spec:
      syncPolicy:
        syncOptions:
        - PrunePropagationPolicy=background

  When you run the GitOps ZTP pipeline again, the generated CRs are removed.
- Optional: If you want to permanently remove a site, you should also remove the `SiteConfig` and site-specific `PolicyGenTemplate` files from the Git repository.
- Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the `SiteConfig` and site-specific `PolicyGenTemplate` CRs in the Git repository.
4.10. Removing obsolete content from the GitOps ZTP pipeline
If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
- Remove the affected `PolicyGenTemplate` files from the Git repository, then commit and push to the remote repository.
- Wait for the changes to synchronize through the application and for the affected policies to be removed from the hub cluster.
- Add the updated `PolicyGenTemplate` files back to the Git repository, and then commit and push to the remote repository.

  Note: Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and the CRs managed by that policy remain in place on the managed cluster.
Optional: As an alternative, after making changes to `PolicyGenTemplate` CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console by using the Governance tab or by running the following command:

$ oc delete policy -n <namespace> <policy_name>
4.11. Tearing down the GitOps ZTP pipeline
You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
- Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
Delete the `kustomization.yaml` file in the `deployment` directory by using the following command:

$ oc delete -k out/argocd/deployment

- Commit and push your changes to the site repository.