Chapter 9. Advanced managed cluster configuration with ClusterInstance resources
You can use ClusterInstance custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time.
9.1. Customizing extra installation manifests in the GitOps ZTP pipeline
You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the ClusterInstance custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient.
Extra manifests must be packaged in ConfigMap resources and referenced in the extraManifestsRefs field of the ClusterInstance CR.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
- Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs.
- In your /clusterinstance directory, create a subdirectory with your extra manifests. The following example illustrates a sample folder structure:
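The following layout is an illustrative sketch only; the subdirectory name extra-manifests and the individual manifest file names are placeholder examples, not required names:

clusterinstance/
├── clusterinstance.yaml
├── kustomization.yaml
└── extra-manifests/
    ├── 01-crun-master.yaml
    ├── 01-crun-worker.yaml
    └── 99-custom-machineconfig.yaml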
- Create or update the kustomization.yaml file to use configMapGenerator to package your extra manifests into a ConfigMap:
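A minimal sketch of such a kustomization.yaml file follows, assuming the folder structure shown above; the ConfigMap name extra-manifests-cm and the namespace example-sno are placeholders, and the namespace is assumed to match the namespace of the ClusterInstance CR:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- clusterinstance.yaml                # the ClusterInstance CR managed in the same directory

configMapGenerator:
- name: extra-manifests-cm            # packaged ConfigMap, referenced later in extraManifestsRefs
  namespace: example-sno              # assumed to be the ClusterInstance namespace
  files:
  - extra-manifests/01-crun-master.yaml
  - extra-manifests/01-crun-worker.yaml
  - extra-manifests/99-custom-machineconfig.yaml

generatorOptions:
  disableNameSuffixHash: true         # keep a stable ConfigMap name so the reference does not change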
- In your ClusterInstance CR, reference the ConfigMap in the extraManifestsRefs field:
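The following excerpt is a sketch, assuming the siteconfig.open-cluster-management.io/v1alpha1 API version; the cluster name and namespace are placeholders, and the name under extraManifestsRefs must match the ConfigMap generated in the previous step:

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: example-sno
  namespace: example-sno
spec:
  clusterName: example-sno
  # ...other ClusterInstance fields omitted...
  extraManifestsRefs:
  - name: extra-manifests-cm          # reference to the ConfigMap containing the extra manifests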
- Commit the ClusterInstance CR, extra manifest files, and kustomization.yaml to your Git repository and push the changes.
During cluster provisioning, the SiteConfig Operator applies the CRs contained in the referenced ConfigMap resources as extra manifests.
You can reference multiple ConfigMap resources in extraManifestsRefs to organize your manifests logically. For example, you might have separate ConfigMap resources for crun configuration, custom MachineConfig CRs, and other Day 0 configurations.
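For instance, the references might be split as follows; the ConfigMap names are illustrative placeholders:

spec:
  extraManifestsRefs:
  - name: crun-config-cm              # crun configuration
  - name: custom-machineconfigs-cm    # custom MachineConfig CRs
  - name: day0-extras-cm              # other Day 0 configurations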
9.2. Deleting a node by using the ClusterInstance CR
By using a ClusterInstance custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node.
Prerequisites
- You have configured the hub cluster to generate the required installation and policy CRs.
- You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application.
Procedure
- Update the ClusterInstance CR to add the bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true annotation to the BareMetalHost resource for the node, and push the changes to the Git repository:
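The following excerpt is a sketch, assuming that the ClusterInstance node entry supports an extraAnnotations field for annotating the generated BareMetalHost CR; the host name is a placeholder:

spec:
  nodes:
  - hostName: node6.example.com       # worker node to be deleted (placeholder)
    role: worker
    extraAnnotations:
      BareMetalHost:
        bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"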
- Verify that the BareMetalHost object is annotated by running the following command:

$ oc get bmh -n <cluster_namespace> <bmh_name> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]'

Example output
true
- Delete the BareMetalHost CR by configuring the pruneManifests field in the ClusterInstance CR to remove the target BareMetalHost resource:
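A minimal sketch follows, assuming that pruneManifests accepts a list of apiVersion and kind entries at the node level; the host name is a placeholder:

spec:
  nodes:
  - hostName: node6.example.com       # worker node to be deleted (placeholder)
    pruneManifests:
    - apiVersion: metal3.io/v1alpha1  # API version of the resource to prune
      kind: BareMetalHost             # removes the generated BareMetalHost CR for this node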
- Push the changes to the Git repository and wait for deprovisioning to start. The status of the BareMetalHost CR should change to deprovisioning. Wait for the BareMetalHost to finish deprovisioning and to be fully deleted.
Verification
- Verify that the BareMetalHost and Agent CRs for the worker node have been deleted from the hub cluster by running the following commands:

$ oc get bmh -n <cluster_namespace>

$ oc get agent -n <cluster_namespace>

- Verify that the node record has been deleted from the spoke cluster by running the following command:

$ oc get nodes

Note: If you are working with secrets, deleting a secret too early can cause an issue because Argo CD needs the secret to complete resynchronization after deletion. Delete the secret only after the node cleanup, when the current Argo CD synchronization is complete.
- After the BareMetalHost object is successfully deleted, remove the worker node definition from the spec.nodes section in the ClusterInstance CR and push the changes to the Git repository.
Next steps
To reprovision a node, add the node definition back to the spec.nodes section in the ClusterInstance CR, push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the BareMetalHost CR of the worker node and triggers the re-install of the node.