Chapter 8. Advanced managed cluster configuration with SiteConfig resources
You can use `SiteConfig` custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time.
8.1. Customizing extra installation manifests in the GitOps ZTP pipeline
You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the `SiteConfig` custom resources (CRs) and are applied to the cluster during installation. Including `MachineConfig` CRs at install time makes the installation process more efficient.
Prerequisites
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
- Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs. In your custom `/siteconfig` directory, create a `/custom-manifest` subdirectory for your extra manifests. The following example illustrates a sample `/siteconfig` with a `/custom-manifest` folder:

```
siteconfig
├── site1-sno-du.yaml
├── site2-standard-du.yaml
├── extra-manifest/
└── custom-manifest
    └── 01-example-machine-config.yaml
```
Note: The subdirectory names `/custom-manifest` and `/extra-manifest` used throughout are example names only. There is no requirement to use these names and no restriction on how you name these subdirectories. In this example, `/extra-manifest` refers to the Git subdirectory that stores the contents of `/extra-manifest` from the `ztp-site-generate` container.
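For illustration, the `01-example-machine-config.yaml` file shown in the tree above can be any valid `MachineConfig` CR. The following is a minimal sketch only; the file path and contents it writes are hypothetical:

```yaml
# Hypothetical extra manifest: a MachineConfig that writes a small
# configuration file to worker nodes at install time.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 01-example-machine-config
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/example/custom.conf
          mode: 420 # octal 0644
          overwrite: true
          contents:
            source: data:,example # URL-encoded file contents
```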
- Add your custom extra manifest CRs to the `siteconfig/custom-manifest` directory.
- In your `SiteConfig` CR, enter the directory name in the `extraManifests.searchPaths` field, for example:

```yaml
clusters:
- clusterName: "example-sno"
  networkType: "OVNKubernetes"
  extraManifests:
    searchPaths:
      - extra-manifest/ # Default manifests extracted from the ztp-site-generate container
      - custom-manifest/ # Your custom manifests
```
- Save the `SiteConfig`, `/extra-manifest`, and `/custom-manifest` CRs, and push them to the site configuration repo.
During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the `/custom-manifest` directory to the default set of extra manifests stored in `extra-manifest/`.
Note: As of version 4.14, `extraManifestPath` is subject to a deprecation warning. While `extraManifestPath` is still supported, we recommend that you use `extraManifests.searchPaths`. If you define `extraManifests.searchPaths` in the `SiteConfig` file, the GitOps ZTP pipeline does not fetch manifests from the `ztp-site-generate` container during site installation. If you define both `extraManifestPath` and `extraManifests.searchPaths` in the `SiteConfig` CR, the setting defined for `extraManifests.searchPaths` takes precedence.

It is strongly recommended that you extract the contents of `/extra-manifest` from the `ztp-site-generate` container and push it to the Git repository.
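One way to extract those contents is to run the `ztp-site-generate` container image and unpack its `/home/ztp` directory, which contains the `extra-manifest` sources. This is a sketch; the image tag is a placeholder that must match your OpenShift version:

```
$ mkdir -p ./out
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v<version> extract /home/ztp --tar | tar x -C ./out
```

The extracted `./out/extra-manifest` directory can then be committed to your site configuration repository.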
8.2. Filtering custom resources using SiteConfig filters
By using filters, you can easily customize `SiteConfig` custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline.
You can specify an `inclusionDefault` value of `include` or `exclude` for the `SiteConfig` CR, along with a list of the specific `extraManifest` RAN CRs that you want to include or exclude. Setting `inclusionDefault` to `include` makes the GitOps ZTP pipeline apply all the files in `/source-crs/extra-manifest` during installation. Setting `inclusionDefault` to `exclude` does the opposite.

You can exclude individual CRs from the `/source-crs/extra-manifest` folder that are otherwise included by default. The following example configures a custom single-node OpenShift `SiteConfig` CR to exclude the `/source-crs/extra-manifest/03-sctp-machine-config-worker.yaml` CR at installation time. Some additional optional filtering scenarios are also described.
Prerequisites
- You configured the hub cluster to generate the required installation and policy CRs.
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
- To prevent the GitOps ZTP pipeline from applying the `03-sctp-machine-config-worker.yaml` CR file, apply the following YAML in the `SiteConfig` CR:

```yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.17"
  sshPublicKey: "<ssh_public_key>"
  clusters:
    - clusterName: "site1-sno-du"
      extraManifests:
        filter:
          exclude:
            - 03-sctp-machine-config-worker.yaml
```
The GitOps ZTP pipeline skips the `03-sctp-machine-config-worker.yaml` CR during installation. All other CRs in `/source-crs/extra-manifest` are applied.

- Save the `SiteConfig` CR and push the changes to the site configuration repository. The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the `SiteConfig` filter instructions.
- Optional: To prevent the GitOps ZTP pipeline from applying all the `/source-crs/extra-manifest` CRs during cluster installation, apply the following YAML in the `SiteConfig` CR:

```yaml
- clusterName: "site1-sno-du"
  extraManifests:
    filter:
      inclusionDefault: exclude
```
- Optional: To exclude all the `/source-crs/extra-manifest` RAN CRs and instead include a custom CR file during installation, edit the custom `SiteConfig` CR to set the custom manifests folder and the `include` file, for example:

```yaml
clusters:
  - clusterName: "site1-sno-du"
    extraManifestPath: "<custom_manifest_folder>" # Folder containing your custom manifests, for example user-custom-manifest/
    extraManifests:
      filter:
        inclusionDefault: exclude # Exclude all default CRs in /source-crs/extra-manifest
        include:
          - custom-sctp-machine-config-worker.yaml
```
The following example illustrates the custom folder structure:

```
siteconfig
├── site1-sno-du.yaml
└── user-custom-manifest
    └── custom-sctp-machine-config-worker.yaml
```
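For reference, a file like `custom-sctp-machine-config-worker.yaml` is typically a `MachineConfig` CR that loads the `sctp` kernel module on worker nodes. The following is a minimal sketch, not the exact CR shipped in the `ztp-site-generate` container:

```yaml
# Sketch of an SCTP-enabling MachineConfig for worker nodes: writes a
# modules-load.d entry so the sctp kernel module is loaded at boot.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: custom-load-sctp-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 420
          overwrite: true
          contents:
            source: data:,sctp # URL-encoded contents: the module name
```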
8.3. Deleting a node by using the SiteConfig CR
By using a `SiteConfig` custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node.
Prerequisites
- You have configured the hub cluster to generate the required installation and policy CRs.
- You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application.
Procedure
- Update the `SiteConfig` CR to include the `bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true` annotation and push the changes to the Git repository:

```yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "cnfdf20"
  namespace: "cnfdf20"
spec:
  clusters:
  - nodes:
    - hostName: node6
      role: "worker"
      crAnnotations:
        add:
          BareMetalHost:
            bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true
# ...
```
- Verify that the `BareMetalHost` object is annotated by running the following command:

```
$ oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]'
```

Example output:

```
true
```
- Suppress the generation of the `BareMetalHost` CR by updating the `SiteConfig` CR to include the `crSuppression.BareMetalHost` annotation:

```yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "cnfdf20"
  namespace: "cnfdf20"
spec:
  clusters:
  - nodes:
    - hostName: node6
      role: "worker"
      crSuppression:
      - BareMetalHost
# ...
```
- Push the changes to the Git repository and wait for deprovisioning to start. The status of the `BareMetalHost` CR should change to `deprovisioning`. Wait for the `BareMetalHost` to finish deprovisioning, and be fully deleted.
Verification
- Verify that the `BareMetalHost` and `Agent` CRs for the worker node have been deleted from the hub cluster by running the following commands:

```
$ oc get bmh -n <cluster-ns>
```

```
$ oc get agent -n <cluster-ns>
```
- Verify that the node record has been deleted from the spoke cluster by running the following command:

```
$ oc get nodes
```
Note: If you are working with secrets, deleting a secret too early can cause an issue because Argo CD needs the secret to complete resynchronization after deletion. Delete the secret only after the node cleanup, when the current Argo CD synchronization is complete.
Next Steps
To reprovision a node, delete the changes previously added to the `SiteConfig` CR, push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the `BareMetalHost` CR of the worker node and triggers the re-install of the node.
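For illustration, after you revert the changes, the node entry in the `SiteConfig` CR returns to its original form, with the `crAnnotations` and `crSuppression` fields from the earlier steps removed:

```yaml
# Node entry with the deletion-related fields removed; values match
# the examples earlier in this procedure.
spec:
  clusters:
  - nodes:
    - hostName: node6
      role: "worker"
# ...
```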