Chapter 4. Installing managed clusters with RHACM and SiteConfig resources
You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment.
Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs.
For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation.
Additional resources
4.1. GitOps ZTP and Topology Aware Lifecycle Manager
GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
- Inform policies
- By default, GitOps ZTP creates all policies with a remediation action of inform. These policies cause RHACM to report on the compliance status of clusters relevant to the policies but do not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed cluster(s). This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM.
- Automatic creation of ClusterGroupUpgrade CRs
- To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics:
  - The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace.
  - The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR.
  - The cluster selector includes only the cluster associated with that ManagedCluster CR.
  - The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created.
  - Pre-caching is disabled.
  - Timeout is set to 4 hours (240 minutes).

  The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster.
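A minimal sketch of what such an auto-generated ClusterGroupUpgrade CR might look like for a cluster named example-sno follows. The managed policy names are hypothetical placeholders and the exact fields that TALM sets can differ between versions:

apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-sno          # same name as the ManagedCluster CR
  namespace: ztp-install
spec:
  clusters:
  - example-sno              # selector includes only this cluster
  enable: true
  managedPolicies:           # hypothetical examples of bound policies
  - common-config-policy
  - common-subscriptions-policy
  preCaching: false
  remediationStrategy:
    timeout: 240             # 4 hours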
- Waves
- Each policy generated from a PolicyGenerator or PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR.

  Note: All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenerator or PolicyGenTemplate. The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime.

  The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account.

  Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves.
To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image:
$ grep -r "ztp-deploy-wave" out/source-crs
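For illustration, a source CR included in a policy carries the wave as a metadata annotation. The CR kind and wave value below are only an illustrative sketch; check the actual defaults with the grep command above:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog            # illustrative name
  namespace: openshift-marketplace
  annotations:
    ran.openshift.io/ztp-deploy-wave: "1"   # wave value is an example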
- Phase labels
- The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process.

  When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label.

  For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster.
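If you want to see which phase label a managed cluster currently carries, one way is to list the labels on its ManagedCluster CR on the hub; the cluster name is a placeholder:

$ oc get managedcluster <cluster_name> --show-labels | grep -E 'ztp-running|ztp-done'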
- Linked CRs
- The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources.
4.2. Overview of deploying managed clusters with GitOps ZTP
Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes:
- Installing the host operating system (RHCOS) on a blank server
- Deploying OpenShift Container Platform
- Creating cluster policies and site subscriptions
- Making the necessary network configurations to the server operating system
- Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV
Overview of the managed site installation process
After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically:
- A Discovery image ISO file is generated and booted on the target host.
- When the ISO file successfully boots on the target host it reports the host hardware information to RHACM.
- After all hosts are discovered, OpenShift Container Platform is installed.
- When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster.
- The requested add-on services are installed on the target cluster.
The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster.
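To confirm that discovery completed for a host, you can check for the Agent CR on the hub cluster; the namespace placeholder below is the managed cluster namespace:

$ oc get agent -n <cluster_namespace>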
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads.
4.3. Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace.
Procedure
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno 1
data: 2
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno 3
data:
  .dockerconfigjson: <pull_secret> 4
type: kubernetes.io/dockerconfigjson
- Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
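A minimal sketch of how the secret file might be referenced from that kustomization.yaml, assuming the file sits next to it in the same directory:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- example-sno-secret.yaml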
4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OpenShift Container Platform 4.17, you can only add kernel arguments. You cannot replace or delete kernel arguments.
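For example, a kernel argument entry for the scenario above could look like the following fragment of the InfraEnv spec; the timeout value is illustrative only:

spec:
  kernelArguments:
    - operation: append
      value: rd.net.timeout.carrier=30   # example timeout, tune for your network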
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments.

Save the following YAML in an InfraEnv-example.yaml file:

Note: The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually.

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: "{{ .Cluster.ClusterName }}"
  namespace: "{{ .Cluster.ClusterName }}"
spec:
  clusterRef:
    name: "{{ .Cluster.ClusterName }}"
    namespace: "{{ .Cluster.ClusterName }}"
  kernelArguments:
    - operation: append 1
      value: audit=0 2
    - operation: append
      value: trace=1
  sshAuthorizedKey: "{{ .Site.SshPublicKey }}"
  proxy: "{{ .Cluster.ProxySettings }}"
  pullSecretRef:
    name: "{{ .Site.PullSecretRef.Name }}"
  ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}"
  nmStateConfigLabelSelector:
    matchLabels:
      nmstate-label: "{{ .Cluster.ClusterName }}"
  additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}"
Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure:

~/example-ztp/install
└── site-install
    ├── siteconfig-example.yaml
    ├── InfraEnv-example.yaml
    ...
Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository:

clusters:
  crTemplates:
    InfraEnv: "InfraEnv-example.yaml"
When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments.
Verification
To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
Begin an SSH session with the target host:
$ ssh -i /path/to/privatekey core@<host_name>
View the system’s kernel arguments by using the following command:
$ cat /proc/cmdline
4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP
Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You configured the hub cluster for generating the required installation and policy CRs.
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information.
Note: When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD".

To be ready for provisioning managed clusters, you require the following for each bare-metal host:
- Network connectivity
- Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host.
- Baseboard Management Controller (BMC) details
- GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually.
Procedure
Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name that matches the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml, the cluster name and namespace is example-sno.

Export the cluster namespace by running the following command:
$ export CLUSTERNS=example-sno
Create the namespace:
$ oc create namespace $CLUSTERNS
Create the pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information.

Note: The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace.

Create a SiteConfig CR for your cluster in your local clone of the Git repository:

Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single-node, three-node, and standard clusters:
- example-sno.yaml
- example-3node.yaml
- example-standard.yaml
Change the cluster and host details in the example file to match the type of cluster you want. For example:
Example single-node OpenShift SiteConfig CR
# example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno
---
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.16"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
    - clusterName: "example-sno"
      networkType: "OVNKubernetes"
      # installConfigOverrides is a generic way of passing install-config
      # parameters through the siteConfig. The 'capabilities' field configures
      # the composable openshift feature. In this 'capabilities' setting, we
      # remove all the optional set of components.
      # Notes:
      # - OperatorLifecycleManager is needed for 4.15 and later
      # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
      # - Ingress is needed for 4.16 and later
      installConfigOverrides: |
        {
          "capabilities": {
            "baselineCapabilitySet": "None",
            "additionalEnabledCapabilities": [
              "NodeTuning",
              "OperatorLifecycleManager",
              "Ingress"
            ]
          }
        }
      # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+.
      # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest.
      # extraManifestPath: sno-extra-manifest
      clusterLabels:
        # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
        du-profile: "latest"
        # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
        # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
        common: true
        # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
        group-du-sno: ""
        # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
        # Normally this should match or contain the cluster name so it only applies to a single cluster
        sites: "example-sno"
      clusterNetwork:
        - cidr: 1001:1::/48
          hostPrefix: 64
      machineNetwork:
        - cidr: 1111:2222:3333:4444::/64
      serviceNetwork:
        - 1001:2::/112
      additionalNTPSources:
        - 1111:2222:3333:4444::2
      # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
      # please see Workload Partitioning Feature for a complete guide.
      cpuPartitioningMode: AllNodes
      # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster:
      #crTemplates:
      #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
      nodes:
        - hostName: "example-node1.example.com"
          role: "master"
          # Optionally; This can be used to configure desired BIOS setting on a host:
          #biosConfigRef:
          #  filePath: "example-hw.profile"
          bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
          bmcCredentialsName:
            name: "example-node1-bmh-secret"
          bootMACAddress: "AA:BB:CC:DD:EE:11"
          # Use UEFISecureBoot to enable secure boot.
          bootMode: "UEFISecureBoot"
          rootDeviceHints:
            deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
          # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated.
          # See DiskPartitionContainer.md for more details
          ignitionConfigOverride: |
            {
              "ignition": {
                "version": "3.2.0"
              },
              "storage": {
                "disks": [
                  {
                    "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62",
                    "partitions": [
                      {
                        "label": "var-lib-containers",
                        "sizeMiB": 0,
                        "startMiB": 250000
                      }
                    ],
                    "wipeTable": false
                  }
                ],
                "filesystems": [
                  {
                    "device": "/dev/disk/by-partlabel/var-lib-containers",
                    "format": "xfs",
                    "mountOptions": [
                      "defaults",
                      "prjquota"
                    ],
                    "path": "/var/lib/containers",
                    "wipeFilesystem": true
                  }
                ]
              },
              "systemd": {
                "units": [
                  {
                    "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                    "enabled": true,
                    "name": "var-lib-containers.mount"
                  }
                ]
              }
            }
          nodeNetwork:
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:11"
            config:
              interfaces:
                - name: eno1
                  type: ethernet
                  state: up
                  ipv4:
                    enabled: false
                  ipv6:
                    enabled: true
                    address:
                      # For SNO sites with static IP addresses, the node-specific,
                      # API and Ingress IPs should all be the same and configured on
                      # the interface
                      - ip: 1111:2222:3333:4444::aaaa:1
                        prefix-length: 64
              dns-resolver:
                config:
                  search:
                    - example.com
                  server:
                    - 1111:2222:3333:4444::2
              routes:
                config:
                  - destination: ::/0
                    next-hop-interface: eno1
                    next-hop-address: 1111:2222:3333:4444::1
                    table-id: 254
Note: For more information about BMC addressing, see the "Additional resources" section. The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability.
fields are expanded in the example for ease of readability.-
You can inspect the default set of extra-manifest
MachineConfig
CRs inout/argocd/extra-manifest
. It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example,
sno-extra-manifest/
, and add your custom manifest CRs to this directory. If yourSiteConfig.yaml
refers to this directory in theextraManifestPath
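For illustration, a minimal sketch of how the extraManifestPath field might be set in the SiteConfig CR, assuming the sno-extra-manifest/ directory sits next to the SiteConfig file in the Git repository:

clusters:
  - clusterName: "example-sno"
    # directory of additional install-time manifests, relative to the SiteConfig file
    extraManifestPath: sno-extra-manifest/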
Enabling the crun OCI container runtime

For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters.

Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot.

The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline".
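For reference, a minimal sketch of a ContainerRuntimeConfig CR that sets crun as the default runtime for the master pool is shown below. Prefer extracting the shipped enable-crun-master.yaml and enable-crun-worker.yaml files from the container rather than authoring this by hand, because details can differ between releases:

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-master
spec:
  machineConfigPoolSelector:
    matchLabels:
      # selects the master machine config pool
      pools.operator.machineconfiguration.openshift.io/master: ""
  containerRuntimeConfig:
    defaultRuntime: crun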
- Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml.
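A minimal sketch of such a kustomization.yaml, assuming the SiteConfig file and the secrets file from the earlier step live in the same directory; the file names are illustrative:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- example-sno.yaml          # the SiteConfig CR
resources:
- example-sno-secret.yaml   # plain CRs such as secrets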
Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes.

The ArgoCD pipeline detects the changes and begins the managed cluster deployment.
Verification
Verify that the custom roles and labels are applied after the node is deployed:
$ oc describe node example-node.example.com
Example output
Name: example-node.example.com
Roles: control-plane,example-label,master,worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
custom-label/parameter1=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/example-label= 1
node-role.kubernetes.io/master=
node-role.kubernetes.io/worker=
node.openshift.io/os_id=rhcos
- 1
- The custom label is applied to the node.
Additional resources
4.5.1. Accelerated provisioning of GitOps ZTP
Accelerated provisioning of GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can reduce the time taken for cluster installation by using accelerated provisioning of GitOps ZTP for single-node OpenShift. Accelerated ZTP speeds up installation by applying Day 2 manifests derived from policies at an earlier stage.
Accelerated provisioning of GitOps ZTP is supported only when installing single-node OpenShift with Assisted Installer. Otherwise this installation method will fail.
4.5.1.1. Activating accelerated ZTP
You can activate accelerated ZTP using the spec.clusters.clusterLabels.accelerated-ztp label, as in the following example:

Example accelerated ZTP SiteConfig CR

apiVersion: ran.openshift.io/v2
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.10"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
    # ...
    clusterLabels:
      common: true
      group-du-sno: ""
      sites: "example-sno"
      accelerated-ztp: full
You can use accelerated-ztp: full to fully automate the accelerated process. GitOps ZTP updates the AgentClusterInstall resource with a reference to the accelerated GitOps ZTP ConfigMap, and includes resources extracted from policies by TALM and accelerated ZTP job manifests.
If you use accelerated-ztp: partial, GitOps ZTP does not include the accelerated job manifests, but includes policy-derived objects created during the cluster installation of the following kind types:
- PerformanceProfile.performance.openshift.io
- Tuned.tuned.openshift.io
- Namespace
- CatalogSource.operators.coreos.com
- ContainerRuntimeConfig.machineconfiguration.openshift.io
This partial acceleration can reduce the number of reboots done by the node when applying resources of the kind PerformanceProfile, Tuned, and ContainerRuntimeConfig. TALM installs the Operator subscriptions derived from policies after RHACM completes the import of the cluster, following the same flow as standard GitOps ZTP.
The benefits of accelerated ZTP increase with the scale of your deployment. Using accelerated-ztp: full gives more benefit on a large number of clusters. With a smaller number of clusters, the reduction in installation time is less significant. Full accelerated ZTP leaves behind a namespace and a completed job on the spoke that need to be manually removed.
One benefit of using accelerated-ztp: partial is that you can override the functionality of the on-spoke job if something goes wrong with the stock implementation or if you require custom functionality.
4.5.1.2. The accelerated ZTP process
Accelerated ZTP uses an additional ConfigMap to create the resources derived from policies on the spoke cluster. The standard ConfigMap includes manifests that the GitOps ZTP workflow uses to customize cluster installs.

TALM detects that the accelerated-ztp label is set and then creates a second ConfigMap. As part of accelerated ZTP, the SiteConfig generator adds a reference to that second ConfigMap using the naming convention <spoke-cluster-name>-aztp.

After TALM creates that second ConfigMap, it finds all policies bound to the managed cluster and extracts the GitOps ZTP profile information. TALM adds the GitOps ZTP profile information to the <spoke-cluster-name>-aztp ConfigMap custom resource (CR) and applies the CR to the hub cluster API.
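If you want to confirm that the second ConfigMap was created for a given spoke cluster, one way is to check for it on the hub cluster; this assumes the ConfigMap is created in the cluster namespace, and the name placeholder follows the convention above:

$ oc get configmap <spoke_cluster_name>-aztp -n <spoke_cluster_name>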
4.5.2. Configuring IPsec encryption for single-node OpenShift clusters using GitOps ZTP and SiteConfig resources
You can enable IPsec encryption in managed single-node OpenShift clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
You can also configure IPsec encryption for single-node OpenShift clusters with an additional worker node by following this procedure. It is recommended to use the MachineConfig custom resource (CR) to configure IPsec encryption for single-node OpenShift clusters and single-node OpenShift clusters with an additional worker node because of their low resource availability.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
- You have installed the butane utility version 0.20.0 or later.
- You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
Procedure
-
Extract the latest version of the
ztp-site-generate
container source and merge it with your repository where you manage your custom site configuration data. Configure
optional-extra-manifest/ipsec/ipsec-endpoint-config.yaml
with the required values that configure IPsec in the cluster. For example:interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel
- 1
- The value of this field must match with the name of the certificate used on the remote system.
- 2
- Replace <external_host> with the external host IP address or DNS hostname.
- 3
- Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel.
- 4
- Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated.
Add the following certificates to the
optional-extra-manifest/ipsec
folder:-
left_server.p12
: The certificate bundle for the IPsec endpoints ca.pem
: The certificate authority that you signed your certificates withThe certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps.
-
-
Open a shell prompt at the
optional-extra-manifest/ipsec
folder of the Git repository where you maintain your custom site configuration data. Run the
optional-extra-manifest/ipsec/build.sh
script to generate the required Butane andMachineConfig
CRs files.If the PKCS#12 certificate is protected with a password, set the
-W
argument.Example output
out
 └── argocd
      └── example
           └── optional-extra-manifest
                └── ipsec
                     ├── 99-ipsec-master-endpoint-config.bu 1
                     ├── 99-ipsec-master-endpoint-config.yaml 2
                     ├── 99-ipsec-worker-endpoint-config.bu 3
                     ├── 99-ipsec-worker-endpoint-config.yaml 4
                     ├── build.sh
                     ├── ca.pem 5
                     ├── left_server.p12 6
                     ├── enable-ipsec.yaml
                     ├── ipsec-endpoint-config.yml
                     └── README.md
Create a custom-manifest/ folder in the repository where you manage your custom site configuration data. Add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. For example:

siteconfig
├── site1-sno-du.yaml
├── extra-manifest/
└── custom-manifest
    ├── enable-ipsec.yaml
    ├── 99-ipsec-worker-endpoint-config.yaml
    └── 99-ipsec-master-endpoint-config.yaml
In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field. For example:

clusters:
- clusterName: "site1-sno-du"
  networkType: "OVNKubernetes"
  extraManifests:
    searchPaths:
    - extra-manifest/
    - custom-manifest/
Commit the SiteConfig CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.

The Argo CD pipeline detects the changes and begins the managed cluster deployment.

During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the custom-manifest/ directory to the default set of extra manifests stored in the extra-manifest/ directory.
Verification
For information about verifying the IPsec encryption, see "Verifying the IPsec encryption".
4.5.3. Configuring IPsec encryption for multi-node clusters using GitOps ZTP and SiteConfig resources
You can enable IPsec encryption in managed multi-node clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
- You have installed the butane utility version 0.20.0 or later.
- You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
- You have installed the NMState Operator.
Procedure
-
Extract the latest version of the
ztp-site-generate
container source and merge it with your repository where you manage your custom site configuration data. Configure the
optional-extra-manifest/ipsec/ipsec-config-policy.yaml
file with the required values that configure IPsec in the cluster.ConfigurationPolicy
object for creating an IPsec configurationapiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup "v1" "Node" "" "").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel
- 1
- The value of this field must match with the name of the certificate used on the remote system.
- 2
- Replace <external_host> with the external host IP address or DNS hostname.
- 3
- Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel.
- 4
- Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated.
Add the following certificates to the
optional-extra-manifest/ipsec
folder:-
left_server.p12
: The certificate bundle for the IPsec endpoints ca.pem
: The certificate authority that you signed your certificates withThe certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps.
-
-
Open a shell prompt at the
optional-extra-manifest/ipsec
folder of the Git repository where you maintain your custom site configuration data. Run the
optional-extra-manifest/ipsec/import-certs.sh
script to generate the required Butane andMachineConfig
CRs to import the external certs.If the PKCS#12 certificate is protected with a password, set the
-W
argument.Example output
out
 └── argocd
      └── example
           └── optional-extra-manifest
                └── ipsec
                     ├── 99-ipsec-master-import-certs.bu 1
                     ├── 99-ipsec-master-import-certs.yaml 2
                     ├── 99-ipsec-worker-import-certs.bu 3
                     ├── 99-ipsec-worker-import-certs.yaml 4
                     ├── import-certs.sh
                     ├── ca.pem 5
                     ├── left_server.p12 6
                     ├── enable-ipsec.yaml
                     ├── ipsec-config-policy.yaml
                     └── README.md
Create a custom-manifest/ folder in the repository where you manage your custom site configuration data and add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory.

Example siteconfig directory

siteconfig
├── site1-mno-du.yaml
├── extra-manifest/
└── custom-manifest
    ├── enable-ipsec.yaml
    ├── 99-ipsec-master-import-certs.yaml
    └── 99-ipsec-worker-import-certs.yaml
In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field, as in the following example:

clusters:
- clusterName: "site1-mno-du"
  networkType: "OVNKubernetes"
  extraManifests:
    searchPaths:
    - extra-manifest/
    - custom-manifest/
- Include the ipsec-config-policy.yaml config policy file in the source-crs directory in GitOps and reference the file in one of the PolicyGenerator CRs, for example as in the sketch below.
CRs. Commit the
SiteConfig
CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.The Argo CD pipeline detects the changes and begins the managed cluster deployment.
During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the
custom-manifest/
directory to the default set of extra manifests stored in theextra-manifest/
directory.
Verification
For information about verifying the IPsec encryption, see "Verifying the IPsec encryption".
4.5.4. Verifying the IPsec encryption
You can verify that the IPsec encryption is successfully applied in a managed OpenShift Container Platform cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have configured the IPsec encryption.
Procedure
Start a debug pod for the managed cluster by running the following command:
$ oc debug node/<node_name>
Check that the IPsec policy is applied in the cluster node by running the following command:
sh-5.1# ip xfrm policy
Example output
src 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel
Check that the IPsec tunnel is up and connected by running the following command:
sh-5.1# ip xfrm state
Example output
src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000
Ping a known IP in the external host subnet by running the following command. For example, ping an IP address in the rightsubnet range that you set in the ipsec/ipsec-endpoint-config.yaml file:

sh-5.1# ping 172.16.110.8
Example output
PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data.
64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms
64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms
4.5.5. Single-node OpenShift SiteConfig CR installation reference
SiteConfig CR field | Description |
---|---|
|
Configure workload partitioning by setting the value for Note
Configuring workload partitioning by using the |
|
Set |
|
Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run |
|
Set the Important
Use the reference configuration as specified in the example |
|
Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the |
|
Configure cluster labels to correspond to the binding rules in the
For example, |
|
Optional. Set |
| Configure this field to enable disk encryption with Trusted Platform Module (TPM) and Platform Configuration Registers (PCRs) protection. For more information, see "About disk encryption with TPM and PCR protection". Note
Configuring disk encryption by using the |
|
Set the disk encryption type to |
| Configure the Platform Configuration Registers (PCRs) protection for disk encryption. |
| Configure the list of Platform Configuration Registers (PCRs) to be used for disk encryption. You must use PCR registers 1 and 7. |
|
For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with |
| Specify custom roles for your nodes in your managed clusters. These are additional roles that are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. |
|
Optional. Uncomment and set the value to |
| BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. |
| BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. |
|
Configure the |
|
Set the boot mode for the host to |
|
Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, |
| Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. |
| Configure the network settings for the node. |
| Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. |
Additional resources
- About disk encryption with TPM and PCR protection.
- Customizing extra installation manifests in the GitOps ZTP pipeline
- Preparing the GitOps ZTP site configuration repository
- Configuring the hub cluster with ArgoCD
- Signalling GitOps ZTP cluster deployment completion with validator inform policies
- Creating the managed bare-metal host secrets
- BMC addressing
- About root device hints
4.6. Managing host firmware settings with GitOps ZTP
Hosts require the correct firmware configuration to ensure high performance and optimal efficiency. You can deploy custom host firmware configurations for managed clusters with GitOps ZTP.
Tune hosts with specific hardware profiles in your lab and ensure they are optimized for your requirements. When you have completed host tuning to your satisfaction, you extract the host profile and save it in your GitOps ZTP repository. Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with GitOps ZTP.
You specify the required hardware profiles in SiteConfig custom resources (CRs) that you use to deploy the managed clusters. The GitOps ZTP pipeline generates the required HostFirmwareSettings (HFS) and BareMetalHost (BMH) CRs that are applied to the hub cluster.
Use the following best practices to manage your host firmware profiles.
- Identify critical firmware settings with hardware vendors
- Work with hardware vendors to identify and document critical host firmware settings required for optimal performance and compatibility with the deployed host platform.
- Use common firmware configurations across similar hardware platforms
- Where possible, use a standardized host firmware configuration across similar hardware platforms to reduce complexity and potential errors during deployment.
- Test firmware configurations in a lab environment
- Test host firmware configurations in a controlled lab environment before deploying in production to ensure that settings are compatible with hardware, firmware, and software.
- Manage firmware profiles in source control
- Manage host firmware profiles in Git repositories to track changes, ensure consistency, and facilitate collaboration with vendors.
Additional resources
4.6.1. Retrieving the host firmware schema for a managed cluster
You can discover the host firmware schema for managed clusters. The host firmware schema for bare-metal hosts is populated with information that the Ironic API returns. The API returns information about host firmware interfaces, including firmware setting types, allowable values, ranges, and flags.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges.
- You have provisioned a cluster that is managed by RHACM.
Procedure
Discover the host firmware schema for the managed cluster. Run the following command:
$ oc get firmwareschema -n <managed_cluster_namespace> -o yaml
Example output
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: FirmwareSchema
  metadata:
    creationTimestamp: "2024-09-11T10:29:43Z"
    generation: 1
    name: schema-40562318
    namespace: compute-1
    ownerReferences:
    - apiVersion: metal3.io/v1alpha1
      kind: HostFirmwareSettings
      name: compute-1.example.com
      uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9
    resourceVersion: "280057624"
    uid: 511ad25d-f1c9-457b-9a96-776605c7b887
  spec:
    schema:
      AccessControlService:
        allowable_values:
        - Enabled
        - Disabled
        attribute_type: Enumeration
        read_only: false
# ...
4.6.2. Retrieving the host firmware settings for a managed cluster
You can retrieve the host firmware settings for managed clusters. This is useful when you have deployed changes to the host firmware and you want to monitor the changes and ensure that they are applied successfully.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges.
- You have provisioned a cluster that is managed by RHACM.
Procedure
Retrieve the host firmware settings for the managed cluster. Run the following command:
$ oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yaml
Example output
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: HostFirmwareSettings
  metadata:
    creationTimestamp: "2024-09-11T10:29:43Z"
    generation: 1
    name: compute-1.example.com
    namespace: kni-qe-24
    ownerReferences:
    - apiVersion: metal3.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: BareMetalHost
      name: compute-1.example.com
      uid: 0baddbb7-bb34-4224-8427-3d01d91c9287
    resourceVersion: "280057626"
    uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9
  spec:
    settings: {}
  status:
    conditions:
    - lastTransitionTime: "2024-09-11T10:29:43Z"
      message: ""
      observedGeneration: 1
      reason: Success
      status: "True" 1
      type: ChangeDetected
    - lastTransitionTime: "2024-09-11T10:29:43Z"
      message: Invalid BIOS setting
      observedGeneration: 1
      reason: ConfigurationError
      status: "False" 2
      type: Valid
    lastUpdated: "2024-09-11T10:29:43Z"
    schema:
      name: schema-40562318
      namespace: compute-1
    settings: 3
      AccessControlService: Enabled
      AcpiHpet: Enabled
      AcpiRootBridgePxm: Enabled
# ...
Optional: Check the status of the HostFirmwareSettings (hfs) custom resource in the cluster:

$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="ChangeDetected")].status}'
Example output
True
Optional: Check for invalid firmware settings in the cluster host. Run the following command:
$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'
Example output
False
4.6.3. Deploying user-defined firmware to cluster hosts with GitOps ZTP
You can deploy user-defined firmware settings to cluster hosts by configuring the SiteConfig custom resource (CR) to include a hardware profile that you want to apply during cluster host provisioning. You can configure hardware profiles to apply to hosts in the following scenarios:
- All hosts site-wide
- Only cluster hosts that meet certain criteria
- Individual cluster hosts
You can configure host hardware profiles to be applied in a hierarchy. Cluster-level settings override site-wide settings. Node level profiles override cluster and site-wide settings.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges.
- You have provisioned a cluster that is managed by RHACM.
- You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
Procedure
Create the host firmware profile that contains the firmware settings you want to apply. For example, create the following YAML file:

host-firmware.profile

BootMode: Uefi
LogicalProc: Enabled
ProcVirtualization: Enabled
Save the hardware profile YAML file relative to the kustomization.yaml file that you use to define how to provision the cluster, for example:

example-ztp/install
└── site-install
    ├── siteconfig-example.yaml
    ├── kustomization.yaml
    └── host-firmware.profile
Edit the SiteConfig CR to include the firmware profile that you want to apply in the cluster. For example:

apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "site-plan-cluster"
  namespace: "example-cluster-namespace"
spec:
  baseDomain: "example.com"
  # ...
  biosConfigRef:
    filePath: "./host-firmware.profile" 1
- 1
- Applies the hardware profile to all cluster hosts site-wide
Note: Where possible, use a single SiteConfig CR per cluster.

Optional: To apply a hardware profile to hosts in a specific cluster, update clusters.biosConfigRef.filePath with the hardware profile that you want to apply. For example:

clusters:
- clusterName: "cluster-1"
  # ...
  biosConfigRef:
    filePath: "./host-firmware.profile" 1
- 1
- Applies to all hosts in the cluster-1 cluster
Optional: To apply a hardware profile to a specific host in the cluster, update clusters.nodes.biosConfigRef.filePath with the hardware profile that you want to apply. For example:

clusters:
- clusterName: "cluster-1"
  # ...
  nodes:
  - hostName: "compute-1.example.com"
    # ...
    bootMode: "UEFI"
    biosConfigRef:
      filePath: "./host-firmware.profile" 1
- 1
- Applies the firmware profile to the compute-1.example.com host in the cluster
Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes.

The ArgoCD pipeline detects the changes and begins the managed cluster deployment.
NoteCluster deployment proceeds even if an invalid firmware setting is detected. To apply a correction using GitOps ZTP, re-deploy the cluster with the corrected hardware profile.
Verification
Check that the firmware settings have been applied in the managed cluster host. For example, run the following command:
$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'
Example output
True
4.7. Monitoring managed cluster installation progress
The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs them with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
When the synchronization is complete, the installation generally proceeds as follows:
The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands:
Export the cluster name:
$ export CLUSTER=<clusterName>
Query the AgentClusterInstall CR for the managed cluster:

$ oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq
Get the installation events for the cluster:
$ curl -sk $(oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'
4.8. Troubleshooting GitOps ZTP by validating the installation CRs
The ArgoCD pipeline uses the SiteConfig and PolicyGenerator or PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
Check that the installation CRs were created by using the following command:
$ oc get AgentClusterInstall -n <cluster_name>
If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs.

Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster:

$ oc get managedcluster
If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster:

$ oc describe -n openshift-gitops application clusters
Check the Status.Conditions field to view the error logs for the managed cluster. For example, setting an invalid value for extraManifestPath in the SiteConfig CR raises the following error:

Status:
  Conditions:
    Last Transition Time:  2021-11-26T17:21:39Z
    Message:  rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1
    Type:  ComparisonError
Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error:

Status:
  Sync:
    Compared To:
      Destination:
        Namespace:  clusters-sub
        Server:     https://kubernetes.default.svc
      Source:
        Path:             sites-config
        Repo URL:         https://git.com/ran-sites/siteconfigs/.git
        Target Revision:  master
    Status:  Unknown
4.9. Troubleshooting GitOps ZTP virtual media booting on SuperMicro servers
SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
Disable TLS in the Provisioning resource by running the following command:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}'
- Continue the steps to deploy your single-node OpenShift cluster.
4.10. Removing a managed cluster site from the GitOps ZTP pipeline
You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenerator or PolicyGenTemplate files from the kustomization.yaml file.
- Add the following syncOptions field to your SiteConfig application:

kind: Application
spec:
  syncPolicy:
    syncOptions:
    - PrunePropagationPolicy=background
When you run the GitOps ZTP pipeline again, the generated CRs are removed.
- Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenerator or PolicyGenTemplate files from the Git repository.
- Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenerator or PolicyGenTemplate CRs in the Git repository.
Additional resources
- For information about removing a cluster, see Removing a cluster from management.
4.11. Removing obsolete content from the GitOps ZTP pipeline
If a change to the PolicyGenerator or PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Remove the affected PolicyGenerator or PolicyGenTemplate files from the Git repository, commit and push to the remote repository.
- Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster.
- Add the updated PolicyGenerator or PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository.

Note: Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and the CRs managed by that policy remain in place on the managed cluster.
- Optional: As an alternative, after making changes to PolicyGenerator or PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command:

$ oc delete policy -n <namespace> <policy_name>
4.12. Tearing down the GitOps ZTP pipeline
You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
Procedure
- Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
Delete the kustomization.yaml file in the deployment directory using the following command:

$ oc delete -k out/argocd/deployment
- Commit and push your changes to the site repository.