Chapter 5. Manually installing a single-node OpenShift cluster with GitOps ZTP
You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, use the SiteConfig method described in Deploying far edge sites with ZTP.
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads.
5.1. Generating GitOps ZTP installation and configuration CRs manually
Use the generator entrypoint for the `ztp-site-generate` container to generate the site installation and configuration custom resources (CRs) for a cluster based on `SiteConfig` and `PolicyGenerator` CRs.
SiteConfig v1 is deprecated starting with OpenShift Container Platform version 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the ClusterInstance custom resource. For more information, see Procedure to transition from SiteConfig CRs to the ClusterInstance API.
For more information about the SiteConfig Operator, see SiteConfig.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
Procedure
1. Create an output folder by running the following command:

   ```bash
   $ mkdir -p ./out
   ```

2. Export the `argocd` directory from the `ztp-site-generate` container image:

   ```bash
   $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 extract /home/ztp --tar | tar x -C ./out
   ```

   The `./out` directory has the reference `PolicyGenerator` and `SiteConfig` CRs in the `out/argocd/example/` folder.
3. Create an output folder for the site installation CRs:

   ```bash
   $ mkdir -p ./site-install
   ```

4. Modify the example `SiteConfig` CR for the cluster type that you want to install. Copy `example-sno.yaml` to `site-1-sno.yaml` and modify the CR to match the details of the site and bare-metal host that you want to install.
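   For example, the following is a minimal sketch of a single-node `SiteConfig` CR; the cluster name, domain, image set, BMC address, MAC address, and network values are hypothetical placeholders that you replace with your site details:

   ```yaml
   apiVersion: ran.openshift.io/v1
   kind: SiteConfig
   metadata:
     name: "site-1-sno"
     namespace: "site-1-sno"
   spec:
     baseDomain: "example.com"
     pullSecretRef:
       name: "assisted-deployment-pull-secret"
     clusterImageSetNameRef: "openshift-4.18"
     sshPublicKey: "ssh-rsa AAAA..."
     clusters:
       - clusterName: "site-1-sno"
         networkType: "OVNKubernetes"
         clusterNetwork:
           - cidr: 10.128.0.0/14
             hostPrefix: 23
         machineNetwork:
           - cidr: 192.168.1.0/24
         serviceNetwork:
           - 172.30.0.0/16
         nodes:
           - hostName: "node1.example.com"
             role: "master"
             # BMC address and credentials secret for the target host
             bmcAddress: "idrac-virtualmedia://192.168.1.10/redfish/v1/Systems/System.Embedded.1"
             bmcCredentialsName:
               name: "site-1-sno-bmc-secret"
             bootMACAddress: "AA:BB:CC:DD:EE:FF"
             bootMode: "UEFI"
   ```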
   Note: After you extract the reference CR configuration files from the `out/extra-manifest` directory of the `ztp-site-generate` container, you can use `extraManifests.searchPaths` to include the path to the Git directory containing those files. This allows the GitOps ZTP pipeline to apply those CR files during cluster installation. If you configure a `searchPaths` directory, the GitOps ZTP pipeline does not fetch manifests from the `ztp-site-generate` container during site installation.

5. Generate the Day 0 installation CRs by processing the modified `SiteConfig` CR `site-1-sno.yaml`:
   ```bash
   $ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 generator install site-1-sno.yaml /output
   ```

   The generated installation CRs are written to a folder named after the cluster under `./site-install`.
6. Optional: Generate just the Day 0 `MachineConfig` installation CRs for a particular cluster type by processing the reference `SiteConfig` CR with the `-E` option. For example, run the following commands:

   1. Create an output folder for the `MachineConfig` CRs:

      ```bash
      $ mkdir -p ./site-machineconfig
      ```

   2. Generate the `MachineConfig` installation CRs:

      ```bash
      $ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 generator install -E site-1-sno.yaml /output
      ```

      Example output:

      ```
      site-machineconfig
      └── site-1-sno
          ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
          ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
          └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
      ```
7. Generate and export the Day 2 configuration CRs by using the reference `PolicyGenerator` CRs from the previous step. Run the following commands:

   1. Create an output folder for the Day 2 CRs:

      ```bash
      $ mkdir -p ./ref
      ```

   2. Generate and export the Day 2 configuration CRs:

      ```bash
      $ podman run -it --rm -v `pwd`/out/argocd/example/acmpolicygenerator:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 generator config -N . /output
      ```

      The command generates example group and site-specific `PolicyGenerator` CRs for single-node OpenShift, three-node clusters, and standard clusters in the `./ref` folder.
8. Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". You can apply the configuration CRs to the cluster after cluster installation is complete.
Verification
Verify that the custom roles and labels are applied after the node is deployed:
```bash
$ oc describe node example-node.example.com
```

In the command output, verify that the custom label that you configured in the `SiteConfig` CR appears in the node's `Labels` field, and that any custom role appears under `Roles`.
5.2. Creating the managed bare-metal host secrets
Add the required `Secret` custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the `SiteConfig` CR by name. The namespace must match the `SiteConfig` namespace.
Procedure
1. Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift Container Platform and all add-on cluster Operators. Save the YAML as the file `example-sno-secret.yaml`.
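   The following is a minimal sketch, assuming the cluster namespace `example-sno`; the secret names are placeholders and must match the `bmcCredentialsName` and `pullSecretRef` values that your `SiteConfig` CR references:

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: example-sno-bmc-secret
     namespace: example-sno
   data:
     # Base64-encoded BMC username and password for the target host
     username: <base64_username>
     password: <base64_password>
   type: Opaque
   ---
   apiVersion: v1
   kind: Secret
   metadata:
     name: assisted-deployment-pull-secret
     namespace: example-sno
   data:
     # Base64-encoded image pull secret for the registry
     .dockerconfigjson: <base64_pull_secret>
   type: kubernetes.io/dockerconfigjson
   ```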
2. Add the relative path to `example-sno-secret.yaml` to the `kustomization.yaml` file that you use to install the cluster.
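   For example, the following is a minimal `kustomization.yaml` sketch, assuming the `SiteConfig` file is listed under `generators` and the secret file sits in the same directory under `resources`:

   ```yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   generators:
     - example-sno.yaml           # SiteConfig CR for the site (hypothetical file name)
   resources:
     - example-sno-secret.yaml    # BMC and pull secrets for the site
   ```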
5.3. Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the `InfraEnv` resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the `rd.net.timeout.carrier` kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OpenShift Container Platform 4.18, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
- You have manually generated the installation and configuration custom resources (CRs).
Procedure
1. Edit the `spec.kernelArguments` specification in the `InfraEnv` CR to configure kernel arguments.
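   The following is a minimal sketch of the relevant `InfraEnv` fields, assuming hypothetical `audit=0` and `trace=1` arguments:

   ```yaml
   apiVersion: agent-install.openshift.io/v1beta1
   kind: InfraEnv
   metadata:
     name: <cluster_name>
     namespace: <cluster_name>
   spec:
     kernelArguments:
       - operation: append   # only append operations are supported in 4.18
         value: audit=0
       - operation: append
         value: trace=1
     clusterRef:
       name: <cluster_name>
       namespace: <cluster_name>
     pullSecretRef:
       name: pull-secret
     sshAuthorizedKey: <ssh_public_key>
   ```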
   The `SiteConfig` CR generates the `InfraEnv` resource as part of the day-0 installation CRs.
Verification
To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the `/proc/cmdline` file.
1. Begin an SSH session with the target host:

   ```bash
   $ ssh -i /path/to/privatekey core@<host_name>
   ```

2. View the system's kernel arguments:

   ```bash
   $ cat /proc/cmdline
   ```
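   Assuming the hypothetical `audit=0` and `trace=1` arguments from the earlier sketch, the output includes the appended arguments, for example `audit=0 trace=1`, at the end of the kernel command line.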
5.4. Installing a single managed cluster
You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM).
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have logged in to the hub cluster as a user with `cluster-admin` privileges.
- You have created the baseboard management controller (BMC) `Secret` and the image pull-secret `Secret` custom resources (CRs). See "Creating the managed bare-metal host secrets" for details.
- Your target bare-metal host meets the networking and hardware requirements for managed clusters.
Procedure
1. Create a `ClusterImageSet` CR for each specific cluster version to be deployed, for example `clusterImageSet-4.18.yaml`.
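   A `ClusterImageSet` has the following format; this is a minimal sketch, assuming a hypothetical 4.18.0 release image, so set `releaseImage` to the exact version that you deploy:

   ```yaml
   apiVersion: hive.openshift.io/v1
   kind: ClusterImageSet
   metadata:
     name: openshift-4.18
   spec:
     # Release image for the target OpenShift Container Platform version
     releaseImage: quay.io/openshift-release-dev/ocp-release:4.18.0-x86_64
   ```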
2. Apply the `clusterImageSet` CR:

   ```bash
   $ oc apply -f clusterImageSet-4.18.yaml
   ```

3. Create the `Namespace` CR in the `cluster-namespace.yaml` file.
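   The following is a minimal sketch; `<cluster_name>` is the name of the managed cluster, and the namespace name must match the cluster name:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: <cluster_name>
     labels:
       name: <cluster_name>
   ```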
4. Apply the `Namespace` CR by running the following command:

   ```bash
   $ oc apply -f cluster-namespace.yaml
   ```

5. Apply the generated day-0 CRs that you extracted from the `ztp-site-generate` container and customized to meet your requirements:

   ```bash
   $ oc apply -R ./site-install/site-1-sno
   ```
5.5. Monitoring the managed cluster installation status
Ensure that cluster provisioning was successful by checking the cluster status.
Prerequisites
- All of the custom resources have been configured and provisioned, and the `Agent` custom resource is created on the hub for the managed cluster.
Procedure
1. Check the status of the managed cluster:

   ```bash
   $ oc get managedcluster
   ```

   `True` in the `AVAILABLE` column indicates that the managed cluster is ready.

2. Check the agent status:
   ```bash
   $ oc get agent -n <cluster_name>
   ```

3. Use the `describe` command to provide an in-depth description of the agent's condition. Statuses to be aware of include `BackendError`, `InputError`, `ValidationsFailing`, `InstallationFailed`, and `AgentIsConnected`. These statuses are relevant to the `Agent` and `AgentClusterInstall` custom resources.

   ```bash
   $ oc describe agent -n <cluster_name>
   ```

4. Check the cluster provisioning status:
   ```bash
   $ oc get agentclusterinstall -n <cluster_name>
   ```

5. Use the `describe` command to provide an in-depth description of the cluster provisioning status:

   ```bash
   $ oc describe agentclusterinstall -n <cluster_name>
   ```

6. Check the status of the managed cluster's add-on services:

   ```bash
   $ oc get managedclusteraddon -n <cluster_name>
   ```

7. Retrieve the authentication information of the `kubeconfig` file for the managed cluster:

   ```bash
   $ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
   ```
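   You can then run commands against the managed cluster by passing the exported file to `oc`, for example:

   ```bash
   $ oc --kubeconfig <directory>/<cluster_name>-kubeconfig get nodes
   ```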
5.6. Troubleshooting the managed cluster
Use this procedure to diagnose any installation issues that might occur with the managed cluster.
Procedure
1. Check the status of the managed cluster:

   ```bash
   $ oc get managedcluster
   ```

   Example output:

   ```
   NAME          HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
   SNO-cluster   true                                  True     True        2d19h
   ```

   - If the status in the `AVAILABLE` column is `True`, the managed cluster is being managed by the hub.
   - If the status in the `AVAILABLE` column is `Unknown`, the managed cluster is not being managed by the hub. Use the following steps to get more information.

2. Check the `AgentClusterInstall` install status:
   ```bash
   $ oc get clusterdeployment -n <cluster_name>
   ```

   Example output:

   ```
   NAME      PLATFORM          REGION   CLUSTERTYPE   INSTALLED   INFRAID   VERSION   POWERSTATE    AGE
   Sno0026   agent-baremetal                          false                           Initialized   2d14h
   ```

   If the status in the `INSTALLED` column is `false`, the installation was unsuccessful.

3. If the installation failed, enter the following command to review the status of the `AgentClusterInstall` resource:

   ```bash
   $ oc describe agentclusterinstall -n <cluster_name> <cluster_name>
   ```

4. Resolve the errors and reset the cluster:
   1. Remove the cluster's `ManagedCluster` resource:

      ```bash
      $ oc delete managedcluster <cluster_name>
      ```

   2. Remove the cluster's namespace:

      ```bash
      $ oc delete namespace <cluster_name>
      ```

      This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the `ManagedCluster` CR deletion to complete before proceeding.

   3. Recreate the custom resources for the managed cluster.
5.7. RHACM generated cluster installation CRs reference
Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate by using `SiteConfig` CRs for each site.
Every managed cluster has its own namespace, and all of the installation CRs except for `ManagedCluster` and `ClusterImageSet` are under that namespace. `ManagedCluster` and `ClusterImageSet` are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.
The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters by using the `SiteConfig` CRs that you configure.
| CR | Description | Usage |
|---|---|---|
| `BareMetalHost` | Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. | Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. |
| `InfraEnv` | Contains information for installing OpenShift Container Platform on the target bare-metal host. | Used with `ClusterDeployment` to generate the discovery ISO for the managed cluster. |
| `AgentClusterInstall` | Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster `kubeconfig` and credentials when the installation is complete. | Specifies the managed cluster configuration information and provides status during the installation of the cluster. |
| `ClusterDeployment` | References the `AgentClusterInstall` CR. | Used with `InfraEnv`, `AgentClusterInstall`, and `ClusterImageSet` to start the installation process. |
| `NMStateConfig` | Provides network configuration information such as the MAC address to IP mapping, the DNS server, the default route, and other network settings. | Sets up a static IP address for the managed cluster's Kube API server. |
| `Agent` | Contains hardware information about the target bare-metal host. | Created automatically on the hub when the target machine's discovery image boots. |
| `ManagedCluster` | When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. | The hub uses this resource to manage and show the status of managed clusters. |
| `KlusterletAddonConfig` | Contains the list of services provided by the hub to be deployed to the `ManagedCluster` resource. | Tells the hub which addon services to deploy to the `ManagedCluster` resource. |
| `Namespace` | Logical space for `ManagedCluster` resources existing on the hub. Unique per site. | Propagates resources to the `ManagedCluster`. |
| `Secret` | Two CRs are created: `BMC Secret` and `Image Pull Secret`. | `BMC Secret` authenticates into the target bare-metal host by using its username and password. `Image Pull Secret` contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. |
| `ClusterImageSet` | Contains OpenShift Container Platform image information such as the repository and image name. | Passed into resources to provide OpenShift Container Platform images. |