Chapter 5. Manually installing a single-node OpenShift cluster with GitOps ZTP
You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, use the ClusterInstance method described in Deploying far edge sites with ZTP.
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended cluster configuration for vDU application workloads.
5.1. Extracting reference and example CRs from the ztp-site-generate container
Use the ztp-site-generate container to extract reference custom resources (CRs) and example ClusterInstance CRs to prepare for cluster installation and Day 2 configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have installed podman.
Procedure
1. Create an output folder by running the following command:

   $ mkdir -p ./out

2. Log in to the Ecosystem container registry with your credentials by running the following command:

   $ podman login registry.redhat.io

3. Extract the reference and example CRs from the ztp-site-generate container image by running the following command:

   $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.21 extract /home/ztp --tar | tar x -C ./out

   The ./out directory contains the reference PolicyGenerator and ClusterInstance CRs in the out/argocd/example/ folder.

   Example output

   out
   └── argocd
       └── example
           ├── acmpolicygenerator
           │   ├── {policy-prefix}common-ranGen.yaml
           │   ├── {policy-prefix}example-sno-site.yaml
           │   ├── {policy-prefix}group-du-sno-ranGen.yaml
           │   ├── ...
           │   ├── kustomization.yaml
           │   └── ns.yaml
           └── clusterinstance
               ├── example-sno.yaml
               ├── example-3node.yaml
               ├── example-standard.yaml
               └── ...

4. Create a ClusterInstance CR for your cluster.

   Use the example ClusterInstance CRs in the out/argocd/example/clusterinstance/ folder that you previously extracted from the ztp-site-generate container as a reference. The folder includes example files for single-node, three-node, and standard clusters:

   - example-sno.yaml
   - example-3node.yaml
   - example-standard.yaml

   Change the cluster and host details in the example file to match the type of cluster you want to install. For example:
Example single-node OpenShift ClusterInstance CR
# example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-ai-sno
---
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-ai-sno"
  namespace: "example-ai-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.21"
  sshPublicKey: "ssh-rsa AAAA..."
  clusterName: "example-ai-sno"
  networkType: "OVNKubernetes"
  # installConfigOverrides is a generic way of passing install-config
  # parameters through the siteConfig. The 'capabilities' field configures
  # the composable openshift feature. In this 'capabilities' setting, we
  # remove all the optional set of components.
  # Notes:
  # - OperatorLifecycleManager is needed for 4.15 and later
  # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
  # - Ingress is needed for 4.16 and later
  installConfigOverrides: |
    {
      "capabilities": {
        "baselineCapabilitySet": "None",
        "additionalEnabledCapabilities": [
          "NodeTuning",
          "OperatorLifecycleManager",
          "Ingress"
        ]
      }
    }
  # Include references to extraManifest ConfigMaps.
  extraManifestsRefs:
    - name: sno-extra-manifest-configmap
  extraLabels:
    ManagedCluster:
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
      du-profile: "latest"
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
      # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
      common: "true"
      # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
      group-du-sno: ""
      # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
      # Normally this should match or contain the cluster name so it only applies to a single cluster
      sites: "example-sno"
  clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
  machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
  serviceNetwork:
    - cidr: 1001:2::/112
  additionalNTPSources:
    - 1111:2222:3333:4444::2
  # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
  # please see Workload Partitioning Feature for a complete guide.
  cpuPartitioningMode: AllNodes
  templateRefs:
    - name: ai-cluster-templates-v1
      namespace: open-cluster-management
  nodes:
    - hostName: "example-node1.example.com"
      role: "master"
      bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "example-node1-bmh-secret"
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      # Use UEFISecureBoot to enable secure boot, UEFI to disable.
      bootMode: "UEFISecureBoot"
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
      # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated.
      # See DiskPartitionContainer.md in the argocd folder for more details
      ignitionConfigOverride: |
        {
          "ignition": {
            "version": "3.2.0"
          },
          "storage": {
            "disks": [
              {
                "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0",
                "partitions": [
                  {
                    "label": "var-lib-containers",
                    "sizeMiB": 0,
                    "startMiB": 250000
                  }
                ],
                "wipeTable": false
              }
            ],
            "filesystems": [
              {
                "device": "/dev/disk/by-partlabel/var-lib-containers",
                "format": "xfs",
                "mountOptions": [
                  "defaults",
                  "prjquota"
                ],
                "path": "/var/lib/containers",
                "wipeFilesystem": true
              }
            ]
          },
          "systemd": {
            "units": [
              {
                "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                "enabled": true,
                "name": "var-lib-containers.mount"
              }
            ]
          }
        }
      nodeNetwork:
        interfaces:
          - name: eno1
            macAddress: "AA:BB:CC:DD:EE:11"
        config:
          interfaces:
            - name: eno1
              type: ethernet
              state: up
              ipv4:
                enabled: false
              ipv6:
                enabled: true
                address:
                  # For SNO sites with static IP addresses, the node-specific,
                  # API and Ingress IPs should all be the same and configured on
                  # the interface
                  - ip: 1111:2222:3333:4444::aaaa:1
                    prefix-length: 64
          dns-resolver:
            config:
              search:
                - example.com
              server:
                - 1111:2222:3333:4444::2
          routes:
            config:
              - destination: ::/0
                next-hop-interface: eno1
                next-hop-address: 1111:2222:3333:4444::1
                table-id: 254
      templateRefs:
        - name: ai-node-templates-v1
          namespace: open-cluster-management

   Note

   Optional: To provision additional install-time manifests on the provisioned cluster, create the extra manifest CRs and apply them to the hub cluster. Then reference them in the extraManifestsRefs field of the ClusterInstance CR. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline".
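   The extraManifestsRefs entries point at ConfigMaps in the cluster namespace that carry the extra manifests. A minimal sketch of such a ConfigMap, where the key name and the embedded ContainerRuntimeConfig manifest are illustrative:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: sno-extra-manifest-configmap
     # Must live in the same namespace as the ClusterInstance CR.
     namespace: example-ai-sno
   data:
     # Each key holds one extra installation manifest; the key name is arbitrary.
     enable-crun.yaml: |
       apiVersion: machineconfiguration.openshift.io/v1
       kind: ContainerRuntimeConfig
       metadata:
         name: enable-crun-master
       spec:
         machineConfigPoolSelector:
           matchLabels:
             pools.operator.machineconfiguration.openshift.io/master: ""
         containerRuntimeConfig:
           defaultRuntime: crun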
5. Optional: Generate Day 2 configuration CRs from the reference PolicyGenerator CRs:

   a. Create an output folder for the configuration CRs by running the following command:

      $ mkdir -p ./ref

   b. Generate the configuration CRs by running the following command:

      $ podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.21 generator config -N . /output

      The command generates example group and cluster-specific configuration CRs in the ./ref folder. You can apply these CRs to the cluster after installation is complete.
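After the installation completes and you have retrieved the managed cluster kubeconfig (see "Monitoring the managed cluster installation status"), one way to apply a generated CR is directly with oc; the file name below is a placeholder for whichever generated CR you want to apply:

$ oc apply --kubeconfig <cluster_name>-kubeconfig -f ./ref/<generated_cr>.yaml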
5.2. Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the ClusterInstance CR by name. The namespace must match the ClusterInstance namespace.
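For example, using the secret names created in this procedure, the relevant excerpt of a ClusterInstance CR for a cluster named example-sno would look like the following (fields abridged):

spec:
  pullSecretRef:
    name: "pull-secret"
  nodes:
    - hostName: "example-node1.example.com"
      bmcCredentialsName:
        name: "example-sno-bmc-secret"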
Procedure
1. Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators. Save the following YAML as the file example-sno-secret.yaml:

   apiVersion: v1
   kind: Secret
   metadata:
     name: example-sno-bmc-secret
     # Must match the namespace of the ClusterInstance CR.
     namespace: example-sno
   data:
     # Base64-encoded BMC credentials.
     password: <base64_password>
     username: <base64_username>
   type: Opaque
   ---
   apiVersion: v1
   kind: Secret
   metadata:
     name: pull-secret
     namespace: example-sno
   data:
     # Base64-encoded image pull secret.
     .dockerconfigjson: <pull_secret>
   type: kubernetes.io/dockerconfigjson
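   The username and password values must be base64-encoded. One way to produce them, assuming a bash shell and placeholder credentials:

   $ echo -n 'bmc-username' | base64
   $ echo -n 'bmc-password' | base64

   The -n flag prevents a trailing newline from being encoded into the credential.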
2. Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
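   A minimal kustomization.yaml with that entry might look like the following, assuming the ClusterInstance definition and the secrets file sit in the same directory (example-sno.yaml is illustrative):

   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
     - example-sno.yaml
     - example-sno-secret.yaml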
5.3. Configuring Discovery ISO kernel arguments for manual installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OpenShift Container Platform 4.21, you can only add kernel arguments. You cannot replace or delete kernel arguments.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have applied a ClusterInstance CR to the hub cluster.
Procedure
- Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
name: <cluster_name>
namespace: <cluster_name>
spec:
kernelArguments:
- operation: append
value: audit=0
- operation: append
value: trace=1
clusterRef:
name: <cluster_name>
namespace: <cluster_name>
pullSecretRef:
name: pull-secret
The ClusterInstance CR generates the InfraEnv resource as part of the day-0 installation CRs.
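Because the InfraEnv CR is generated on the hub rather than hand-authored, one way to make this edit is directly against the generated resource; the resource name below assumes it matches the cluster name, as in the example above:

$ oc edit infraenv <cluster_name> -n <cluster_name>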
Verification
To verify that the kernel arguments are applied, SSH to the target host after the Discovery image has verified that the host is ready for installation, but before the installation process begins. At that point, the kernel arguments for the Discovery ISO are visible in the /proc/cmdline file.
1. Begin an SSH session with the target host:

   $ ssh -i /path/to/privatekey core@<host_name>

2. View the system's kernel arguments by running the following command:

   $ cat /proc/cmdline
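   If the arguments were applied, they appear appended to the standard live-image boot arguments. An abbreviated, illustrative example for the InfraEnv shown above:

   ... ignition.firstboot ignition.platform.id=metal ... audit=0 trace=1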
5.4. Installing a single managed cluster
You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM).
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have extracted the reference and example CRs from the ztp-site-generate container and configured the ClusterInstance CR.
- You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details.
- Your target bare-metal host meets the networking and hardware requirements for managed clusters.
Procedure
1. Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.21.yaml. A ClusterImageSet has the following format:

   apiVersion: hive.openshift.io/v1
   kind: ClusterImageSet
   metadata:
     name: openshift-4.21.0
   spec:
     releaseImage: quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64

2. Apply the clusterImageSet CR by running the following command:

   $ oc apply -f clusterImageSet-4.21.yaml

3. Create the Namespace CR in the cluster-namespace.yaml file:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: <cluster_name>
     labels:
       name: <cluster_name>

4. Apply the Namespace CR by running the following command:

   $ oc apply -f cluster-namespace.yaml

5. Apply the ClusterInstance CR that you configured to the hub cluster by running the following command:

   $ oc apply -f clusterinstance.yaml

   The SiteConfig Operator processes the ClusterInstance CR and automatically generates the required installation CRs, including BareMetalHost, AgentClusterInstall, ClusterDeployment, InfraEnv, and NMStateConfig. The assisted service then begins the cluster installation.
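   Before the installation progresses, you can confirm that the generated installation CRs exist in the cluster namespace. A quick check, assuming the generated CR names match the cluster name:

   $ oc get clusterdeployment,agentclusterinstall,infraenv,nmstateconfig,baremetalhost -n <cluster_name>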
5.5. Monitoring the managed cluster installation status
Ensure that cluster provisioning was successful by checking the cluster status.
Prerequisites
- All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster.
Procedure
1. Check the status of the managed cluster:

   $ oc get managedcluster

   True in the AVAILABLE column indicates the managed cluster is ready.

2. Check the agent status:

   $ oc get agent -n <cluster_name>

3. Use the describe command to get an in-depth description of the agent's condition. Statuses to be aware of include BackendError, InputError, ValidationsFailing, InstallationFailed, and AgentIsConnected. These statuses are relevant to the Agent and AgentClusterInstall custom resources:

   $ oc describe agent -n <cluster_name>

4. Check the cluster provisioning status:

   $ oc get agentclusterinstall -n <cluster_name>

5. Use the describe command to get an in-depth description of the cluster provisioning status:

   $ oc describe agentclusterinstall -n <cluster_name>

6. Check the status of the managed cluster's add-on services:

   $ oc get managedclusteraddon -n <cluster_name>

7. Retrieve the authentication information of the kubeconfig file for the managed cluster:

   $ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
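   You can then run commands directly against the managed cluster by pointing oc at the retrieved kubeconfig file, for example:

   $ oc --kubeconfig <directory>/<cluster_name>-kubeconfig get nodes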
5.6. Troubleshooting the managed cluster
Use this procedure to diagnose any installation issues that might occur with the managed cluster.
Procedure
1. Check the status of the managed cluster:

   $ oc get managedcluster

   Example output

   NAME          HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
   SNO-cluster   true                                  True     True        2d19h

   If the status in the AVAILABLE column is True, the managed cluster is being managed by the hub.

   If the status in the AVAILABLE column is Unknown, the managed cluster is not being managed by the hub. Use the following steps to get more information.

2. Check the AgentClusterInstall install status:

   $ oc get clusterdeployment -n <cluster_name>

   Example output

   NAME      PLATFORM          REGION   CLUSTERTYPE   INSTALLED   INFRAID   VERSION   POWERSTATE    AGE
   Sno0026   agent-baremetal                          false                           Initialized   2d14h

   If the status in the INSTALLED column is false, the installation was unsuccessful.

3. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource:

   $ oc describe agentclusterinstall -n <cluster_name> <cluster_name>

4. Resolve the errors and reset the cluster:

   a. Remove the cluster's managed cluster resource:

      $ oc delete managedcluster <cluster_name>

   b. Remove the cluster's namespace:

      $ oc delete namespace <cluster_name>

      This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding.

   c. Recreate the custom resources for the managed cluster.
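   To make the wait for the ManagedCluster deletion explicit, you can block on it with the standard oc wait command before recreating the resources; the timeout value is illustrative:

   $ oc wait --for=delete managedcluster/<cluster_name> --timeout=10m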
5.7. RHACM generated cluster installation CRs reference
Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using ClusterInstance CRs for each cluster.
Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.
The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the ClusterInstance CRs that you configure.
| CR | Description | Usage |
|---|---|---|
| BareMetalHost | Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. | Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol. |
| InfraEnv | Contains information for installing OpenShift Container Platform on the target bare-metal host. | Used with ClusterDeployment to generate the discovery ISO for the managed cluster. |
| AgentClusterInstall | Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. | Specifies the managed cluster configuration information and provides status during the installation of the cluster. |
| ClusterDeployment | References the AgentClusterInstall CR. | Used with InfraEnv to generate the discovery ISO for the managed cluster. |
| NMStateConfig | Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. | Sets up a static IP address for the managed cluster's Kube API server. |
| Agent | Contains hardware information about the target bare-metal host. | Created automatically on the hub when the target machine's discovery image boots. |
| ManagedCluster | When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. | The hub uses this resource to manage and show the status of managed clusters. |
| KlusterletAddonConfig | Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. | Tells the hub which addon services to deploy to the ManagedCluster resource. |
| Namespace | Logical space for the namespace-scoped installation CRs on the hub. The namespace name matches the cluster name. | Propagates resources to the ManagedCluster. |
| Secret | Two CRs are created: the BMC Secret and the image pull Secret. | The BMC Secret authenticates into the target bare-metal host by using its username and password. The image pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host. |
| ClusterImageSet | Contains OpenShift Container Platform image information such as the repository and image name. | Passed into resources to provide OpenShift Container Platform images. |