Chapter 5. Manually installing a single-node OpenShift cluster with GitOps ZTP


You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.

Note

If you are creating multiple managed clusters, use the ClusterInstance method described in "Deploying far edge sites with ZTP".

Important

The target bare-metal host must meet the networking, firmware, and hardware requirements listed in "Recommended cluster configuration for vDU application workloads".

5.1. Extracting reference and example CRs from the ztp-site-generate container

Use the ztp-site-generate container to extract reference custom resources (CRs) and example ClusterInstance CRs to prepare for cluster installation and Day 2 configuration.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the hub cluster as a user with cluster-admin privileges.
  • You have installed podman.

Procedure

  1. Create an output folder by running the following command:

    $ mkdir -p ./out
  2. Log in to the Ecosystem container registry with your credentials by running the following command:

    $ podman login registry.redhat.io
  3. Extract the reference and example CRs from the ztp-site-generate container image by running the following command:

    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.21 extract /home/ztp --tar | tar x -C ./out

    The ./out directory contains the reference PolicyGenerator and ClusterInstance CRs in the out/argocd/example/ folder.

    Example output

    out
     └── argocd
          └── example
               ├── acmpolicygenerator
               │     ├── {policy-prefix}common-ranGen.yaml
               │     ├── {policy-prefix}example-sno-site.yaml
               │     ├── {policy-prefix}group-du-sno-ranGen.yaml
               │     ├── ...
               │     ├── kustomization.yaml
               │     └── ns.yaml
               └── clusterinstance
                     ├── example-sno.yaml
                     ├── example-3node.yaml
                     ├── example-standard.yaml
                     └── ...

  4. Create a ClusterInstance CR for your cluster.

    Use the example ClusterInstance CRs in the out/argocd/example/clusterinstance/ folder, which you previously extracted from the ztp-site-generate container, as a reference. The folder includes example files for single-node, three-node, and standard clusters:

    • example-sno.yaml
    • example-3node.yaml
    • example-standard.yaml

      Change the cluster and host details in the example file to match the type of cluster you want to install. For example:

      Example single-node OpenShift ClusterInstance CR

      # example-node1-bmh-secret and assisted-deployment-pull-secret must be created in the same namespace, example-ai-sno
      ---
      apiVersion: siteconfig.open-cluster-management.io/v1alpha1
      kind: ClusterInstance
      metadata:
        name: "example-ai-sno"
        namespace: "example-ai-sno"
      spec:
        baseDomain: "example.com"
        pullSecretRef:
          name: "assisted-deployment-pull-secret"
        clusterImageSetNameRef: "openshift-4.21"
        sshPublicKey: "ssh-rsa AAAA..."
        clusterName: "example-ai-sno"
        networkType: "OVNKubernetes"
        # installConfigOverrides is a generic way of passing install-config
        # parameters through the siteConfig.  The 'capabilities' field configures
        # the composable openshift feature.  In this 'capabilities' setting, we
        # remove all the optional set of components.
        # Notes:
        # - OperatorLifecycleManager is needed for 4.15 and later
        # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
        # - Ingress is needed for 4.16 and later
        installConfigOverrides: |
          {
            "capabilities": {
              "baselineCapabilitySet": "None",
              "additionalEnabledCapabilities": [
                "NodeTuning",
                "OperatorLifecycleManager",
                "Ingress"
              ]
            }
          }
        # Include references to extraManifest ConfigMaps.
        extraManifestsRefs:
          - name: sno-extra-manifest-configmap
        extraLabels:
          ManagedCluster:
            # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
            du-profile: "latest"
            # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
            common: "true"
            # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
            group-du-sno: ""
            # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
            # Normally this should match or contain the cluster name so it only applies to a single cluster
            sites: "example-sno"
        clusterNetwork:
          - cidr: 1001:1::/48
            hostPrefix: 64
        machineNetwork:
          - cidr: 1111:2222:3333:4444::/64
        serviceNetwork:
          - cidr: 1001:2::/112
        additionalNTPSources:
          - 1111:2222:3333:4444::2
        # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
        # please see Workload Partitioning Feature for a complete guide.
        cpuPartitioningMode: AllNodes
        templateRefs:
          - name: ai-cluster-templates-v1
            namespace: open-cluster-management
        nodes:
          - hostName: "example-node1.example.com"
            role: "master"
            bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
            bmcCredentialsName:
              name: "example-node1-bmh-secret"
            bootMACAddress: "AA:BB:CC:DD:EE:11"
            # Use UEFISecureBoot to enable secure boot, UEFI to disable.
            bootMode: "UEFISecureBoot"
            rootDeviceHints:
              deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
            # Disk partition at /var/lib/containers with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md in the argocd folder for more details.
            ignitionConfigOverride: |
              {
                "ignition": {
                  "version": "3.2.0"
                },
                "storage": {
                  "disks": [
                    {
                      "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0",
                      "partitions": [
                        {
                          "label": "var-lib-containers",
                          "sizeMiB": 0,
                          "startMiB": 250000
                        }
                      ],
                      "wipeTable": false
                    }
                  ],
                  "filesystems": [
                    {
                      "device": "/dev/disk/by-partlabel/var-lib-containers",
                      "format": "xfs",
                      "mountOptions": [
                        "defaults",
                        "prjquota"
                      ],
                      "path": "/var/lib/containers",
                      "wipeFilesystem": true
                    }
                  ]
                },
                "systemd": {
                  "units": [
                    {
                      "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                      "enabled": true,
                      "name": "var-lib-containers.mount"
                    }
                  ]
                }
              }
            nodeNetwork:
              interfaces:
                - name: eno1
                  macAddress: "AA:BB:CC:DD:EE:11"
              config:
                interfaces:
                  - name: eno1
                    type: ethernet
                    state: up
                    ipv4:
                      enabled: false
                    ipv6:
                      enabled: true
                      address:
                      # For SNO sites with static IP addresses, the node-specific,
                      # API and Ingress IPs should all be the same and configured on
                      # the interface
                      - ip: 1111:2222:3333:4444::aaaa:1
                        prefix-length: 64
                dns-resolver:
                  config:
                    search:
                    - example.com
                    server:
                    - 1111:2222:3333:4444::2
                routes:
                  config:
                  - destination: ::/0
                    next-hop-interface: eno1
                    next-hop-address: 1111:2222:3333:4444::1
                    table-id: 254
            templateRefs:
              - name: ai-node-templates-v1
                namespace: open-cluster-management
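      Before applying the ClusterInstance CR, you can check that the file parses cleanly; a minimal client-side check, assuming you saved the CR as example-sno.yaml:

      $ oc apply -f example-sno.yaml --dry-run=client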

      Note

      Optional: To provision additional install-time manifests on the provisioned cluster, create the extra manifest CRs and apply them to the hub cluster. Then reference them in the extraManifestsRefs field of the ClusterInstance CR. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline".
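      For example, a minimal sketch that creates the referenced ConfigMap from a local folder of extra manifests; the sno-extra-manifests folder name and its contents are assumptions for illustration:

      $ oc create configmap sno-extra-manifest-configmap -n example-ai-sno \
          --from-file=./sno-extra-manifests/ --dry-run=client -o yaml | oc apply -f -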

  5. Optional: Generate Day 2 configuration CRs from the reference PolicyGenerator CRs:

    1. Create an output folder for the configuration CRs by running the following command:

      $ mkdir -p ./ref
    2. Generate the configuration CRs by running the following command:

      $ podman run -it --rm -v `pwd`/out/argocd/example/acmpolicygenerator:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.21 generator config -N . /output

      The command generates example group and cluster-specific configuration CRs in the ./ref folder. You can apply these CRs to the cluster after installation is complete.
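      After the managed cluster installation completes, you can apply the generated CRs to it; a minimal sketch, assuming you have the managed cluster kubeconfig and substitute a generated file name:

      $ oc --kubeconfig <cluster_name>-kubeconfig apply -f ./ref/<generated_cr>.yaml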

5.2. Creating the managed bare-metal host secrets

Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.

Note

The secrets are referenced from the ClusterInstance CR by name. The namespace must match the ClusterInstance namespace.

Procedure

  1. Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:

    1. Save the following YAML as the file example-sno-secret.yaml:

      apiVersion: v1
      kind: Secret
      metadata:
        name: example-sno-bmc-secret
        namespace: example-sno 1
      data: 2
        password: <base64_password>
        username: <base64_username>
      type: Opaque
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: pull-secret
        namespace: example-sno 3
      data:
        .dockerconfigjson: <pull_secret> 4
      type: kubernetes.io/dockerconfigjson

      1 Must match the namespace configured in the related ClusterInstance CR.
      2 Base64-encoded values for password and username.
      3 Must match the namespace configured in the related ClusterInstance CR.
      4 Base64-encoded pull secret.
  2. Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
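You can generate the Base64-encoded values used in step 1 with the base64 utility; a minimal sketch with placeholder credentials. The -n flag prevents a trailing newline from being encoded into the secret:

    $ echo -n '<bmc_username>' | base64
    $ echo -n '<bmc_password>' | base64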

5.3. Configuring Discovery ISO kernel arguments for manual installations by using GitOps ZTP

The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.

Note

In OpenShift Container Platform 4.21, you can only add kernel arguments. You cannot replace or delete kernel arguments.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the hub cluster as a user with cluster-admin privileges.
  • You have applied a ClusterInstance CR to the hub cluster.

Procedure

  1. Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments:
    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: <cluster_name>
      namespace: <cluster_name>
    spec:
      kernelArguments:
        - operation: append 1
          value: audit=0 2
        - operation: append
          value: trace=1
      clusterRef:
        name: <cluster_name>
        namespace: <cluster_name>
      pullSecretRef:
        name: pull-secret

    1 Specify the append operation to add a kernel argument.
    2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument.
Note

The ClusterInstance CR generates the InfraEnv resource as part of the day-0 installation CRs.

Verification

After the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins and verify that the kernel arguments are applied. At that point, the kernel arguments for the Discovery ISO are visible in the /proc/cmdline file.

  1. Begin an SSH session with the target host:

    $ ssh -i /path/to/privatekey core@<host_name>
  2. View the system’s kernel arguments by using the following command:

    $ cat /proc/cmdline
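    If the arguments were applied, they appear in the output. A quick filter for this example's arguments, assuming you configured audit=0 and trace=1:

    $ grep -oE 'audit=0|trace=1' /proc/cmdline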

5.4. Installing a single managed cluster

You can manually deploy a single managed cluster by using the assisted service and Red Hat Advanced Cluster Management (RHACM).

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the hub cluster as a user with cluster-admin privileges.
  • You have extracted the reference and example CRs from the ztp-site-generate container and you configured the ClusterInstance CR.
  • You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details.
  • Your target bare-metal host meets the networking and hardware requirements for managed clusters.

Procedure

  1. Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.21.yaml. A ClusterImageSet has the following format:

    apiVersion: hive.openshift.io/v1
    kind: ClusterImageSet
    metadata:
      name: openshift-4.21.0 1
    spec:
      releaseImage: quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 2

    1 The descriptive version that you want to deploy.
    2 Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage, or the latest version if the exact version is unavailable.
  2. Apply the ClusterImageSet CR by running the following command:

    $ oc apply -f clusterImageSet-4.21.yaml
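    You can confirm that the hub cluster registered the ClusterImageSet; a quick check, assuming the metadata name from the example above:

    $ oc get clusterimageset openshift-4.21.0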
  3. Create the Namespace CR in the cluster-namespace.yaml file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: <cluster_name> 1
      labels:
        name: <cluster_name> 2

    1 2 The name of the managed cluster to provision.
  4. Apply the Namespace CR by running the following command:

    $ oc apply -f cluster-namespace.yaml
  5. Apply the ClusterInstance CR that you configured to the hub cluster by running the following command:

    $ oc apply -f clusterinstance.yaml

    The SiteConfig Operator processes the ClusterInstance CR and automatically generates the required installation CRs, including BareMetalHost, AgentClusterInstall, ClusterDeployment, InfraEnv, and NMStateConfig. The assisted service then begins the cluster installation.
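    To confirm that the installation CRs were generated, you can list them in the cluster namespace; a minimal check, assuming the generated resources keep the default names:

    $ oc get bmh,infraenv,agentclusterinstall,clusterdeployment,nmstateconfig -n <cluster_name>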

5.5. Monitoring the managed cluster installation status

Ensure that cluster provisioning was successful by checking the cluster status.

Prerequisites

  • All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster.

Procedure

  1. Check the status of the managed cluster:

    $ oc get managedcluster

    True indicates the managed cluster is ready.

  2. Check the agent status:

    $ oc get agent -n <cluster_name>
  3. Use the describe command to provide an in-depth description of the agent’s condition. Statuses to be aware of include BackendError, InputError, ValidationsFailing, InstallationFailed, and AgentIsConnected. These statuses are relevant to the Agent and AgentClusterInstall custom resources.

    $ oc describe agent -n <cluster_name>
  4. Check the cluster provisioning status:

    $ oc get agentclusterinstall -n <cluster_name>
  5. Use the describe command to provide an in-depth description of the cluster provisioning status:

    $ oc describe agentclusterinstall -n <cluster_name>
  6. Check the status of the managed cluster’s add-on services:

    $ oc get managedclusteraddon -n <cluster_name>
  7. Retrieve the authentication information of the kubeconfig file for the managed cluster:

    $ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
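    You can then query the managed cluster directly with the retrieved kubeconfig; for example:

    $ oc --kubeconfig <directory>/<cluster_name>-kubeconfig get nodes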

5.6. Troubleshooting the managed cluster

Use this procedure to diagnose any installation issues that might occur with the managed cluster.

Procedure

  1. Check the status of the managed cluster:

    $ oc get managedcluster

    Example output

    NAME            HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
    SNO-cluster     true                                   True     True      2d19h

    If the status in the AVAILABLE column is True, the managed cluster is being managed by the hub.

    If the status in the AVAILABLE column is Unknown, the managed cluster is not being managed by the hub. Continue with the following steps to get more information.
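    To see why the cluster is unavailable, you can inspect the ManagedCluster conditions; a minimal sketch using jsonpath:

    $ oc get managedcluster <cluster_name> -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'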

  2. Check the AgentClusterInstall install status:

    $ oc get clusterdeployment -n <cluster_name>

    Example output

    NAME       PLATFORM          REGION   CLUSTERTYPE   INSTALLED   INFRAID   VERSION   POWERSTATE    AGE
    Sno0026    agent-baremetal                          false                           Initialized   2d14h

    If the status in the INSTALLED column is false, the installation was unsuccessful.

  3. If the installation failed, enter the following command to review the status of the AgentClusterInstall resource:

    $ oc describe agentclusterinstall -n <cluster_name> <cluster_name>
  4. Resolve the errors and reset the cluster:

    1. Remove the cluster’s managed cluster resource:

      $ oc delete managedcluster <cluster_name>
    2. Remove the cluster’s namespace:

      $ oc delete namespace <cluster_name>

      This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding.
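      You can block until the deletion completes; a minimal sketch using oc wait:

      $ oc wait --for=delete managedcluster/<cluster_name> --timeout=300s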

    3. Recreate the custom resources for the managed cluster.

5.7. RHACM generated cluster installation CRs reference

Red Hat Advanced Cluster Management (RHACM) supports deploying OpenShift Container Platform on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using ClusterInstance CRs for each cluster.

Note

Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.

The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the ClusterInstance CRs that you configure.

Table 5.1. Cluster installation CRs generated by RHACM

BareMetalHost

Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host.

Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol.

InfraEnv

Contains information for installing OpenShift Container Platform on the target bare-metal host.

Used with ClusterDeployment to generate the discovery ISO for the managed cluster.

AgentClusterInstall

Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete.

Specifies the managed cluster configuration information and provides status during the installation of the cluster.

ClusterDeployment

References the AgentClusterInstall CR to use.

Used with InfraEnv to generate the discovery ISO for the managed cluster.

NMStateConfig

Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings.

Sets up a static IP address for the managed cluster’s Kube API server.

Agent

Contains hardware information about the target bare-metal host.

Created automatically on the hub when the target machine’s discovery image boots.

ManagedCluster

When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface.

The hub uses this resource to manage and show the status of managed clusters.

KlusterletAddonConfig

Contains the list of services provided by the hub to be deployed to the ManagedCluster resource.

Tells the hub which addon services to deploy to the ManagedCluster resource.

Namespace

Logical space for ManagedCluster resources existing on the hub. Unique per site.

Propagates resources to the ManagedCluster.

Secret

Two CRs are created: BMC Secret and Image Pull Secret.

  • BMC Secret authenticates into the target bare-metal host using its username and password.
  • Image Pull Secret contains authentication information for the OpenShift Container Platform image installed on the target bare-metal host.

ClusterImageSet

Contains OpenShift Container Platform image information such as the repository and image name.

Passed into resources to provide OpenShift Container Platform images.
