
Chapter 16. Image-based installation for single-node OpenShift


16.1. Understanding image-based installation and deployment for single-node OpenShift

Image-based installations significantly reduce the deployment time of single-node OpenShift clusters by streamlining the installation process.

This approach enables the preinstallation of configured and validated instances of single-node OpenShift on target hosts. These preinstalled hosts can be rapidly reconfigured and deployed at the far edge of the network, including in disconnected environments, with minimal intervention.

Note

To deploy a managed cluster using an image-based approach in combination with GitOps Zero Touch Provisioning (ZTP), you can use the SiteConfig operator. For more information, see SiteConfig operator.

16.1.1. Overview of image-based installation and deployment for single-node OpenShift clusters

Deploying infrastructure at the far edge of the network presents challenges for service providers with low bandwidth, high latency, and disconnected environments. It is also costly and time-consuming to install and deploy single-node OpenShift clusters.

An image-based approach to installing and deploying single-node OpenShift clusters at the far edge of the network overcomes these challenges by separating the installation and deployment stages.

Figure 16.1. Overview of an image-based installation and deployment for managed single-node OpenShift clusters

Image-based installation
Preinstall multiple hosts with single-node OpenShift at a central site, such as a service depot or a factory. Then, validate the base configuration for these hosts and leverage the image-based approach to perform reproducible factory installs at scale by using a single live installation ISO.
Image-based deployment
Ship the preinstalled and validated hosts to a remote site and rapidly reconfigure and deploy the clusters in a matter of minutes by using a configuration ISO.

You can choose from the following two methods to preinstall and configure your single-node OpenShift clusters.

Using the openshift-install program
For a single-node OpenShift cluster, use only the openshift-install program: first to manually create the live installation ISO that is common to all hosts, and then again to create the configuration ISO that makes each host unique. For more information, see “Deploying a single-node OpenShift cluster using the openshift-install program”.
Using the IBI Operator
For managed single-node OpenShift clusters, you can use the openshift-install program with the Image Based Install (IBI) Operator to scale up the operations. The program creates the live installation ISO, and then the IBI Operator creates one configuration ISO for each host. For more information, see “Deploying a managed single-node OpenShift cluster using the IBI Operator”.

16.1.1.1. Image-based installation for single-node OpenShift clusters

Using the Lifecycle Agent, you can generate an OCI container image that encapsulates an instance of a single-node OpenShift cluster. This image is derived from a dedicated cluster that you can configure with the target OpenShift Container Platform version.

You can reference this image in a live installation ISO to consistently preinstall configured and validated instances of single-node OpenShift to multiple hosts. This approach enables the preparation of hosts at a central location, for example in a factory or service depot, before shipping the preinstalled hosts to a remote site for rapid reconfiguration and deployment. The instructions for preinstalling a host are the same whether you deploy the host by using only the openshift-install program or using the program with the IBI Operator.

The following is a high-level overview of the image-based installation process:

  1. Generate an image from a single-node OpenShift cluster.
  2. Use the openshift-install program to embed the seed image URL, and other installation artifacts, in a live installation ISO.
  3. Start the host using the live installation ISO to preinstall the host.

    During this process, the openshift-install program installs Red Hat Enterprise Linux CoreOS (RHCOS) to the disk, pulls the image you generated, and precaches release container images to the disk.

  4. When the installation completes, the host is ready to ship to the remote site for rapid reconfiguration and deployment.

16.1.1.2. Image-based deployment for single-node OpenShift clusters

You can use the openshift-install program or the IBI Operator to configure and deploy a host that you preinstalled with an image-based installation.

Single-node OpenShift cluster deployment

To configure the target host with site-specific details by using the openshift-install program, you must create the following resources:

  • The install-config.yaml installation manifest
  • The image-based-config.yaml manifest

The openshift-install program uses these resources to generate a configuration ISO that you attach to the preinstalled target host to complete the deployment.
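
For example, after you create these manifests in a working directory, a single openshift-install invocation generates the configuration ISO. The following is a sketch only; the image-based create config-image subcommand and the ibi-config-iso-workdir directory name are assumptions, and the full procedure appears in "Deploying a single-node OpenShift cluster using the openshift-install program":

  $ openshift-install image-based create config-image --dir ibi-config-iso-workdir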

Managed single-node OpenShift cluster deployment

Red Hat Advanced Cluster Management (RHACM) and the multicluster engine for Kubernetes Operator (MCE) use a hub-and-spoke architecture to manage and deploy single-node OpenShift clusters across multiple sites. Using this approach, the hub cluster serves as a central control plane that manages the spoke clusters, which are often remote single-node OpenShift clusters deployed at the far edge of the network.

You can define the site-specific configuration resources for an image-based deployment in the hub cluster. The IBI Operator uses these configuration resources to reconfigure the preinstalled host at the remote site and deploy the host as a managed single-node OpenShift cluster. This approach is especially beneficial for telecommunications providers and other service providers with extensive, distributed infrastructures, where an end-to-end installation at the remote site would be time-consuming and costly.

The following is a high-level overview of the image-based deployment process for hosts preinstalled with an image-based installation:

  • Define the site-specific configuration resources for the preinstalled host in the hub cluster.
  • Apply these resources in the hub cluster. This initiates the deployment process.
  • The IBI Operator creates a configuration ISO.
  • The IBI Operator boots the target preinstalled host with the configuration ISO attached.
  • The host mounts the configuration ISO and begins the reconfiguration process.
  • When the reconfiguration completes, the single-node OpenShift cluster is ready.

As the host is already preinstalled using an image-based installation, a technician can reconfigure and deploy the host in a matter of minutes.

16.1.2. Image-based installation and deployment components

The following content describes the components in an image-based installation and deployment.

Seed image
OCI container image generated from a dedicated cluster with the target OpenShift Container Platform version.
Seed cluster
Dedicated single-node OpenShift cluster that is deployed with the target OpenShift Container Platform version and that you use to create the seed image.
Lifecycle Agent
Generates the seed image.
Image Based Install (IBI) Operator
When you deploy managed clusters, the IBI Operator creates a configuration ISO from the site-specific resources you define in the hub cluster, and attaches the configuration ISO to the preinstalled host by using a bare-metal provisioning service.
openshift-install program
Creates the live installation ISO and the configuration ISO, and embeds the seed image URL in the live installation ISO. If you do not use the IBI Operator, you must manually attach the configuration ISO to a preinstalled host to complete the deployment.

16.1.3. Cluster guidelines for image-based installation and deployment

For a successful image-based installation and deployment, see the following guidelines.

16.1.3.1. Cluster guidelines

  • If you are using Red Hat Advanced Cluster Management (RHACM), disable all optional RHACM add-ons before generating the seed image to avoid including any RHACM resources in the seed image.

16.1.3.2. Seed cluster guidelines

  • If your cluster deployment at the edge of the network requires a proxy configuration, you must create a seed image from a seed cluster featuring a proxy configuration. The proxy configurations do not have to match.
  • If you set a maximum transmission unit (MTU) in the seed cluster, you must set the same MTU value in the static network configuration for the image-based configuration ISO. See the example after this list.
  • Your single-node OpenShift seed cluster must have a shared /var/lib/containers directory for precaching images during an image-based installation. For more information, see "Configuring a shared container partition between ostree stateroots".
  • Create a seed image from a single-node OpenShift cluster that uses the same hardware as your target bare-metal host. The seed cluster must reflect your target cluster configuration for the following items:

    • CPU topology

      • CPU architecture
      • Number of CPU cores
      • Tuned performance configuration, such as number of reserved CPUs
    • IP version

      Note

      Dual-stack networking is not supported in this release.

    • Disconnected registry

      Note

      If the target cluster uses a disconnected registry, your seed cluster must use a disconnected registry. The registries do not have to be the same.

    • FIPS configuration
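
For example, if the seed cluster uses an MTU of 9000, the static network configuration for the target host must set the same value. The following is a minimal sketch in nmstate format; the interface name and MTU value are examples only:

  networkConfig:
    interfaces:
      - name: ens1f0
        type: ethernet
        state: up
        mtu: 9000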

16.1.4. Software prerequisites for an image-based installation and deployment

An image-based installation and deployment requires the following minimum software versions for these required components.

Table 16.1. Minimum software requirements
Component                                    | Software version
Managed cluster version                      | 4.17
Hub cluster version                          | 4.16
Red Hat Advanced Cluster Management (RHACM)  | 2.12
Lifecycle Agent                              | 4.16 or later
Image Based Install Operator                 | 4.17
openshift-install program                    | 4.17

16.2. Preparing for image-based installation for single-node OpenShift clusters

To prepare for an image-based installation for single-node OpenShift clusters, you must complete the following tasks:

  • Create a seed image by using the Lifecycle Agent.
  • Verify that all software components meet the required versions. For further information, see "Software prerequisites for an image-based installation and deployment".

16.2.1. Installing the Lifecycle Agent

Use the Lifecycle Agent to generate a seed image from a seed cluster. You can install the Lifecycle Agent using the OpenShift CLI (oc) or the web console.

16.2.1.1. Installing the Lifecycle Agent by using the CLI

You can use the OpenShift CLI (oc) to install the Lifecycle Agent.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in as a user with cluster-admin privileges.

Procedure

  1. Create a Namespace object YAML file for the Lifecycle Agent:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
    1. Create the Namespace CR by running the following command:

      $ oc create -f <namespace_filename>.yaml
  2. Create an OperatorGroup object YAML file for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-lifecycle-agent
      namespace: openshift-lifecycle-agent
    spec:
      targetNamespaces:
      - openshift-lifecycle-agent
    1. Create the OperatorGroup CR by running the following command:

      $ oc create -f <operatorgroup_filename>.yaml
  3. Create a Subscription CR for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1
    kind: Subscription
    metadata:
      name: openshift-lifecycle-agent-subscription
      namespace: openshift-lifecycle-agent
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    1. Create the Subscription CR by running the following command:

      $ oc create -f <subscription_filename>.yaml

Verification

  1. To verify that the installation succeeded, inspect the CSV resource by running the following command:

    $ oc get csv -n openshift-lifecycle-agent

    Example output

    NAME                              DISPLAY                     VERSION               REPLACES                           PHASE
    lifecycle-agent.v4.17.0           Openshift Lifecycle Agent   4.17.0                Succeeded

  2. Verify that the Lifecycle Agent is up and running by running the following command:

    $ oc get deploy -n openshift-lifecycle-agent

    Example output

    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    lifecycle-agent-controller-manager   1/1     1            1           14s

16.2.1.2. Installing the Lifecycle Agent by using the web console

You can use the OpenShift Container Platform web console to install the Lifecycle Agent.

Prerequisites

  • You have logged in as a user with cluster-admin privileges.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → OperatorHub.
  2. Search for the Lifecycle Agent from the list of available Operators, and then click Install.
  3. On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.
  4. Click Install.

Verification

  1. To confirm that the installation is successful:

    1. Click Operators → Installed Operators.
    2. Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded.

      Note

      During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

If the Operator is not installed successfully:

  1. Click Operators → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
  2. Click Workloads → Pods, and check the logs for pods in the openshift-lifecycle-agent project.

16.2.2. Configuring a shared container partition between ostree stateroots

Important

You must complete this procedure at installation time.

Apply a MachineConfig to the seed cluster to create a separate partition and share the /var/lib/containers partition between the two ostree stateroots that will be used during the preinstall process.

Procedure

  • Apply a MachineConfig to create a separate partition:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 98-var-lib-containers-partitioned
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          disks:
            - device: /dev/disk/by-path/pci-<root_disk> 1
              partitions:
                - label: varlibcontainers
                  startMiB: <start_of_partition> 2
                  sizeMiB: <partition_size> 3
          filesystems:
            - device: /dev/disk/by-partlabel/varlibcontainers
              format: xfs
              mountOptions:
                - defaults
                - prjquota
              path: /var/lib/containers
              wipeFilesystem: true
        systemd:
          units:
            - contents: |-
                # Generated by Butane
                [Unit]
                Before=local-fs.target
                Requires=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service
                After=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service
    
                [Mount]
                Where=/var/lib/containers
                What=/dev/disk/by-partlabel/varlibcontainers
                Type=xfs
                Options=defaults,prjquota
    
                [Install]
                RequiredBy=local-fs.target
              enabled: true
              name: var-lib-containers.mount
    1
    Specify the root disk.
    2
    Specify the start of the partition in MiB. If the value is too small, the installation will fail.
    3
    Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail.
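
After the node applies the MachineConfig and reboots, you can confirm that the separate partition is mounted at /var/lib/containers. The following is a hedged sketch; <node_name> is a placeholder for your seed cluster node:

  $ oc debug node/<node_name> -- chroot /host lsblk -o NAME,SIZE,MOUNTPOINT /dev/disk/by-partlabel/varlibcontainers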

16.2.3. Seed image configuration

You can create a seed image from a single-node OpenShift cluster with the same hardware as your bare-metal host, and with a similar target cluster configuration. However, the seed image generated from the seed cluster cannot contain any cluster-specific configuration.

The following table lists the components, resources, and configurations that you must and must not include in your seed image:

Table 16.2. Seed image configuration
Cluster configuration | Include in seed image

Performance profile

Yes

MachineConfig resources for the target cluster

Yes

IP version [1]

Yes

Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator

Yes

Disconnected registry configuration [2]

Yes

Valid proxy configuration [3]

Yes

FIPS configuration

Yes

Dedicated partition on the primary disk for container storage that matches the size of the target clusters

Yes

Local volumes

  • StorageClass used in LocalVolume for LSO
  • LocalVolume for LSO
  • LVMCluster CR for LVMS

No

  1. Dual-stack networking is not supported in this release.
  2. If the seed cluster is installed in a disconnected environment, the target clusters must also be installed in a disconnected environment.
  3. The proxy configuration on the seed and target clusters does not have to match.

16.2.3.1. Seed image configuration using the RAN DU profile

The following table lists the components, resources, and configurations that you must and must not include in the seed image when using the RAN DU profile:

Table 16.3. Seed image configuration with RAN DU profile
Resource | Include in seed image

All extra manifests that are applied as part of Day 0 installation

Yes

All Day 2 Operator subscriptions

Yes

DisableOLMPprof.yaml

Yes

TunedPerformancePatch.yaml

Yes

PerformanceProfile.yaml

Yes

SriovOperatorConfig.yaml

Yes

DisableSnoNetworkDiag.yaml

Yes

StorageClass.yaml

No, if it is used in StorageLV.yaml

StorageLV.yaml

No

StorageLVMCluster.yaml

No

The following list of resources and configurations can be applied as extra manifests or by using RHACM policies:

  • ClusterLogForwarder.yaml
  • ReduceMonitoringFootprint.yaml
  • SriovFecClusterConfig.yaml
  • PtpOperatorConfigForEvent.yaml
  • DefaultCatsrc.yaml
  • PtpConfig.yaml
  • SriovNetwork.yaml
Important

If you are using GitOps ZTP, enable these resources by using RHACM policies to ensure configuration changes can be applied throughout the cluster lifecycle.

16.2.4. Generating a seed image with the Lifecycle Agent

Use the Lifecycle Agent to generate a seed image from a managed cluster. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks:

  • Stopping cluster Operators
  • Preparing the seed image configuration
  • Generating and pushing the seed image to the image repository specified in the SeedGenerator CR
  • Restoring cluster Operators
  • Expiring seed cluster certificates
  • Generating new certificates for the seed cluster
  • Restoring and updating the SeedGenerator CR on the seed cluster

Prerequisites

  • RHACM and multicluster engine for Kubernetes Operator are not installed on the seed cluster.
  • You have configured a shared container directory on the seed cluster.
  • You have installed the minimum version of the OADP Operator and the Lifecycle Agent on the seed cluster.
  • Ensure that persistent volumes are not configured on the seed cluster.
  • Ensure that the LocalVolume CR does not exist on the seed cluster if the Local Storage Operator is used.
  • Ensure that the LVMCluster CR does not exist on the seed cluster if LVM Storage is used.
  • Ensure that the DataProtectionApplication CR does not exist on the seed cluster if OADP is used. See the verification sketch after this list.
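
You can confirm that none of these storage and data protection resources exist before you proceed. The following is a minimal sketch; the fully qualified CRD names are assumptions based on the API groups of the Local Storage Operator, LVM Storage, and OADP, and a command returns an error if the corresponding CRD is not installed:

  $ oc get pv
  $ oc get localvolumes.local.storage.openshift.io -A
  $ oc get lvmclusters.lvm.topolvm.io -A
  $ oc get dataprotectionapplications.oadp.openshift.io -A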

Procedure

  1. Detach the managed cluster from the hub to delete any RHACM-specific resources from the seed cluster that must not be in the seed image:

    1. Manually detach the seed cluster by running the following command:

      $ oc delete managedcluster sno-worker-example
      1. Wait until the managed cluster is removed. After the cluster is removed, create the proper SeedGenerator CR. The Lifecycle Agent cleans up the RHACM artifacts.
    2. If you are using GitOps ZTP, detach your cluster by removing the seed cluster’s SiteConfig CR from the kustomization.yaml.

      1. If you have a kustomization.yaml file that references multiple SiteConfig CRs, remove your seed cluster’s SiteConfig CR from the kustomization.yaml:

        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        
        generators:
        #- example-seed-sno1.yaml
        - example-target-sno2.yaml
        - example-target-sno3.yaml
      2. If you have a kustomization.yaml that references one SiteConfig CR, remove your seed cluster’s SiteConfig CR from the kustomization.yaml and add the generators: {} line:

        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        
        generators: {}
      3. Commit the kustomization.yaml changes in your Git repository and push the changes to your repository.

        The ArgoCD pipeline detects the changes and removes the managed cluster.

  2. Create the Secret object so that you can push the seed image to your registry.

    1. Create the authentication file by running the following commands:

      $ MY_USER=myuserid
      $ AUTHFILE=/tmp/my-auth.json
      $ podman login --authfile ${AUTHFILE} -u ${MY_USER} quay.io/${MY_USER}
      $ base64 -w 0 ${AUTHFILE} ; echo
    2. Copy the output into the seedAuth field in the Secret YAML file named seedgen in the openshift-lifecycle-agent namespace:

      apiVersion: v1
      kind: Secret
      metadata:
        name: seedgen 1
        namespace: openshift-lifecycle-agent
      type: Opaque
      data:
        seedAuth: <encoded_AUTHFILE> 2
      1
      The Secret resource must have the name: seedgen and namespace: openshift-lifecycle-agent fields.
      2
      Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images.
    3. Apply the Secret by running the following command:

      $ oc apply -f secretseedgenerator.yaml
  3. Create the SeedGenerator CR:

    apiVersion: lca.openshift.io/v1
    kind: SeedGenerator
    metadata:
      name: seedimage 1
    spec:
      seedImage: <seed_container_image> 2
    1
    The SeedGenerator CR must be named seedimage.
    2
    Specify the container image URL, for example, quay.io/example/seed-container-image:<tag>. It is recommended to use the <seed_cluster_name>:<ocp_version> format.
  4. Generate the seed image by running the following command:

    $ oc apply -f seedgenerator.yaml
    Important

    The cluster reboots and loses API capabilities while the Lifecycle Agent generates the seed image. Applying the SeedGenerator CR stops the kubelet and the CRI-O operations, then it starts the image generation.

If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from.

Verification

  • After the cluster recovers and it is available, you can check the status of the SeedGenerator CR by running the following command:

    $ oc get seedgenerator -o yaml

    Example output

    status:
      conditions:
      - lastTransitionTime: "2024-02-13T21:24:26Z"
        message: Seed Generation completed
        observedGeneration: 1
        reason: Completed
        status: "False"
        type: SeedGenInProgress
      - lastTransitionTime: "2024-02-13T21:24:26Z"
        message: Seed Generation completed
        observedGeneration: 1
        reason: Completed
        status: "True"
        type: SeedGenCompleted 1
      observedGeneration: 1

    1
    The seed image generation is complete.

16.3. About image-based installation for single-node OpenShift

Use the openshift-install program to create a live installation ISO for preinstalling single-node OpenShift on bare-metal hosts. For more information about downloading the installation program, see "Installation process" in the "Additional resources" section.

The installation program takes a seed image URL and other inputs, such as the release version of the seed image and the disk to use for the installation process, and creates a live installation ISO. You can then start the host using the live installation ISO to begin preinstallation. When preinstallation is complete, the host is ready to ship to a remote site for the final site-specific configuration and deployment.

The following are the high-level steps to preinstall a single-node OpenShift cluster using an image-based installation:

  • Generate a seed image.
  • Create a live installation ISO using the openshift-install installation program.
  • Boot the host using the live installation ISO to preinstall the host.

Additional resources

16.3.1. Creating a live installation ISO for a single-node OpenShift image-based installation

You can embed your single-node OpenShift seed image URL, and other installation artifacts, in a live installation ISO by using the openshift-install program.

Note

For more information about the specification for the image-based-installation-config.yaml manifest, see the section "Reference specifications for the image-based-installation-config.yaml manifest".

Prerequisites

  • You generated a seed image from a single-node OpenShift seed cluster.
  • You downloaded the latest version of the openshift-install program.
  • The target host has network access to the seed image URL and all other installation artifacts.
  • If you require static networking, you must install the nmstatectl library on the host that creates the live installation ISO. See the example after this list.
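
For example, on a RHEL or Fedora host you might install the library with dnf. The package name nmstate, which provides /usr/bin/nmstatectl, is an assumption about your host operating system:

  $ sudo dnf install -y nmstate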

Procedure

  1. Create a live installation ISO and embed your single-node OpenShift seed image URL and other installation artifacts:

    1. Create a working directory by running the following command:

      $ mkdir ibi-iso-workdir 1
      1
      Replace ibi-iso-workdir with the name of your working directory.
    2. Optional. Create an installation configuration template to use as a reference when configuring the ImageBasedInstallationConfig resource:

      $ openshift-install image-based create image-config-template --dir ibi-iso-workdir 1
      1
      If you do not specify a working directory, the command uses the current directory.

      Example output

      INFO Image-Config-Template created in: ibi-iso-workdir

      The command creates the image-based-installation-config.yaml installation configuration template in your target directory:

      #
      # Note: This is a sample ImageBasedInstallationConfig file showing
      # which fields are available to aid you in creating your
      # own image-based-installation-config.yaml file.
      #
      apiVersion: v1beta1
      kind: ImageBasedInstallationConfig
      metadata:
        name: example-image-based-installation-config
      # The following fields are required
      seedImage: quay.io/openshift-kni/seed-image:4.17.0
      seedVersion: 4.17.0
      installationDisk: /dev/vda
      pullSecret: '<your_pull_secret>'
      # networkConfig is optional and contains the network configuration for the host in NMState format.
      # See https://nmstate.io/examples.html for examples.
      # networkConfig:
      #   interfaces:
      #     - name: eth0
      #       type: ethernet
      #       state: up
      #       mac-address: 00:00:00:00:00:00
      #       ipv4:
      #         enabled: true
      #         address:
      #           - ip: 192.168.122.2
      #             prefix-length: 23
      #         dhcp: false
    3. Edit your installation configuration file:

      Example image-based-installation-config.yaml file

      apiVersion: v1beta1
      kind: ImageBasedInstallationConfig
      metadata:
        name: example-image-based-installation-config
      seedImage: quay.io/repo-id/seed:latest
      seedVersion: "4.17.0"
      extraPartitionStart: "-240G"
      installationDisk: /dev/disk/by-id/wwn-0x62c...
      sshKey: 'ssh-ed25519 AAAA...'
      pullSecret: '{"auths": ...}'
      networkConfig:
          interfaces:
            - name: ens1f0
              type: ethernet
              state: up
              ipv4:
                enabled: true
                dhcp: false
                auto-dns: false
                address:
                  - ip: 192.168.200.25
                    prefix-length: 24
              ipv6:
                enabled: false
          dns-resolver:
            config:
              server:
                - 192.168.15.47
                - 192.168.15.48
          routes:
            config:
            - destination: 0.0.0.0/0
              metric: 150
              next-hop-address: 192.168.200.254
              next-hop-interface: ens1f0

    4. Create the live installation ISO by running the following command:

      $ openshift-install image-based create image --dir ibi-iso-workdir

      Example output

      INFO Consuming Image-based Installation ISO Config from target directory
      INFO Creating Image-based Installation ISO with embedded ignition

Verification

  • View the output in the working directory:

    ibi-iso-workdir/
      └── rhcos-ibi.iso

16.3.2. Provisioning the live installation ISO to a host

Using your preferred method, boot the target bare-metal host from the rhcos-ibi.iso live installation ISO to preinstall single-node OpenShift.
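
For example, before provisioning physical hardware, you might smoke-test the live installation ISO in a virtual machine by using virt-install. This is a hedged sketch only; the VM name, sizing, disk size, and libvirt network are assumptions and do not reflect a supported bare-metal workflow:

  $ virt-install \
      --name ibi-preinstall-test \
      --memory 16384 \
      --vcpus 8 \
      --disk size=120 \
      --cdrom ./ibi-iso-workdir/rhcos-ibi.iso \
      --os-variant generic \
      --network network=default \
      --wait -1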

Verification

  1. Log in to the target host.
  2. View the system logs by running the following command:

    $ journalctl -b

    Example output

    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="All the precaching threads have finished."
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Total Images: 125"
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Images Pulled Successfully: 125"
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Images Failed to Pull: 0"
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Completed executing pre-caching"
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Pre-cached images successfully."
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13 17:01:44" level=info msg="Skipping shutdown"
    Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13 17:01:44" level=info msg="IBI preparation process finished successfully!"
    Aug 13 17:01:44 10.46.26.129 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
    Aug 13 17:01:44 10.46.26.129 systemd[1]: Finished SNO Image-based Installation.
    Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Multi-User System.
    Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Graphical Interface.

16.3.3. Reference specifications for the image-based-installation-config.yaml manifest

The following content describes the specifications for the image-based-installation-config.yaml manifest.

The openshift-install program uses the image-based-installation-config.yaml manifest to create a live installation ISO for image-based installations of single-node OpenShift.

Table 16.4. Required specifications
Specification | Type | Description

seedImage

string

Specifies the seed image to use in the ISO generation process.

seedVersion

string

Specifies the OpenShift Container Platform release version of the seed image. The release version in the seed image must match the release version that you specify in the seedVersion field.

installationDisk

string

Specifies the disk that will be used for the installation process.

Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use a persistent disk attribute, such as the disk World Wide Name (WWN), for example: /dev/disk/by-id/wwn-<disk-id>.

pullSecret

string

Specifies the pull secret to use during the precache process. The pull secret contains authentication credentials for pulling the release payload images from the container registry.

If the seed image requires a separate private registry authentication, add the authentication details to the pull secret.
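
For example, a pull secret that authenticates to both the release image registry and a private registry hosting the seed image might look like the following; the registry host names are examples only:

  {
    "auths": {
      "quay.io": {
        "auth": "<base64_credentials>"
      },
      "registry.example.com:5000": {
        "auth": "<base64_credentials>"
      }
    }
  }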

Table 16.5. Optional specifications
Specification | Type | Description

shutdown

boolean

Specifies if the host shuts down after the installation process completes. The default value is false.

extraPartitionStart

string

Specifies the start of the extra partition used for /var/lib/containers. The default value is -40G, which means that the partition starts 40 GB before the end of the disk and is exactly 40 GB in size. If you specify a positive value, the partition starts at that position on the disk and extends to the end of the disk.

extraPartitionLabel

string

The label of the extra partition you use for /var/lib/containers. The default label is varlibcontainers.

extraPartitionNumber

unsigned integer

The number of the extra partition you use for /var/lib/containers. The default number is 5.

skipDiskCleanup

boolean

The installation process formats the disk on the host. Set this specification to true to skip this step. The default is false.

networkConfig

string

Specifies networking configurations for the host, for example:

networkConfig:
    interfaces:
      - name: ens1f0
        type: ethernet
        state: up
        ...

If you require static networking, you must install the nmstatectl library on the host that creates the live installation ISO. For further information about defining network configurations by using nmstate, see nmstate.io.

Important

The name of the interface must match the actual NIC name as shown in the operating system.

proxy

string

Specifies proxy settings to use during the installation ISO generation, for example:

proxy:
  httpProxy: "http://proxy.example.com:8080"
  httpsProxy: "http://proxy.example.com:8080"
  noProxy: "no_proxy.example.com"

imageDigestSources

string

Specifies the sources or repositories for the release-image content, for example:

imageDigestSources:
  - mirrors:
      - "registry.example.com:5000/ocp4/openshift4"
    source: "quay.io/openshift-release-dev/ocp-release"

additionalTrustBundle

string

Specifies the PEM-encoded X.509 certificate bundle. The installation program adds this to the /etc/pki/ca-trust/source/anchors/ directory in the installation ISO.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  MTICLDCCAdKgAwfBAgIBAGAKBggqhkjOPQRDAjB9MQswCQYRVEQGE
  ...
  l2wOuDwKQa+upc4GftXE7C//4mKBNBC6Ty01gUaTIpo=
  -----END CERTIFICATE-----

sshKey

string

Specifies the SSH key to authenticate access to the host.

ignitionConfigOverride

string

Specifies a JSON string containing the user overrides for the Ignition config. The configuration merges with the Ignition config file generated by the installation program. This feature requires Ignition version 3.2 or later.

16.4. Deploying single-node OpenShift clusters

16.4.1. About image-based deployments for managed single-node OpenShift

When a host preinstalled with single-node OpenShift using an image-based installation arrives at a remote site, a technician can easily reconfigure and deploy the host in a matter of minutes.

For clusters with a hub-and-spoke architecture, to complete the deployment of a preinstalled host, you must first define site-specific configuration resources on the hub cluster for each host. These resources contain configuration information such as the properties of the bare-metal host, authentication details, and other deployment and networking information.

The Image Based Install (IBI) Operator creates a configuration ISO from these resources, and then boots the host with the configuration ISO attached. The host mounts the configuration ISO and runs the reconfiguration process. When the reconfiguration completes, the single-node OpenShift cluster is ready.

Note

You must create distinct configuration resources for each bare-metal host.

See the following high-level steps to deploy a preinstalled host in a cluster with a hub-and-spoke architecture:

  1. Install the IBI Operator on the hub cluster.
  2. Create site-specific configuration resources in the hub cluster for each host.
  3. The IBI Operator creates a configuration ISO from these resources and boots the target host with the configuration ISO attached.
  4. The host mounts the configuration ISO and runs the reconfiguration process. When the reconfiguration completes, the single-node OpenShift cluster is ready.
Note

Alternatively, you can manually deploy a preinstalled host for a cluster without using a hub cluster. You must define an ImageBasedConfig resource and an installation manifest, and provide these as inputs to the openshift-install installation program. For more information, see "Deploying a single-node OpenShift cluster using the openshift-install program".

16.4.1.1. Installing the Image Based Install Operator

The Image Based Install (IBI) Operator is part of the image-based deployment workflow for preinstalled single-node OpenShift on bare-metal hosts.

Note

The IBI Operator is part of the multicluster engine for Kubernetes Operator from MCE version 2.7.

Prerequisites

  • You logged in as a user with cluster-admin privileges.
  • You deployed a Red Hat Advanced Cluster Management (RHACM) hub cluster or you deployed the multicluster engine for Kubernetes Operator.
  • You reviewed the required versions of software components in the section "Software prerequisites for an image-based installation and deployment".

Procedure

  • Set the enabled specification to true for the image-based-install-operator component in the MultiClusterEngine resource by running the following command:

    $ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type json \
    --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"image-based-install-operator","enabled": true}}]'

Verification

  • Check that the Image Based Install Operator pod is running by running the following command:

    $ oc get pods -A | grep image-based

    Example output

    multicluster-engine             image-based-install-operator-57fb8sc423-bxdj8             2/2     Running     0               5m

16.4.1.2. Deploying a managed single-node OpenShift cluster using the IBI Operator

Create the site-specific configuration resources in the hub cluster to initiate the image-based deployment of a preinstalled host.

When you create these configuration resources in the hub cluster, the Image Based Install (IBI) Operator generates a configuration ISO and attaches it to the target host to begin the site-specific configuration process. When the configuration process completes, the single-node OpenShift cluster is ready.

Note

For more information about the configuration resources that you must configure in the hub cluster, see "Cluster configuration resources for deploying a preinstalled host".

Prerequisites

  • You preinstalled a host with single-node OpenShift using an image-based installation.
  • You logged in as a user with cluster-admin privileges.
  • You deployed a Red Hat Advanced Cluster Management (RHACM) hub cluster or you deployed the multicluster engine for Kubernetes operator (MCE).
  • You installed the IBI Operator on the hub cluster.
  • You created a pull secret to authenticate pull requests. For more information, see "Using image pull secrets".

Procedure

  1. Create the ibi-ns namespace by running the following command:

    $ oc create namespace ibi-ns
  2. Create the Secret resource for your image registry:

    1. Create a YAML file that defines the Secret resource for your image registry:

      Example secret-image-registry.yaml file

      apiVersion: v1
      kind: Secret
      metadata:
        name: ibi-image-pull-secret
        namespace: ibi-ns
      stringData:
        .dockerconfigjson: <base64-docker-auth-code> 1
      type: kubernetes.io/dockerconfigjson

      1
      You must provide base64-encoded credential details. See the "Additional resources" section for more information about using image pull secrets.
    2. Create the Secret resource for your image registry by running the following command:

      $ oc create -f secret-image-registry.yaml
  3. Optional: Configure static networking for the host:

    1. Create a Secret resource containing the static network configuration in nmstate format:

      Example host-network-config-secret.yaml file

      apiVersion: v1
      kind: Secret
      metadata:
       name: host-network-config-secret 1
       namespace: ibi-ns
      type: Opaque
      stringData:
       nmstate: | 2
        interfaces:
          - name: ens1f0 3
            type: ethernet
            state: up
            ipv4:
              enabled: true
              address:
                - ip: 192.168.200.25
                  prefix-length: 24
              dhcp: false 4
            ipv6:
              enabled: false
        dns-resolver:
          config:
            server:
              - 192.168.15.47 5
              - 192.168.15.48
        routes:
          config: 6
            - destination: 0.0.0.0/0
              metric: 150
              next-hop-address: 192.168.200.254
              next-hop-interface: ens1f0
              table-id: 254

      1
      Specify the name for the Secret resource.
      2
      Define the static network configuration in nmstate format.
      3
      Specify the name of the interface on the host. The name of the interface must match the actual NIC name as shown in the operating system. To use your MAC address for NIC matching, set the identifier field to mac-address.
      4
      You must specify dhcp: false to ensure nmstate assigns the static IP address to the interface.
      5
      Specify one or more DNS servers that the system will use to resolve domain names.
      6
      In this example, the default route is configured through the ens1f0 interface to the next hop IP address 192.168.200.254.
  4. Create the BareMetalHost and Secret resources:

    1. Create a YAML file that defines the BareMetalHost and Secret resources:

      Example ibi-bmh.yaml file

      apiVersion: metal3.io/v1alpha1
      kind: BareMetalHost
      metadata:
        name: ibi-bmh 1
        namespace: ibi-ns
      spec:
        online: false 2
        bootMACAddress: 00:a5:12:55:62:64 3
        bmc:
          address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/8a5babac-94d0-4c20-b282-50dc3a0a32b5 4
          credentialsName: ibi-bmh-bmc-secret 5
        preprovisioningNetworkDataName: host-network-config-secret 6
        automatedCleaningMode: disabled 7
        externallyProvisioned: true 8
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: ibi-bmh-secret 9
        namespace: ibi-ns
      type: Opaque
      data:
        username: <user_name> 10
        password: <password> 11

      1
      Specify the name for the BareMetalHost resource.
      2
      Specify if the host should be online.
      3
      Specify the host boot MAC address.
      4
      Specify the BMC address. You can only use bare-metal host drivers that support virtual media network booting, for example redfish-virtualmedia and idrac-virtualmedia.
      5
      Specify the name of the bare-metal host Secret resource.
      6
      Optional: If you require static network configuration for the host, specify the name of the Secret resource containing the configuration.
      7
      You must specify automatedCleaningMode: disabled to prevent the provisioning service from deleting all preinstallation artifacts, such as the seed image, during disk inspection.
      8
      You must specify externallyProvisioned: true to enable the host to boot from the preinstalled disk, instead of the configuration ISO.
      9
      Specify the name for the Secret resource.
      10
      Specify the username.
      11
      Specify the password.
    2. Create the BareMetalHost and Secret resources by running the following command:

      $ oc create -f ibi-bmh.yaml
  5. Create the ClusterImageSet resource:

    1. Create a YAML file that defines the ClusterImageSet resource:

      Example ibi-cluster-image-set.yaml file

      apiVersion: hive.openshift.io/v1
      kind: ClusterImageSet
      metadata:
        name: ibi-img-version-arch 1
      spec:
        releaseImage: ibi.example.com:path/to/release/images:version-arch 2

      1
      Specify the name for the ClusterImageSet resource.
      2
      Specify the address for the release image to use for the deployment. If you use a different image registry compared to the image registry used during seed image generation, ensure that the OpenShift Container Platform version for the release image remains the same.
    2. Create the ClusterImageSet resource by running the following command:

      $ oc apply -f ibi-cluster-image-set.yaml
  6. Create the ImageClusterInstall resource:

    1. Create a YAML file that defines the ImageClusterInstall resource:

      Example ibi-image-cluster-install.yaml file

      apiVersion: extensions.hive.openshift.io/v1alpha1
      kind: ImageClusterInstall
      metadata:
        name: ibi-image-install 1
        namespace: ibi-ns
      spec:
        bareMetalHostRef:
          name: ibi-bmh 2
          namespace: ibi-ns
        clusterDeploymentRef:
          name: ibi-cluster-deployment 3
        hostname: ibi-host 4
        imageSetRef:
          name: ibi-img-version-arch 5
        machineNetwork: 10.0.0.0/24 6
        proxy: 7
          httpProxy: "http://proxy.example.com:8080"
          #httpsProxy: "http://proxy.example.com:8080"
          #noProxy: "no_proxy.example.com"

      1
      Specify the name for the ImageClusterInstall resource.
      2
      Specify the BareMetalHost resource that you want to target for the image-based installation.
      3
      Specify the name of the ClusterDeployment resource that you want to use for the image-based installation of the target host.
      4
      Specify the hostname for the cluster.
      5
      Specify the name of the ClusterImageSet resource you used to define the container release images to use for deployment.
      6
      Specify the public CIDR (Classless Inter-Domain Routing) of the external network.
      7
      Optional: Specify a proxy to use for the cluster deployment.
      Important

      If your cluster deployment requires a proxy configuration, you must do the following:

      • Create a seed image from a seed cluster featuring a proxy configuration. The proxy configurations do not have to match.
      • Configure the machineNetwork field in your installation manifest.
    2. Create the ImageClusterInstall resource by running the following command:

      $ oc create -f ibi-image-cluster-install.yaml
  7. Create the ClusterDeployment resource:

    1. Create a YAML file that defines the ClusterDeployment resource:

      Example ibi-cluster-deployment.yaml file

      apiVersion: hive.openshift.io/v1
      kind: ClusterDeployment
      metadata:
        name: ibi-cluster-deployment 1
        namespace: ibi-ns 2
      spec:
        baseDomain: example.com 3
        clusterInstallRef:
          group: extensions.hive.openshift.io
          kind: ImageClusterInstall
          name: ibi-image-install 4
          version: v1alpha1
        clusterName: ibi-cluster 5
        platform:
          none: {}
        pullSecretRef:
          name: ibi-image-pull-secret 6

      1
      Specify the name for the ClusterDeployment resource.
      2
      Specify the namespace for the ClusterDeployment resource.
      3
      Specify the base domain that the cluster should belong to.
      4
      Specify the name of the ImageClusterInstall resource in which you defined the container images to use for the image-based installation of the target host.
      5
      Specify a name for the cluster.
      6
      Specify the secret to use for pulling images from your image registry.
    2. Create the ClusterDeployment resource by running the following command:

      $ oc apply -f ibi-cluster-deployment.yaml
  8. Create the ManagedCluster resource:

    1. Create a YAML file that defines the ManagedCluster resource:

      Example ibi-managed.yaml file

      apiVersion: cluster.open-cluster-management.io/v1
      kind: ManagedCluster
      metadata:
        name: sno-ibi 1
      spec:
        hubAcceptsClient: true 2

      1
      Specify the name for the ManagedCluster resource.
      2
      Specify true to enable RHACM to manage the cluster.
    2. Create the ManagedCluster resource by running the following command:

      $ oc apply -f ibi-managed.yaml

Verification

  • Check the status of the ImageClusterInstall in the hub cluster to monitor the progress of the target host installation by running the following command:

    $ oc get imageclusterinstall

    Example output

    NAME       REQUIREMENTSMET           COMPLETED                     BAREMETALHOSTREF
    target-0   HostValidationSucceeded   ClusterInstallationSucceeded  ibi-bmh

    Warning

    If the ImageClusterInstall resource is deleted, the IBI Operator reattaches the BareMetalHost resource and reboots the machine.

16.4.1.2.1. Cluster configuration resources for deploying a preinstalled host

To complete a deployment for a preinstalled host at a remote site, you must configure the following site-specific cluster configuration resources in the hub cluster for each bare-metal host.

Table 16.6. Cluster configuration resources reference
Resource | Description

Namespace

Namespace for the managed single-node OpenShift cluster.

BareMetalHost

Describes the physical host and its properties, such as the provisioning and hardware configuration.

Secret for the bare-metal host

Credentials for the host BMC.

Secret for the bare-metal host static network configuration

Optional: Describes static network configuration for the target host.

Secret for the image registry

Credentials for the image registry. The secret for the image registry must be of type kubernetes.io/dockerconfigjson.

ImageClusterInstall

References the bare-metal host, deployment, and image set resources.

ClusterImageSet

Describes the release images to use for the cluster.

ClusterDeployment

Describes networking, authentication, and platform-specific settings.

ManagedCluster

Describes cluster details to enable Red Hat Advanced Cluster Management (RHACM) to register and manage the cluster.

ConfigMap

Optional: Describes additional configurations for the cluster deployment, such as adding a bundle of trusted certificates for the host to ensure trusted communications for cluster services.

16.4.1.2.2. ImageClusterInstall resource API specifications

The following content describes the API specifications for the ImageClusterInstall resource. This resource is the endpoint for the Image Based Install Operator.

Table 16.7. Required specifications
Specification | Type | Description

imageSetRef

string

Specify the name of the ClusterImageSet resource that defines the release images for the deployment.

hostname

string

Specify the hostname for the cluster.

sshKey

string

Specify your SSH key to provide SSH access to the target host.

Table 16.8. Optional specifications
Specification | Type | Description

clusterDeploymentRef

string

Specify the name of the ClusterDeployment resource that you want to use for the image-based installation of the target host.

clusterMetadata

string

After the deployment completes, this specification is automatically populated with metadata information about the cluster, including the cluster-admin kubeconfig credentials for logging in to the cluster. An example of retrieving these credentials follows this table.

imageDigestSources

string

Specifies the sources or repositories for the release-image content, for example:

imageDigestSources:
  - mirrors:
      - "registry.example.com:5000/ocp4/openshift4"
    source: "quay.io/openshift-release-dev/ocp-release"

extraManifestsRefs

string

Specify a ConfigMap resource containing additional manifests to be applied to the target cluster.

bareMetalHostRef

string

Specify the BareMetalHost resource to use for the cluster deployment.

machineNetwork

string

Specify the public CIDR (Classless Inter-Domain Routing) of the external network.

proxy

string

Specifies proxy settings for the cluster, for example:

proxy:
  httpProxy: "http://proxy.example.com:8080"
  httpsProxy: "http://proxy.example.com:8080"
  noProxy: "no_proxy.example.com"

caBundleRef

string

Specify a ConfigMap resource containing the new bundle of trusted certificates for the host.
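
For example, after the deployment completes, you might retrieve the generated cluster-admin kubeconfig from the secret referenced by clusterMetadata. This is a hedged sketch; the adminKubeconfigSecretRef field name and the kubeconfig key are assumptions based on the Hive ClusterMetadata type, and the resource and namespace names reuse the earlier examples:

  $ oc get imageclusterinstall ibi-image-install -n ibi-ns \
      -o jsonpath='{.spec.clusterMetadata.adminKubeconfigSecretRef.name}'
  $ oc extract secret/<admin_kubeconfig_secret> -n ibi-ns --keys=kubeconfig --to=.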

16.4.1.3. ConfigMap resources for extra manifests

You can optionally create a ConfigMap resource to define additional manifests in an image-based deployment for managed single-node OpenShift clusters.

After you create the ConfigMap resource, reference it in the ImageClusterInstall resource. During deployment, the IBI Operator includes the extra manifests in the deployment.

16.4.1.3.1. Creating a ConfigMap resource to add extra manifests in an image-based deployment

You can use a ConfigMap resource to add extra manifests to the image-based deployment for single-node OpenShift clusters.

The following example adds a single-root I/O virtualization (SR-IOV) network to the deployment.

Prerequisites

  • You preinstalled a host with single-node OpenShift using an image-based installation.
  • You logged in as a user with cluster-admin privileges.

Procedure

  1. Create the SriovNetworkNodePolicy and SriovNetwork resources:

    1. Create a YAML file that defines the resources:

      Example sriov-extra-manifest.yaml file

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
        name: "example-sriov-node-policy"
        namespace: openshift-sriov-network-operator
      spec:
        deviceType: vfio-pci
        isRdma: false
        nicSelector:
          pfNames: [ens1f0]
        nodeSelector:
          node-role.kubernetes.io/master: ""
        mtu: 1500
        numVfs: 8
        priority: 99
        resourceName: example-sriov-node-policy
      ---
      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetwork
      metadata:
        name: "example-sriov-network"
        namespace: openshift-sriov-network-operator
      spec:
        ipam: |-
          {
          }
        linkState: auto
        networkNamespace: sriov-namespace
        resourceName: example-sriov-node-policy
        spoofChk: "on"
        trust: "off"

    2. Create the ConfigMap resource by running the following command:

      $ oc create configmap sr-iov-extra-manifest --from-file=sriov-extra-manifest.yaml -n ibi-ns 1
      1
      Specify the namespace that has the ImageClusterInstall resource.

      Example output

      configmap/sr-iov-extra-manifest created

  2. Reference the ConfigMap resource in the spec.extraManifestsRefs field of the ImageClusterInstall resource:

    #...
      spec:
        extraManifestsRefs:
        - name: sr-iov-extra-manifest
    #...
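
If the ImageClusterInstall resource already exists, one way to set this reference is with a merge patch. The resource and namespace names below reuse the earlier examples and are assumptions about your environment:

  $ oc patch imageclusterinstall ibi-image-install -n ibi-ns --type merge \
      --patch '{"spec":{"extraManifestsRefs":[{"name":"sr-iov-extra-manifest"}]}}'
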
16.4.1.3.2. Creating a ConfigMap resource to add a CA bundle in an image-based deployment

You can use a ConfigMap resource to add a certificate authority (CA) bundle to the host to ensure trusted communications for cluster services.

After you create the ConfigMap resource, reference it in the spec.caBundleRef field of the ImageClusterInstall resource.

Prerequisites

  • You preinstalled a host with single-node OpenShift using an image-based installation.
  • You logged in as a user with cluster-admin privileges.

Procedure

  1. Create a CA bundle file such as the following file:

    Example example-ca.crt

    -----BEGIN CERTIFICATE-----
    MIIDXTCCAkWgAwIBAgIJAKmjYKJbIyz3MA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
    ...Custom CA certificate bundle...
    4WPl0Qb27Sb1xZyAsy1ww6MYb98EovazUSfjYr2EVF6ThcAPu4/sMxUV7He2J6Jd
    cA8SMRwpUbz3LXY=
    -----END CERTIFICATE-----

  2. Create the ConfigMap object by running the following command:

    $ oc create configmap custom-ca --from-file=example-ca.crt -n ibi-ns 1
    1
    Specify the namespace that has the ImageClusterInstall resource.

    Example output

    configmap/custom-ca created

  3. Reference the ConfigMap resource in the spec.caBundleRef field of the ImageClusterInstall resource:

    #...
      spec:
        caBundleRef:
          name: custom-ca
    #...
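
    Optionally, you can confirm that the ConfigMap holds the expected certificate data before the deployment consumes it. The following check is a hedged example; the data key matches the file name that you used when creating the ConfigMap:

    $ oc get configmap custom-ca -n ibi-ns -o jsonpath='{.data.example-ca\.crt}' | head -n 1
    -----BEGIN CERTIFICATE-----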

16.4.2. About image-based deployments for single-node OpenShift

You can manually generate a configuration ISO by using the openshift-install program. Attach the configuration ISO to your preinstalled target host to complete the deployment.

16.4.2.1. Deploying a single-node OpenShift cluster using the openshift-install program

You can use the openshift-install program to configure and deploy a host that you preinstalled with an image-based installation. To configure the target host with site-specific details, you must create the following resources:

  • The install-config.yaml installation manifest
  • The image-based-config.yaml manifest

The openshift-install program uses these resources to generate a configuration ISO that you attach to the preinstalled target host to complete the deployment.

Note

For more information about the specifications for the image-based-config.yaml manifest, see "Reference specifications for the image-based-config.yaml manifest".

Prerequisites

  • You preinstalled a host with single-node OpenShift using an image-based installation.
  • You downloaded the latest version of the openshift-install program.
  • You created a pull secret to authenticate pull requests. For more information, see "Using image pull secrets".

Procedure

  1. Create a working directory by running the following command:

    $ mkdir ibi-config-iso-workdir 1
    1
    Replace ibi-config-iso-workdir with the name of your working directory.
  2. Create the installation manifest:

    1. Create a YAML file that defines the install-config manifest:

      Example install-config.yaml file

      apiVersion: v1
      metadata:
        name: sno-cluster-name
      baseDomain: host.example.com
      compute:
        - architecture: amd64
          hyperthreading: Enabled
          name: worker
          replicas: 0
      controlPlane:
        architecture: amd64
        hyperthreading: Enabled
        name: master
        replicas: 1
      networking:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        machineNetwork:
        - cidr: 192.168.200.0/24
        networkType: OVNKubernetes
        serviceNetwork:
        - 172.30.0.0/16
      platform:
        none: {}
      fips: false
      cpuPartitioningMode: "AllNodes"
      pullSecret: '{"auths":{"<your_pull_secret>"}}'
      sshKey: 'ssh-rsa <your_ssh_pub_key>'

      Important

      If your cluster deployment requires a proxy configuration, you must do the following:

      • Create a seed image from a seed cluster that has a proxy configuration; the proxy configurations do not have to match. A minimal sketch of the proxy stanza for the install-config.yaml file follows this note.
      • Configure the machineNetwork field in your installation manifest.
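
      The following hedged sketch shows where the proxy settings go in the install-config.yaml file; the proxy URL and noProxy values are placeholders that you must replace with your own values:

      proxy:
        httpProxy: http://<proxy_host>:<proxy_port>
        httpsProxy: http://<proxy_host>:<proxy_port>
        noProxy: example.com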
    2. Save the file in your working directory.
  3. Optional. Create a configuration template in your working directory by running the following command:

    $ openshift-install image-based create config-template --dir ibi-config-iso-workdir/

    Example output

    INFO Config-Template created in: ibi-config-iso-workdir

    The command creates the image-based-config.yaml configuration template in your working directory:

    #
    # Note: This is a sample ImageBasedConfig file showing
    # which fields are available to aid you in creating your
    # own image-based-config.yaml file.
    #
    apiVersion: v1beta1
    kind: ImageBasedConfig
    metadata:
      name: example-image-based-config
    additionalNTPSources:
      - 0.rhel.pool.ntp.org
      - 1.rhel.pool.ntp.org
    hostname: change-to-hostname
    releaseRegistry: quay.io
    # networkConfig contains the network configuration for the host in NMState format.
    # See https://nmstate.io/examples.html for examples.
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mac-address: 00:00:00:00:00:00
          ipv4:
            enabled: true
            address:
              - ip: 192.168.122.2
                prefix-length: 23
            dhcp: false
  4. Edit your configuration file:

    Example image-based-config.yaml file

    #
    # Note: This is a sample ImageBasedConfig file showing
    # which fields are available to aid you in creating your
    # own image-based-config.yaml file.
    #
    apiVersion: v1beta1
    kind: ImageBasedConfig
    metadata:
      name: sno-cluster-name
    additionalNTPSources:
      - 0.rhel.pool.ntp.org
      - 1.rhel.pool.ntp.org
    hostname: host.example.com
    releaseRegistry: quay.io
    # networkConfig contains the network configuration for the host in NMState format.
    # See https://nmstate.io/examples.html for examples.
    networkConfig:
        interfaces:
          - name: ens1f0
            type: ethernet
            state: up
            ipv4:
              enabled: true
              dhcp: false
              auto-dns: false
              address:
                - ip: 192.168.200.25
                  prefix-length: 24
            ipv6:
              enabled: false
        dns-resolver:
          config:
            server:
              - 192.168.15.47
              - 192.168.15.48
        routes:
          config:
          - destination: 0.0.0.0/0
            metric: 150
            next-hop-address: 192.168.200.254
            next-hop-interface: ens1f0
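
    At this point, the working directory must contain at least the install-config.yaml and image-based-config.yaml files, because the next command consumes both from the target directory. You can optionally confirm this, for example:

    $ ls ibi-config-iso-workdir/
    image-based-config.yaml  install-config.yaml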

  5. Create the configuration ISO in your working directory by running the following command:

    $ openshift-install image-based create config-image --dir ibi-config-iso-workdir/

    Example output

    INFO Adding NMConnection file <ens1f0.nmconnection>
    INFO Consuming Install Config from target directory
    INFO Consuming Image-based Config ISO configuration from target directory
    INFO Config-Image created in: ibi-config-iso-workdir/auth

    View the output in the working directory:

    Example output

    ibi-config-iso-workdir/
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    └── imagebasedconfig.iso

  6. Attach the imagebasedconfig.iso to the preinstalled host using your preferred method and restart the host to complete the configuration process and deploy the cluster.
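
    For example, in a virtual lab where the preinstalled host is a libvirt guest, you might attach the ISO and reboot with virsh. This is a hedged sketch only; <vm_name> and the sda device target are hypothetical and depend on your guest definition:

    $ virsh change-media <vm_name> sda ibi-config-iso-workdir/imagebasedconfig.iso --insert
    $ virsh reboot <vm_name>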

Verification

When the configuration process completes on the host, access the cluster to verify its status.

  1. Export the kubeconfig environment variable to your kubeconfig file by running the following command:

    $ export KUBECONFIG=ibi-config-iso-workdir/auth/kubeconfig
  2. Verify that the cluster is responding by running the following command:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES                  AGE     VERSION
    node/sno-cluster-name.host.example.com       Ready    control-plane,master   5h15m   v1.30.3
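
  3. Optional: Confirm that the cluster version and cluster Operators report a healthy state by running the following commands:

    $ oc get clusterversion
    $ oc get clusteroperators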

16.4.2.1.1. Reference specifications for the image-based-config.yaml manifest

The following content describes the specifications for the image-based-config.yaml manifest.

The openshift-install program uses the image-based-config.yaml manifest to create a site-specific configuration ISO for image-based deployments of single-node OpenShift.

Table 16.9. Required specifications
Specification | Type | Description

hostname

string

Defines the name of the node for the single-node OpenShift cluster.

Table 16.10. Optional specifications
Specification | Type | Description

networkConfig

string

Specifies networking configurations for the host, for example:

networkConfig:
    interfaces:
      - name: ens1f0
        type: ethernet
        state: up
        ...

If you require static networking, you must install the nmstatectl tool on the host that creates the live installation ISO. For further information about defining network configurations by using nmstate, see nmstate.io.

Important

The name of the interface must match the actual NIC name as shown in the operating system.

additionalNTPSources

string

Specifies a list of NTP sources for all cluster hosts. These NTP sources are added to any existing NTP sources in the cluster. You can use the hostname or IP address for the NTP source.

releaseRegistry

string

Specifies the container image registry that you used for the release image of the seed cluster.

16.4.2.2. Configuring resources for extra manifests

You can optionally define additional resources in an image-based deployment for single-node OpenShift clusters.

Create the additional resources in an extra-manifests folder in the same working directory that has the install-config.yaml and image-based-config.yaml manifests.

16.4.2.2.1. Creating a resource in the extra-manifests folder

You can create a resource in the extra-manifests folder of your working directory to add extra manifests to the image-based deployment for single-node OpenShift clusters.

The following example adds a single-root I/O virtualization (SR-IOV) network to the deployment.

Prerequisites

  • You created a working directory with the install-config.yaml and image-based-config.yaml manifests.

Procedure

  1. Go to your working directory and create the extra-manifests folder by running the following command:

    $ mkdir extra-manifests
  2. Create the SriovNetworkNodePolicy and SriovNetwork resources in the extra-manifests folder:

    1. Create a YAML file that defines the resources:

      Example sriov-extra-manifest.yaml file

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
        name: "example-sriov-node-policy"
        namespace: openshift-sriov-network-operator
      spec:
        deviceType: vfio-pci
        isRdma: false
        nicSelector:
          pfNames: [ens1f0]
        nodeSelector:
          node-role.kubernetes.io/master: ""
        mtu: 1500
        numVfs: 8
        priority: 99
        resourceName: example-sriov-node-policy
      ---
      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetwork
      metadata:
        name: "example-sriov-network"
        namespace: openshift-sriov-network-operator
      spec:
        ipam: |-
          {
          }
        linkState: auto
        networkNamespace: sriov-namespace
        resourceName: example-sriov-node-policy
        spoofChk: "on"
        trust: "off"

Verification

  • When you create the configuration ISO, you can view the reference to the extra manifests in the .openshift_install_state.json file in your working directory:

     "*configimage.ExtraManifests": {
            "FileList": [
                {
                    "Filename": "extra-manifests/sriov-extra-manifest.yaml",
                    "Data": "YXBFDFFD..."
                }
            ]
        }
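
    If you have the jq tool available, you can list the captured extra manifests directly from this file. This is an optional, hedged check that reads only the keys shown in the preceding output:

    $ jq '."*configimage.ExtraManifests".FileList[].Filename' ibi-config-iso-workdir/.openshift_install_state.json
    "extra-manifests/sriov-extra-manifest.yaml"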