Chapter 4. Preparing PXE assets for OpenShift Container Platform


Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer.

The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements.

See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about more configurations available with the Agent-based Installer.

4.1. Prerequisites

4.2. Downloading the Agent-based Installer

Use this procedure to download the Agent-based Installer and the CLI needed for your installation.

Procedure

  1. Log in to the OpenShift Container Platform web console using your login credentials.
  2. Navigate to Datacenter.
  3. Click Run Agent-based Installer locally.
  4. Select the operating system and architecture for the OpenShift Installer and Command line interface.
  5. Click Download Installer to download and extract the install program.
  6. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
  7. Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH.
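
    For example, on a Linux host you can extract the downloaded archive, copy the binary to a directory on your PATH, and verify the version. The archive name openshift-install-linux.tar.gz is an assumption; substitute the name of the file that you downloaded:

    $ tar -xvf openshift-install-linux.tar.gz
    $ sudo install -m 0755 openshift-install /usr/local/bin/openshift-install
    $ openshift-install version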

4.3. Creating the preferred configuration inputs

Use this procedure to create the preferred configuration inputs used to create the PXE files.

Procedure

  1. Install nmstate dependency by running the following command:

    $ sudo dnf install /usr/bin/nmstatectl -y
  2. Place the openshift-install binary in a directory that is on your PATH.
  3. Create a directory to store the install configuration by running the following command:

    $ mkdir ~/<directory_name>
    Note

    This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional.

  4. Create the install-config.yaml file by running the following command:

    $ cat << EOF > ./<directory_name>/install-config.yaml
    apiVersion: v1
    baseDomain: test.example.com
    compute:
    - architecture: amd64 1
      hyperthreading: Enabled
      name: worker
      replicas: 0
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      replicas: 1
    metadata:
      name: sno-cluster 2
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 192.168.0.0/16
      networkType: OVNKubernetes 3
      serviceNetwork:
      - 172.30.0.0/16
    platform: 4
      none: {}
    pullSecret: '<pull_secret>' 5
    sshKey: '<ssh_pub_key>' 6
    EOF
    1
    Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x.

    If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64, amd64, s390x, and ppc64le. Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster".

    2
    Required. Specify your cluster name.
    3
    The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
    4
    Specify your platform.
    Note

    For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file.

    5
    Specify your pull secret.
    6
    Specify your SSH public key.
    Note

    If you set the platform to vSphere or baremetal, you can configure IP address endpoints for cluster nodes in three ways:

    • IPv4
    • IPv6
    • IPv4 and IPv6 in parallel (dual-stack)

    IPv6 is supported only on bare metal platforms.

    Example of dual-stack networking

    networking:
      clusterNetwork:
        - cidr: 172.21.0.0/16
          hostPrefix: 23
        - cidr: fd02::/48
          hostPrefix: 64
      machineNetwork:
        - cidr: 192.168.11.0/16
        - cidr: 2001:DB8::/32
      serviceNetwork:
        - 172.22.0.0/16
        - fd03::/112
      networkType: OVNKubernetes
    platform:
      baremetal:
        apiVIPs:
        - 192.168.11.3
        - 2001:DB8::4
        ingressVIPs:
        - 192.168.11.4
        - 2001:DB8::5

    Note

    When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file.
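
    As a sketch only, the additionalTrustBundle field takes the PEM-encoded certificate contents as a multiline string, where <certificate_contents> is a placeholder for your mirror registry certificate:

    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <certificate_contents>
      -----END CERTIFICATE-----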

  5. Create the agent-config.yaml file by running the following command:

    $ cat > agent-config.yaml << EOF
    apiVersion: v1beta1
    kind: AgentConfig
    metadata:
      name: sno-cluster
    rendezvousIP: 192.168.111.80 1
    hosts: 2
      - hostname: master-0 3
        interfaces:
          - name: eno1
            macAddress: 00:ef:44:21:e6:a5
        rootDeviceHints: 4
          deviceName: /dev/sdb
        networkConfig: 5
          interfaces:
            - name: eno1
              type: ethernet
              state: up
              mac-address: 00:ef:44:21:e6:a5
              ipv4:
                enabled: true
                address:
                  - ip: 192.168.111.80
                    prefix-length: 23
                dhcp: false
          dns-resolver:
            config:
              server:
                - 192.168.111.1
          routes:
            config:
              - destination: 0.0.0.0/0
                next-hop-address: 192.168.111.2
                next-hop-interface: eno1
                table-id: 254
    EOF
    1
    This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host’s IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig.
    2
    Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters.
    3
    Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
    4
    Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value.
    5
    Optional: Configures the network interface of a host in NMState format.
  6. Optional: To create an iPXE script, add the bootArtifactsBaseURL to the agent-config.yaml file:

    apiVersion: v1beta1
    kind: AgentConfig
    metadata:
      name: sno-cluster
    rendezvousIP: 192.168.111.80
    bootArtifactsBaseURL: <asset_server_URL>

    Where <asset_server_URL> is the URL of the server you will upload the PXE assets to.
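
    For illustration only, an iPXE script that boots PXE assets hosted at <asset_server_URL> could look roughly like the following; the contents of the generated agent.x86_64.ipxe script can differ:

    #!ipxe
    initrd --name initrd <asset_server_URL>/agent.x86_64-initrd.img
    kernel <asset_server_URL>/agent.x86_64-vmlinuz initrd=initrd coreos.live.rootfs_url=<asset_server_URL>/agent.x86_64-rootfs.img
    boot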

4.4. Creating the PXE assets

Use the following procedure to create the assets and optional script to implement in your PXE infrastructure.

Procedure

  1. Create the PXE assets by running the following command:

    $ openshift-install agent create pxe-files

    The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory.

    Example filesystem with PXE assets and optional iPXE script

    boot-artifacts
        ├─ agent.x86_64-initrd.img
        ├─ agent.x86_64.ipxe
        ├─ agent.x86_64-rootfs.img
        └─ agent.x86_64-vmlinuz

    Important

    The contents of the boot-artifacts directory vary depending on the specified architecture.

    Note

    Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration.

  2. Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process.

    Note

    If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL you added to the agent-config.yaml file.
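
    As a minimal sketch, assuming an HTTP server whose document root /var/www/html serves the bootArtifactsBaseURL, you might copy the assets as follows; adjust the paths, user, and host to your infrastructure:

    $ rsync -av boot-artifacts/ <user>@<asset_server>:/var/www/html/

    For a quick local test, you could instead serve the directory with python3 -m http.server 8080 --directory boot-artifacts.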

4.5. Manually adding IBM Z agents

After creating the PXE assets, you can add IBM Z® agents. Only use this procedure for IBM Z® clusters.

Depending on your IBM Z® environment, you can choose from the following options:

  • Adding IBM Z® agents with z/VM
  • Adding IBM Z® agents with RHEL KVM
  • Adding IBM Z® agents with Logical Partition (LPAR)
Note

Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.

4.5.1. Networking requirements for IBM Z

In IBM Z environments, advanced networking technologies such as Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard network settings, and these settings must persist across the multiple boot scenarios that occur during an Agent-based installation.

To persist these parameters during boot, the ai.ip_cfg_override=1 parameter is required in the parameter (.parm) file. This parameter is used with the configured network cards to ensure a successful and efficient deployment on IBM Z.

The following table lists the network devices that are supported on each hypervisor for the network configuration override functionality:

Network device                               z/VM           KVM                 LPAR Classic    LPAR Dynamic Partition Manager (DPM)

Virtual Switch                               Supported [1]  Not applicable [2]  Not applicable  Not applicable

Direct attached Open Systems Adapter (OSA)   Supported      Not required [3]    Supported       Not required

RDMA over Converged Ethernet (RoCE)          Not required   Not required        Not required    Not required

HiperSockets                                 Supported      Not required        Supported       Not required

  1. Supported: The ai.ip_cfg_override parameter is required for the installation procedure.
  2. Not applicable: The network card cannot be used on the hypervisor.
  3. Not required: The ai.ip_cfg_override parameter is not required for the installation procedure.

4.5.2. Configuring network overrides in IBM Z

You can specify a static IP address on IBM Z machines that use Logical Partition (LPAR) and z/VM. This is useful when the network devices do not have a static MAC address assigned to them.

Procedure

  • If you have an existing .parm file, edit it to include the following entry:

    ai.ip_cfg_override=1

    This parameter adds the network settings from the .parm file to the CoreOS installer.

    Example .parm file

    rd.neednet=1 cio_ignore=all,!condev
    console=ttysclp0
    coreos.live.rootfs_url=<coreos_url> 1
    ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
    rd.znet=qeth,<network_adaptor_range>,layer2=1
    rd.<disk_type>=<adapter> 2
    rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3
    zfcp.allow_lun_scan=0
    ai.ip_cfg_override=1
    ignition.firstboot ignition.platform.id=metal
    random.trust_cpu=on

    1
    For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
    2
    For installations on direct access storage device (DASD) type disks, use rd.dasd=<adapter> to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
    3
    Specify values for adapter, wwpn, and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
Note

The override parameter overrides the host’s network configuration settings.

4.5.3. Adding IBM Z agents with z/VM

Use the following procedure to manually add IBM Z® agents with z/VM. Only use this procedure for IBM Z® clusters with z/VM.

Prerequisites

  • A running file server with access to the guest Virtual Machines.

Procedure

  1. Create a parameter file for the z/VM guest:

    Example parameter file

    rd.neednet=1 \
    console=ttysclp0 \
    coreos.live.rootfs_url=<rootfs_url> \ 1
    ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ 2
    zfcp.allow_lun_scan=0 \ 3
    ai.ip_cfg_override=1 \
    rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
    rd.dasd=0.0.4411 \ 4
    rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5
    random.trust_cpu=on rd.luks.options=discard \
    ignition.firstboot ignition.platform.id=metal \
    console=tty1 console=ttyS1,115200n8 \
    coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"

    1
    For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
    2
    For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE".
    3
    The default is 1. Omit this entry when using an OSA network adapter.
    4
    For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks.
    5
    For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks.

    Leave all other parameters unchanged.

  2. Punch the kernel.img, generic.parm, and initrd.img files to the virtual reader of the z/VM guest virtual machine.

    For more information, see PUNCH (IBM Documentation).

    Tip

    You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files between two z/VM guest virtual machines.
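
    As an illustration only, assuming the files are in the current directory and <guest_id> is the user ID of the z/VM guest virtual machine, the vmur command might be used as follows:

    $ vmur punch -r -u <guest_id> -N kernel.img kernel.img
    $ vmur punch -r -u <guest_id> -N generic.parm <parameter_file>.parm
    $ vmur punch -r -u <guest_id> -N initrd.img initrd.img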

  3. Log in to the conversational monitor system (CMS) on the bootstrap machine.
  4. IPL the bootstrap machine from the reader by running the following command:

    $ ipl c

    For more information, see IPL (IBM Documentation).

4.5.4. Adding IBM Z agents with RHEL KVM

Use the following procedure to manually add IBM Z® agents with RHEL KVM. Only use this procedure for IBM Z® clusters with RHEL KVM.

Note

The nmstateconfig parameter must be configured for the KVM boot.

Procedure

  1. Boot your RHEL KVM machine.
  2. To deploy the virtual server, run the virt-install command with the following parameters:

    $ virt-install \
       --name <vm_name> \
       --autostart \
       --ram=16384 \
       --cpu host \
       --vcpus=8 \
       --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \1
       --disk <qcow_image_path> \
       --network network:macvtap,mac=<mac_address> \
       --graphics none \
       --noautoconsole \
       --wait=-1 \
       --extra-args "rd.neednet=1 nameserver=<nameserver>" \
       --extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \
       --extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \
       --extra-args "random.trust_cpu=on rd.luks.options=discard" \
       --extra-args "ignition.firstboot ignition.platform.id=metal" \
       --extra-args "console=tty1 console=ttyS1,115200n8" \
       --extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \
       --osinfo detect=on,require=off
    1
    For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server.

4.5.5. Adding IBM Z agents in a Logical Partition (LPAR)

Use the following procedure to manually add IBM Z® agents to your cluster that runs in an LPAR environment. Use this procedure only for IBM Z® clusters running in an LPAR.

Prerequisites

  • You have Python 3 installed.
  • A running file server with access to the Logical Partition (LPAR).

Procedure

  1. Create a boot parameter file for the agents.

    Example parameter file

    rd.neednet=1 cio_ignore=all,!condev \
    console=ttysclp0 \
    ignition.firstboot ignition.platform.id=metal \
    coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1
    coreos.inst.persistent-kargs=console=ttysclp0 \
    ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2
    rd.znet=qeth,<network_adaptor_range>,layer2=1 \
    rd.<disk_type>=<adapter> \ 3
    zfcp.allow_lun_scan=0 \
    ai.ip_cfg_override=1 \
    random.trust_cpu=on rd.luks.options=discard

    1
    For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
    2
    For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
    3
    For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
    Note

    The .ins and initrd.img.addrsize files are generated automatically for the s390x architecture as part of the boot-artifacts output of the installation program, and they are used only when booting in an LPAR environment.

    Example filesystem with LPAR boot

    boot-artifacts
        ├─ agent.s390x-generic.ins
        ├─ agent.s390x-initrd.addrsize
        ├─ agent.s390x-kernel.img
        └─ agent.s390x-rootfs.img

  2. Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize files to the file server. For more information, see Booting Linux in LPAR mode (IBM documentation).
  3. Start the machine.
  4. Repeat the procedure for all other machines in the cluster.