Chapter 4. Preparing PXE assets for OpenShift Container Platform
Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer.
The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements.
See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about more configurations available with the Agent-based Installer.
4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
4.2. Downloading the Agent-based Installer
Use this procedure to download the Agent-based Installer and the CLI needed for your installation.
Procedure
- Log in to the OpenShift Container Platform web console using your login credentials.
- Navigate to Datacenter.
- Click Run Agent-based Installer locally.
- Select the operating system and architecture for the OpenShift Installer and Command line interface.
- Click Download Installer to download and extract the install program.
- Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
- Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH.
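For example, on a Linux host you can extract the downloaded archives and place the binaries on your PATH. This is a minimal sketch; the exact archive names vary by version, operating system, and architecture:

$ tar -xzf openshift-install-linux.tar.gz    # extracts the openshift-install binary
$ tar -xzf openshift-client-linux.tar.gz     # extracts the oc and kubectl binaries
$ sudo mv openshift-install oc kubectl /usr/local/bin/
$ openshift-install version                  # verifies the binary is found on your PATH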
4.3. Creating the preferred configuration inputs
Use this procedure to create the preferred configuration inputs used to create the PXE files.
Procedure
- Install the nmstate dependency by running the following command:

$ sudo dnf install /usr/bin/nmstatectl -y
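You can confirm that the tool is installed by printing its version, for example:

$ nmstatectl --version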
- Place the openshift-install binary in a directory that is on your PATH.
- Create a directory to store the install configuration by running the following command:

$ mkdir ~/<directory_name>
Note: This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional.
- Create the install-config.yaml file by running the following command:

$ cat << EOF > ./<directory_name>/install-config.yaml
apiVersion: v1
baseDomain: test.example.com
compute:
- architecture: amd64 1
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: sno-cluster 2
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/16
  networkType: OVNKubernetes 3
  serviceNetwork:
  - 172.30.0.0/16
platform: 4
  none: {}
pullSecret: '<pull_secret>' 5
sshKey: '<ssh_pub_key>' 6
EOF
1. Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x. If you are using the release image with the multi payload, you can install the cluster on different architectures, such as arm64, amd64, s390x, and ppc64le. Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster".
2. Required. Specify your cluster name.
3. The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
4. Specify your platform.
Note: For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file.
5. Specify your pull secret.
6. Specify your SSH public key.
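If you do not already have an SSH key pair to use for <ssh_pub_key>, you can generate one and print the public key, for example (the key path here is only an illustration):

$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/agent_sno_key    # example path
$ cat ~/.ssh/agent_sno_key.pub                           # use this output as the sshKey value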
Note: If you set the platform to vSphere or baremetal, you can configure IP address endpoints for cluster nodes in three ways:
- IPv4
- IPv6
- IPv4 and IPv6 in parallel (dual-stack)

IPv6 is supported only on bare metal platforms.
Example of dual-stack networking
networking:
  clusterNetwork:
  - cidr: 172.21.0.0/16
    hostPrefix: 23
  - cidr: fd02::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.11.0/16
  - cidr: 2001:DB8::/32
  serviceNetwork:
  - 172.22.0.0/16
  - fd03::/112
  networkType: OVNKubernetes
platform:
  baremetal:
    apiVIPs:
    - 192.168.11.3
    - 2001:DB8::4
    ingressVIPs:
    - 192.168.11.4
    - 2001:DB8::5
Note: When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file.

- Create the agent-config.yaml file by running the following command:

$ cat > agent-config.yaml << EOF
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80 1
hosts: 2
- hostname: master-0 3
  interfaces:
  - name: eno1
    macAddress: 00:ef:44:21:e6:a5
  rootDeviceHints: 4
    deviceName: /dev/sdb
  networkConfig: 5
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      mac-address: 00:ef:44:21:e6:a5
      ipv4:
        enabled: true
        address:
        - ip: 192.168.111.80
          prefix-length: 23
        dhcp: false
    dns-resolver:
      config:
        server:
        - 192.168.111.1
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.111.2
        next-hop-interface: eno1
        table-id: 254
EOF
1. This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig.
2. Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters.
3. Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
4. Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value.
5. Optional: Configures the network interface of a host in NMState format.
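Because the networkConfig section uses NMState syntax, you can inspect the live network state of a comparable host as a reference while writing it. For example, with the nmstatectl dependency installed earlier:

$ nmstatectl show    # prints the current network state of the local host as NMState YAML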
- Optional: To create an iPXE script, add the bootArtifactsBaseURL parameter to the agent-config.yaml file:

apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80
bootArtifactsBaseURL: <asset_server_URL>

where <asset_server_URL> is the URL of the server that you will upload the PXE assets to.
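For example, if you plan to serve the assets from a web server at 192.168.111.1 on port 8080 (a hypothetical address used only for illustration), you would set:

bootArtifactsBaseURL: http://192.168.111.1:8080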
Additional resources
- Deploying with dual-stack networking.
- Configuring the install-config yaml file.
- See Configuring a three-node cluster to deploy three-node clusters in bare metal environments.
- About root device hints.
- NMState state examples.
- Optional: Creating additional manifest files
4.4. Creating the PXE assets
Use the following procedure to create the assets and optional script to implement in your PXE infrastructure.
Procedure
- Create the PXE assets by running the following command:
$ openshift-install agent create pxe-files
The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory.

Example filesystem with PXE assets and optional iPXE script

boot-artifacts
├─ agent.x86_64-initrd.img
├─ agent.x86_64.ipxe
├─ agent.x86_64-rootfs.img
└─ agent.x86_64-vmlinuz
Important: The contents of the boot-artifacts directory vary depending on the specified architecture.

Note: Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure and higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration.

- Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process.

Note: If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL that you added to the agent-config.yaml file.
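For a quick test setup, you can serve the contents of the boot-artifacts directory over HTTP with the Python standard library. This is a minimal sketch; in production, use your existing HTTP server or TFTP infrastructure:

$ cd boot-artifacts
$ python3 -m http.server 8080    # serves the assets at http://<host_ip>:8080/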
4.5. Manually adding IBM Z agents
After creating the PXE assets, you can add IBM Z® agents. Only use this procedure for IBM Z® clusters.
Depending on your IBM Z® environment, you can choose from the following options:
- Adding IBM Z® agents with z/VM
- Adding IBM Z® agents with RHEL KVM
- Adding IBM Z® agents with Logical Partition (LPAR)
Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.
4.5.1. Networking requirements for IBM Z
In IBM Z environments, advanced networking technologies such as Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard network settings. These configurations must be persisted across the multiple boot scenarios that occur during the Agent-based installation.
To persist these parameters during boot, the ai.ip_cfg_override=1 parameter is required in the parameter (.parm) file. This parameter is used with the configured network cards to ensure a successful and efficient deployment on IBM Z.
The following table lists the network devices that are supported on each hypervisor for the network configuration override functionality:
Network device | z/VM | KVM | LPAR Classic | LPAR Dynamic Partition Manager (DPM) |
---|---|---|---|---|
Virtual Switch | Supported [1] | Not applicable [2] | Not applicable | Not applicable |
Direct attached Open Systems Adapter (OSA) | Supported | Not required [3] | Supported | Not required |
RDMA over Converged Ethernet (RoCE) | Not required | Not required | Not required | Not required |
HiperSockets | Supported | Not required | Supported | Not required |
1. Supported: When the ai.ip_cfg_override parameter is required for the installation procedure.
2. Not applicable: When a network card is not applicable to be used on the hypervisor.
3. Not required: When the ai.ip_cfg_override parameter is not required for the installation procedure.
4.5.2. Configuring network overrides in IBM Z
You can specify a static IP address on IBM Z machines that use Logical Partition (LPAR) and z/VM. This is useful when the network devices do not have a static MAC address assigned to them.
Procedure
- If you have an existing .parm file, edit it to include the following entry:

ai.ip_cfg_override=1

This parameter allows the file to add the network settings to the CoreOS installer.
Example .parm file

rd.neednet=1 cio_ignore=all,!condev
console=ttysclp0
coreos.live.rootfs_url=<coreos_url> 1
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> 2
rd.zfcp=<adapter>,<wwpn>,<lun> 3
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1
ignition.firstboot ignition.platform.id=metal
random.trust_cpu=on
1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2. For installations on direct access storage device (DASD) type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
3. Specify values for adapter, wwpn, and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
The ai.ip_cfg_override parameter overrides the host's network configuration settings.
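After a node boots with the edited .parm file, you can confirm that the override parameter was applied by inspecting the kernel command line, for example:

$ grep -o 'ai.ip_cfg_override=1' /proc/cmdline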
4.5.3. Adding IBM Z agents with z/VM
Use the following procedure to manually add IBM Z® agents with z/VM. Only use this procedure for IBM Z® clusters with z/VM.
Prerequisites
- A running file server with access to the guest Virtual Machines.
Procedure
- Create a parameter file for the z/VM guest:

Example parameter file

rd.neednet=1 \
console=ttysclp0 \
coreos.live.rootfs_url=<rootfs_url> \ 1
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ 2
zfcp.allow_lun_scan=0 \ 3
ai.ip_cfg_override=1 \
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.dasd=0.0.4411 \ 4
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5
random.trust_cpu=on rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8 \
coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2. For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE".
3. The default is 1. Omit this entry when using an OSA network adapter.
4. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks.
5. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks.
Leave all other parameters unchanged.
- Punch the kernel.img, generic.parm, and initrd.img files to the virtual reader of the z/VM guest virtual machine. For more information, see PUNCH (IBM Documentation).

Tip: You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files between two z/VM guest virtual machines; see the vmur sketch after this procedure.

- Log in to the conversational monitor system (CMS) on the bootstrap machine.
- IPL the bootstrap machine from the reader by running the following command:
$ ipl c
For more information, see IPL (IBM Documentation).
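As mentioned in the tip above, a sketch of transferring the files from a Linux guest with the vmur command; the guest ID is a placeholder for your bootstrap guest:

$ vmur punch -r -u <guest_id> -N kernel.img kernel.img        # send the kernel to the guest reader
$ vmur punch -r -u <guest_id> -N generic.parm generic.parm    # send the parameter file
$ vmur punch -r -u <guest_id> -N initrd.img initrd.img        # send the initrd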
4.5.4. Adding IBM Z agents with RHEL KVM
Use the following procedure to manually add IBM Z® agents with RHEL KVM. Only use this procedure for IBM Z® clusters with RHEL KVM.
The nmstateconfig parameter must be configured for the KVM boot.
Procedure
- Boot your RHEL KVM machine.
- To deploy the virtual server, run the virt-install command with the following parameters:

$ virt-install \
  --name <vm_name> \
  --autostart \
  --ram=16384 \
  --cpu host \
  --vcpus=8 \
  --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \ 1
  --disk <qcow_image_path> \
  --network network:macvtap,mac=<mac_address> \
  --graphics none \
  --noautoconsole \
  --wait=-1 \
  --extra-args "rd.neednet=1 nameserver=<nameserver>" \
  --extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \
  --extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \
  --extra-args "random.trust_cpu=on rd.luks.options=discard" \
  --extra-args "ignition.firstboot ignition.platform.id=metal" \
  --extra-args "console=tty1 console=ttyS1,115200n8" \
  --extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \
  --osinfo detect=on,require=off
1. For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server.
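Because virt-install runs with --noautoconsole, you can attach to the guest console afterward to watch the agent boot, for example:

$ virsh console <vm_name>    # press Ctrl+] to disconnect from the console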
4.5.5. Adding IBM Z agents in a Logical Partition (LPAR)
Use the following procedure to manually add IBM Z® agents to your cluster that runs in an LPAR environment. Use this procedure only for IBM Z® clusters running in an LPAR.
Prerequisites
- You have Python 3 installed.
- A running file server with access to the Logical Partition (LPAR).
Procedure
- Create a boot parameter file for the agents.

Example parameter file

rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1
coreos.inst.persistent-kargs=console=ttysclp0
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2
rd.znet=qeth,<network_adaptor_range>,layer2=1 \
rd.<disk_type>=<adapter> \ 3
zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \
random.trust_cpu=on rd.luks.options=discard
1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
2. For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
3. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
Note: The .ins and initrd.img.addrsize files are automatically generated for the s390x architecture as part of the boot artifacts from the installation program, and are only used when booting in an LPAR environment.

Example filesystem with LPAR boot

boot-artifacts
├─ agent.s390x-generic.ins
├─ agent.s390x-initrd.addrsize
├─ agent.s390x-initrd.img
├─ agent.s390x-kernel.img
└─ agent.s390x-rootfs.img
- Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize files to the file server, for example with scp, as shown in the sketch after this procedure. For more information, see Booting Linux in LPAR mode (IBM documentation).
- Start the machine.
- Repeat the procedure for all other machines in the cluster.
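As referenced in the transfer step, a sketch of copying the boot files to the file server with scp; the user, server, and target directory are placeholders for your environment:

$ scp agent.s390x-kernel.img agent.s390x-initrd.img \
    agent.s390x-generic.ins agent.s390x-initrd.addrsize \
    <user>@<file_server>:/var/www/html/    # example web server document root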