Chapter 5. Preparing PXE assets for OpenShift Container Platform
Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer.
The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements.
See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about more configurations available with the Agent-based Installer.
5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
5.2. Downloading the Agent-based Installer
Use this procedure to download the Agent-based Installer and the CLI needed for your installation.
Procedure
- Log in to the Red Hat Hybrid Cloud Console using your login credentials.
- Navigate to Datacenter.
- Click Run Agent-based Installer locally.
- Select the operating system and architecture for the OpenShift Installer and Command line interface.
- Click Download Installer to download and extract the install program.
- Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
- Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH.
5.3. Creating the preferred configuration inputs
Use this procedure to create the preferred configuration inputs used to create the PXE files.
Procedure
- Install the nmstate dependency by running the following command:

  $ sudo dnf install /usr/bin/nmstatectl -y

- Place the openshift-install binary in a directory that is on your PATH.

- Create a directory to store the install configuration by running the following command:

  $ mkdir ~/<directory_name>

  Note
  This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional.
- Create the install-config.yaml file by running the following command:

  1. Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x.

     If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64, amd64, s390x, and ppc64le. Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster".
  2. Required. Specify your cluster name.
  3. The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
  4. Specify your platform.

     Note
     For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file.
  5. Specify your pull secret.
  6. Specify your SSH public key.
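As a sketch only, a representative single-node install-config.yaml matching the numbered callouts might look like the following; the domain, network CIDRs, and credential placeholders are assumptions to replace with your own values:

```yaml
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64 # 1
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: sno-cluster # 2
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/16
  networkType: OVNKubernetes # 3
  serviceNetwork:
  - 172.30.0.0/16
platform: # 4
  none: {}
pullSecret: '<pull_secret>' # 5
sshKey: '<ssh_pub_key>' # 6
```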
  Note
  If you set the platform to vSphere, baremetal, or none, you can configure IP address endpoints for cluster nodes in three ways:

  - IPv4
  - IPv6
  - IPv4 and IPv6 in parallel (dual-stack)
Example of dual-stack networking
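A dual-stack configuration pairs an IPv4 and an IPv6 CIDR in each networking list of install-config.yaml; the following sketch uses placeholder ranges you would replace with your own:

```yaml
networking:
  clusterNetwork:
  - cidr: 172.21.0.0/16
    hostPrefix: 23
  - cidr: fd02::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.11.0/16
  - cidr: 2001:DB8::/32
  serviceNetwork:
  - 172.22.0.0/16
  - fd03::/112
  networkType: OVNKubernetes
```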
  Note
  When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file.

- Create the agent-config.yaml file by running the following command:
  1. This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host’s IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig.
  2. Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters.
  3. Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
  4. Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value.
  5. Optional: Configures the network interface of a host in NMState format.
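As a sketch only, a representative agent-config.yaml matching the numbered callouts might look like the following; the rendezvous IP, MAC address, and device name are placeholder assumptions:

```yaml
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80 # 1
hosts: # 2
- hostname: master-0 # 3
  interfaces:
  - name: eno1
    macAddress: 00:ef:44:21:e6:a5
  rootDeviceHints: # 4
    deviceName: /dev/sdb
  networkConfig: # 5
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      mac-address: 00:ef:44:21:e6:a5
      ipv4:
        enabled: true
        address:
        - ip: 192.168.111.80
          prefix-length: 23
        dhcp: false
```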
- Optional: To create an iPXE script, add the bootArtifactsBaseURL field to the agent-config.yaml file, where <asset_server_URL> is the URL of the server you will upload the PXE assets to.
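As a sketch, the field sits at the top level of agent-config.yaml; the rendezvous IP below is a placeholder and the URL placeholder is kept as-is:

```yaml
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80
bootArtifactsBaseURL: <asset_server_URL>
```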
5.4. Creating the PXE assets
Use the following procedure to create the assets and optional script to implement in your PXE infrastructure.
Procedure
- Create the PXE assets by running the following command:

  $ openshift-install agent create pxe-files

  The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory.

  Example filesystem with PXE assets and optional iPXE script

  boot-artifacts
  ├─ agent.x86_64-initrd.img
  ├─ agent.x86_64.ipxe
  ├─ agent.x86_64-rootfs.img
  └─ agent.x86_64-vmlinuz

  Important
  The contents of the boot-artifacts directory vary depending on the specified architecture.

  Note
  Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration.

- Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process.

  Note
  If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL you added to the agent-config.yaml file.
5.5. Manually adding IBM Z agents
After creating the PXE assets, you can add IBM Z® agents. Only use this procedure for IBM Z® clusters.
Depending on your IBM Z® environment, you can choose from the following options:
- Adding IBM Z® agents with z/VM
- Adding IBM Z® agents with RHEL KVM
- Adding IBM Z® agents with Logical Partition (LPAR)
Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.
5.5.1. Adding IBM Z agents with z/VM
Use the following procedure to manually add IBM Z® agents with z/VM. Only use this procedure for IBM Z® clusters with z/VM.
Procedure
- Create a parameter file for the z/VM guest:

  Example parameter file

  1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
  2. For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE".
  3. The default is 1. Omit this entry when using an OSA network adapter.
  4. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks.
  5. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks.

  Leave all other parameters unchanged.
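A sketch of such a z/VM parameter file, numbered to match the callouts; the rootfs URL, IP addresses, and device IDs are placeholder assumptions to replace for your environment:

```text
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
coreos.live.rootfs_url=<rootfs_url> \ # 1
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ # 2
zfcp.allow_lun_scan=0 \ # 3
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.dasd=0.0.4411 \ # 4
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ # 5
random.trust_cpu=on rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8
```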
- Punch the kernel.img, generic.parm, and initrd.img files to the virtual reader of the z/VM guest virtual machine.

  For more information, see PUNCH (IBM Documentation).

  Tip
  You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files between two z/VM guest virtual machines.

- Log in to the conversational monitor system (CMS) on the bootstrap machine.
- IPL the bootstrap machine from the reader by running the following command:

  $ ipl c

  For more information, see IPL (IBM Documentation).
5.5.2. Adding IBM Z agents with RHEL KVM
Use the following procedure to manually add IBM Z® agents with RHEL KVM. Only use this procedure for IBM Z® clusters with RHEL KVM.
Procedure
- Boot your RHEL KVM machine.
- To deploy the virtual server, run the virt-install command with the following parameters:

  1. For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server.
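A sketch of a representative virt-install invocation; every angle-bracket value is a placeholder, and the memory, CPU, and kernel-argument choices are assumptions to adjust for your host:

```text
virt-install \
  --name <vm_name> \
  --autostart \
  --memory 16384 \
  --cpu host \
  --vcpus 8 \
  --location <media_location>,kernel=<kernel_name>,initrd=<initrd_name> \ # 1
  --disk <qcow_image_path> \
  --network network:<virtual_network> \
  --graphics none \
  --noautoconsole \
  --wait=-1 \
  --extra-args "rd.neednet=1 nameserver=<nameserver>" \
  --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none" \
  --extra-args "coreos.live.rootfs_url=http://<http_server>/agent.s390x-rootfs.img" \
  --extra-args "random.trust_cpu=on rd.luks.options=discard" \
  --extra-args "ignition.firstboot ignition.platform.id=metal" \
  --extra-args "console=tty1 console=ttyS1,115200n8" \
  --osinfo detect=on,require=off
```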
5.5.3. Adding IBM Z agents in a Logical Partition (LPAR)
Use the following procedure to manually add IBM Z® agents to your cluster that runs in an LPAR environment. Use this procedure only for IBM Z® clusters running in an LPAR.
Prerequisites
- You have Python 3 installed.
Procedure
- Create a boot parameter file for the agents.

  Example parameter file

  1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
  2. For the ip parameter, manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z and IBM LinuxONE".
  3. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
  4. Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
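A sketch of such an LPAR boot parameter file, numbered to match the callouts; the URL, network values, and device IDs are placeholder assumptions:

```text
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
coreos.live.rootfs_url=<rootfs_url> \ # 1
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ # 2
rd.dasd=0.0.4411 \ # 3
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ # 4
random.trust_cpu=on rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8
```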
- Generate the .ins and initrd.img.addrsize files by running the following Python script:

  The .ins file is a special file that includes installation data and is present on the FTP server. It can be accessed from the HMC system. This file contains details such as the location of the installation data on the disk or FTP server and the memory locations where the data is to be copied.

  Note
  The .ins and initrd.img.addrsize files are not automatically generated as part of boot-artifacts from the installer. You must manually generate these files.

  - Save the following script to a file, such as generate-files.py:

    Example of a Python file named generate-files.py

  - Execute the script by running the following command:

    $ python3 <file_name>.py
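A sketch of such a generation script: it rounds the kernel size up to a 1 MiB boundary to place the initrd, writes the initrd address and size as a binary patch file, and emits a generic.ins mapping each artifact to a load address. The load addresses used here are assumptions based on the usual layout for booting Linux in LPAR mode; verify them against your environment:

```python
import os
import struct

def generate_boot_files(kernel_path, initrd_path, cmdline_path, out_dir='.'):
    """Create generic.ins and initrd.img.addrsize for an LPAR boot.

    Assumed load addresses: 0x00000000 for the kernel, 0x00010480 for the
    kernel command line, 0x00010408 for the initrd address/size patch, and
    the 1 MiB-aligned end of the kernel for the initrd itself.
    """
    kernel_size = os.path.getsize(kernel_path)
    initrd_size = os.path.getsize(initrd_path)

    # Round the kernel size up to the next 1 MiB boundary; the initrd
    # is loaded at this offset.
    offset = (kernel_size + 1048575) // 1048576 * 1048576

    # initrd.img.addrsize: two big-endian 64-bit values, the initrd load
    # address and its size, used to patch the kernel image.
    addrsize_path = os.path.join(out_dir, 'initrd.img.addrsize')
    with open(addrsize_path, 'wb') as f:
        f.write(struct.pack('>QQ', offset, initrd_size))

    # generic.ins: maps each artifact name to the memory address where
    # the HMC loads it.
    ins_path = os.path.join(out_dir, 'generic.ins')
    with open(ins_path, 'w') as f:
        f.write('* ins file for agent-based LPAR boot\n')
        f.write(f'{os.path.basename(kernel_path)} 0x00000000\n')
        f.write(f'{os.path.basename(addrsize_path)} 0x00010408\n')
        f.write(f'{os.path.basename(initrd_path)} {hex(offset)}\n')
        f.write(f'{os.path.basename(cmdline_path)} 0x00010480\n')
    return ins_path, addrsize_path

# Example (paths are placeholders):
# generate_boot_files('kernel.img', 'initrd.img', 'generic.prm')
```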
- Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize parameter files to the file server. For more information, see Booting Linux in LPAR mode (IBM documentation).
- Start the machine.
- Repeat the procedure for all other machines in the cluster.