Chapter 2. Prerequisites
The Assisted Installer validates the following prerequisites to ensure successful installation. If you use a firewall, you must configure it so that the Assisted Installer can access the resources it requires to function.
2.1. Supported CPU architectures
The Assisted Installer is supported on the following CPU architectures:
- x86_64
- arm64
- ppc64le (IBM Power®)
- s390x (IBM Z®)
2.2. Supported drive types
Install Red Hat OpenShift Container Platform with the Assisted Installer using the appropriate drive types.
- Supported drive types

This table shows the installation drive types supported for the different OpenShift Container Platform versions and CPU architectures.

| Drive type | RHOCP version | Supported CPU architectures | Comments |
|---|---|---|---|
| HDD | All | All | A hard disk drive. |
| SSD | All | All | An SSD or NVMe drive. |
| Multipath | All | All | A Linux multipath device that can aggregate paths for various protocols. Using multipath enhances availability and performance. Currently, the Assisted Installer supports multipathing for the Fibre Channel and iSCSI protocols. |
| FC (Fibre Channel) | All | s390x, x86_64 | A single path Fibre Channel (FC) drive. For a multipath Fibre Channel configuration, see 'Multipath' in this table. |
| iSCSI | 4.15 and later | x86_64 | You can install a cluster on a single or multipath iSCSI boot device. The Assisted Installer supports iSCSI boot volumes through iPXE boot. A minimal ISO image is mandatory on iSCSI boot volumes; using a full ISO image results in an error. iSCSI boot requires two machine network interfaces: one for the iSCSI traffic and the other for the OpenShift Container Platform cluster installation. A static IP address is not supported when using iSCSI boot volumes. |
| RAID | 4.14 and later | All | A software RAID drive. Configure the RAID through the BIOS or UEFI. If this option is unavailable, you can configure OpenShift Container Platform to mirror the drives. For details, see Encrypting and mirroring disks during installation. |
| ECKD | All | s390x | IBM drive. |
| ECKD (ESE) | All | s390x | IBM drive. |
| FBA | All | s390x | IBM drive. |
- Unsupported drive types

This table shows the installation drive types that are not supported.

| Drive type | Comments |
|---|---|
| Unknown | The system could not detect the drive type. |
| FDD | A floppy disk drive. |
| ODD | An optical disk drive (for example, a CD-ROM). |
| Virtual | A loopback device. |
| LVM | A Linux Logical Volume Management drive. |
2.3. Resource requirements
This section describes the resource requirements for different clusters and installation options.
The multicluster engine for Kubernetes requires additional resources.
If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also assign additional resources to each node.
2.3.1. Multi-node cluster resource requirements
The resource requirements of a multi-node (high-availability) cluster depend on the installation options.
- Description
- A standard OpenShift Container Platform cluster configuration consists of three to five control plane nodes and two or more worker nodes. This configuration ensures full high availability for control plane services.
- Multi-node cluster basic installation

  Control plane nodes:
  - 4 CPU cores
  - 16 GB RAM
  - 100 GB storage

  Note: The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration of less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP, and the sketch after this list.

  Compute nodes:
  - 2 CPU cores
  - 8 GB RAM
  - 100 GB storage

- Multi-node cluster + multicluster engine
  - Additional 4 CPU cores
  - Additional 16 GB RAM

  Note: If you deploy the multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

- Multi-node cluster + multicluster engine + OpenShift Data Foundation
  - Additional 75 GB storage
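To verify etcd disk performance on the control plane nodes, the following is a minimal sketch based on the upstream fio approach that the Knowledgebase solution describes; the test directory, job name, and sizes are illustrative, and the Knowledgebase article remains the authoritative procedure.

# Minimal sketch: measure fdatasync latency on the disk that backs /var/lib/etcd.
mkdir -p /var/lib/etcd-fio-test
fio --rw=write --ioengine=sync --fdatasync=1 \
  --directory=/var/lib/etcd-fio-test --size=22m --bs=2300 --name=etcd-check
# In the fio output, check the fsync/fdatasync percentiles: the 99th percentile
# should be below 10 ms (10,000 usec).
rm -rf /var/lib/etcd-fio-test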
2.3.2. Two-Node OpenShift with Arbiter (TNA) cluster resource requirements
The resource requirements of a Two-Node OpenShift with Arbiter (TNA) cluster depend on the installation options.
- Description
A Two-Node OpenShift with Arbiter (TNA) cluster is a compact, cost-effective OpenShift Container Platform topology. It consists of two control plane nodes and a lightweight arbiter node. The arbiter node stores the full etcd data, maintaining etcd quorum and preventing split brain. It does not run the additional control plane components kube-apiserver and kube-controller-manager, nor does it run workloads. For details, see Overview of etcd.

To install a Two-Node OpenShift with Arbiter cluster, assign an arbiter role to at least one of the nodes and set the control plane node count for the cluster to 2. Although OpenShift Container Platform does not currently impose a limit on the number of arbiter nodes, a typical deployment includes only one to minimize the use of hardware resources.

Following installation, you can add additional arbiter nodes to a Two-Node OpenShift with Arbiter cluster, but not to a standard multi-node cluster. It is also not possible to convert between a Two-Node OpenShift with Arbiter topology and a standard topology.
Two-Node OpenShift with Arbiter clusters are supported in OpenShift Container Platform 4.19 and later. This configuration is available only for bare-metal installations.
Two-Node OpenShift with Arbiter (TNA) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
- Two-Node OpenShift with Arbiter basic installation

  Control plane nodes:
  - 4 CPU cores
  - 16 GB RAM
  - 100 GB storage

  Note: The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration of less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP.

  Arbiter node:
  - 2 CPU cores
  - 8 GB RAM
  - 50 GB storage

- Two-Node OpenShift with Arbiter + multicluster engine
  - Additional 4 CPU cores
  - Additional 16 GB RAM

  Note: If you deploy the multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

- Two-Node OpenShift with Arbiter + multicluster engine + OpenShift Data Foundation
  - Additional 75 GB storage
2.3.3. Single-node OpenShift cluster resource requirements
The resource requirements for single-node OpenShift depend on the installation options.
- Description
- A single-node OpenShift cluster is an OpenShift Container Platform deployment that runs entirely on a single node. Single-node OpenShift includes the control plane and worker functionality on one physical or virtual machine.
- Single-node OpenShift basic installation
- 8 CPU cores
- 16 GB RAM
- 100 GB storage
- Single-node OpenShift + multicluster engine
- Additional 8 CPU cores
- Additional 32 GB RAM

  Note: If you deploy the multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.
- Single-node OpenShift + multicluster engine + OpenShift Data Foundation
- Additional 95 GB storage
2.4. Networking requirements
For hosts of type VMware, set disk.EnableUUID to TRUE, even when the platform is not vSphere.
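One way to apply this setting from the command line is sketched below using the open source govc CLI; govc itself is an assumption here, the virtual machine name is a placeholder, and you can equally set the parameter in the vSphere client under the VM's advanced configuration options.

# Sketch, assuming the govc CLI with vCenter credentials in the GOVC_* environment
# variables: enable disk.EnableUUID on the virtual machine that hosts the node.
govc vm.change -vm <vm_name> -e disk.enableUUID=TRUE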
2.4.1. General networking requirements
The network must meet the following requirements:
- You have configured either dynamic (DHCP) or static IP addressing.
- You have selected the correct route configuration for your IP addressing method:
  - For dynamic IP addressing, ensure that you have configured your network routes dynamically via DHCP.
  - For static IP addressing, ensure that you have configured the network routes manually via the static networking configurations.

  Important: You cannot combine dynamic IP addresses with static route configurations. When the Assisted Installer receives a dynamic IP address (with a /128 prefix), it specifically looks for network routes that were also configured dynamically, such as those advertised via Router Advertisement (RA). If a network route is configured manually (with a /64 prefix, for example), the Assisted Installer ignores it.
- You have opened port 6443 to allow access to the cluster API using the oc CLI tool from outside the firewall.
- You have opened port 22624 in all firewalls. The Machine Config Operator (MCO) and new worker nodes use port 22624 to get the ignition data from the cluster API.
- You have opened port 443 to allow console access from outside the firewall. Port 443 is also used for all ingress traffic. A firewalld sketch for opening these ports follows this list.
- You have configured DNS to connect to the cluster API or ingress endpoints from outside the cluster.
- Optional: You have created a DNS pointer (PTR) record for each node in the cluster if using static IP addressing.

  You must create a DNS PTR record to boot with a preset hostname if the hostname will not come from another source (/etc/hosts or DHCP). Otherwise, the Assisted Installer's automatic node renaming feature renames the nodes to their network interface MAC addresses.
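The following is a minimal sketch of opening these ports with firewalld; it assumes a firewalld-based firewall on the host between clients and the cluster, and other firewall tooling needs the equivalent rules.

# Sketch, assuming a firewalld-based firewall:
firewall-cmd --permanent --add-port=6443/tcp    # cluster API access for the oc CLI
firewall-cmd --permanent --add-port=22624/tcp   # ignition data for the MCO and new workers
firewall-cmd --permanent --add-port=443/tcp     # console access and ingress traffic
firewall-cmd --reload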
2.4.2. External DNS
Installing a multi-node cluster with user-managed networking requires external DNS records. The Assisted Installer does not require external DNS records to complete the installation of multi-node clusters with cluster-managed networking or single-node OpenShift clusters. Configure the external DNS records after installation to connect to the cluster from an external source.
External DNS requires the following records:
- A/AAAA record for api.<cluster_name>.<base_domain>.
- A/AAAA record for api-int.<cluster_name>.<base_domain>.
- A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain>.
- A/AAAA record for each node in the cluster.
- Do not create a wildcard record, such as *.<cluster_name>.<base_domain>, or the installation will not proceed.
- A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure that the newly created DNS records are resolving before installation to prevent installation delays; a verification sketch follows this list.
- For DNS record examples, see Example DNS configuration.
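The verification sketch referenced above uses dig to confirm that the records resolve; the cluster name and base domain are placeholders matching the record list.

# Sketch: confirm that the required records resolve before installing.
dig +short api.<cluster_name>.<base_domain>
dig +short api-int.<cluster_name>.<base_domain>
dig +short test.apps.<cluster_name>.<base_domain>   # any label should match the wildcard
# Each query should print the expected IP address; an empty answer means the
# record has not propagated yet.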
The OpenShift Container Platform cluster’s network must also meet the following requirements:
- Connectivity between all cluster nodes
- Connectivity for each node to the internet
- Access to an NTP server for time synchronization between the cluster nodes (see the check after this list)
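For the NTP requirement, the following check assumes the hosts run chronyd; other time synchronization daemons provide equivalent commands.

# Sketch: verify that chronyd can reach and synchronize with an NTP source.
chronyc sources -v
# At least one source line should start with '^*', indicating the source that
# the node is currently synchronized to.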
The following DNS configuration provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to recommend one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
2.4.2.1. Example DNS A record configuration
The following example DNS zone database is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer:
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.1
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5
api-int.ocp4.example.com. IN A 192.168.1.5
;
*.apps.ocp4.example.com. IN A 192.168.1.5
;
control-plane0.ocp4.example.com. IN A 192.168.1.97
control-plane1.ocp4.example.com. IN A 192.168.1.98
control-plane2.ocp4.example.com. IN A 192.168.1.99
;
worker0.ocp4.example.com. IN A 192.168.1.11
worker1.ocp4.example.com. IN A 192.168.1.7
;
;EOF
where:
- api.ocp4.example.com. provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
- api-int.ocp4.example.com. provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
- *.apps.ocp4.example.com. provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the worker machines by default.

  Note: In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
- control-plane0.ocp4.example.com. and adjacent records provide name resolution for the control plane machines.
- worker0.ocp4.example.com. and the adjacent record provide name resolution for the worker machines.
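If you maintain the zone in BIND, you can validate the file before loading it; the zone file path below is a placeholder.

# Sketch: validate the forward zone file with BIND's named-checkzone utility.
named-checkzone example.com /var/named/example.com.zone
# The command prints "OK" when the zone loads cleanly.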
2.4.2.2. Example DNS PTR record configuration
The following example DNS zone database for reverse records is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer:
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com.
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com.
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com.
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com.
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com.
;
11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com.
7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com.
;
;EOF
where:
- api.ocp4.example.com provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
- api-int.ocp4.example.com provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
- control-plane0.ocp4.example.com and adjacent records provide reverse DNS resolution for the control plane machines.
- worker0.ocp4.example.com and the adjacent record provide reverse DNS resolution for the worker machines.
A PTR record is not required for the OpenShift Container Platform application wildcard.
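To spot-check the reverse records once they are in place, you can query them directly with dig; the addresses follow the example above.

# Sketch: confirm reverse resolution for a control plane node and a worker node.
dig +short -x 192.168.1.97   # expected: control-plane0.ocp4.example.com.
dig +short -x 192.168.1.11   # expected: worker0.ocp4.example.com.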
2.4.3. Networking requirements for IBM Z
In IBM Z® environments, advanced networking technologies like Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard settings used in Assisted Installer deployments. These overrides are necessary to accommodate their unique requirements and ensure a successful and efficient deployment on IBM Z®.
The following table lists the network devices that are supported for the network configuration override functionality:
| Network device | z/VM | KVM | LPAR Classic | LPAR Dynamic Partition Manager (DPM) |
|---|---|---|---|---|
| OSA virtual switch | Supported | Not applicable | Not applicable | Not applicable |
| Direct attached OSA | Supported | Only through a Linux bridge | Supported | Supported |
| RDMA over Converged Ethernet (RoCE) | Supported | Only through a Linux bridge | Supported | Supported |
| HiperSockets | Supported | Only through a Linux bridge | Supported | Supported |
| Linux bridge | Not applicable | Supported | Not applicable | Not applicable |
2.4.3.1. Configuring network overrides in IBM Z
You can specify a static IP address on IBM Z® machines that use Logical Partition (LPAR) and z/VM. This is especially useful when the network devices do not have a static MAC address assigned to them.
Procedure

- If you have an existing .parm file, edit it to include the following entry:

  ai.ip_cfg_override=1

  This parameter allows the file to add the network settings to the CoreOS installer.

  The following is an example of the .parm file:

  rd.neednet=1 cio_ignore=all,!condev console=ttysclp0
  coreos.live.rootfs_url=<coreos_url>
  ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
  rd.znet=qeth,<network_adaptor_range>,layer2=1
  rd.<disk_type>=<adapter>
  rd.zfcp=<adapter>,<wwpn>,<lun>
  random.trust_cpu=on
  zfcp.allow_lun_scan=0
  ai.ip_cfg_override=1
  ignition.firstboot ignition.platform.id=metal

  Replace the parameters as follows:

  - For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
  - For rd.<disk_type>: for installations on direct access storage device (DASD) type disks, use rd.dasd= to specify the DASD where Red Hat Enterprise Linux (RHEL) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHEL is to be installed.
  - For rd.zfcp, specify values for adapter, wwpn, and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
  - For ai.ip_cfg_override=1, specify this parameter when using an OSA network adapter or HiperSockets.

    Note: The override parameter overrides the host's network configuration settings.
2.5. Preflight validations
The Assisted Installer verifies that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing software on the nodes, the Assisted Installer performs the following validations:
- Ensures network connectivity
- Ensures sufficient network bandwidth
- Ensures connectivity to the registry
- Ensures that any upstream DNS can resolve the required domain name
- Ensures time synchronization between cluster nodes
- Verifies that the cluster nodes meet the minimum hardware requirements
- Validates the installation configuration parameters
If the Assisted Installer does not successfully validate these requirements, the installation does not proceed.