Chapter 2. Prerequisites
The Assisted Installer validates the following prerequisites to ensure successful installation.
If you use a firewall, you must configure it so that the Assisted Installer can access the resources it requires to function.
2.1. Supported CPU architectures
The Assisted Installer is supported on the following CPU architectures:
- x86_64
- arm64
- ppc64le
- s390x
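To confirm which architecture a host reports, you can run uname. This is a quick illustrative check, not a step required by the Assisted Installer; note that arm64 hosts report aarch64 on Linux:

# Print the machine hardware name; expect x86_64, aarch64 (arm64),
# ppc64le, or s390x on supported hosts.
uname -m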
2.2. Resource requirements
This section describes the resource requirements for different clusters and installation options.
The multicluster engine for Kubernetes requires additional resources.
If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also assign additional resources to each node.
2.2.1. Multi-node cluster resource requirements
The resource requirements of a multi-node cluster depend on the installation options.
- Multi-node cluster basic installation
Control plane nodes:
- 4 CPU cores
- 16 GB RAM
- 100 GB storage
Note: The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP, and the sample fio check at the end of this section.
Compute nodes:
- 2 CPU cores
- 8 GB RAM
- 100 GB storage
- Multi-node cluster + multicluster engine
- Additional 4 CPU cores
- Additional 16 GB RAM
Note: If you deploy the multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.
- Multi-node cluster + multicluster engine + OpenShift Data Foundation or LVM Storage
- Additional 75 GB storage
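To gauge whether the control plane disks meet the etcd latency target noted above, you can run a synthetic fsync workload with fio. The following command is a minimal sketch in the spirit of the Knowledgebase solution cited earlier; the test directory, file size, and block size are illustrative assumptions, not values mandated by the Assisted Installer:

# Approximate the etcd WAL I/O pattern: sequential writes with an
# fdatasync after each write, on the disk that will back /var/lib/etcd.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 \
    --name=etcd-latency-check
# In the results, confirm that the fsync/fdatasync 99th percentile
# latency is below 10 ms.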
2.2.2. Single-node OpenShift resource requirements
The resource requirements for single-node OpenShift depend on the installation options.
- Single-node OpenShift basic installation
- 8 CPU cores
- 16 GB RAM
- 100 GB storage
- Single-node OpenShift + multicluster engine
- Additional 8 CPU cores
- Additional 32 GB RAM
Note: If you deploy the multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.
- Single-node OpenShift + multicluster engine + OpenShift Data Foundation or LVM Storage
- Additional 95 GB storage
2.3. Networking requirements
For hosts of type VMware, set the disk.enableUUID parameter to true, even when the platform is not vSphere.
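For example, if you manage the virtual machines with VMware's govc CLI, you can set this parameter in the VM's advanced configuration. This is an illustrative sketch, not an Assisted Installer requirement; <vm_name> is a placeholder, and govc must already be configured for your vCenter:

# Set disk.enableUUID in the virtual machine's ExtraConfig.
# Run while the VM is powered off, then boot it with the discovery ISO.
govc vm.change -vm <vm_name> -e disk.enableUUID=TRUE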
2.3.1. General networking requirements
The network must meet the following requirements:
- A DHCP server, unless you are using static IP addressing.
- A base domain name. You must ensure that the following requirements are met:
  - There is no wildcard, such as *.<cluster_name>.<base_domain>, or the installation will not proceed.
  - A DNS A/AAAA record exists for api.<cluster_name>.<base_domain>.
  - A DNS A/AAAA record with a wildcard exists for *.apps.<cluster_name>.<base_domain>.
- Port 6443 is open for the API URL to allow users outside the firewall to access the cluster by using the oc CLI tool.
- Port 443 is open for the console to allow users outside the firewall to access the console.
- A DNS A/AAAA record for each node in the cluster when using User Managed Networking, or the installation will not proceed. When using Cluster Managed Networking, the installation can proceed without these records, but you must add a DNS A/AAAA record for each node after installation is complete to connect to the cluster.
- A DNS PTR record for each node in the cluster, if you want to boot with the preset hostname when using static IP addressing. Otherwise, when using static IP addressing, the Assisted Installer automatically renames each node to its network interface MAC address.
- DNS A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure that the A/AAAA record DNS settings are working before installation to prevent installation delays. You can verify the records with the dig sketch that follows this list.
- For DNS record examples, see Example DNS configuration.
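Before installing, you can confirm that the required records resolve. The following dig queries are a minimal sketch that uses the example cluster name ocp4 and base domain example.com from the next section; substitute your own names:

# The API record must resolve to the API VIP or load balancer.
dig +short api.ocp4.example.com A
# Any name under *.apps must resolve to the ingress VIP or load balancer.
dig +short test.apps.ocp4.example.com A
# No wildcard may exist at the cluster domain itself, so this query
# should return nothing.
dig +short nonexistent.ocp4.example.com A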
The OpenShift Container Platform cluster’s network must also meet the following requirements:
- Connectivity between all cluster nodes
- Connectivity for each node to the internet
- Access to an NTP server for time synchronization between the cluster nodes
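You can spot-check these requirements from each host before installation. The following commands are an illustrative sketch; the node name is a placeholder, and the NTP check assumes the host uses chrony:

# Check connectivity to another cluster node (placeholder name).
ping -c 3 control-plane1.ocp4.example.com
# Check internet reachability; any HTTP status code confirms the
# registry is reachable (an unauthenticated request returns 401).
curl -s -o /dev/null -w '%{http_code}\n' https://registry.redhat.io/v2/
# Check that an NTP source is reachable and selected (chrony-based hosts).
chronyc sources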
2.3.1.1. Example DNS configuration
This section provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
2.3.1.2. Example DNS A record configuration
The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer.
Example DNS zone database
$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
  IN NS ns1.example.com.
  IN MX 10 smtp.example.com.
;
ns1.example.com.                 IN A 192.168.1.1
smtp.example.com.                IN A 192.168.1.5
;
helper.example.com.              IN A 192.168.1.5
;
api.ocp4.example.com.            IN A 192.168.1.5  ; (1)
api-int.ocp4.example.com.        IN A 192.168.1.5  ; (2)
;
*.apps.ocp4.example.com.         IN A 192.168.1.5  ; (3)
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 ; (4)
control-plane1.ocp4.example.com. IN A 192.168.1.98
control-plane2.ocp4.example.com. IN A 192.168.1.99
;
worker0.ocp4.example.com.        IN A 192.168.1.11 ; (5)
worker1.ocp4.example.com.        IN A 192.168.1.7
;
;EOF
(1) Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
(2) Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
(3) Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the worker machines by default.
Note: In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
(4) Provides name resolution for the control plane machines.
(5) Provides name resolution for the worker machines.
2.3.1.3. Example DNS PTR record configuration
The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer.
Example DNS zone database for reverse records
$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
  IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa.  IN PTR api.ocp4.example.com.            ; (1)
5.1.168.192.in-addr.arpa.  IN PTR api-int.ocp4.example.com.        ; (2)
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. ; (3)
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com.
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com.
;
11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com.        ; (4)
7.1.168.192.in-addr.arpa.  IN PTR worker1.ocp4.example.com.
;
;EOF
(1) Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
(2) Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
(3) Provides reverse DNS resolution for the control plane machines.
(4) Provides reverse DNS resolution for the worker machines.
Note: A PTR record is not required for the OpenShift Container Platform application wildcard.
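You can verify reverse resolution with dig. A minimal sketch using the example addresses above:

# Expect control-plane0.ocp4.example.com.
dig +short -x 192.168.1.97
# Expect api.ocp4.example.com. and api-int.ocp4.example.com.
dig +short -x 192.168.1.5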
2.3.2. Networking requirements for IBM Z
In IBM Z® environments, advanced networking technologies like Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard settings used in Assisted Installer deployments. These overrides are necessary to accommodate their unique requirements and to ensure a successful and efficient deployment on IBM Z®.
The following table lists the network devices that are supported for the network configuration override functionality:
| Network device | z/VM | KVM | LPAR Classic | LPAR Dynamic Partition Manager (DPM) |
|---|---|---|---|---|
| Open Systems Adapter (OSA) virtual switch | Not supported | — | Not supported | Not supported |
| Direct attached OSA | Supported | Only through a Linux bridge | Supported | Not supported |
| RDMA over Converged Ethernet (RoCE) | Not supported | Only through a Linux bridge | Not supported | Not supported |
| HiperSockets | Supported | Only through a Linux bridge | Supported | Not supported |
| Linux bridge | Not supported | Supported | Not supported | Not supported |
2.3.2.1. Configuring network overrides in IBM Z
You can specify a static IP address on IBM Z® machines that use Logical Partition (LPAR) or z/VM. This is especially useful when the network devices do not have a static MAC address assigned to them.
If you have an existing .parm file, edit it to include the following entry:
ai.ip_cfg_override=1
This parameter enables the Assisted Installer to pass the network settings from the .parm file to the CoreOS installer.
Example of the .parm file
rd.neednet=1 cio_ignore=all,!condev console=ttysclp0
coreos.live.rootfs_url=<coreos_url> (1)
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> (2)
rd.zfcp=<adapter>,<wwpn>,<lun> (3)
random.trust_cpu=on
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1 (4)
ignition.firstboot ignition.platform.id=metal
(1) For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
(2) For installations on direct access storage device (DASD) type disks, use rd.dasd= to specify the DASD where Red Hat Enterprise Linux (RHEL) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHEL is to be installed.
(3) Specify values for adapter, wwpn, and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
(4) Specify this parameter when using an OSA network adapter or HiperSockets.
The ai.ip_cfg_override parameter overrides the host's network configuration settings.
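After a node boots with the modified .parm file, you can confirm that the parameter reached the kernel command line. This check is an illustrative suggestion, not a documented Assisted Installer step:

# On the booted node, the parameter appears in the kernel command line.
grep -o 'ai.ip_cfg_override=1' /proc/cmdline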
2.4. Preflight validations
The Assisted Installer verifies that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:
- Ensures network connectivity
- Ensures sufficient network bandwidth
- Ensures connectivity to the registry
- Ensures that any upstream DNS can resolve the required domain name
- Ensures time synchronization between cluster nodes
- Verifies that the cluster nodes meet the minimum hardware requirements
- Validates the installation configuration parameters
If the Assisted Installer does not successfully validate these requirements, the installation does not proceed.
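If a validation fails, you can inspect the results programmatically through the Assisted Installer REST API. The following sketch assumes the hosted service at api.openshift.com, a valid bearer token, and the jq tool; the cluster ID and token are placeholders, and the validations_info field name is based on the current v2 API and may vary by version:

# Retrieve the cluster object and print its validation results.
curl -s -H "Authorization: Bearer <api_token>" \
  "https://api.openshift.com/api/assisted-install/v2/clusters/<cluster_id>" \
  | jq '.validations_info'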