
Chapter 2. Prerequisites


The Assisted Installer validates the following prerequisites to ensure successful installation.

If you use a firewall, you must configure it so that the Assisted Installer can access the resources it requires to function.

2.1. Supported CPU architectures

The Assisted Installer is supported on the following CPU architectures:

  • x86_64
  • arm64
  • ppc64le (IBM Power®)
  • s390x (IBM Z®)

2.2. Supported drive types

This section lists the installation drive types that you can and cannot use when installing Red Hat OpenShift Container Platform with the Assisted Installer.

Supported drive types

The following list shows the installation drive types supported for the different OpenShift Container Platform versions and CPU architectures:

  • HDD: Supported on all versions and all CPU architectures. A hard disk drive.
  • SSD: Supported on all versions and all CPU architectures. An SSD or NVMe drive.
  • Multipath: Supported on all versions and all CPU architectures. A Linux multipath device that can aggregate paths for various protocols. Using multipath enhances availability and performance. Currently, the Assisted Installer supports multipathing for the Fibre Channel and iSCSI protocols. For a quick check that a host presents a multipath device, see the example after this list.
  • FC (Fibre Channel): Supported on all versions; s390x and x86_64 only. Indicates a single path Fibre Channel (FC) drive. For a multipath Fibre Channel configuration, see 'Multipath' in this list.
  • iSCSI: Supported on OpenShift Container Platform 4.15 and later; x86_64 only.

    • You can install a cluster on a single or multipath iSCSI boot device.
    • The Assisted Installer supports iSCSI boot volumes through iPXE boot.
    • A minimal ISO image is mandatory on iSCSI boot volumes. Using a full ISO image will result in an error.
    • iSCSI boot requires two machine network interfaces: one for the iSCSI traffic and the other for the OpenShift Container Platform cluster installation.
    • A static IP address is not supported when using iSCSI boot volumes.

  • RAID: Supported on OpenShift Container Platform 4.14 and later; all CPU architectures. A software RAID drive. Configure the RAID through the BIOS/UEFI. If this option is unavailable, you can configure OpenShift Container Platform to mirror the drives. For details, see Encrypting and mirroring disks during installation.
  • ECKD: Supported on all versions; s390x only. IBM drive.
  • ECKD (ESE): Supported on all versions; s390x only. IBM drive.
  • FBA: Supported on all versions; s390x only. IBM drive.
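
If you plan to install to a multipath device, you can confirm that the host has assembled it before starting the installation. The following is a minimal sketch using standard Linux tools; it assumes that the device-mapper-multipath package is installed and that multipathd is running on the host.

# List assembled multipath maps and their component paths.
sudo multipath -ll

# Show block devices and their transport; multipath maps appear with TYPE "mpath".
lsblk -o NAME,TYPE,SIZE,TRAN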

Unsupported drive types

The following list shows the installation drive types that are not supported:

  • Unknown: The system could not detect the drive type.
  • FDD: A floppy disk drive.
  • ODD: An optical disk drive (e.g., CD-ROM).
  • Virtual: A loopback device.
  • LVM: A Linux Logical Volume Management drive.

2.3. Resource requirements

This section describes the resource requirements for different clusters and installation options.

The multicluster engine for Kubernetes requires additional resources.

If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also assign additional resources to each node.

2.3.1. Multi-node cluster resource requirements

The resource requirements of a multi-node (high-availability) cluster depend on the installation options.

Description
A standard OpenShift Container Platform cluster configuration consists of three to five control plane nodes and two or more worker nodes. This configuration ensures full high availability for control plane services.
Multi-node cluster basic installation
  • Control plane nodes:

    • 4 CPU cores
    • 16 GB RAM
    • 100 GB storage

      Note

      The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP. For a sample fio invocation, see the sketch at the end of this section.

  • Compute nodes:

    • 2 CPU cores
    • 8 GB RAM
    • 100 GB storage
Multi-node cluster + multicluster engine
  • Additional 4 CPU cores
  • Additional 16 GB RAM

    Note

    If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

Multi-node cluster + multicluster engine + OpenShift Data Foundation
  • Additional 75 GB storage
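
To gauge whether a disk meets the etcd latency target described in the note above, you can run a synthetic fsync test with fio before installation. The following is a minimal sketch, not the exact procedure from the Knowledgebase article; the test directory /var/lib/etcd-test is an arbitrary path on the disk that will back etcd.

# Write small, synchronously flushed blocks and report fsync latency percentiles.
sudo mkdir -p /var/lib/etcd-test
sudo fio --name=etcd-perf --directory=/var/lib/etcd-test \
     --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300
# Check that the reported fsync/fdatasync 99th percentile is below 10 ms, then clean up.
sudo rm -rf /var/lib/etcd-test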

2.3.2. Two-Node OpenShift with Arbiter (TNA) cluster resource requirements

The resource requirements of a Two-Node OpenShift with Arbiter (TNA) cluster depend on the installation options.

Description

A Two-Node OpenShift with Arbiter (TNA) cluster is a compact, cost-effective OpenShift Container Platform topology. It consists of two control plane nodes and a lightweight arbiter node. The arbiter node stores the full etcd data, maintaining an etcd quorum and preventing split brain. It does not run the additional control plane components kube-apiserver and kube-controller-manager, nor does it run workloads. For details, see Overview of etcd.

To install a Two-Node OpenShift with Arbiter cluster, assign an arbiter role to at least one of the nodes and set the control plane node count for the cluster to 2. Although OpenShift Container Platform does not currently impose a limit on the number of arbiter nodes, the typical deployment includes only one to minimize the use of hardware resources.
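
If you drive the installation through the Assisted Installer REST API rather than the web console, these two settings map to a cluster update and a host update. The following is a rough sketch only: the control_plane_count and host_role field names are assumptions based on the v2 API and should be verified against the API reference for your version, and API_TOKEN, CLUSTER_ID, INFRA_ENV_ID, and HOST_ID are placeholders.

# Assumed sketch: set the control plane node count to 2 on the cluster definition ...
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json" \
  -d '{"control_plane_count": 2}'

# ... and assign the arbiter role to one of the discovered hosts.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/${HOST_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json" \
  -d '{"host_role": "arbiter"}'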

Following installation, you can add additional arbiter nodes to a Two-Node OpenShift with Arbiter cluster but not to a standard multi-node cluster. It is also not possible to convert between a Two-Node OpenShift with Arbiter and standard topology.

Support for a Two-Node OpenShift with Arbiter cluster begins with OpenShift Container Platform version 4.19. This configuration is available only for bare-metal installations.

Important

Two-Node OpenShift with Arbiter (TNA) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Two-Node OpenShift with Arbiter basic installation
  • Control plane nodes:

    • 4 CPU cores
    • 16 GB RAM
    • 100 GB storage

      Note

      The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP.

  • Arbiter node:

    • 2 CPU cores
    • 8 GB RAM
    • 50 GB storage
Two-Node OpenShift with Arbiter + multicluster engine
  • Additional 4 CPU cores
  • Additional 16 GB RAM

    Note

    If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

Two-Node OpenShift with Arbiter + multicluster engine + OpenShift Data Foundation
  • Additional 75 GB storage

2.3.3. Single-node OpenShift cluster resource requirements

The resource requirements for single-node OpenShift depend on the installation options.

Description
A single-node OpenShift cluster is an OpenShift Container Platform deployment that runs entirely on a single node. Single-node OpenShift includes the control plane and worker functionality on one physical or virtual machine.
Single-node OpenShift basic installation
  • 8 CPU cores
  • 16 GB RAM
  • 100 GB storage
Single-node OpenShift + multicluster engine
  • Additional 8 CPU cores
  • Additional 32 GB RAM

    Note

    If you deploy multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.

Single-node OpenShift + multicluster engine + OpenShift Data Foundation
  • Additional 95 GB storage

2.4. Networking requirements

For hosts of type VMware, set disk.EnableUUID to TRUE, even when the platform is not vSphere.
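
One way to set this parameter is with the govc CLI, as sketched below; you can also set it as an advanced configuration parameter in the vSphere Client. The vCenter URL, credentials, and VM path are placeholders, and the setting typically takes effect only after the VM is powered off and on again.

# Assumed sketch: enable disk UUIDs on a VMware virtual machine with govc.
export GOVC_URL='https://vcenter.example.com' GOVC_USERNAME='<user>' GOVC_PASSWORD='<password>'
govc vm.change -vm '/<datacenter>/vm/<vm_name>' -e disk.EnableUUID=TRUE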

2.4.1. General networking requirements

The network must meet the following requirements:

  • You have configured either dynamic (DHCP) or static IP addressing.
  • You have selected the correct route configuration for your IP addressing method:

    • For dynamic IP addressing, ensure that you have configured your network routes dynamically via DHCP.
    • For static IP addressing, ensure that you have configured the network routes manually via the static networking configurations.
    Important

    You cannot combine dynamic IP addresses with static route configurations. When the Assisted Installer receives a dynamic IP address (with a /128 prefix), it specifically looks for network routes that were also configured dynamically, such as those advertised via Router Advertisement (RA). If a network route is configured manually (with a /64 prefix, for example), the Assisted Installer ignores it.

  • You have opened port 6443 to allow access to the cluster API URL with the oc CLI tool from outside the firewall.
  • You have opened port 22624 in all firewalls. The Machine Config Operator (MCO) and new worker nodes use port 22624 to get the ignition data from the cluster API.
  • You have opened port 443 to allow console access outside the firewall. Port 443 is also used for all ingress traffic. For example reachability checks, see the sketch after this list.
  • You have configured DNS to connect to the cluster API or ingress endpoints from outside the cluster.
  • Optional: You have created a DNS Pointer record (PTR) for each node in the cluster if using static IP addressing.
Note

You must create a DNS PTR record to boot with a preset hostname if the hostname will not come from another source (/etc/hosts or DHCP). Otherwise, the Assisted Installer’s automatic node renaming feature will rename the nodes to their network interface MAC address.
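
As a quick way to confirm that the API and console ports are reachable from outside the firewall, you can probe them with curl, as in the following sketch. The host names come from the example DNS configuration later in this chapter; any HTTP response, even an authentication error, shows that the port is open. Port 22624 is used by nodes inside the cluster network, so test it from a machine on that network.

# Probe the Kubernetes API endpoint (port 6443); an error response such as 403 still confirms reachability.
curl -kI https://api.ocp4.example.com:6443/

# Probe the console and ingress endpoint (port 443).
curl -kI https://console-openshift-console.apps.ocp4.example.com/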

2.4.2. External DNS

Installing a multi-node cluster with user-managed networking requires external DNS. External DNS is not required to install multi-node clusters with cluster-managed networking or single-node OpenShift with the Assisted Installer. Configure external DNS after installation to connect to the cluster from an external source.

External DNS requires the creation of the following record types:

  • A/AAAA record for api.<cluster_name>.<base_domain>.
  • A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain>.
  • A/AAAA record for each node in the cluster.
Important
  • Do not create a wildcard, such as *.<cluster_name>.<base_domain>, or the installation will not proceed.
  • A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure that the newly created DNS records are resolving before installation to prevent installation delays; for example checks, see the dig sketch after this list.
  • For DNS record examples, see Example DNS configuration.
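
The following sketch shows one way to confirm that the records resolve before you start the installation, using dig and the names from the Example DNS configuration section (cluster name ocp4, base domain example.com); substitute your own cluster name, base domain, and node addresses.

# Forward lookups for the API record and a host under the wildcard application domain.
dig +short api.ocp4.example.com
dig +short console-openshift-console.apps.ocp4.example.com

# Forward lookup for a node record and, if you created PTR records, a reverse lookup for its address.
dig +short control-plane0.ocp4.example.com
dig -x 192.168.1.97 +short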

The OpenShift Container Platform cluster’s network must also meet the following requirements:

  • Connectivity between all cluster nodes
  • Connectivity for each node to the internet
  • Access to an NTP server for time synchronization between the cluster nodes

2.4.2.1. Example DNS configuration

The following DNS configuration provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to recommend one DNS solution over another.

In the examples, the cluster name is ocp4 and the base domain is example.com.

Example DNS A record configuration

The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1.example.com.		IN	A	192.168.1.1
smtp.example.com.		IN	A	192.168.1.5
;
helper.example.com.		IN	A	192.168.1.5
;
api.ocp4.example.com.		IN	A	192.168.1.5 1
api-int.ocp4.example.com.	IN	A	192.168.1.5 2
;
*.apps.ocp4.example.com.	IN	A	192.168.1.5 3
;
control-plane0.ocp4.example.com.	IN	A	192.168.1.97 4
control-plane1.ocp4.example.com.	IN	A	192.168.1.98
control-plane2.ocp4.example.com.	IN	A	192.168.1.99
;
worker0.ocp4.example.com.	IN	A	192.168.1.11 5
worker1.ocp4.example.com.	IN	A	192.168.1.7
;
;EOF

1. Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
2. Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
3. Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the worker machines by default.
Note

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

4. Provides name resolution for the control plane machines.
5. Provides name resolution for the worker machines.
Example DNS PTR record configuration

The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database for reverse records

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
5.1.168.192.in-addr.arpa.	IN	PTR	api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.	IN	PTR	api-int.ocp4.example.com. 2
;
97.1.168.192.in-addr.arpa.	IN	PTR	control-plane0.ocp4.example.com. 3
98.1.168.192.in-addr.arpa.	IN	PTR	control-plane1.ocp4.example.com.
99.1.168.192.in-addr.arpa.	IN	PTR	control-plane2.ocp4.example.com.
;
11.1.168.192.in-addr.arpa.	IN	PTR	worker0.ocp4.example.com. 4
7.1.168.192.in-addr.arpa.	IN	PTR	worker1.ocp4.example.com.
;
;EOF

1. Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
2. Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
3. Provides reverse DNS resolution for the control plane machines.
4. Provides reverse DNS resolution for the worker machines.
Note

A PTR record is not required for the OpenShift Container Platform application wildcard.

2.4.3. Networking requirements for IBM Z

In IBM Z® environments, advanced networking technologies like Open Systems Adapter (OSA), HiperSockets, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) require specific configurations that deviate from the standard settings used in Assisted Installer deployments. These overrides are necessary to accommodate their unique requirements and ensure a successful and efficient deployment on IBM Z®.

The following list shows the network devices that are supported for the network configuration override functionality, and the environments in which each device is supported:

  • OSA virtual switch: Supported on z/VM. Not applicable to KVM, LPAR Classic, or LPAR Dynamic Partition Manager (DPM).
  • Direct attached OSA: Supported on z/VM, LPAR Classic, and LPAR DPM. Supported on KVM only through a Linux bridge.
  • RDMA over Converged Ethernet (RoCE): Supported on z/VM, LPAR Classic, and LPAR DPM. Supported on KVM only through a Linux bridge.
  • HiperSockets: Supported on z/VM, LPAR Classic, and LPAR DPM. Supported on KVM only through a Linux bridge.
  • Linux bridge: Supported on KVM. Not applicable to z/VM, LPAR Classic, or LPAR DPM.

2.4.3.1. Configuring network overrides in IBM Z

You can specify a static IP address on IBM Z® machines that use Logical Partition (LPAR) and z/VM. This is especially useful when the network devices do not have a static MAC address assigned to them.

If you have an existing .parm file, edit it to include the following entry:

ai.ip_cfg_override=1

This parameter allows the .parm file to pass the network settings to the CoreOS installer.

Example of the .parm file

rd.neednet=1 cio_ignore=all,!condev
console=ttysclp0
coreos.live.rootfs_url=<coreos_url> 1
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> 2
rd.zfcp=<adapter>,<wwpn>,<lun> random.trust_cpu=on 3
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1 4
ignition.firstboot ignition.platform.id=metal
random.trust_cpu=on

1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2. For installations on direct access storage device (DASD) type disks, use rd.dasd= to specify the DASD where Red Hat Enterprise Linux (RHEL) is to be installed. For installations on Fibre Channel Protocol (FCP) disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHEL is to be installed.
3. Specify values for adapter, wwpn, and lun as in the following example: rd.zfcp=0.0.8002,0x500507630400d1e3,0x4000404600000000.
4. Specify this parameter when using an OSA network adapter or HiperSockets.
Note

The override parameter overrides the host’s network configuration settings.

2.5. Preflight validations

The Assisted Installer ensures that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:

  • Ensures network connectivity
  • Ensures sufficient network bandwidth
  • Ensures connectivity to the registry
  • Ensures that any upstream DNS can resolve the required domain name
  • Ensures time synchronization between cluster nodes
  • Verifies that the cluster nodes meet the minimum hardware requirements
  • Validates the installation configuration parameters

If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed.
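
If you use the Assisted Installer REST API, you can inspect the validation results directly instead of waiting for the web console to report them. The following is a rough sketch against the hosted service; the validations_info field name and the API_TOKEN and CLUSTER_ID placeholders are assumptions to verify against the API reference for your version.

# Assumed sketch: fetch the cluster resource and print its preflight validation results.
curl -s -H "Authorization: Bearer ${API_TOKEN}" \
  "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  | jq -r '.validations_info'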
