Chapter 1. About the Assisted Installer
You can install OpenShift Container Platform on on-premises hardware or on-premises virtual machines by using the Assisted Installer.
1.1. Using the Assisted Installer
The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure. The Assisted Installer also supports various CPU architectures, including x86_64, s390x (IBM Z®), arm64, and ppc64le (IBM Power®).
You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, in the following configurations:
- Highly available OpenShift Container Platform or single-node OpenShift cluster
- OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration
- Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation
1.2. Features
The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features:
- Web interface
- You can install your cluster by using the Assisted Installer in the Hybrid Cloud Console instead of creating installation configuration files manually.
- No bootstrap node
- You do not need a bootstrap node because the bootstrapping process runs on a node within the cluster.
- Streamlined installation workflow
- You do not need in-depth knowledge of OpenShift Container Platform to deploy a cluster. The Assisted Installer provides reasonable default configurations.
- You do not need to run the OpenShift Container Platform installer locally.
- You have access to the latest version of the Assisted Installer, which supports the latest tested z-stream releases.
- Advanced networking options
- The Assisted Installer supports IPv4 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy.
- OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later.
- SDN is supported up to OpenShift Container Platform 4.14. SDN supports IPv4 only.
- Preinstallation validation
Before installing, the Assisted Installer checks the following configurations:
- Network connectivity
- Network bandwidth
- Connectivity to the registry
- Upstream DNS resolution of the domain name
- Time synchronization between cluster nodes
- Cluster node hardware
- Installation configuration parameters
- REST API
- You can automate the installation process by using the Assisted Installer REST API.
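As a rough illustration of that automation, the following Python sketch builds a cluster-registration request. The v2 endpoint path follows the published assisted-service API, but every field value, the cluster name, and the token handling are illustrative assumptions rather than a definitive recipe:

```python
import json

# Sketch: registering a cluster definition through the Assisted
# Installer REST API (v2). All field values below are hypothetical.
API_URL = "https://api.openshift.com/api/assisted-install/v2/clusters"

cluster_params = {
    "name": "example-cluster",                 # hypothetical cluster name
    "openshift_version": "4.18",               # must be a version the service offers
    "base_dns_domain": "example.com",          # hypothetical base DNS domain
    "control_plane_count": 3,                  # 1 or 3; 2, 4, or 5 where supported
    "pull_secret": json.dumps({"auths": {}}),  # replace with your real pull secret
}

body = json.dumps(cluster_params)
# A real call would POST `body` to API_URL with an
# "Authorization: Bearer <token>" header obtained from your Red Hat
# account, for example by using urllib.request or curl.
```

In practice, the same payload shape also drives automation tools that wrap the REST API, so keeping it as plain JSON makes it easy to version-control.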
1.3. Host topology
The OpenShift Container Platform architecture allows you to select a standard Kubernetes role for each of the discovered hosts. These roles define the function of the host within the cluster.
1.3.1. Supported host roles
During the installation process, you can select a role for a host or configure the Assisted Installer to assign it for you.
The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API.
If you do not select a role, the system selects one for you. You can change the role at any time before installation starts.
Each host can have any of the following roles:
- Control plane
- Compute
- Arbiter
- Auto-assign
1.3.1.1. Control plane (master) role
The control plane nodes run the services that are required to control the cluster, including the API server. The control plane schedules workloads, maintains cluster state, and ensures stability.
1.3.1.2. Compute (worker) role
The compute nodes are responsible for executing workloads for cluster users. Compute nodes advertise their capacity, so that the control plane scheduler can identify suitable compute nodes for running pods and containers.
1.3.1.3. Arbiter role
Arbiter nodes are a more cost-effective alternative to control plane nodes. They function similarly but run only the essential components required to maintain the etcd quorum and prevent a split-brain condition. Because they do not host the full control plane or any workloads, arbiter nodes can use less powerful hardware.
The Assisted Installer provides arbiter nodes for Two-Node OpenShift with Arbiter (TNA) clusters. Support for Two-Node OpenShift with Arbiter clusters begins with OpenShift Container Platform 4.19. For more details, see "Two-Node OpenShift with Arbiter (TNA) resource requirements" in Additional resources.
To install a Two-Node OpenShift with Arbiter cluster, assign the arbiter or auto-assign role to at least one of the nodes, and set the control plane node count for the cluster to 2.
Two-Node OpenShift with Arbiter (TNA) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
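Expressed against the REST API, the two settings above might look like the following sketch. The control_plane_count field appears in the assisted-service API; treat the host_role value "arbiter" and the exact update requests as assumptions to verify against the API reference:

```python
# Hypothetical PATCH payloads: the first updates the cluster definition,
# the second sets the role on one discovered host.
cluster_update = {"control_plane_count": 2}   # two control plane nodes (TNA)
host_update = {"host_role": "arbiter"}        # or "auto-assign"
```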
1.3.1.4. Auto-assign role
The Assisted Installer sets each host to the auto-assign role by default. With auto-assign, the Assisted Installer automatically determines whether the host should function as a control plane, arbiter, or compute (worker) node, based on detected hardware and network latency.
To determine the most suitable role, the Assisted Installer evaluates each host’s memory, CPU, disk space, and network performance. It assigns an internal suggested_role value to each host, which drives the assignment process when the auto-assign role is set. Preinstallation validations ensure that the resulting role allocation is valid.
The logic for auto-assigning roles is as follows:
Sort hosts by capability
The Assisted Installer sorts the hosts by their hardware capabilities, from weakest to strongest. All hosts must meet the minimum requirements.
Assign control plane roles
The Assisted Installer assigns the control plane role to the weakest hosts first, until it reaches the number of control plane nodes specified by the control_plane_count field. A host is assigned a control plane role only if it passes the necessary control plane role validations. For details on specifying the control plane count, see Additional resources.
Assign arbiter role
The Assisted Installer assigns the arbiter role to a host only when the following conditions are met:
- The control plane count is 2.
- The host meets the minimum hardware requirements for the cluster.
- One of the following applies:
- The cluster already contains two control plane nodes, either manually assigned or auto-assigned; or
- The host does not meet the minimum hardware requirements for a control plane node.
Handle GPU hosts
By default, the Assisted Installer designates non-GPU hosts as control plane nodes and GPU hosts as worker nodes.
The Assisted Installer designates a GPU host as a control plane node in either of the following scenarios:
- When no other available hosts meet the minimum requirements.
- When the number of required control plane nodes exceeds the number of available non-GPU hosts.
Assign remaining hosts
The Assisted Installer designates all remaining hosts as worker (compute) nodes. This approach ensures that the Assisted Installer prioritizes the most capable hosts for worker roles, while still maintaining the necessary number of valid control plane and arbiter nodes.
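The steps above can be sketched as a small Python function. The Host fields, the capability ranking, and the function name are hypothetical simplifications; the real service also applies full role validations and the arbiter rules described earlier:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_cores: int
    ram_gib: int
    has_gpu: bool = False

def auto_assign(hosts, control_plane_count):
    # Rank hosts weakest first so the strongest remain available as workers.
    ranked = sorted(hosts, key=lambda h: (h.cpu_cores, h.ram_gib))
    # Prefer non-GPU hosts for the control plane; fall back to GPU hosts
    # only when there are not enough non-GPU hosts.
    non_gpu = [h for h in ranked if not h.has_gpu]
    gpu = [h for h in ranked if h.has_gpu]
    roles = {h.name: "control-plane" for h in (non_gpu + gpu)[:control_plane_count]}
    # All remaining hosts become compute (worker) nodes.
    for h in ranked:
        roles.setdefault(h.name, "worker")
    return roles

roles = auto_assign(
    [Host("small", 4, 16), Host("mid", 8, 32),
     Host("big", 16, 64), Host("gpu", 32, 128, has_gpu=True)],
    control_plane_count=3,
)
```

In this example the GPU host is the most capable machine, and the sketch leaves it as a worker, mirroring the prioritization described above.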
To assign a role to a host using the web console or the API, or to troubleshoot pre-installation validation errors for hosts with an auto-assign role, see Additional resources.
1.3.2. Control plane count
The control plane count is the number of control plane (master) nodes in the cluster. Using a higher number of control plane nodes boosts fault tolerance and availability, minimizing downtime during failures. The number of control plane nodes that the Assisted Installer supports varies according to OpenShift Container Platform version:
- All versions of OpenShift Container Platform support one or three control plane nodes. A cluster with one control plane node is a single-node OpenShift cluster.
- From OpenShift Container Platform 4.18, the Assisted Installer also supports four or five control plane nodes on a bare metal or user-managed networking platform with an x86_64 architecture. These clusters can include any number of compute nodes.
- From OpenShift Container Platform 4.19, the Assisted Installer also supports two control plane nodes for a Two-Node OpenShift with Arbiter (TNA) cluster topology. A cluster with only two control plane nodes must have at least one host with an arbiter role. For details, see "Supported host roles" in Additional resources.
Two-Node OpenShift with Arbiter (TNA) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
To specify the required number of control plane nodes for your cluster in either the web console or API, see Additional resources.
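The version rules above can be summarized in a small helper. The thresholds follow the text of this section; the function itself is a hypothetical illustration, not part of any Assisted Installer API:

```python
def supported_control_plane_counts(ocp_version):
    # Parse "major.minor" from a version string such as "4.18".
    major, minor = (int(part) for part in ocp_version.split(".")[:2])
    counts = {1, 3}                # all versions: single-node or three-node
    if (major, minor) >= (4, 18):
        counts |= {4, 5}           # bare metal / user-managed networking, x86_64
    if (major, minor) >= (4, 19):
        counts.add(2)              # Two-Node OpenShift with Arbiter (Tech Preview)
    return counts
```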
1.3.3. Control plane workload scheduling
For smaller clusters, scheduling workloads to run on control plane nodes improves efficiency and maximizes resource usage. You can enable this option during installation setup or as a postinstallation step.
Use the following guidelines to determine when to use this feature:
- Single-node OpenShift clusters, Two-Node OpenShift with Arbiter clusters, or clusters with up to one worker (compute) node: The system schedules workloads on control plane nodes by default. This setting cannot be changed.
- Clusters with two to seven worker nodes: This configuration supports scheduling workloads manually on both control plane (master) and compute (worker) nodes.
- Clusters with more than seven worker nodes: Scheduling workloads on control plane nodes is not recommended.
Schedulable control plane nodes have the role Control plane, Worker.
When you configure control plane nodes to be schedulable for workloads, an additional subscription is required for each control plane node that functions as a compute (worker) node.
For guidance on configuring control plane nodes as schedulable in the Assisted Installer during installation, and for post-installation steps in OpenShift Container Platform, see Additional resources.
1.4. API support policy
Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation.
1.4.1. API deprecation notice
The following entry describes the deprecated and modified APIs in the Assisted Installer.
assisted_service API
Affected models:
- cluster
- cluster-create-params
Description of change:
The high_availability_mode field is deprecated starting from April 2025 and is planned to be removed in three months. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Use the control_plane_count field instead. This change enables support for clusters with 4 or 5 control plane nodes, in addition to the previously supported configurations with 1 or 3 control plane nodes. The Assisted Installer supports 4 or 5 control plane nodes starting from OpenShift Container Platform 4.18.
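In payload terms, the migration away from the deprecated field might look like the following sketch. The two field names come from the entry above; the surrounding values, and the "Full"/"None" value names for high_availability_mode, are assumptions to verify against the assisted-service API reference:

```python
# Deprecated shape: high_availability_mode expressed the topology as a
# mode name (assumed here to be "Full" for multi-node, "None" for
# single-node).
old_params = {
    "name": "example-cluster",
    "high_availability_mode": "Full",
}

# Current shape: control_plane_count expresses the same intent directly
# and also permits 2, 4, or 5 control plane nodes on supported versions.
new_params = {
    "name": "example-cluster",
    "control_plane_count": 3,   # 1 replaces "None"; 3 replaces "Full"
}
```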