Chapter 3. Installing with the Assisted Installer UI


After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.

3.1. Pre-installation considerations

Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:

  • Which base domain to use
  • Which OpenShift Container Platform product version to install
  • Whether to install a full cluster or single-node OpenShift
  • Whether to use a DHCP server or a static network configuration
  • Whether to use IPv4 or dual-stack networking
  • Whether to install OpenShift Virtualization
  • Whether to install Red Hat OpenShift Data Foundation
  • Whether to install Multicluster Engine
  • Whether to integrate with the platform when installing on vSphere or Nutanix
  • Whether to install a mixed-cluster architecture
Important

If you intend to install any of the Operators, refer to the relevant hardware and storage requirements in Optional: Installing Operators.

3.2. Setting the cluster details

To create a cluster with the Assisted Installer web user interface, use the following procedure.

Procedure

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. In the Red Hat OpenShift tile, click Scale your applications.
  3. In the menu, click Clusters.
  4. Click Create cluster.
  5. Click the Datacenter tab.
  6. Under Assisted Installer, click Create cluster.
  7. Enter a name for the cluster in the Cluster name field.
  8. Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.

    Note

    The base domain must be a valid DNS name. You must not have a wildcard domain set up for the base domain. You can check for an existing wildcard record with the dig query shown at the end of this section.

  9. Select the version of OpenShift Container Platform to install.

    Important
    • For IBM Power and IBM zSystems platforms, only OpenShift Container Platform version 4.13 and later is supported.
    • For a mixed-architecture cluster installation, select OpenShift Container Platform version 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources.
  10. Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node.

    Note

    Currently, SNO is not supported on IBM zSystems and IBM Power platforms.

  11. Optional: The Assisted Installer already has a pull secret associated with your account. If you want to use a different pull secret, select Edit pull secret.
  12. Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix, vSphere, or Oracle Cloud Infrastructure. The Assisted Installer defaults to having no platform integration.

    Note

    For details on each of the external partner integrations, see Additional Resources.

    Important

    Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support.

  13. Optional: The Assisted Installer defaults to the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select the architecture to use. Valid values are arm64, ppc64le, and s390x. Note that some features are not available with the arm64, ppc64le, and s390x CPU architectures.

    Important

    For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources.

  14. Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges, or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds.

    Note

    A Static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure (OCI).

  15. Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks, select Control plane node, worker for single-node OpenShift. For multi-node clusters, select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt the worker node installation disks.
Important

You cannot change the base domain, the SNO checkbox, the CPU architecture, the host’s network configuration, or the disk encryption after installation begins.
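
To check whether a wildcard record already exists for the base domain (see the note in step 8), you can query an arbitrary subdomain. This is a minimal sketch that assumes the dig utility is installed and uses a made-up subdomain name; if the query returns an IP address, a wildcard record probably exists and you must remove it:

    $ dig +short "unlikely-subdomain-check.<base_domain>"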

3.3. Optional: Configuring static networks

The Assisted Installer supports IPv4 networking with SDN and OVN, and supports IPv6 and dual-stack networking with OVN only. The Assisted Installer supports configuring static network interfaces with an IP address to MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. First, you must set the network-wide configurations. Then, you must create a host-specific configuration for each host.

Note

For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned, because pooled MAC addresses might cause issues with NMState.

Procedure

  1. Select the internet protocol version. Valid options are IPv4 and Dual stack.
  2. If the cluster hosts are on a shared VLAN, enter the VLAN ID.
  3. Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.

    1. Enter the cluster network’s IP address range in CIDR notation.
    2. Enter the default gateway IP address.
    3. Enter the DNS server IP address.
  4. Enter the host-specific configuration.

    1. If you are setting only a static IP address for a single network interface, use the form view to enter the IP address and the MAC address for each host.
    2. If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host by using NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration. A sample host configuration follows this procedure.
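
The following is a minimal sketch of a host-specific configuration for the YAML view, using NMState syntax. The eth0 interface name and the 192.0.2.0/24 addresses are placeholders; substitute the values for your network and map each interface name to the MAC address of the corresponding host interface:

    interfaces:
      - name: eth0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.0.2.10
              prefix-length: 24
    dns-resolver:
      config:
        server:
          - 192.0.2.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.0.2.1
          next-hop-interface: eth0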

Additional resources

3.4. Configuring Operators

The Assisted Installer can install certain Operators as part of the cluster deployment. The Operators include:

  • OpenShift Virtualization
  • Multicluster Engine (MCE) for Kubernetes
  • OpenShift Data Foundation
  • Logical Volume Manager (LVM) Storage
Important

For a detailed description of each of the Operators, together with hardware requirements, storage considerations, interdependencies, and additional installation instructions, see Additional Resources.

This step is optional. You can complete the installation without selecting an Operator.

Procedure

  1. To install OpenShift Virtualization, select Install OpenShift Virtualization.
  2. To install Multicluster Engine (MCE), select Install multicluster engine.
  3. To install OpenShift Data Foundation, select Install OpenShift Data Foundation.
  4. To install Logical Volume Manager, select Install Logical Volume Manager.
  5. Click Next to proceed to the next step.

3.5. Adding hosts to the cluster

You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in memory, with an agent.

Perform the following procedure for each host on the cluster.

Procedure

  1. Click the Add hosts button and select the provisioning type.

    1. Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures.
    2. Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM.
    3. Select iPXE: Provision from your network server to boot the hosts using iPXE.

      Note
      • If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually.
      • If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only.
  2. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server.
  3. Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.

    Important

    Do not skip this procedure in production environments, where disaster recovery and debugging are required.

    1. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access. A sample ssh-keygen command also follows this procedure.
    2. In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
  4. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates. Add additional certificates in X.509 format.
  5. Configure the discovery image if needed.
  6. Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform. You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform.
  7. Click Generate Discovery ISO or Generate Script File.
  8. Download the discovery ISO or iPXE script.
  9. Boot the hosts with the discovery image or iPXE script.
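
If you need to create an SSH key pair for step 3, the following is one way to generate one on a local Linux machine. The RSA key type and the ~/.ssh/id_rsa path are assumptions chosen to match the id_rsa.pub file name mentioned above; adjust them as needed. The -N '' option creates the key without a passphrase; omit it to be prompted for one:

    $ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

This command writes the private key to ~/.ssh/id_rsa and the public key to ~/.ssh/id_rsa.pub, which you can upload in the SSH public key field.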

3.6. Configuring hosts

After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.

Procedure

  1. From the Options (⋮) menu for a host, select Change hostname. If necessary, enter a new name for the host and click Change. You must ensure that each host has a valid and unique hostname.

    Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change.

    Note

    You can see the new names appear in the Preview pane as you type. The names are identical for all selected hosts, except for an incrementing number in place of {{n}}. For example, host-{{n}} becomes host-1, host-2, and so on.

  2. From the Options (⋮) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion.

    Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts.

    Note

    In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a host that is also a control plane host, or if you are left with only two hosts, you receive a message stating that the system is not ready. To restore a host, reboot it from the discovery ISO.

  3. From the Options (⋮) menu for the host, optionally select View host events. The events in the list are presented chronologically.
  4. For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host.

    If you do not select a role, the Assisted Installer assigns the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.

  5. Click the Status link to view hardware, network, and operator validations for the host.
  6. Click the arrow to the left of a host name to expand the host details.

Once all cluster hosts appear with a status of Ready, proceed to the next step.

3.7. Configuring storage disks

After discovering and configuring the cluster hosts, you can optionally configure the storage disks for each host.

The host configuration options available here are discussed in the Configuring hosts section. See the additional resources below for the link.

Procedure

  1. Click the arrow to the left of the checkbox next to a host name to display the storage disks for that host.
  2. If there are multiple storage disks for a host, you can select a different disk to act as the installation disk. Click the Role dropdown list for the disk, and then select Installation disk. The role of the previous installation disk changes to None.
  3. All bootable disks are marked for reformatting during the installation by default, except for read-only disks such as CD-ROMs. Deselect the Format checkbox to prevent a disk from being reformatted. The installation disk must be reformatted. Back up any sensitive data before proceeding.

Once all disk drives appear with a status of Ready, proceed to the next step.

Additional resources

3.8. Configuring networking

Before installing OpenShift Container Platform, you must configure the cluster network.

Procedure

  1. In the Networking page, select one of the following if it is not already selected for you:

    • Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.

      Note
      • Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13.
      • Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
    • User-Managed Networking: Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments.
  2. For cluster-managed networking, configure the following settings:

    1. Define the Machine network. You can use the default network or select a subnet.
    2. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    3. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
  3. For user-managed networking, configure the following settings:

    1. Select your Networking stack type:

      • IPv4: Select this type when your hosts are only using IPv4.
      • Dual-stack: You can select dual-stack when your hosts are using IPv4 together with IPv6.
    2. Define the Machine network. You can use the default network or select a subnet.
    3. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    4. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
    5. Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server.
  4. Optional: Select Use advanced networking to configure the following advanced networking properties:

    • Cluster network CIDR: Define an IP address block from which Pod IP addresses are allocated.
    • Cluster network host prefix: Define a subnet prefix length to assign to each node.
    • Service network CIDR: Define an IP address block from which service IP addresses are allocated.
    • Network type: Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI).
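
For reference, a typical installation uses the OpenShift Container Platform default values shown below. With a host prefix of /23, each node receives a /23 subnet of 512 pod IP addresses (510 usable), and a /14 cluster network provides 2^9 = 512 such per-node subnets:

    Cluster network CIDR: 10.128.0.0/14
    Cluster network host prefix: 23
    Service network CIDR: 172.30.0.0/16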

Additional resources

3.9. Pre-installation validation

The Assisted Installer ensures that the cluster meets the prerequisites before installation, which eliminates complex post-installation troubleshooting and saves significant time and effort. Before installing the cluster, ensure that the cluster and each host pass the pre-installation validation.

Additional resources

3.10. Installing the cluster

After you have completed the configuration and all the nodes are Ready, you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation.

Procedure

  1. Click Begin installation.
  2. Click on the link in the Status column of the Host Inventory list to see the installation status of a particular host.

3.11. Completing the installation

After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.

Prerequisites

  • You have installed the oc CLI tool.

Procedure

  1. Make a copy of the kubeadmin username and password.
  2. Download the kubeconfig file and copy it to the auth directory under your working directory:

    $ mkdir -p <working_directory>/auth
    $ cp kubeconfig <working_directory>/auth
    Note

    The kubeconfig file is available for download for 24 hours after completing the installation.

  3. Add the kubeconfig file to your environment:

    $ export KUBECONFIG=<working_directory>/auth/kubeconfig
  4. Log in with the oc CLI tool:

    $ oc login -u kubeadmin -p <password>

    Replace <password> with the password of the kubeadmin user.

  5. Click on the web console URL or click Launch OpenShift Console to open the console.
  6. Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers.
  7. Add a bookmark for the OpenShift Container Platform console.
  8. Complete any post-installation platform integration steps.
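
Verification

As an optional check, you can confirm from the command line that the cluster is healthy. All nodes should report a STATUS of Ready, and all cluster Operators should eventually report AVAILABLE as True:

    $ oc get nodes
    $ oc get clusteroperators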