Chapter 11. Installing on-premise with Assisted Installer


11.1. Installing an on-premise cluster using the Assisted Installer

You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing with the Assisted Installer supports both x86_64 and AArch64 CPU architectures.

11.1.1. Using the Assisted Installer

The OpenShift Container Platform Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports various deployment platforms, with a focus on bare metal and vSphere infrastructures.

The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages:

  • Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually.
  • No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster.
  • Hosting: The Assisted Installer hosts:

    • Ignition files
    • The installation configuration
    • A discovery ISO
    • The installer
  • Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which:

    • Eliminates the need to install and run the OpenShift Container Platform installer locally.
    • Keeps the installer current, up to the latest tested z-stream releases. Older versions remain available, if needed.
    • Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally.
  • Advanced networking: The Assisted Installer supports IPv4/IPv6 dual stack networking, NMState-based static IP addressing, and an HTTP/S proxy.
  • Pre-installation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. Validation includes:

    • Ensuring network connectivity
    • Ensuring sufficient network bandwidth
    • Ensuring connectivity to the registry
    • Ensuring time synchronization between cluster nodes
    • Verifying that the cluster nodes meet the minimum hardware requirements
    • Validating the installation configuration parameters
  • REST API: The Assisted Installer has a REST API, enabling automation.

The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following:

  • Highly available OpenShift Container Platform or single-node OpenShift (SNO)
  • OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration
  • Optionally, OpenShift Virtualization and OpenShift Data Foundation (formerly OpenShift Container Storage)

The user interface provides an intuitive interactive workflow for cases where automation does not exist or is not required. You can also automate installations by using the REST API.
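
For example, the following is a minimal sketch that lists your clusters through the REST API. It assumes you have downloaded an API offline token from the Red Hat Hybrid Cloud Console and that the curl and jq tools are installed; the token exchange endpoint and API URL shown here are the public Red Hat endpoints:

    $ OFFLINE_TOKEN=<offline_token>
    $ # Exchange the offline token for a short-lived access token
    $ ACCESS_TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
        -d grant_type=refresh_token \
        -d client_id=cloud-services \
        -d refresh_token="$OFFLINE_TOKEN" | jq -r .access_token)
    $ # List the names of your Assisted Installer clusters
    $ curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
        https://api.openshift.com/api/assisted-install/v2/clusters | jq '.[].name'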

To create an OpenShift Container Platform cluster by using the Assisted Installer, see Install OpenShift with the Assisted Installer.

11.1.2. API support for the Assisted Installer

Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation.

11.2. Preparing to install with the Assisted Installer

Before installing a cluster, you must ensure the cluster nodes and network meet the requirements.

11.2.2. Assisted Installer prerequisites

The Assisted Installer validates the following prerequisites to ensure successful installation.

11.2.2.1. Hardware

Control plane nodes, or the single-node OpenShift node, must have at least the following resources:

  • 8 CPU cores
  • 16.00 GiB RAM
  • 100 GB storage
  • 10 ms write speed or less for etcd wal_fsync_duration_seconds (see the disk check sketch after these lists)

For worker nodes, each node must have at least the following resources:

  • 4 CPU cores
  • 16.00 GiB RAM
  • 100 GB storage
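
You can check whether a disk is likely to meet the etcd fsync requirement before installation. The following is a minimal sketch using fio, assuming the fio package is installed; the directory name is arbitrary and must reside on the disk under test. The 99th percentile of the fdatasync durations reported in the output should be 10 ms or less:

    $ mkdir -p /var/lib/etcd-disk-test
    $ fio --rw=write --ioengine=sync --fdatasync=1 \
        --directory=/var/lib/etcd-disk-test --size=22m --bs=2300 \
        --name=etcd-disk-check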

11.2.2.2. Networking

The network must meet the following requirements:

  • A DHCP server unless using static IP addressing.
  • A base domain name. You must ensure that the following requirements are met:

    • There is no wildcard, such as *.<cluster_name>.<base_domain>, or the installation will not proceed.
    • A DNS A/AAAA record for api.<cluster_name>.<base_domain>.
    • A DNS A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain>.
  • Port 6443 is open for the API URL if you intend to allow users outside the firewall to access the cluster via the oc CLI tool.
  • Port 443 is open for the console if you intend to allow users outside the firewall to access the console.
Important

DNS A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure the A/AAAA record DNS settings are working before installation to prevent installation delays.
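
You can verify the records before installation, for example with the dig utility. A minimal sketch, substituting your own cluster name and base domain; test and other are arbitrary labels used only for the check:

    $ dig +short api.<cluster_name>.<base_domain>            # must resolve to an IP address
    $ dig +short test.apps.<cluster_name>.<base_domain>      # any name under *.apps must resolve
    $ dig +short other.<cluster_name>.<base_domain>          # must return nothing (no base wildcard)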

The OpenShift Container Platform cluster’s network must also meet the following requirements:

  • Connectivity between all cluster nodes
  • Connectivity for each node to the internet
  • Access to an NTP server for time synchronization between the cluster nodes (a check sketch follows this list)
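
On a booted host, you can confirm that time synchronization is working, for example with chrony, the NTP client used by Red Hat Enterprise Linux CoreOS (RHCOS). A minimal sketch:

    $ chronyc tracking       # shows the current reference source and clock offset
    $ chronyc sources -v     # lists configured NTP sources and their reachability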

11.2.2.3. Preflight validations

The Assisted Installer ensures the cluster meets the prerequisites before installation. This eliminates complex post-installation troubleshooting, thereby saving significant amounts of time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:

  • Ensures network connectivity
  • Ensures sufficient network bandwidth
  • Ensures connectivity to the registry
  • Ensures time synchronization between cluster nodes
  • Verifies that the cluster nodes meet the minimum hardware requirements
  • Validates the installation configuration parameters

If the Assisted Installer does not successfully validate these requirements, installation will not proceed.

11.3. Installing with the Assisted Installer

After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.

11.3.1. Pre-installation considerations

Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:

  • Which base domain to use
  • Which OpenShift Container Platform product version to install
  • Whether to install a full cluster or single-node OpenShift
  • Whether to use a DHCP server or a static network configuration
  • Whether to use IPv4 or dual-stack networking
  • Whether to install OpenShift Virtualization
  • Whether to install Red Hat OpenShift Data Foundation
  • Whether to integrate with vSphere when installing on vSphere

11.3.2. Setting the cluster details

To create a cluster with the Assisted Installer web user interface, use the following procedure.

Procedure

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. In the menu, click OpenShift.
  3. Click Create cluster.
  4. Click the Datacenter tab.
  5. Under the Assisted Installer section, select Create cluster.
  6. Enter a name for the cluster in the Cluster name field.
  7. Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.

    Note

    The base domain must be a valid DNS name. You must not have a wildcard domain set up for the base domain.

  8. Select the version of OpenShift Container Platform to install.
  9. Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node.
  10. Optional: The Assisted Installer already has the pull secret associated with your account. If you want to use a different pull secret, select Edit pull secret.
  11. Optional: The Assisted Installer defaults to using the x86_64 CPU architecture. If you are installing OpenShift Container Platform on 64-bit ARM CPUs, select Use arm64 CPU architecture. Keep in mind that some features are not available with the AArch64 CPU architecture.
  12. Optional: If you are using a static IP configuration for the cluster nodes instead of DHCP reservations, select Static network configuration.
  13. Optional: If you want to enable encryption of the installation disks, select Enable encryption of installation disks. For multi-node clusters, you can choose to encrypt the control plane and worker node installation disks separately.
Important

You cannot change the base domain, the SNO checkbox, the CPU architecture, the host network configuration, or the disk encryption selection after the installation begins.

11.3.3. Optional: Configuring host network interfaces

The Assisted Installer supports IPv4 networking and dual-stack networking. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. If you choose to configure host network interfaces, you must set network-wide configurations. Then, you must create a host-specific configuration for each host and generate the discovery ISO with the host-specific settings.

Procedure

  1. Select the internet protocol version. Valid options are IPv4 and Dual stack.
  2. If the cluster hosts are on a shared VLAN, enter the VLAN ID.
  3. Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.

    1. Enter the cluster network’s IP address range in CIDR notation.
    2. Enter the default gateway IP address.
    3. Enter the DNS server IP address.
  4. Enter the host-specific configuration.

    1. If you are setting only a static IP address on a single network interface, use the form view to enter the IP address and the MAC address for the host.
    2. If you are using multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for the host by using NMState syntax (a minimal sketch follows this procedure).
    3. Add the MAC address and interface name for each interface used in your network configuration.
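
For reference, the following is a minimal, hypothetical NMState configuration for a single static IPv4 address on one interface, written to a file with a shell heredoc. The interface name, addresses, and gateway are placeholder values for illustration only; adapt them to your network:

    $ cat > host1.yaml <<'EOF'
    interfaces:
      - name: eth0              # placeholder interface name
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.0.2.10    # placeholder host address
              prefix-length: 24
    dns-resolver:
      config:
        server:
          - 192.0.2.1           # placeholder DNS server
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.0.2.1
          next-hop-interface: eth0
    EOF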

11.3.4. Adding hosts to the cluster

You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent. Perform the following procedure for each host on the cluster.

Procedure

  1. Click the Add hosts button and select the installation media.

    1. Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method.
    2. Select Full image file: Provision with physical media to download the larger full image.
  2. Add an SSH public key so that you can connect to the cluster nodes as the core user. Being able to log in to the cluster nodes can help you debug during the installation.
  3. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server.
  4. Click Generate Discovery ISO.
  5. Download the discovery ISO.

11.3.5. Creating an ISO image on a USB drive

You can install software using a USB drive that contains an ISO image. Starting the server with the USB drive prepares the server for the software installation.

Procedure

  1. On the administration host, insert a USB drive into a USB port.
  2. Create an ISO image on the USB drive, for example:

    # dd if=<path_to_iso> of=<path_to_usb> status=progress

    where:

    <path_to_iso>
        is the relative path to the downloaded ISO file, for example, rhcos-live.iso.
    <path_to_usb>
        is the location of the connected USB drive, for example, /dev/sdb.

    After the ISO is copied to the USB drive, you can use the USB drive to install software on the server.
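
Writing to the wrong device destroys its contents, so you can confirm the USB device name before running the dd command above. A minimal sketch; device names vary by system, and USB drives typically report usb in the TRAN column:

    $ lsblk -o NAME,SIZE,TYPE,TRAN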

11.3.6. Booting with a USB drive

To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure.

Procedure

  1. Attach the RHCOS discovery ISO to the target host.
  2. Configure the boot drive order in the server BIOS settings to boot from the attached discovery ISO, and then reboot the server.
  3. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts.

11.3.7. Booting from an HTTP-hosted ISO image using the Redfish API

You can provision hosts in your network by using HTTP-hosted ISO images that you attach through the Redfish Baseboard Management Controller (BMC) API.

Prerequisites

  • You have downloaded the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO.

Procedure

  1. Copy the ISO file to an HTTP server accessible in your network.
  2. Boot the host from the hosted ISO file, for example:

    1. Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia

      where:

      <bmc_username>:<bmc_password>
          is the username and password for the target host BMC.
      <hosted_iso_file>
          is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso. The ISO must be accessible from the target host machine.
      <host_bmc_address>
          is the BMC IP address of the target host machine.

      Note that the resource paths in these examples, such as Managers/iDRAC.Embedded.1 and Systems/System.Embedded.1, are specific to Dell iDRAC BMCs; adjust them to match your hardware's Redfish resource tree.
    2. Set the host to boot from the VirtualMedia device by running the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1
    3. Reboot the host:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
    4. Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
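
To confirm the result, you can query the system resource and inspect its power state. A minimal sketch, assuming jq is available and the same Dell iDRAC resource path as in the preceding commands:

    $ curl -sk -u <bmc_username>:<bmc_password> <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 | jq '.PowerState'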

11.3.8. Configuring hosts

After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can configure the hostname, role, and installation disk for each host.

Procedure

  1. Select a host.
  2. From the Actions list, select Change hostname. You must ensure each host has a valid and unique hostname. If necessary, enter a new name for the host and click Change.
  3. For multi-host clusters, in the Role column next to the host name, you can click the menu to change the role of the host.

    If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.

  4. To the left of the checkbox next to a host name, click to expand the host details. If you have multiple disk drives, you can select a different disk drive to act as the installation disk.
  5. Repeat this procedure for each host.

Once all cluster hosts appear with a status of Ready, proceed to the next step.

11.3.9. Configuring networking

Before installing OpenShift Container Platform, you must configure the cluster network.

Procedure

  1. In the Networking page, select one of the following if it is not already selected for you:

    • Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.
    • User-Managed Networking: Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments.
  2. For cluster-managed networking, configure the following settings:

    1. Define the Machine network. You can use the default network or select a subnet.
    2. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    3. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
  3. For user-managed networking, configure the following settings:

    1. Select your Networking stack type:

      • IPv4: Select this type when your hosts are only using IPv4.
      • Dual-stack: You can select dual-stack when your hosts are using IPv4 together with IPv6.
    2. Define the Machine network. You can use the default network or select a subnet.
    3. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    4. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
    5. Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server.
  4. Optional: Select Use advanced networking to configure the following advanced networking properties:

    • Cluster network CIDR: Define an IP address block from which Pod IP addresses are allocated.
    • Cluster network host prefix: Define a subnet prefix length to assign to each node. For example, a host prefix of 23 allocates a /23 subnet to each node, which provides 510 usable Pod IP addresses.
    • Service network CIDR: Define an IP address block to use for service IP addresses.
    • Network type: Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for telco features.

11.3.10. Installing the cluster

After you have completed the configuration and all the nodes are Ready, you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console, or programmatically through the REST API, as shown after the following procedure. Nodes will reboot during the installation, and they will initialize after installation.

Procedure

  • Click Begin installation.

    1. Click on the link in the Status column of the Host Inventory list to see the installation status of a particular host.
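
The following is a minimal sketch for checking the overall installation status through the REST API. It assumes an access token obtained as in the REST API example earlier in this chapter; <cluster_id> is a placeholder for your cluster's ID:

    $ curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
        https://api.openshift.com/api/assisted-install/v2/clusters/<cluster_id> | jq '.status, .status_info'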

11.3.11. Completing the installation

After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.

Prerequisites

  • You have installed the oc CLI tool.

Procedure

  1. Make a copy of the kubeadmin username and password.
  2. Download the kubeconfig file and copy it to the auth directory under your working directory:

    $ mkdir -p <working_directory>/auth
    $ cp kubeconfig <working_directory>/auth
    Note

    The kubeconfig file is available for download for 24 hours after completing the installation.

  3. Add the kubeconfig file to your environment:

    $ export KUBECONFIG=<working_directory>/auth/kubeconfig
  4. Log in with the oc CLI tool:

    $ oc login -u kubeadmin -p <password>

    Replace <password> with the password of the kubeadmin user.

  5. Click on the web console URL or click Launch OpenShift Console to open the console.
  6. Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers.
  7. Bookmark the OpenShift Container Platform console.
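
As a final check, you can verify from the command line that all nodes are ready and that the cluster Operators have finished rolling out. A minimal sketch using the oc CLI:

    $ oc get nodes               # all nodes should report STATUS Ready
    $ oc get clusteroperators    # all Operators should report AVAILABLE True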
