
Chapter 3. Installing with the Assisted Installer web console


After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.

3.1. Preinstallation considerations

Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:

  • Which base domain to use
  • Which OpenShift Container Platform product version to install
  • Whether to install a full cluster or single-node OpenShift
  • Whether to use a DHCP server or a static network configuration
  • Whether to use IPv4 or dual-stack networking
  • Whether to install OpenShift Virtualization
  • Whether to install Red Hat OpenShift Data Foundation
  • Whether to install multicluster engine for Kubernetes
  • Whether to integrate with the platform when installing on vSphere or Nutanix
  • Whether to install a mixed-architecture cluster

3.2. Setting the cluster details

To create a cluster with the Assisted Installer web user interface, use the following procedure.

Procedure

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. In the Red Hat OpenShift tile, click Scale your applications.
  3. In the menu, click Clusters.
  4. Click Create cluster.
  5. Click the Datacenter tab.
  6. Under Assisted Installer, click Create cluster.
  7. Enter a name for the cluster in the Cluster name field.
  8. Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain. For example, if the base domain is example.com and the cluster name is mycluster, the cluster API is reachable at api.mycluster.example.com.

    Note

    The base domain must be a valid DNS name. You must not have a wildcard domain set up for the base domain.

  9. Select the version of OpenShift Container Platform to install.

    Important
    • For IBM Power and IBM zSystems platforms, only OpenShift Container Platform 4.13 and later versions are supported.
    • For a mixed-architecture cluster installation, select OpenShift Container Platform 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources.
  10. Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node.

    Note

    Currently, SNO is not supported on IBM zSystems and IBM Power platforms.

  11. Optional: The Assisted Installer already has the pull secret associated with your account. If you want to use a different pull secret, select Edit pull secret.
  12. Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix, vSphere, or Oracle Cloud Infrastructure. The Assisted Installer defaults to no platform integration.

    Note

    For details on each of the external partner integrations, see Additional Resources.

    Important

    Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support.

  13. Optional: The Assisted Installer defaults to the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select the architecture to use. Valid values are arm64, ppc64le, and s390x. Note that some features are not available with the arm64, ppc64le, and s390x CPU architectures.

    Important

    For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources.

  14. Optional: Select Include custom manifests if you have at least one custom manifest to include in the installation. A custom manifest contains additional configurations not currently supported in the Assisted Installer. Selecting the checkbox adds the Custom manifests page to the wizard, where you upload the manifests.

    Important
    • If you are installing OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) third-party platform, it is mandatory to add the custom manifests provided by Oracle.
    • If you have already added custom manifests, clearing the Include custom manifests box automatically deletes them all. You are asked to confirm the deletion.
  15. Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges, or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds.

    Note

    A static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure.

  16. Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks, select Control plane node, worker for single-node OpenShift. For multi-node clusters, select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt the worker node installation disks.

    Important

    You cannot change the base domain, the SNO setting, the CPU architecture, the host network configuration, or the disk encryption after installation begins.

3.3. Optional: Configuring static networks

The Assisted Installer supports IPv4 networking with SDN (up to OpenShift Container Platform 4.14) and OVN, and supports IPv6 and dual-stack networking with OVN only. The Assisted Installer supports configuring the network with static network interfaces using IP address to MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. First, you must set the network-wide configurations. Then, you must create a host-specific configuration for each host.

Note

For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned, because pool MAC addresses can cause issues with NMState.

Procedure

  1. Select the internet protocol version. Valid options are IPv4 and Dual stack.
  2. If the cluster hosts are on a shared VLAN, enter the VLAN ID.
  3. Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.

    1. Enter the cluster network’s IP address range in CIDR notation.
    2. Enter the default gateway IP address.
    3. Enter the DNS server IP address.
  4. Enter the host-specific configuration.

    1. If you are setting only a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host.
    2. If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host by using NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration. A minimal sketch follows this list.
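
      The following is a minimal NMState sketch for a host with one statically addressed interface. The interface name, IP addresses, gateway, and DNS server are illustrative values only; replace them with values from your environment:

      interfaces:
        - name: eth0
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
              - ip: 192.0.2.10
                prefix-length: 24
      dns-resolver:
        config:
          server:
            - 192.0.2.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.0.2.1
            next-hop-interface: eth0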


3.4. Optional: Installing Operators

This step is optional.

See the product documentation for prerequisites and configuration options.

If you require advanced options, install the Operators after you have installed the cluster.

Procedure

  1. Select one or more from the following options:

    • Install OpenShift Virtualization
    • Install multicluster engine

      You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters.

      Important

      Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:

      • Multi-node cluster: No storage is configured. You must configure storage after the installation.
      • Single-node OpenShift: LVM Storage is installed.
    • Install Logical Volume Manager Storage
    • Install OpenShift Data Foundation
  2. Click Next.

3.5. Adding hosts to the cluster

You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent.

Perform the following procedure for each host on the cluster.

Procedure

  1. Click the Add hosts button and select the provisioning type.

    1. Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures.
    2. Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM.
    3. Select iPXE: Provision from your network server to boot the hosts using iPXE. This is the recommended method for IBM Z with z/VM nodes. ISO boot is the recommended method for RHEL KVM installations. A sketch of the general iPXE script shape follows the note below.

      Note
      • If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually.
      • If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only.
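
      For orientation only, a generated iPXE script typically has the following shape. The URLs here are placeholders, not real endpoints; always boot with the exact script that the Assisted Installer generates:

      #!ipxe
      initrd --name initrd http://<assisted_service_url>/images/<infra_env_id>/pxe-initrd
      kernel http://<assisted_service_url>/boot-artifacts/kernel initrd=initrd coreos.live.rootfs_url=http://<assisted_service_url>/boot-artifacts/rootfs
      boot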
  2. Optional: Activate the Run workloads on control plane nodes switch to schedule workloads to run on control plane nodes, in addition to the default worker nodes.

    Note

    This option is available for clusters of five or more nodes. For clusters with fewer than five nodes, the system runs workloads on the control plane nodes only, by default. For more details, see Configuring schedulable control plane nodes in Additional Resources.

  3. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, IP address, and port for the HTTP and HTTPS URLs of the proxy server, for example, http://<username>:<password>@192.0.2.5:3128.
  4. Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.

    Important

    Do not skip this procedure in production environments, where disaster recovery and debugging are required.

    1. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access. A minimal command sketch follows this list.
    2. In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
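
      If you need to create a key pair first, the following is a minimal sketch. The key type and file path are illustrative; any key type that OpenShift Container Platform supports works:

      $ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa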
  5. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates. Add additional certificates in X.509 format.
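
    Certificates in X.509 format are supplied as PEM-encoded text. As a shape reference only, with the base64 payload elided:

    -----BEGIN CERTIFICATE-----
    <base64-encoded certificate data>
    -----END CERTIFICATE-----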
  6. Configure the discovery image if needed.
  7. Optional: If you are installing on a virtualization platform and want to integrate with it, select Integrate with your virtualization platform. You must boot all hosts and ensure that they appear in the host inventory. All of the hosts must be on the same platform.
  8. Click Generate Discovery ISO or Generate Script File.
  9. Download the discovery ISO or iPXE script.
  10. Boot the hosts with the discovery image or iPXE script.
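
    For bare-metal hosts booting from USB, one common way to write the discovery ISO to a drive is with dd. This is a sketch; the ISO filename and device path are examples, and dd overwrites the target device, so verify the path before running the command:

    $ sudo dd if=discovery_image.iso of=/dev/sdX bs=4M status=progress oflag=sync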

3.6. Configuring hosts

After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.

Procedure

  1. From the Options (⋮) menu for a host, select Change hostname. If necessary, enter a new name for the host and click Change. You must ensure that each host has a valid and unique hostname.

    Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change.

    Note

    You can see the new names appearing in the Preview pane as you type. The name is identical for all selected hosts, except for a single-digit increment per host. For example, typing host-{{n}} produces names such as host-1 and host-2.

  2. From the Options (⋮) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion.

    Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts.

    Note

    In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a control plane host, or if you are left with only two hosts, you will get a message saying that the system is not ready. To restore a host, reboot it from the discovery ISO.

  3. From the Options (⋮) menu for the host, optionally select View host events. The events in the list are presented chronologically.
  4. For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host.

    If you do not select a role, the Assisted Installer assigns the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.

  5. Click the Status link to view hardware, network and operator validations for the host.
  6. Click the arrow to the left of a host name to expand the host details.

Once all cluster hosts appear with a status of Ready, proceed to the next step.

3.7. Configuring storage disks

Each of the hosts retrieved during host discovery can have multiple storage disks. The storage disks are listed for the host on the Storage page of the Assisted Installer wizard.

You can optionally modify the default configurations for each disk.

Changing the installation disk

The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.

Procedure

  1. Navigate to the Storage page of the wizard.
  2. Expand a host to display the associated storage disks.
  3. Select Installation disk from the Role list.
  4. When all storage disks return to Ready status, proceed to the next step.

Disabling disk formatting

The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.

You can choose to disable the formatting of a specific disk. Exercise caution when doing so, because bootable disks can interfere with the installation process, mainly in terms of boot order.

You cannot disable formatting for the installation disk.

Procedure

  1. Navigate to the Storage page of the wizard.
  2. Expand a host to display the associated storage disks.
  3. Clear Format for a disk.
  4. When all storage disks return to Ready status, proceed to the next step.


3.8. Configuring networking

Before installing OpenShift Container Platform, you must configure the cluster network.

Procedure

  1. In the Networking page, select one of the following if it is not already selected for you:

    • Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer configures a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.

      Note
      • Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13.
      • Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
    • User-Managed Networking: Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology, for example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments.
  2. For cluster-managed networking, configure the following settings:

    1. Define the Machine network. You can use the default network or select a subnet.
    2. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    3. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster. For example, with the cluster name mycluster and the base domain example.com, the API virtual IP serves api.mycluster.example.com and the Ingress virtual IP serves *.apps.mycluster.example.com.
  3. For user-managed networking, configure the following settings:

    1. Select your Networking stack type:

      • IPv4: Select this type when your hosts are only using IPv4.
      • Dual-stack: You can select dual-stack when your hosts are using IPv4 together with IPv6.
    2. Define the Machine network. You can use the default network or select a subnet.
    3. Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
    4. Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
    5. Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server.
  4. Optional: Select Use advanced networking to configure the following advanced networking properties:

    • Cluster network CIDR: Define an IP address block from which Pod IP addresses are allocated.
    • Cluster network host prefix: Define a subnet prefix length to assign to each node.
    • Service network CIDR: Define an IP address block to use for service IP addresses.
    • Network type: Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI). In OpenShift Container Platform 4.15 and later releases, Software-Defined Networking (SDN) is not supported.
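
    For orientation, these advanced values correspond to the following standard defaults, shown here in install-config.yaml notation as an illustrative sketch. The machine network CIDR is an example value; the cluster network, host prefix, and service network shown are the documented defaults:

      networking:
        networkType: OVNKubernetes
        clusterNetwork:
          - cidr: 10.128.0.0/14
            hostPrefix: 23
        machineNetwork:
          - cidr: 192.0.2.0/24
        serviceNetwork:
          - 172.30.0.0/16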


3.9. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party.

You can upload a custom manifest from your file system to either the openshift folder or the manifests folder. There is no limit to the number of custom manifest files permitted.

Only one file can be uploaded at a time. However, each uploaded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

For a file containing a single custom manifest, accepted file extensions include .yaml, .yml, or .json.

Single custom manifest example

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
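
Because the .json extension is also accepted for a single manifest, the same configuration can be written in JSON. The following is an equivalent rendering of the YAML example above:

{
  "apiVersion": "machineconfiguration.openshift.io/v1",
  "kind": "MachineConfig",
  "metadata": {
    "labels": {
      "machineconfiguration.openshift.io/role": "master"
    },
    "name": "99-openshift-machineconfig-master-kargs"
  },
  "spec": {
    "kernelArguments": [
      "loglevel=7"
    ]
  }
}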

For a file containing multiple custom manifests, accepted file types include .yaml or .yml.

Multiple custom manifest example

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-openshift-machineconfig-worker-kargs
spec:
  kernelArguments:
    - loglevel=5

Note
  • When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
  • For more information about custom manifests, see Additional Resources.

Uploading a custom manifest in the Assisted Installer user interface

When uploading a custom manifest, enter the manifest filename and select a destination folder.

Prerequisites

  • You have at least one custom manifest file saved in your file system.

Procedure

  1. On the Cluster details page of the wizard, select the Include custom manifests checkbox.
  2. On the Custom manifest page, in the folder field, select the Assisted Installer folder where you want to save the custom manifest file. Options include openshift or manifests.
  3. In the Filename field, enter a name for the manifest file, including the extension. For example, manifest1.json or multiple1.yaml.
  4. Under Content, click the Upload icon or Browse button to upload a file. Alternatively, drag the file into the Content field from your file system.
  5. To upload another manifest, click Add another manifest and repeat the process. This saves the previously uploaded manifest.
  6. Click Next to save all manifests and proceed to the Review and create page. The uploaded custom manifests are listed under Custom manifests.

Modifying a custom manifest in the Assisted Installer user interface

You can change the folder and file name of an uploaded custom manifest. You can also copy the content of an existing manifest, or download it to the folder defined in the Chrome download settings.

It is not possible to modify the content of an uploaded manifest. However, you can overwrite the file.

Prerequisites

  • You have uploaded at least one custom manifest file.

Procedure

  1. To change the folder, select a different folder for the manifest from the Folder list.
  2. To modify the file name, type the new name for the manifest in the File name field.
  3. To overwrite a manifest, save the new manifest in the same folder with the same file name.
  4. To save a manifest as a file in your file system, click the Download icon.
  5. To copy the manifest, click the Copy to clipboard icon.
  6. To apply the changes, click either Add another manifest or Next.

Removing custom manifests in the Assisted Installer user interface

You can remove uploaded custom manifests before installation in one of two ways:

  • Removing one or more manifests individually.
  • Removing all manifests at once.

After you remove a manifest, you cannot undo the action. The workaround is to upload the manifest again.

Removing a single manifest

You can delete one manifest at a time. This option does not allow you to delete the last remaining manifest.

Prerequisites

  • You have uploaded at least two custom manifest files.

Procedure

  1. Navigate to the Custom manifests page.
  2. Hover over the manifest name to display the Delete (minus) icon.
  3. Click the icon and then click Delete in the dialog box.

Removing all manifests

You can remove all custom manifests at once. This also hides the Custom manifest page.

Prerequisites

  • You have uploaded at least one custom manifest file.

Procedure

  1. Navigate to the Cluster details page of the wizard.
  2. Clear the Include custom manifests checkbox.
  3. In the Remove custom manifests dialog box, click Remove.

3.10. Preinstallation validations

The Assisted Installer verifies that the cluster meets the prerequisites before installation, because this eliminates complex postinstallation troubleshooting, saving significant amounts of time and effort. Before installing the cluster, ensure that the cluster and each host pass preinstallation validation.


3.11. Installing the cluster

After you have completed the configuration and all the nodes are Ready, you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes reboot during the installation, and they initialize after the installation completes.

Procedure

  1. Click Begin installation.
  2. Click the link in the Status column of the Host Inventory list to see the installation status of a particular host.

3.12. Completing the installation

After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.

Prerequisites

  • You have installed the oc CLI tool.

Procedure

  1. Make a copy of the kubeadmin username and password.
  2. Download the kubeconfig file and copy it to the auth directory under your working directory:

    $ mkdir -p <working_directory>/auth
    $ cp kubeconfig <working_directory>/auth
    Note

    The kubeconfig file is available for download for 24 hours after completing the installation.

  3. Add the kubeconfig file to your environment:

    $ export KUBECONFIG=<working_directory>/auth/kubeconfig
  4. Log in with the oc CLI tool:

    $ oc login -u kubeadmin -p <password>

    Replace <password> with the password of the kubeadmin user.
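
    Optionally, verify the login. The oc whoami command prints the current user, which should be kubeadmin:

    $ oc whoami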

  5. Click the web console URL or click Launch OpenShift Console to open the console.
  6. Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers.
  7. Add a bookmark of the OpenShift Container Platform console.
  8. Complete any postinstallation platform integration steps.