Chapter 4. Installing with the Assisted Installer web console
After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.
4.1. Preinstallation considerations
Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:
- Which base domain to use
- Which OpenShift Container Platform product version to install
- Whether to install a full cluster or single-node OpenShift
- Whether to use a DHCP server or a static network configuration
- Whether to use IPv4 or dual-stack networking
- Whether to install OpenShift Virtualization
- Whether to install Red Hat OpenShift Data Foundation
- Whether to install multicluster engine for Kubernetes
- Whether to integrate with the platform when installing on vSphere or Nutanix
- Whether to install a multi-architecture compute cluster
4.2. Setting the cluster details
To create a cluster with the Assisted Installer web user interface, use the following procedure.
Procedure
- Log in to the Red Hat Hybrid Cloud Console.
- On the Red Hat OpenShift tile, click OpenShift.
- On the Red Hat OpenShift Container Platform tile, click Create cluster.
- Click the Datacenter tab.
- Under Assisted Installer, click Create cluster.
- Enter a name for the cluster in the Cluster name field.
Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.
Note: The base domain must be a valid DNS name. You must not have a wildcard domain set up for the base domain. For example, with a cluster name of test and a base domain of example.com, the cluster API is exposed at api.test.example.com.
From the OpenShift version dropdown list, select the version that you want to install and click Select. By default, the dropdown lists the latest OpenShift version. If you need an older version that is not displayed, click Show all available versions at the bottom of the list, and use the search box to find it.
Important:
- For a multi-architecture compute cluster installation, select OpenShift Container Platform 4.12 or later, and use the -multi option. For instructions on installing a multi-architecture compute cluster, see Installing multi-architecture compute clusters.
- For IBM Power® and IBM Z® platforms, only OpenShift Container Platform 4.13 and later is supported.
- If you are booting from an iSCSI drive, select OpenShift Container Platform version 4.15 or later.
Optional: The Assisted Installer defaults to using the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select the architecture to use. Valid values are arm64, ppc64le, and s390x. Remember that some features are not available with the arm64, ppc64le, and s390x CPU architectures.
Important: For a multi-architecture compute cluster installation, you can use the x86_64 or 64-bit ARM CPU architecture for the control plane nodes. Automatic conversion from x86_64 to 64-bit ARM is only supported on Amazon Web Services (AWS). For instructions on installing a multi-architecture compute cluster, see Installing multi-architecture compute clusters.
- Optional: The Assisted Installer already has the pull secret associated with your account. If you want to use a different pull secret, select Edit pull secret.
Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix, vSphere, or Oracle Cloud Infrastructure. The Assisted Installer defaults to having no platform integration.
Note:
- The Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later.
- For details on each of the external partner integrations, see Additional Resources.
From the Number of control plane nodes field, optionally change the default value of three control plane nodes for your installation. The possible options are 1 (single-node OpenShift), 2, 3, 4, or 5.
Important:
- Currently, single-node OpenShift is not supported on IBM Z® and IBM Power® platforms.
- The Assisted Installer supports 4 or 5 control plane nodes from OpenShift Container Platform 4.18 and later, on a bare metal or user-managed networking platform with an x86_64 CPU architecture. For details, see About specifying the number of control plane nodes.
- The Assisted Installer supports 2 control plane nodes from OpenShift Container Platform 4.19 and later, for a Two-Node OpenShift with Arbiter cluster topology. If the number of control plane nodes for a cluster is 2, then it must have at least one additional arbiter host. For details, see About specifying the number of control plane nodes.
Optional: Select Include custom manifests if you have at least one custom manifest to include in the installation. A custom manifest has additional configurations not currently supported in the Assisted Installer. Selecting the checkbox adds the Custom manifests step to the wizard, where you upload the manifests.
Important:
- If you are installing OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) third-party platform, you must add the custom manifests provided by Oracle.
- If you have already added custom manifests, clearing the Include custom manifests checkbox automatically deletes them all. You must confirm the deletion.
Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges, or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds. Selecting this option adds the Static network configurations step to the wizard. For details, see Configuring static networks.
Important: A static IP configuration is not supported in the following scenarios:
- OpenShift Container Platform installations on Oracle Cloud Infrastructure.
- OpenShift Container Platform installations on iSCSI boot volumes.
Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks you can select one of the following:
- For single-node OpenShift, select Control plane node, worker.
For multi-node clusters, select Control plane nodes to encrypt the control plane node installation disks. Select Workers to encrypt worker node installation disks. Select Arbiter to encrypt the arbiter node installation disks.
Important: You cannot change the base domain, the single-node OpenShift checkbox, the CPU architecture, the host's network configuration, or the disk encryption settings after installation begins.
4.3. Configuring static networks
The Assisted Installer supports the following network configurations:
- IPv4 networking with SDN, supported up to OpenShift Container Platform 4.14.
- IPv4 and dual-stack networking with IPv4 as primary (OVN only), supported from OpenShift Container Platform 4.15 and later.
- Static network configuration using static interfaces with IP address and MAC address mapping.
- Host network interface configuration using the NMState library, a declarative network manager API for hosts. You can also use NMState to deploy hosts with bonds, VLANs, and other advanced networking features.
For installations on IBM Z® with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned, because pool MAC addresses might cause issues with NMState. For more information about NMState, see NMState Declarative Network API.
4.3.1. Configuring static networks using form view
You can configure networks by using the form view or the YAML view. Select Form view for basic configurations.
To add new hosts that use the new or edited configurations, you must regenerate the discovery ISO in the Host discovery step and boot your new hosts from it.
Prerequisites
- You have selected the Static IP, bridges and bonds option under Hosts' network configuration on the Cluster details page. Selecting this option adds the Static network configurations step to the wizard.
Procedure
- Go to the Static network configurations page.
- From the Configure via options, select Form view.
Enter the network-wide configurations:
Select the Networking stack type. Valid options are IPv4 and Dual stack (with IPv4 as primary).
Important: IPv6 is not currently supported in the following configurations:
- Single stack
- Primary within dual stack
- If the cluster hosts are on a shared VLAN, select the Use VLAN checkbox and enter the VLAN ID.
Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.
- Enter the DNS server IP address.
- Enter the cluster subnet’s IP address range in CIDR notation.
- Enter the default gateway IP address.
Enter the host-specific configurations:
- If you are only setting a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host.
Optional: You can use bonds to combine network interfaces for increased bandwidth and redundancy. Creating a bond with a static IP address aggregates two network interfaces per host.
- Select the Use bond checkbox.
From the Bond type dropdown list, select the bond type. The default bond type is Active-Backup (1).
Note: For a description of the bond types, see Bonding modes.
- Enter the MAC address for Port 1.
- Enter the MAC address for Port 2.
- Enter the IP address for the bond.
- Click Next.
4.3.2. Configuring static networks using YAML view
If you use multiple interfaces or other advanced networking features, use the YAML view to enter the network state for each host that uses NMState syntax. For more information about NMState, see NMState Declarative Network API.
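For illustration, a per-host NMState document that configures an active-backup bond with a static IPv4 address, DNS, and a default route might look like the following. This is a hedged sketch: the interface names and addresses are placeholders, and the link-aggregation fields should be checked against the NMState version in use (some older releases spell the port list as slaves):

interfaces:
  - name: bond0
    type: bond
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.0.2.10
          prefix-length: 24
    link-aggregation:
      mode: active-backup
      port:
        - eth0
        - eth1
dns-resolver:
  config:
    server:
      - 192.0.2.1
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.0.2.1
      next-hop-interface: bond0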
You can only create host-specific configurations by using the YAML view.
Prerequisites
- You have selected the Static IP, bridges and bonds option under Hosts' network configuration on the Cluster details page. Selecting this option adds the Static network configurations step to the wizard.
Procedure
- Go to the Static network configurations page.
- From the Configure via options, select YAML view.
- Upload, drag and drop, or copy and paste a YAML file containing the NMState network configuration into the editor.
- In the MAC to interface name mapping fields, enter the MAC address and interface name for each host interface used in your network configuration. Each host requires the MAC to interface name mapping to run the NMState YAML on the right machine. Assisted Installer uses the MAC to interface name mapping to replace any temporary interface names with the actual names.
- Select the Copy the YAML content checkbox to copy the YAML content between multiple hosts.
- Click Add another host configuration to configure additional hosts.
- Click Next.
4.4. Installing Operators and Operator bundles
You can customize your deployment by selecting Operators and Operator bundles during the installation. If you require advanced options, add the Operators or bundles after you have installed the cluster.
When installing Operators and Operator bundles through the web console, the following conditions apply:
- Some Operators are only available as part of a bundle and cannot be selected individually.
- Some Operators can be selected either individually or as part of a bundle. If you select them as part of a bundle, you can only remove them by deselecting the bundle.
- Some Operators are only available as standalone installations.
This step is optional.
4.4.1. Installing standalone Operators
You can select more than one standalone Operator and add Operator bundles as needed. Operators that appear greyed out are only available for installation as part of a bundle.
For instructions on installing Operator bundles, see Installing Operator bundles.
Prerequisites
- You have reviewed Customizing your installation using Operators for an overview of each Operator that you intend to install, together with its prerequisites and dependencies.
Procedure
- On the Operators page, expand the Single Operators arrow to display the full list of Operators.
Select one or more Operators from the following options:
Important: The integration of the AMD GPU, Kernel Module Management, Node Feature Discovery, NVIDIA GPU, and OpenShift AI Operators into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
AMD GPU
Note: Selecting the AMD GPU Operator automatically activates the Kernel Module Management Operator.
- Kernel Module Management
- Logical Volume Manager Storage
Migration Toolkit for Virtualization
Important: Selecting the Migration Toolkit for Virtualization Operator automatically activates the OpenShift Virtualization Operator. For a single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator.
Multicluster engine
Important: You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters. Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:
- Multi-node cluster: No storage is configured. You must configure storage after the installation.
- Single-node OpenShift: LVM Storage is installed.
NMState
Note: Currently, you cannot install the Kubernetes NMState Operator on the Nutanix or Oracle Cloud Infrastructure (OCI) third-party platforms.
- Node Feature Discovery
NVIDIA GPU
Note: Selecting the NVIDIA GPU Operator automatically activates the Node Feature Discovery Operator.
- OpenShift AI
- OpenShift Data Foundation
OpenShift sandboxed containers
Important: The integration of the OpenShift sandboxed containers Operator into the Assisted Installer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
OpenShift Virtualization
Important: The OpenShift Virtualization Operator requires backend storage and might automatically activate a storage Operator in the background, according to the following criteria:
- None - If the CPU architecture is ARM64, no storage Operator is activated.
- LVM Storage - For single-node OpenShift clusters on any other CPU architecture deploying OpenShift Container Platform 4.12 or higher.
- Local Storage Operator (LSO) - For all other deployments.
- To install the Self Node Remediation Operator, see Installing Operators by using the API. It is not currently possible to install this Operator using the web console.
- Click Next.
4.4.2. Installing Operator bundles
You can select more than one Operator bundle together with additional Operators as needed.
For instructions on installing individual Operators, see Installing Operators.
Prerequisites
- You have reviewed Customizing your installation using Operator bundles for an overview of each Operator bundle that you intend to install, together with its prerequisites and associated Operators.
Procedure
On the Operators page, select an Operator bundle:
Virtualization - Contains the following Operators:
- OpenShift Virtualization
- Kube Descheduler
- Node Maintenance
- Migration Toolkit for Virtualization
- Kubernetes NMState
- Fence Agents Remediation
- Node Health Check
- Local Storage Operator (LSO)
- Cluster Observability
- MetalLB
- NUMA Resources
- OADP
OpenShift AI - Contains the following Operators:
- Kubernetes Authorino
- OpenShift Data Foundation
- OpenShift AI
- AMD GPU
- Node Feature Discovery
- NVIDIA GPU
- OpenShift Pipelines
- OpenShift Service Mesh
- OpenShift Serverless
- Kernel Module Management
Important: Each of the Operator bundles is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
- Click Next.
4.5. Adding hosts to the cluster
You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent.
If you are installing on the IBM Z® architecture, use the following table to identify the image file type:

Architecture | Boot method | Image type
Logical Partition-Classic | iPXE | Full image file: Download a self-contained ISO image
Logical Partition-Dynamic Partition Manager | ISO or iPXE | Minimal image file: Download an ISO image that fetches content when booting up

Note: ISO images are not supported for installations on IBM Z (s390x) with z/VM or logical partitioning (LPAR) nodes; use the "Booting hosts with iPXE" procedure. ISO images and iPXE are supported for installations on RHEL KVM.
Perform the following procedure for each host on the cluster.
Procedure
Click the Add hosts button and select the provisioning type.
Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for the x86_64 and arm64 architectures.
Important: This option is mandatory in the following scenarios:
- If you are installing OpenShift Container Platform on Oracle Cloud Infrastructure.
- If you are installing OpenShift Container Platform on iSCSI boot volumes.
- Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM.
- Select iPXE: Provision from your network server to boot the hosts using iPXE. This is the recommended method on IBM Z® with z/VM nodes and LPAR (both static and DPM). ISO boot is the recommended method for RHEL KVM installations.
Note: If you are installing OpenShift Container Platform on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually.
Optional: Activate the Run workloads on control plane nodes switch to schedule workloads to run on control plane nodes, in addition to the default worker nodes.
Note: This option is available for clusters of five or more nodes. For clusters of fewer than five nodes, the system runs workloads on the control plane nodes only, by default. For more details, see Configuring schedulable control plane nodes in Additional Resources.
Optional: If the cluster hosts require the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, required domains or IP addresses, and port for the HTTP and HTTPS URLs of the proxy server. If the cluster hosts are behind a firewall, allow the nodes to access the required domains or IP addresses through the firewall. See Configuring your firewall for OpenShift Container Platform for more information.
Note: The proxy username and password must be URL-encoded. For example, a password that contains the characters pa$$word must be entered as pa%24%24word.
Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.
Important: Do not skip this step in production environments, where disaster recovery and debugging are required.
- If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access.
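If you need to create the key pair locally, a minimal example using OpenSSH follows; this is an illustrative command, and the linked procedure covers the details. It produces the id_rsa.pub file referenced in the next step:

$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa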
- In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
- Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates. Add additional certificates in X.509 format.
- Configure the discovery image if needed.
- Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform. You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform.
- Click Generate Discovery ISO or Generate Script File.
- Download the discovery ISO or iPXE script.
- Boot the host(s) with the discovery image or iPXE script.
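For orientation, a generated iPXE script typically follows the standard iPXE pattern for booting a live RHCOS image, as in the following hypothetical sketch. The placeholder URLs and exact kernel arguments are assumptions; the actual script is generated by the Assisted Installer for your cluster:

#!ipxe
initrd --name initrd http://<image-service-url>/pxe-initrd
kernel http://<image-service-url>/kernel initrd=initrd coreos.live.rootfs_url=http://<image-service-url>/rootfs
boot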
Additional resources
- Configuring the discovery image for additional details.
- Booting hosts with the discovery image for additional details.
- Red Hat Enterprise Linux 9 - Configuring and managing virtualization for additional details.
- How to configure a VIOS Media Repository/Virtual Media Library for additional details.
- Adding hosts on Nutanix with the web console
- Adding hosts on vSphere
- About scheduling workloads on control plane nodes
4.6. Configuring hosts
After booting the hosts with the discovery ISO, the hosts will appear in a table on the Host Discovery page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.
Procedure
- Go to the Host Discovery tab.
In multi-host clusters, you can select a role for each host after host discovery:
- In the Role column, expand the Auto-assign arrow for the host.
Choose one of the following options:
- Auto-assign - Automatically determines whether the host is a control plane, worker, or arbiter node. This is the default setting.
- Control plane node - Assigns the control plane (master) role to the host, allowing the host to manage and coordinate the cluster.
- Worker - Assigns the compute (worker) role to the host, enabling the host to run application workloads.
- Arbiter - Assigns the arbiter role to a host, providing a cost-effective solution for components that require a quorum.
The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.
For more details about the different host roles, see About assigning roles to hosts.
- Click the Status link to view hardware, network, and operator validations for the host.
- Optionally select or deselect all hosts by clicking the <number> selected checkbox, or by selecting Select all or Select none from the dropdown list.
Optionally change one or more hostnames:
- To rename a single host, from the Options (⋮) menu for the host, select Change hostname. If necessary, enter a new name for the host and click Change. You must ensure that each host has a valid and unique hostname.
To rename multiple selected hosts, from the Actions list, select Change hostname. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique, for example worker-{{n}} to produce hostnames such as worker-1 and worker-2. Then click Change.
Note: You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, except for a single-digit increment per host.
Optionally remove one or more hosts:
- To remove a single host, from the Options (⋮) menu for the host, select Remove host. Click Remove host to confirm the deletion.
- To remove multiple selected hosts at the same time, from the Actions list, select Remove. Click Remove hosts to confirm the deletion.
Note: In a regular deployment, a cluster can have three or more hosts, and at least three of these must be control plane nodes. If you delete a host that is also a control plane node, or if there are only two hosts, you will get a message saying that the system is not ready. To restore a host, you must reboot it from the discovery ISO.
- From the Options (⋮) menu for the host, optionally select View host events. The events in the list are presented chronologically.
- Click the arrow to the left of a host name to expand the host details.
Once all cluster hosts appear with a status of Ready, proceed to the next step.
4.7. Configuring storage disks
Each of the hosts retrieved during host discovery can have multiple storage disks. The storage disks are listed for the host on the Storage page of the Assisted Installer wizard.
You can optionally modify the default configurations for each disk.
- Starting from OpenShift Container Platform 4.14, you can configure nodes with Intel® Virtual RAID on CPU (VROC) to manage NVMe RAIDs. For details, see Configuring an Intel® Virtual RAID on CPU (VROC) data volume.
- Starting from OpenShift Container Platform 4.15, you can install a cluster on a single or multipath iSCSI boot device using the Assisted Installer.
4.7.1. Changing the installation disk
The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the installation disk, providing stronger resilience to hardware failure and higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see Modifying the DM Multipath configuration file.
Procedure
- Navigate to the Storage page of the wizard.
- Expand a host to display the associated storage disks.
Select Installation disk from the Role list.
Note: Multipath devices are automatically discovered and listed in the host's inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with Drive type set to Multipath, rather than to FC, which indicates a single path.
- When all storage disks return to Ready status, proceed to the next step.
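To inspect the multipath topology on a discovered host, for example over an SSH connection, the standard device-mapper-multipath tooling can list the devices and their underlying paths. This is an illustrative command, not part of the wizard:

$ sudo multipath -ll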
4.7.2. Disabling disk formatting
The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.
You can choose to disable the formatting of a specific disk. Perform this with caution, as bootable disks can interfere with the installation process, mainly in terms of boot order.
You cannot disable formatting for the installation disk.
Procedure
- Navigate to the Storage page of the wizard.
- Expand a host to display the associated storage disks.
- Clear Format for a disk.
- When all storage disks return to Ready status, proceed to the next step.
4.8. Configuring networking
Before installing OpenShift Container Platform, you must configure the cluster network.
Procedure
In the Networking step, select one of the following network management types if it is not already selected for you:
Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology. This configuration includes an integrated load balancer and virtual routing for managing the API and Ingress VIP addresses. For details, see Network management types.
Note:
- Currently, cluster-managed networking is not supported on IBM Z® and IBM Power®.
- Cluster-managed networking is not supported on single-node OpenShift.
User-Managed Networking: Selecting user-managed networking deploys OpenShift Container Platform with a non-standard network topology. Select user-managed networking if you want to deploy with an external load balancer and DNS, or if you intend to deploy the cluster nodes across many distinct subnets. For details, see Network management types.
Note: Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
Important: The Assisted Installer supports a third network management type called Cluster-Managed Networking with a User-Managed Load Balancer. This network management type provides automated cluster networking with an external load balancer. Currently, you can configure this network management type through the API only. For details, see Installing cluster-managed networking with a user-managed load balancer.
For cluster-managed networking, configure the following settings:
Select your Networking stack type:
- IPv4: Select this type when your hosts are only using IPv4.
- Dual-stack: You can select dual-stack when your hosts are using IPv4 together with IPv6.
Define the Machine network. You can use the default network or select a subnet.
Important: For iSCSI boot volumes, the hosts connect over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. Ensure that you select the OpenShift Container Platform network from the dropdown list. The iSCSI host IP address must not be on the machine network; choosing the iSCSI network results in an Insufficient status for the host in the Networking step.
- Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with, and configure the platform.
- Define an Ingress virtual IP. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
Optional: Select Use advanced networking to configure the following advanced networking properties (typical default values are shown after this procedure):
- Cluster network CIDR: Define an IP address block to assign pod IP addresses.
- Cluster network host prefix: Define a subnet prefix length to assign to each node.
- Service network CIDR: Define an IP address block to assign service IP addresses.
- Optional: Select Host SSH Public Key for troubleshooting after installation to connect to hosts by using a public SSH key.
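For reference, the advanced networking fields commonly use the standard OpenShift Container Platform defaults shown below; these are illustrative values, so confirm what the console proposes for your version. A host prefix of 23 allocates a /23 subnet, that is, 512 pod IP addresses, to each node:

Cluster network CIDR: 10.128.0.0/14
Cluster network host prefix: 23
Service network CIDR: 172.30.0.0/16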
Additional resources
4.9. Adding manifests and patches
You can upload custom manifests and patches for system manifests in the Assisted Installer web console. You can also replace and remove these files.
For information about adding and modifying custom manifests by using the Assisted Installer API, see Adding custom manifests with the API.
4.9.1. Preparing custom manifests and manifest patches
This section provides an overview of custom manifests and system manifest patches, including formatting considerations and the required naming conventions for uploading the files.
Follow these guidelines to ensure that the files you upload comply with the system requirements.
4.9.1.1. Custom manifests
A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party.
You can upload a custom manifest from your file system to either the openshift folder or the manifests folder. The number of custom manifest files permitted is unlimited.
You can upload only one file at a time. However, each uploaded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.
For a file containing a single custom manifest, accepted file extensions include .yaml, .yml, or .json. For a file containing multiple custom manifests, accepted file types include .yaml or .yml.
Single custom manifest example
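The original example is not reproduced on this page. For illustration, a single custom manifest could be a MachineConfig that adds a kernel argument to control plane nodes; the resource name and argument below are hypothetical placeholders:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7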
Multiple custom manifest example
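A file with multiple custom manifests separates each document with ---. For example, the hypothetical manifest above combined with a similar worker configuration in one .yaml file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-openshift-machineconfig-worker-kargs
spec:
  kernelArguments:
    - loglevel=7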
When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
Additional resources
4.9.1.2. Patches for system manifests
A manifest patch file conforms to the syntax of a YAML patch. Its purpose is to modify a system manifest that is automatically created by the Assisted Installer during installation preparation. Manifest patches are used to adjust configurations, manage updates, or apply changes in a structured and automated way. This approach ensures consistency and helps avoid errors when altering complex YAML documents.
4.9.1.2.1. General YAML syntax for system manifest patches
The yaml-patch package is an implementation of JavaScript Object Notation (JSON) Patch, directly transposed to YAML. The general syntax of a system manifest YAML patch is the following:

- op: <add | remove | replace | move | copy | test>
  from: <source-path>
  path: <target-path>
  value: <any-yaml-structure>

where:
- op: See the JavaScript Object Notation (JSON) Patch specification for an explanation of each operation.
- from: Only valid for the move and copy operations.
- path: Always mandatory.
- value: Only valid for the add, replace, and test operations.
4.9.1.2.2. Naming conventions for system manifest patches
When creating a new patch for a system manifest, use the following naming convention: <file to be patched>.patch_<suffix>. The name itself ensures that the correct manifest is overwritten, and the suffix allows for the application of many patches to the same manifest.
For example, if the original file has the name 50-masters-chrony-configuration.yaml, then the new patch should be called 50-masters-chrony-configuration.yaml.patch_1_apply-chrony-dhcp or similar.
The following example outlines the steps for patching a system manifest, and shows how the naming convention is applied:
- The Assisted Installer automatically adds the following YAML file to the manifests of the cluster at the start of the installation:
Directory: openshift
Filename: 50-masters-chrony-configuration.yaml
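The content of this file is not reproduced on this page. A representative sketch of such a chrony MachineConfig follows, with the field layout inferred from the patch path and value used below; in standard MachineConfig manifests the file data usually sits under an additional contents.source key, so treat the exact nesting as an assumption:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 50-masters-chrony-configuration
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents: data:text/plain;charset=utf-8;base64,<base64-encoded original chrony.conf>  # replaced by the patch below
          mode: 420
          overwrite: true
          path: /etc/chrony.conf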
- To patch this YAML file with different content, you must generate a new base64 representation of the content and create a patch file:
- Generate base64 file content for /etc/chrony.conf:
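The command itself was not preserved on this page; a minimal sketch that produces the base64 string used in the patch below, assuming GNU coreutils base64:

$ cat << EOF | base64 --wrap 0
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
sourcedir /run/chrony-dhcp
EOF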
- Create a patch file using this base64 string:
Directory: openshift
Filename: 50-masters-chrony-configuration.yaml.patch_1_apply-chrony-dhcp
---
- op: replace
  path: /spec/config/storage/files/0/contents
  value: data:text/plain;charset=utf-8;base64,ZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAptYWtlc3RlcCAxLjAgMwpydGNzeW5jCmxvZ2RpciAvdmFyL2xvZy9jaHJvbnkKc291cmNlZGlyIC9ydW4vY2hyb255LWRoY3AK
- You then upload the patch file in the Assisted Installer web console. For details, see the following section.
4.9.2. Uploading custom manifests and manifest patches
When uploading a custom manifest or patch, enter the filename and select a destination folder. The filename must be unique across both folders.
Prerequisites
- You have saved a custom manifest file to a local directory using an appropriate file name and extension.
Procedure
- On the Cluster details page of the wizard, select the Include custom manifests checkbox.
On the Custom manifest page, in the folder field, select the Assisted Installer folder where you want to save the manifest or patch.
Note: You can upload a file to either the openshift or manifests folder. For a manifest patch, the system looks in both folders for the target file that it needs to patch.
In the Filename field, enter a name for the manifest file, including the extension:
- For custom manifests, examples include manifest1.json or multiple1.yaml.
- For manifest patches, an example is 50-masters-chrony-configuration.yaml.patch_1_apply-chrony-dhcp.
- Under Content, click the Upload icon or Browse button to upload a file. Alternatively, drag the file into the Content field from your file system.
- To upload another file, click Add another manifest and repeat the process. This saves any previously uploaded files.
- Click Next to save all files and proceed to the Review and create page. The Custom manifests section displays a list of the uploaded custom manifests and patches.
4.9.3. Modifying custom manifests and manifest patches
You can rename uploaded custom manifest or patch files, and save custom manifest files to a different folder. Additionally, you can copy the contents of an existing file, or download it to the folder specified in your Chrome download settings.
It is not possible to edit the content of an uploaded manifest or patch file. Instead, you can overwrite the existing file.
Prerequisites
- You have uploaded at least one custom manifest or patch file.
Procedure
- To change the location of a custom manifest file, select a different folder from the Folder list.
- To change the file name, type the new name for the manifest or patch in the File name field. Patch files should respect the patch naming conventions discussed earlier in this section.
To overwrite a manifest or patch file, save a new file with the same file name in either the openshift or manifests folder.
Note: The system automatically detects and replaces the original file, regardless of which folder it is in.
- To download a manifest or patch to your file system, click the Download icon.
- To copy a manifest or patch, click the Copy to clipboard icon.
- To apply the changes, click either Add another manifest or Next.
4.9.4. Removing custom manifests and manifest patches
You can remove uploaded custom manifests or patches before installation in one of two ways:
- Removing a single custom manifest or patch.
- Removing all manifests and patches at the same time.
Once you have removed a manifest or patch file, you cannot undo the action. The workaround is to upload the file again.
4.9.4.1. Removing all custom manifests and patches
You can remove all custom manifests and patches at the same time. This also hides the Custom manifest page.
Prerequisites
- You have uploaded at least one custom manifest or patch file.
Procedure
- Browse to the Cluster details page of the wizard.
- Clear the Include custom manifests checkbox.
- In the Remove custom manifests dialog box, click Remove.
4.9.4.2. Removing a single custom manifest or patch
You can delete one file at a time. This option does not allow deletion of the last remaining manifest or patch.
Prerequisites
- You have uploaded at least two custom manifest or patch files.
Procedure
- Browse to the Custom manifests page.
- Hover over the manifest name to display the Delete (minus) icon.
- Click the icon and then click Delete in the dialog box.
4.10. Preinstallation validations
The Assisted Installer verifies that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure that the cluster and each host pass preinstallation validation.
4.11. Installing the cluster
After you have completed the configuration and all the nodes are Ready, you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation.
Procedure
- Click Begin installation.
- Click the link in the Status column of the Host Inventory list to see the installation status of a particular host.
4.12. Completing the installation
After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.
Prerequisites
- You have installed the oc CLI tool.
Procedure
- Make a copy of the kubeadmin username and password.
- Download the kubeconfig file and copy it to the auth directory under your working directory:
$ mkdir -p <working_directory>/auth
$ cp kubeconfig <working_directory>/auth
Note: The kubeconfig file is available for download for 20 days after completing the installation.
- Add the kubeconfig file to your environment:
$ export KUBECONFIG=<your working directory>/auth/kubeconfig
- Log in with the oc CLI tool:
$ oc login -u kubeadmin -p <password>
Replace <password> with the password of the kubeadmin user.
-
Enter the
kubeadmin
username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers. - Add a bookmark of the OpenShift Container Platform console.
- Complete any postinstallation platform integration steps.