This documentation is for a release that is no longer maintained
See the documentation for the latest supported version 3 or the latest supported version 4.
Chapter 8. Deploying installer-provisioned clusters on bare metal
8.1. Overview
Installer-provisioned installation on bare metal nodes deploys and configures the infrastructure that an OpenShift Container Platform cluster runs on. This guide provides a methodology for achieving a successful installer-provisioned bare-metal installation. The following diagram illustrates the installation environment in phase 1 of deployment:
The provisioning node can be removed after the installation.
- Provisioner: A physical machine that runs the installation program and hosts the bootstrap VM that deploys the controller of a new OpenShift Container Platform cluster.
- Bootstrap VM: A virtual machine used in the process of deploying an OpenShift Container Platform cluster.
- Network bridges: The bootstrap VM connects to the bare metal network and to the provisioning network, if present, via the network bridges `eno1` and `eno2`.
In phase 2 of the deployment, the provisioner destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.
The following diagram illustrates phase 2 of deployment:
The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia.
8.2. Prerequisites
Installer-provisioned installation of OpenShift Container Platform requires:
- One provisioner node with Red Hat Enterprise Linux (RHEL) 8.x installed. The provisioning node can be removed after installation.
- Three control plane nodes.
- Baseboard Management Controller (BMC) access to each node.
At least one network:
- One required routable network
- One optional provisioning network
- One optional management network.
Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements.
8.2.1. Node requirements
Installer-provisioned installation involves a number of hardware node requirements:
- CPU architecture: All nodes must use the `x86_64` CPU architecture.
- Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.
- Baseboard Management Controller: The `provisioner` node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol.
- Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the `provisioner` node and RHCOS 8 for the control plane and worker nodes.
- Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node.
- Provisioner node: Installer-provisioned installation requires one `provisioner` node.
- Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.
- Worker nodes: While not required, a typical production cluster has two or more worker nodes.

  Important: Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state.

- Network interfaces: Each node must have at least one network interface for the routable `baremetal` network. Each node must have one network interface for a `provisioning` network when using the `provisioning` network for deployment. Using the `provisioning` network is the default configuration. Network interface naming must be consistent across control plane nodes for the provisioning network. For example, if a control plane node uses the `eth0` NIC for the provisioning network, the other control plane nodes must use it as well.
- Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC. Omitting the `provisioning` network removes this requirement.
- Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed.
  - Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details.
  - Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the `bootMode` value to `UEFISecureBoot` in the `install-config.yaml` file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version `2.75.75.75` or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details.

    Note: Red Hat does not support Secure Boot with self-generated keys.
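As a sketch, the managed Secure Boot setting sits in the bare-metal host entries of `install-config.yaml`; the host name, role, and surrounding fields here are illustrative, and only the `bootMode` line is the setting described above:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0    # illustrative host entry
        role: master
        bootMode: UEFISecureBoot    # enables managed Secure Boot for this host
```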
8.2.2. Planning a bare metal cluster for OpenShift Virtualization
If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster.
If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation. This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster.
Note: You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability.
- Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode.
- If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform.
8.2.3. Firmware requirements for installing with virtual media
The installer for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media.
| Hardware | Model | Management | Firmware versions |
|---|---|---|---|
| HP | 10th Generation | iLO5 | 2.63 or later |
| Dell | 14th Generation | iDRAC 9 | v4.20.20.20 - v4.40.00.00 only |
| Dell | 13th Generation | iDRAC 8 | v2.75.75.75 or later |
Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy.
See the hardware documentation for the nodes or contact the hardware vendor for information about updating the firmware.
For HP servers, Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.
For Dell servers, ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach. With iDRAC firmware version 04.40.00.00 and later, the Virtual Console plug-in defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration → Virtual Console → Plug-in Type → HTML5.
The installer will not initiate installation on a node if the node firmware is below the foregoing versions when installing with virtual media.
8.2.4. Network requirements
Installer-provisioned installation of OpenShift Container Platform involves several network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable baremetal network.
8.2.4.1. Increase the network MTU
Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
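As one way to pin the MTU, a NetworkManager keyfile can set it per connection profile; the file path and connection name below are illustrative, and 1500 is the minimum discussed above:

```text
# /etc/NetworkManager/system-connections/baremetal.nmconnection (excerpt)
[ethernet]
mtu=1500
```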
8.2.4.2. Configuring NICs
OpenShift Container Platform deploys with two networks:
- `provisioning`: The `provisioning` network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the `provisioning` network on each cluster node must have the BIOS or UEFI configured to PXE boot.

  The `provisioningNetworkInterface` configuration setting specifies the `provisioning` network NIC name on the control plane nodes, which must be identical on the control plane nodes. The `bootMACAddress` configuration setting provides a means to specify a particular NIC on each node for the `provisioning` network.

  The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.

- `baremetal`: The `baremetal` network is a routable network. You can use any NIC to interface with the `baremetal` network provided the NIC is not configured to use the `provisioning` network.
When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.
8.2.4.3. DNS requirements
Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
<cluster_name>.<base_domain>

For example:

test-cluster.example.com
OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
In OpenShift Container Platform deployments, DNS name resolution is required for the following components:
- The Kubernetes API
- The OpenShift Container Platform application wildcard ingress API
A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes.
Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
| Component | Record | Description |
|---|---|---|
| Kubernetes API | api.<cluster_name>.<base_domain>. | An A/AAAA record, and a PTR record, identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
| Routes | *.apps.<cluster_name>.<base_domain>. | The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. |
You can use the dig command to verify DNS resolution.
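For example, the following sketch derives the record names to check from the example cluster name above and prints the corresponding `dig` commands to run on a host with DNS access (the reverse-lookup IP address is illustrative):

```shell
# Build the FQDNs from the example cluster name and base domain used in
# this guide (test-cluster.example.com); substitute your own values.
CLUSTER_NAME=test-cluster
BASE_DOMAIN=example.com

API_FQDN="api.${CLUSTER_NAME}.${BASE_DOMAIN}"
APPS_FQDN="test.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"

echo "dig +short ${API_FQDN}"       # forward lookup of the API record
echo "dig +short ${APPS_FQDN}"      # any host under the wildcard ingress record
echo "dig +short -x 192.168.1.100"  # reverse (PTR) lookup; IP is illustrative
```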
8.2.4.4. Dynamic Host Configuration Protocol (DHCP) requirements
By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed, which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file.
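A sketch of where that setting lives in `install-config.yaml`; treat this as illustrative and verify the accepted value casing against your installer's documentation for your release:

```yaml
platform:
  baremetal:
    # Disable the installer-managed DHCP service on the provisioning
    # network because an external DHCP server is already running there.
    provisioningNetwork: "Unmanaged"
```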
Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server.
8.2.4.5. Reserving IP addresses for nodes with the DHCP server
For the baremetal network, a network administrator must reserve a number of IP addresses, including:
Two unique virtual IP addresses.
- One virtual IP address for the API endpoint.
- One virtual IP address for the wildcard ingress endpoint.
- One IP address for the provisioner node.
- One IP address for each control plane (master) node.
- One IP address for each worker node, if applicable.
Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses with an infinite lease. During deployment, the installer will reconfigure the NICs from DHCP assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP.
Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.
Your DHCP server must provide a DHCP expiration time of 4294967295 seconds to properly set an infinite lease as specified by rfc2131. If a lesser value is returned for the DHCP infinite lease time, the node reports an error and a permanent IP is not set for the node. In RHEL 8, dhcpd does not provide infinite leases. If you want to use the provisioner node to serve dynamic IP addresses with infinite lease times, use dnsmasq rather than dhcpd.
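Since dhcpd in RHEL 8 cannot hand out infinite leases, dnsmasq is the suggested route; a minimal sketch of a reservation with an infinite lease, where the interface name, MAC address, and IP addresses are illustrative:

```text
# /etc/dnsmasq.d/baremetal.conf (illustrative)
interface=baremetal
dhcp-range=192.168.1.20,192.168.1.60
# Reserve a fixed address with an infinite lease for one node:
dhcp-host=52:54:00:aa:bb:cc,192.168.1.21,infinite
```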
External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
Do not change a worker node’s IP address manually after deployment. To change the IP address of a worker node after deployment, you must mark the worker node unschedulable, evacuate the pods, delete the node, and recreate it with the new IP address. See "Working with nodes" for additional details. To change the IP address of a control plane node after deployment, contact support.
The storage interface requires a DHCP reservation.
The following table provides examples of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.
| Usage | Host Name | IP |
|---|---|---|
| API | api.<cluster_name>.<domain> | <ip> |
| Ingress LB (apps) | *.apps.<cluster_name>.<domain> | <ip> |
| Provisioner node | provisioner.<cluster_name>.<domain> | <ip> |
| Master-0 | openshift-master-0.<cluster_name>.<domain> | <ip> |
| Master-1 | openshift-master-1.<cluster_name>.<domain> | <ip> |
| Master-2 | openshift-master-2.<cluster_name>.<domain> | <ip> |
| Worker-0 | openshift-worker-0.<cluster_name>.<domain> | <ip> |
| Worker-1 | openshift-worker-1.<cluster_name>.<domain> | <ip> |
| Worker-n | openshift-worker-n.<cluster_name>.<domain> | <ip> |
If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
8.2.4.6. Network Time Protocol (NTP)
Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.
You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
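For instance, a control plane node acting as the NTP server on a disconnected cluster might carry chrony configuration along these lines; the upstream server and subnet are illustrative, and in practice such configuration is usually delivered via machine configs rather than edited by hand:

```text
# chrony.conf excerpt on a control plane node (illustrative)
server ntp.example.com iburst   # upstream source, if reachable
allow 10.0.0.0/24               # serve time to the cluster subnet
local stratum 10                # keep serving even if upstream is lost
```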
8.2.4.7. State-driven network configuration requirements (Technology Preview)
OpenShift Container Platform supports additional post-installation state-driven network configuration on the secondary network interfaces of cluster nodes using kubernetes-nmstate. For example, system administrators might configure a secondary network interface on cluster nodes after installation for a storage network.
Configuration must occur before scheduling pods.
State-driven network configuration requires installing kubernetes-nmstate, and also requires NetworkManager running on the cluster nodes. See OpenShift Virtualization > Kubernetes NMState (Tech Preview) for additional details.
8.2.4.8. Port access for the out-of-band management IP address
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the baremetal node during installation, the out-of-band management IP address must be granted access to TCP port 6180.
8.2.5. Configuring nodes
Configuring nodes when using the provisioning network
Each node in the cluster requires the following configuration for proper installation.
A mismatch between nodes will cause an installation failure.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:
| NIC | Network | VLAN |
|---|---|---|
| NIC1 | provisioning | <provisioning_vlan> |
| NIC2 | baremetal | <baremetal_vlan> |
NIC1 is a non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster.
The Red Hat Enterprise Linux (RHEL) 8.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 8.x using a local Satellite server or a PXE server, PXE-enable NIC2.
| PXE | Boot order |
|---|---|
| NIC1 PXE-enabled (provisioning network) | 1 |
| NIC2 (baremetal network; PXE-enabled is optional) | 2 |
Ensure PXE is disabled on all other NICs.
Configure the control plane and worker nodes as follows:
| PXE | Boot order |
|---|---|
| NIC1 PXE-enabled (provisioning network) | 1 |
Configuring nodes without the provisioning network
The installation process requires one NIC:
| NIC | Network | VLAN |
|---|---|---|
| NICx | baremetal | <baremetal_vlan> |
NICx is a routable network (baremetal) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet.
The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia.
Configuring nodes for Secure Boot manually
Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system.
Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media.
To enable Secure Boot manually, refer to the hardware guide for the node and execute the following:
Procedure
- Boot the node and enter the BIOS menu.
- Set the node’s boot mode to UEFI Enabled.
- Enable Secure Boot.
Red Hat does not support Secure Boot with self-generated keys.
Configuring the Compatibility Support Module for Fujitsu iRMC
The Compatibility Support Module (CSM) configuration provides support for legacy BIOS backward compatibility with UEFI systems. You must configure the CSM when you deploy a cluster with Fujitsu iRMC, otherwise the installation might fail.
For information about configuring the CSM for your specific node type, refer to the hardware guide for the node.
Prerequisites

- Ensure that you have disabled Secure Boot Control. You can disable the feature under Security → Secure Boot Configuration → Secure Boot Control.
Procedure
- Boot the node and select the BIOS menu.
- Under the Advanced tab, select CSM Configuration from the list.
Enable the Launch CSM option and set the following values:

| Item | Value |
|---|---|
| Boot option filter | UEFI and Legacy |
| Launch PXE OpROM Policy | UEFI only |
| Launch Storage OpROM policy | UEFI only |
| Other PCI device ROM priority | UEFI only |
8.2.6. Out-of-band management
Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation.
The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options.
8.2.7. Required data for installation
Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes:
Out-of-band management IP
Examples
- Dell (iDRAC) IP
- HP (iLO) IP
- Fujitsu (iRMC) IP
When using the provisioning network

- NIC (`provisioning`) MAC address
- NIC (`baremetal`) MAC address

When omitting the provisioning network

- NIC (`baremetal`) MAC address
8.2.8. Validation checklist for nodes
When using the provisioning network

- ❏ NIC1 VLAN is configured for the `provisioning` network.
- ❏ NIC1 for the `provisioning` network is PXE-enabled on the provisioner, control plane (master), and worker nodes.
- ❏ NIC2 VLAN is configured for the `baremetal` network.
- ❏ PXE has been disabled on all other NICs.
- ❏ DNS is configured with API and Ingress endpoints.
- ❏ Control plane and worker nodes are configured.
- ❏ All nodes are accessible via out-of-band management.
- ❏ (Optional) A separate management network has been created.
- ❏ Required data for installation has been gathered.

When omitting the provisioning network

- ❏ NIC1 VLAN is configured for the `baremetal` network.
- ❏ DNS is configured with API and Ingress endpoints.
- ❏ Control plane and worker nodes are configured.
- ❏ All nodes are accessible via out-of-band management.
- ❏ (Optional) A separate management network has been created.
- ❏ Required data for installation has been gathered.
8.3. Setting up the environment for an OpenShift installation
8.3.1. Installing RHEL on the provisioner node
With the networking configuration complete, the next step is to install RHEL 8.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.
8.3.2. Preparing the provisioner node for OpenShift Container Platform installation
Perform the following steps to prepare the environment.
Procedure
- Log in to the provisioner node via `ssh`.
- Create a non-root user (`kni`) and provide that user with `sudo` privileges:

      # useradd kni
      # passwd kni
      # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
      # chmod 0440 /etc/sudoers.d/kni

- Create an `ssh` key for the new user:

      # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"

- Log in as the new user on the provisioner node:

      # su - kni

- Use Red Hat Subscription Manager to register the provisioner node:

      $ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
      $ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms

  Note: For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.

- Install the following packages:

      $ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool

- Modify the user to add the `libvirt` group to the newly created user:

      $ sudo usermod --append --groups libvirt <user>

- Restart `firewalld` and enable the `http` service:

      $ sudo systemctl start firewalld
      $ sudo firewall-cmd --zone=public --add-service=http --permanent
      $ sudo firewall-cmd --reload

- Start and enable the `libvirtd` service:

      $ sudo systemctl enable libvirtd --now

- Create the `default` storage pool and start it:

      $ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
      $ sudo virsh pool-start default
      $ sudo virsh pool-autostart default

- Configure networking.

  Note: You can also configure networking from the web console.

  - Export the `baremetal` network NIC name:

        $ export PUB_CONN=<baremetal_nic_name>

  - Configure the `baremetal` network.
  - If you are deploying with a `provisioning` network, export the `provisioning` network NIC name:

        $ export PROV_CONN=<prov_nic_name>

  - If you are deploying with a `provisioning` network, configure the `provisioning` network.

    Note: The `ssh` connection might disconnect after executing these steps. The IPv6 address can be any address as long as it is not routable via the `baremetal` network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.

  - Configure the IPv4 address on the `provisioning` network connection:

        $ nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual

  - `ssh` back into the provisioner node (if required):

        $ ssh kni@provisioner.<cluster_name>.<domain>

  - Verify that the connection bridges have been properly created:

        $ sudo nmcli con show

- Create a `pull-secret.txt` file:

      $ vim pull-secret.txt

  In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure, and scroll down to the Downloads section. Click Copy pull secret. Paste the contents into the `pull-secret.txt` file and save the contents in the `kni` user's home directory.
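The "Configure the baremetal network" step above does not show the underlying commands. A hedged sketch, assuming NetworkManager and an illustrative NIC name of eno2: it creates a bridge named baremetal and enslaves the NIC, and it runs in dry-run mode (printing each command) so you can review before executing for real. Verify against the documentation for your release before running with DRY_RUN=0.

```shell
# Dry-run sketch of a baremetal bridge setup; not the exact supported
# sequence for every release.
DRY_RUN=1
PUB_CONN=eno2   # illustrative; normally taken from $PUB_CONN exported earlier

run() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else sudo "$@"; fi
}

run nmcli con down "$PUB_CONN"
run nmcli con delete "$PUB_CONN"
run nmcli con add type bridge ifname baremetal con-name baremetal
run nmcli con add type bridge-slave ifname "$PUB_CONN" master baremetal
```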
8.3.3. Retrieving the OpenShift Container Platform installer
Use the latest-4.x version of the installer to deploy the latest generally available version of OpenShift Container Platform:
$ export VERSION=latest-4.8
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
8.3.4. Extracting the OpenShift Container Platform installer
After retrieving the installer, the next step is to extract it.
Procedure
- Set the environment variables:

      $ export cmd=openshift-baremetal-install
      $ export pullsecret_file=~/pull-secret.txt
      $ export extract_dir=$(pwd)

- Get the `oc` binary:

      $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc

- Extract the installer:

      $ sudo cp oc /usr/local/bin
      $ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
      $ sudo cp openshift-baremetal-install /usr/local/bin
8.3.5. Creating an RHCOS image cache (optional)
To employ image caching, you must download two images: the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM and the RHCOS image used by the installer to provision the different nodes. Image caching is optional, but especially useful when running the installer on a network with limited bandwidth.
If you are running the installer on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installer will time out. Caching images on a web server will help in such scenarios.
Install a container that contains the images.
Procedure
Install podman:

$ sudo dnf install -y podman

Open firewall port 8080 to be used for RHCOS image caching:

$ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
$ sudo firewall-cmd --reload

Create a directory to store the bootstrapOSImage and clusterOSImage:

$ mkdir /home/kni/rhcos_image_cache

Set the appropriate SELinux context for the newly created directory:

$ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
$ sudo restorecon -Rv rhcos_image_cache/

Get the commit ID from the installer. The ID determines which images the installer needs to download:

$ export COMMIT_ID=$(/usr/local/bin/openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}')
Get the URI for the RHCOS image that the installer will deploy on the nodes:

$ export RHCOS_OPENSTACK_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.openstack.path | sed 's/"//g')

Get the URI for the RHCOS image that the installer will deploy on the bootstrap VM:

$ export RHCOS_QEMU_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.qemu.path | sed 's/"//g')

Get the path where the images are published:

$ export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g')

Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM:

$ export RHCOS_QEMU_SHA_UNCOMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq -r '.images.qemu["uncompressed-sha256"]')

Get the SHA hash for the RHCOS image that will be deployed on the nodes:

$ export RHCOS_OPENSTACK_SHA_COMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq -r '.images.openstack.sha256')

Download the images and place them in the /home/kni/rhcos_image_cache directory:

$ curl -L ${RHCOS_PATH}${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI}
$ curl -L ${RHCOS_PATH}${RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI}

Confirm the SELinux type is httpd_sys_content_t for the newly created files:

$ ls -Z /home/kni/rhcos_image_cache

Create the pod:

$ podman run -d --name rhcos_image_cache \
  -v /home/kni/rhcos_image_cache:/var/www/html \
  -p 8080:8080/tcp \
  quay.io/centos7/httpd-24-centos7:latest

The above command creates a caching web server with the name rhcos_image_cache, which serves the images for deployment. The first image ${RHCOS_PATH}${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA_UNCOMPRESSED} is the bootstrapOSImage and the second image ${RHCOS_PATH}${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA_COMPRESSED} is the clusterOSImage in the install-config.yaml file.

Generate the bootstrapOSImage and clusterOSImage configuration:

$ export BAREMETAL_IP=$(ip addr show dev baremetal | awk '/inet /{print $2}' | cut -d"/" -f1)
$ export RHCOS_OPENSTACK_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI} | sha256sum | awk '{print $1}')
$ export RHCOS_QEMU_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI} | sha256sum | awk '{print $1}')
$ export CLUSTER_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA256}"
$ export BOOTSTRAP_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA256}"
$ echo "${RHCOS_OPENSTACK_SHA256} ${RHCOS_OPENSTACK_URI}" > /home/kni/rhcos_image_cache/rhcos-ootpa-latest.qcow2.sha256sum
$ echo "    bootstrapOSImage=${BOOTSTRAP_OS_IMAGE}"
$ echo "    clusterOSImage=${CLUSTER_OS_IMAGE}"

Add the required configuration to the install-config.yaml file under platform.baremetal:

platform:
  baremetal:
    bootstrapOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_QEMU_URI>?sha256=<RHCOS_QEMU_SHA256>
    clusterOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_OPENSTACK_URI>?sha256=<RHCOS_OPENSTACK_SHA256>

See the "Configuration files" section for additional details.
8.3.6. Configuration files
8.3.6.1. Configuring the install-config.yaml file
The install-config.yaml file requires some additional details. Most of the information teaches the installer, and the resulting cluster, enough about the available hardware so that it can fully manage it.
Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey.

- 1
- Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster.
- 2 4 6 8
- See the BMC addressing sections for more options.
- 3 5 7 9
- Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2.
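The configuration example that these callouts annotate did not survive conversion. The following is a minimal sketch of an installer-provisioned bare-metal install-config.yaml; treat every bracketed value and host name as a placeholder, not a prescriptive setting:

```yaml
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster-name>
networking:
  machineCIDR: <public-cidr>
compute:
- name: worker
  replicas: 2                       # 1 - scale to match your worker count
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: <api-ip>
    ingressVIP: <wildcard-ip>
    hosts:
      - name: openshift-master-0
        role: master
        bmc:                        # 2 - see the BMC addressing sections
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
        bootMACAddress: <NIC1-mac-address>
        rootDeviceHints:
          deviceName: "/dev/sda"    # 3 - path to the installation disk
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
```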
Create a directory to store cluster configs.
$ mkdir ~/clusterconfigs
$ cp install-config.yaml ~/clusterconfigs

Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster:

$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off

Remove old bootstrap resources if any are left over from a previous deployment attempt.
8.3.6.2. Setting proxy settings within the install-config.yaml file (optional)
To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file.
The following is an example of noProxy with values.
noProxy: .example.com,172.22.0.0/24,10.10.0.0/24
With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair.
Key considerations:
- If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http://.
- If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail.
- Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.
When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately.
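The environment variables described above can be set as in the following sketch; the proxy host and port are placeholders for your environment's values:

```shell
# Export proxy settings on the provisioner node before running the
# installer. proxy.example.com:3128 is a hypothetical proxy endpoint.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Include the provisioning network CIDR (172.22.0.0/24 by default) in
# NO_PROXY; omitting it causes the installation to fail.
export NO_PROXY=.example.com,172.22.0.0/24,10.10.0.0/24
```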
8.3.6.3. Modifying the install-config.yaml file for no provisioning network (optional)
To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file.
platform:
  baremetal:
    apiVIP: <apiVIP>
    ingressVIP: <ingress/wildcard VIP>
    provisioningNetwork: "Disabled"
- 1
- Add the provisioningNetwork configuration setting, if needed, and set it to Disabled.
The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.
8.3.6.4. Modifying the install-config.yaml file for dual-stack network (optional)
To deploy an OpenShift Container Platform cluster with dual-stack networking, edit the machineNetwork, clusterNetwork, and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. Ensure the first CIDR entry is the IPv4 setting and the second CIDR entry is the IPv6 setting.
The API VIP IP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, the IPv4 entries must go before the IPv6 entries.
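A dual-stack networking stanza might look like the following sketch, with the IPv4 entry listed before the IPv6 entry in each setting; the CIDR values are illustrative, not prescriptive:

```yaml
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24
  - cidr: 2620:52:0:1302::/64
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
```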
8.3.6.5. Configuring managed Secure Boot in the install-config.yaml file (optional)
You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish, redfish-virtualmedia, or idrac-virtualmedia. To enable managed Secure Boot, add the bootMode configuration setting to each node:
Example
- 1
- Ensure the bmc.address setting uses redfish, redfish-virtualmedia, or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details.
- 2
- The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot.
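The annotated example itself was lost in conversion; a sketch of a host entry with managed Secure Boot enabled, using placeholder names and addresses, might look like:

```yaml
hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: redfish://<out-of-band-ip>/redfish/v1/Systems/1  # 1
      username: <user>
      password: <password>
    bootMACAddress: <NIC1-mac-address>
    bootMode: UEFISecureBoot  # 2
```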
See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media.
Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities.
8.3.6.6. Additional install-config parameters
See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.
| Parameters | Default | Description |
|---|---|---|
| baseDomain | | The domain name for the cluster. For example, example.com. |
| bootMode | UEFI | The boot mode for a node. Options are legacy, UEFI, and UEFISecureBoot. |
| sshKey | | The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. |
| pullSecret | | The pullSecret configuration setting contains a copy of the pull secret downloaded when preparing the provisioner node. |
| metadata: name: | | The name to be given to the OpenShift Container Platform cluster. For example, openshift. |
| networking: machineCIDR: | | The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24. |
| compute: - name: worker | | The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. |
| compute: replicas: 2 | | Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. |
| controlPlane: name: master | | The OpenShift Container Platform cluster requires a name for control plane (master) nodes. |
| controlPlane: replicas: 3 | | Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. |
| provisioningNetworkInterface | | The name of the network interface on nodes connected to the provisioning network. |
| defaultMachinePlatform | | The default configuration used for machine pools without a platform configuration. |
| apiVIP | api.<clustername.clusterdomain> | (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. |
| ingressVIP | test.apps.<clustername.clusterdomain> | (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. |
| Parameters | Default | Description |
|---|---|---|
| provisioningDHCPRange | 172.22.0.10,172.22.0.100 | Defines the IP range for nodes on the provisioning network. |
| provisioningNetworkCIDR | 172.22.0.0/24 | The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. |
| clusterProvisioningIP | The third IP address of the provisioningNetworkCIDR. | The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3. |
| bootstrapProvisioningIP | The second IP address of the provisioningNetworkCIDR. | The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2. |
| externalBridge | baremetal | The name of the bare metal bridge of the hypervisor attached to the bare metal network. |
| provisioningBridge | provisioning | The name of the provisioning bridge on the provisioner host attached to the provisioning network. |
| defaultMachinePlatform | | The default configuration used for machine pools without a platform configuration. |
| bootstrapOSImage | | A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. |
| clusterOSImage | | A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. |
| provisioningNetwork | | The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. |
| httpProxy | | Set this parameter to the appropriate HTTP proxy used within your environment. |
| httpsProxy | | Set this parameter to the appropriate HTTPS proxy used within your environment. |
| noProxy | | Set this parameter to the appropriate list of exclusions for proxy usage within your environment. |
Hosts
The hosts parameter is a list of separate bare metal assets used to build the cluster.
| Name | Default | Description |
|---|---|---|
| name | | The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0. |
| role | | The role of the bare metal node. Either master or worker. |
| bmc | | Connection details for the baseboard management controller. See the BMC addressing section for additional details. |
| bootMACAddress | | The MAC address of the NIC that the host uses for the provisioning network. Note: You must provide a valid MAC address from the host if you disabled the provisioning network. |
8.3.6.7. BMC addressing
Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.
IPMI
Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file.
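The original example was lost in conversion; a sketch of an IPMI bmc entry with placeholder values looks like:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
```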
The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.
Redfish network boot
To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.
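The original examples were lost in conversion; a sketch of a Redfish bmc entry, shown here with disableCertificateVerification: True for self-signed certificates and placeholder values throughout, looks like:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True
```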
8.3.6.8. BMC addressing for Dell iDRAC
The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.
- 1
- The address configuration setting specifies the protocol.
For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI.
| Protocol | Address Format |
|---|---|
| iDRAC virtual media | idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 |
| Redfish network boot | redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 |
| IPMI | ipmi://<out-of-band-ip> |
Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell’s idrac-virtualmedia uses the Redfish standard with Dell’s OEM extensions.
See the following sections for additional details.
Redfish virtual media for Dell iDRAC
For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work.
The following example demonstrates using iDRAC virtual media within the install-config.yaml file.
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.
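The original examples were lost in conversion; a sketch of a Dell iDRAC virtual media bmc entry with placeholder values looks like:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True
```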
Currently, Redfish is only supported on Dell with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plugin defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.
Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.
Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell’s idrac-virtualmedia:// protocol uses the Redfish standard with Dell’s OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware.
Redfish network boot for iDRAC
To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.
Currently, Redfish is only supported on Dell hardware with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plugin defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.
Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.
The redfish:// URL protocol corresponds to the redfish hardware type in Ironic.
8.3.6.9. BMC addressing for HPE iLO
The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.
- 1
- The address configuration setting specifies the protocol.
For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.
| Protocol | Address Format |
|---|---|
| Redfish virtual media | redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 |
| Redfish network boot | redfish://<out-of-band-ip>/redfish/v1/Systems/1 |
| IPMI | ipmi://<out-of-band-ip> |
See the following sections for additional details.
Redfish virtual media for HPE iLO
To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file.
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.
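The original examples were lost in conversion; a sketch of an HPE iLO Redfish virtual media bmc entry with placeholder values looks like:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True
```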
Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.
Redfish network boot for HPE iLO
To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.
8.3.6.10. BMC addressing for Fujitsu iRMC
The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.
- 1
- The address configuration setting specifies the protocol.
For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI.
| Protocol | Address Format |
|---|---|
| iRMC | irmc://<out-of-band-ip> |
| IPMI | ipmi://<out-of-band-ip> |
iRMC
Fujitsu nodes can use irmc://<out-of-band-ip>, which defaults to port 443. The following example demonstrates an iRMC configuration within the install-config.yaml file.
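The original example was lost in conversion; a sketch of an iRMC bmc entry with placeholder values looks like:

```yaml
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: irmc://<out-of-band-ip>
          username: <user>
          password: <password>
```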
Currently, Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal.
8.3.6.11. Root device hints
The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
| Subfield | Description |
|---|---|
| deviceName | A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly. |
| hctl | A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. |
| model | A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. |
| vendor | A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. |
| serialNumber | A string containing the device serial number. The hint must match the actual value exactly. |
| minSizeGigabytes | An integer representing the minimum size of the device in gigabytes. |
| wwn | A string containing the unique storage identifier. The hint must match the actual value exactly. |
| wwnWithExtension | A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. |
| wwnVendorExtension | A string containing the unique vendor storage identifier. The hint must match the actual value exactly. |
| rotational | A boolean indicating whether the device should be a rotating disk (true) or not (false). |
Example usage
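The usage example itself was lost in conversion; a sketch of a host entry that pins the RHCOS image to a particular disk, with placeholder values, looks like:

```yaml
hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: ipmi://<out-of-band-ip>
      username: <user>
      password: <password>
    bootMACAddress: <NIC1-mac-address>
    rootDeviceHints:
      deviceName: "/dev/sda"
```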
8.3.6.12. Creating the OpenShift Container Platform manifests
Create the OpenShift Container Platform manifests.
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests

INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
8.3.6.13. Configuring NTP for disconnected clusters (optional)
OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes.
OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, clusters that are not connected to a routable network, and that therefore have no access to a higher stratum NTP server, can still be installed and operated.
Procedure
Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

Note: See "Creating machine configs with Butane" for information about Butane.

Butane config example

- 1
- You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
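The Butane example itself was lost in conversion; the following is a sketch of what a 99-master-chrony-conf-override.bu could contain, assuming the default control plane host naming scheme, with the chrony.conf contents treated as illustrative rather than authoritative:

```yaml
variant: openshift
version: 4.8.0
metadata:
  name: 99-master-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # The control plane nodes serve time to each other and to the workers.
          server openshift-master-0.<cluster-name>.<domain> iburst  # 1
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst

          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          logdir /var/log/chrony

          # Allow NTP client access from the local network.
          allow all
          # Serve time even if not synchronized to a time source.
          local stratum 3 orphan
```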
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml

Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

Butane config example

- 1
- You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
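The worker Butane example was also lost in conversion; a sketch of what a 99-worker-chrony-conf-override.bu could contain, again assuming the default control plane host naming scheme and illustrative chrony.conf contents, looks like:

```yaml
variant: openshift
version: 4.8.0
metadata:
  name: 99-worker-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # The worker nodes use the control plane nodes as NTP servers.
          server openshift-master-0.<cluster-name>.<domain> iburst  # 1
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst

          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          logdir /var/log/chrony
```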
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
8.3.6.14. (Optional) Configure network components to run on the control plane
You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes. When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes.
Procedure
Change to the directory storing the install-config.yaml file:

$ cd ~/clusterconfigs

Switch to the manifests subdirectory:

$ cd manifests

Create a file named cluster-network-avoid-workers-99-config.yaml:

$ touch cluster-network-avoid-workers-99-config.yaml

Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration. This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
- openshift-ingress-operator
- keepalived
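One known form of this CR (a sketch; verify the Ignition version and file path against your OpenShift release) is a MachineConfig that overwrites the keepalived static pod manifest on worker nodes with empty contents, so keepalived, and with it the VIPs, can run only on the control plane:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-worker-fix-ipi-rwn
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # Blank out the keepalived static pod manifest on workers.
        - path: /etc/kubernetes/manifests/keepalived.yaml
          mode: 0644
          contents:
            source: data:,
```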
Save the cluster-network-avoid-workers-99-config.yaml file.

Create a manifests/cluster-ingress-default-ingresscontroller.yaml file:
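A sketch of this file's contents, pinning the default IngressController to the control plane nodes via a node selector:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        # Schedule the router pods on control plane nodes only.
        node-role.kubernetes.io/master: ""
```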
Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster.

Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true. Control plane nodes are not schedulable by default. For example:

$ sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" clusterconfigs/manifests/cluster-scheduler-02-config.yml

Note: If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail.
8.3.7. Creating a disconnected registry (optional)
In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This can improve network efficiency, and it is necessary when the cluster nodes are on a network that does not have access to the internet.
A local, or mirrored, copy of the registry requires the following:
- A certificate for the registry node. This can be a self-signed certificate.
- A web server, which a container on the registry node serves.
- An updated pull secret that contains the certificate and local repository information.
Creating a disconnected registry on a registry node is optional. The subsequent subsections are labeled "(optional)" because you need to execute them only when creating a disconnected registry. When you do create one, execute all of the subsections labeled "(optional)".
8.3.7.1. Preparing the registry node to host the mirrored registry (optional)
Make the following changes to the registry node.
Procedure
Open the firewall port on the registry node:

$ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent
$ sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent
$ sudo firewall-cmd --reload

Install the required packages for the registry node:

$ sudo yum -y install python3 podman httpd httpd-tools jq

Create the directory structure where the repository information will be held:

$ sudo mkdir -p /opt/registry/{auth,certs,data}
8.3.7.2. Generating the self-signed certificate (optional)
Generate a self-signed certificate for the registry node and put it in the /opt/registry/certs directory.
Procedure
Adjust the certificate information as appropriate and generate the self-signed certificate.

Note: When replacing <Country Name>, ensure that it contains only two letters. For example, US.

Update the registry node's ca-trust with the new certificate:

$ sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
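The certificate generation step above can be sketched as follows. Assumptions: the subject fields are placeholders to adjust for your organization, and CERT_DIR defaults to a local directory here for illustration, whereas the guide uses /opt/registry/certs:

```shell
# Sketch of the self-signed certificate generation step.
# CERT_DIR defaults to a local directory for illustration; the guide
# writes to /opt/registry/certs (which requires sudo).
CERT_DIR="${CERT_DIR:-./registry-certs}"
mkdir -p "${CERT_DIR}"

# Use the registry node's FQDN as the certificate CN; fall back to a
# placeholder name if the host has no resolvable FQDN.
host_fqdn=$(hostname -f 2>/dev/null || echo registry.example.com)

openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout "${CERT_DIR}/domain.key" \
  -x509 -days 365 \
  -out "${CERT_DIR}/domain.crt" \
  -subj "/C=US/ST=NorthCarolina/L=Raleigh/O=ExampleOrg/OU=Eng/CN=${host_fqdn}"
```

The -nodes flag leaves the key unencrypted so the registry container can read it without a passphrase.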
8.3.7.3. Creating the registry podman container (optional)
The registry container uses the /opt/registry directory for certificates, authentication files, and data storage.
The registry container uses httpd and needs an htpasswd file for authentication.
Procedure
Create an htpasswd file in /opt/registry/auth for the container to use:

$ htpasswd -bBc /opt/registry/auth/htpasswd <user> <passwd>

Replace <user> with the user name and <passwd> with the password.

Create and start the registry container:

$ podman start ocpdiscon-registry
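The create step that the podman start command assumes might look like the following sketch. The environment variables are standard Docker registry configuration options, and the volume mounts map the /opt/registry directories prepared earlier; verify the image reference for your environment:

```
$ podman create --name ocpdiscon-registry -p 5000:5000 \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \
  -e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -v /opt/registry/certs:/certs:z \
  docker.io/library/registry:2
```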
8.3.7.4. Copy and update the pull-secret (optional)
Copy the pull secret file from the provisioner node to the registry node and modify it to include the authentication information for the new registry node.
Procedure
Copy the pull-secret.txt file:

$ scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt

Update the host_fqdn environment variable with the fully qualified domain name of the registry node:

$ host_fqdn=$( hostname --long )

Update the b64auth environment variable with the base64 encoding of the http credentials used to create the htpasswd file:

$ b64auth=$( echo -n '<username>:<passwd>' | openssl base64 )

Replace <username> with the user name and <passwd> with the password.

Set the AUTHSTRING environment variable to use the base64 authorization string. The $USER variable is an environment variable containing the name of the current user:

$ AUTHSTRING="{\"$host_fqdn:5000\": {\"auth\": \"$b64auth\",\"email\": \"$USER@redhat.com\"}}"

Update the pull-secret.txt file:

$ jq ".auths += $AUTHSTRING" < pull-secret.txt > pull-secret-update.txt
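To illustrate how the auth entry is assembled, the following sketch runs the same steps with hypothetical credentials and a placeholder registry name; the real values come from your htpasswd user and the registry node's FQDN:

```shell
# Build the registry auth entry exactly as the steps above do,
# using dummy values for illustration.
host_fqdn=registry.example.com
b64auth=$(echo -n 'dummyuser:dummypass' | openssl base64)
AUTHSTRING="{\"$host_fqdn:5000\": {\"auth\": \"$b64auth\",\"email\": \"admin@example.com\"}}"
echo "$AUTHSTRING"
```

The base64 string decodes back to the original user:password pair, which is how the registry authenticates pulls made with the updated pull secret.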
8.3.7.5. Mirroring the repository (optional)
Procedure
Copy the oc binary from the provisioner node to the registry node:

$ sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin

Set the required environment variables.

Set the release version:

$ VERSION=<release_version>

For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.

Set the local registry name and host port:

$ LOCAL_REG='<local_registry_host_name>:<local_registry_host_port>'

For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

Set the local repository name:

$ LOCAL_REPO='<local_repository_name>'

For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

Mirror the remote install images to the local repository. The $UPSTREAM_REPO environment variable must contain the upstream release image to mirror, for example the ocp-release image from quay.io that matches $VERSION:

$ /usr/local/bin/oc adm release mirror \
  -a pull-secret-update.txt \
  --from=$UPSTREAM_REPO \
  --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
  --to=$LOCAL_REG/$LOCAL_REPO
8.3.7.6. Modify the install-config.yaml file to use the disconnected registry (optional)
On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node’s certificate and registry information.
Procedure
Add the disconnected registry node's certificate to the install-config.yaml file. The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces:

$ echo "additionalTrustBundle: |" >> install-config.yaml
$ sed -e 's/^/  /' /opt/registry/certs/domain.crt >> install-config.yaml

Add the mirror information for the registry to the install-config.yaml file.

Note: Replace registry.example.com with the registry's fully qualified domain name.
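The resulting additions to install-config.yaml typically look like the following sketch. The registry name and repository path are placeholders, and the source repositories shown are the ones OpenShift release images are normally mirrored from; verify them against your mirror command's output:

```yaml
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  MIIF...   # contents of /opt/registry/certs/domain.crt
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```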
8.3.8. Deploying routers on worker nodes
During installation, the installer deploys router pods on worker nodes. By default, the installer deploys two router pods. If the initial cluster has only one worker node, or if a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas.

If the cluster has at least two worker nodes, you can skip this section.

If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default, and you can also skip this section.
Procedure
Create a router-replicas.yaml file.

Note: Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1. If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate.

Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory:

$ cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml
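The router-replicas.yaml file might look like the following sketch (an IngressController override; the worker node selector is an assumption to pin routers to workers, so verify against your version's documentation):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  # Set the desired number of router pods.
  replicas: <num-of-router-pods>
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
```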
8.3.9. Validation checklist for installation
- ❏ OpenShift Container Platform installer has been retrieved.
- ❏ OpenShift Container Platform installer has been extracted.
- ❏ Required parameters for the install-config.yaml have been configured.
- ❏ The hosts parameter for the install-config.yaml has been configured.
- ❏ The bmc parameter for the install-config.yaml has been configured.
- ❏ Conventions for the values configured in the bmc address field have been applied.
- ❏ (optional) Created a disconnected registry.
- ❏ (optional) Validated disconnected registry settings if in use.
- ❏ (optional) Deployed routers on worker nodes.
8.3.10. Deploying the cluster via the OpenShift Container Platform installer
Run the OpenShift Container Platform installer:
$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
8.3.11. Following the installation
During the deployment process, you can check the installation's overall status by running the tail command on the .openshift_install.log log file in the installation directory:

$ tail -f /path/to/install-dir/.openshift_install.log
8.3.12. Verifying static IP address configuration
If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node’s network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address.
The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing.
Verify the network configuration is working properly.
Procedure
- Check the network interface configuration on the node.
- Turn off the DHCP server, reboot the OpenShift Container Platform node, and ensure that the network configuration works properly.
8.3.13. Preparing to reinstall a cluster on bare metal
Before you reinstall a cluster on bare metal, you must perform cleanup operations.
Procedure
- Remove or reformat the disks for the bootstrap, control plane (also known as master) node, and worker nodes. If you are working in a hypervisor environment, you must add any disks you removed.
Delete the artifacts that the previous installation generated:

$ cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \
    .openshift_install.log .openshift_install_state.json

- Generate new manifests and Ignition config files. See "Creating the Kubernetes manifest and Ignition config files" for more information.
- Upload the new bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. This will overwrite the previous Ignition files.
8.4. Installer-provisioned post-installation configuration
After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.
8.4.1. Configuring NTP for disconnected clusters (optional)
OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment.
OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and therefore do not have access to a higher stratum NTP server.
Procedure
Create a Butane config,
99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

Note: See "Creating machine configs with Butane" for information about Butane.

Replace <cluster-name> with the name of the cluster and <domain> with the fully qualified domain name.
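A hedged sketch of the control plane Butane config (assumptions: Butane variant openshift/4.8.0 and a minimal chrony.conf; in a disconnected cluster the control plane serves orphan-mode time, so restrict the allow directive to your machine network in practice):

```yaml
variant: openshift
version: 4.8.0
metadata:
  name: 99-master-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # In a disconnected environment there may be no upstream NTP
          # source; the control plane nodes serve time to the cluster.
          driftfile /var/lib/chrony/drift
          rtcsync
          logdir /var/log/chrony
          # Allow NTP client access (restrict the subnet in practice).
          allow all
          # Serve time even when not synchronized to a time source.
          local stratum 3 orphan
```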
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml

Create a Butane config,
99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Replace <cluster-name> with the name of the cluster and <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml

Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes:

$ oc apply -f 99-master-chrony-conf-override.yaml

Example output

machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created

Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes:

$ oc apply -f 99-worker-chrony-conf-override.yaml

Example output

machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created

Check the status of the applied NTP settings:

$ oc describe machineconfigpool
8.4.2. Enabling a provisioning network after installation
The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.
In OpenShift Container Platform 4.8 and later, you can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).
Prerequisites
- A dedicated physical network must exist, connected to all worker and control plane nodes.
- You must isolate the native, untagged physical network.
- The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
- You can omit the provisioningInterface setting in OpenShift Container Platform 4.9 to use the bootMACAddress configuration setting.
Procedure
- When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1.
- Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.

Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

$ oc get provisioning -o yaml > enable-provisioning-nw.yaml

Modify the provisioning CR file:

$ vim ~/enable-provisioning-nw.yaml

Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningOSDownloadURL, provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNamespaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.
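The modified CR might look like the following sketch. All addresses, the interface name, and the image URL are illustrative placeholders, and the settings are explained in the list that follows:

```yaml
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed
  provisioningOSDownloadURL: http://192.168.0.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha>
  provisioningIP: 192.168.0.2
  provisioningNetworkCIDR: 192.168.0.1/24
  provisioningDHCPRange: 192.168.0.64,192.168.0.253
  provisioningInterface: eno1
  watchAllNamespaces: false
```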
- The provisioningNetwork setting is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
- The provisioningOSDownloadURL setting is a valid HTTPS URL with a valid sha256 checksum that enables the Metal3 pod to download a qcow2 operating system image ending in .qcow2.gz or .qcow2.xz. This field is required whether the provisioning network is Managed, Unmanaged, or Disabled. For example: http://192.168.0.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha>.
- The provisioningIP setting is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
- The provisioningNetworkCIDR setting is the Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
- The provisioningDHCPRange setting is the DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
- The provisioningInterface setting is the NIC name for the provisioning interface on cluster nodes. It is only applicable to Managed and Unmanaged provisioning networks. Omit it if the provisioning network is Disabled, or omit it to use the bootMACAddress configuration setting instead.
- Set watchAllNamespaces to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
- Save the changes to the provisioning CR file.
Apply the provisioning CR file to the cluster:
$ oc apply -f enable-provisioning-nw.yaml
8.4.3. Configuring an external load balancer
You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.
Prerequisites
- On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system.
- Load balance the API port, 6443, between each of the control plane nodes.
- Load balance the application ports, 443 and 80, between all of the compute nodes.
- On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster.
Your load balancer must be able to access every machine in your cluster. Methods to allow this access include:
- Attaching the load balancer to the cluster’s machine subnet.
- Attaching floating IP addresses to machines that use the load balancer.
External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
Procedure
Enable access to the cluster from your load balancer on ports 6443, 443, and 80.
As an example, note this HAProxy configuration:
A section of a sample HAProxy configuration
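A hedged sketch of such a section (server names, IP addresses, and node counts are placeholders; adapt the backends to your actual control plane and compute node inventory):

```
listen api-server-6443
    bind *:6443
    mode tcp
    server master-0 <master0_ip>:6443 check inter 1s
    server master-1 <master1_ip>:6443 check inter 1s
    server master-2 <master2_ip>:6443 check inter 1s

listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server worker-0 <worker0_ip>:443 check inter 1s
    server worker-1 <worker1_ip>:443 check inter 1s

listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server worker-0 <worker0_ip>:80 check inter 1s
    server worker-1 <worker1_ip>:80 check inter 1s
```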
Add records to your DNS server for the cluster API and apps over the load balancer. For example:
<load_balancer_ip_address> api.<cluster_name>.<base_domain>
<load_balancer_ip_address> apps.<cluster_name>.<base_domain>

From a command line, use curl to verify that the external load balancer and DNS configuration are operational.

Verify that the cluster API is accessible:

$ curl https://<loadbalancer_ip_address>:6443/version --insecure

If the configuration is correct, you receive a JSON object in response containing the Kubernetes version information.

Verify that cluster applications are accessible:

Note: You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser.

$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

If the configuration is correct, you receive an HTTP 200 response.
8.5. Expanding the cluster
After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites.
Expanding the cluster by using Redfish virtual media requires meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details.
8.5.1. Preparing the bare metal node
Expanding the cluster requires a DHCP server. Each node must have a DHCP reservation.
Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses in the DHCP server with an infinite lease. After the installer provisions the node successfully, the dispatcher script will check the node’s network configuration. If the dispatcher script finds that the network configuration contains a DHCP infinite lease, it will recreate the connection as a static IP connection using the IP address from the DHCP infinite lease. NICs without DHCP infinite leases will remain unmodified.
Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.
Preparing the bare metal node requires executing the following procedure from the provisioner node.
Procedure
Get the oc binary, if needed. It should already exist on the provisioner node:

$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc

$ sudo cp oc /usr/local/bin

- Power off the bare metal node by using the baseboard management controller, and ensure it is off.
Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create base64 strings from the user name and password:

$ echo -ne "root" | base64

$ echo -ne "password" | base64

Create a configuration file for the bare metal node:

$ vim bmh.yaml

Replace <num> with the worker number of the bare metal node in the two name fields and the credentialsName field. Replace <base64-of-uid> with the base64 string of the user name. Replace <base64-of-pwd> with the base64 string of the password. Replace <NIC1-mac-address> with the MAC address of the bare metal node's first NIC.

See the BMC addressing section for additional BMC configuration options. Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc-ip> with the IP address of the bare metal node's baseboard management controller.

Note: If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See Diagnosing a host duplicate MAC address for more information.
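Given the placeholders explained above, bmh.yaml might look like the following sketch (a Secret for the BMC credentials plus a BareMetalHost; field names per the metal3.io/v1alpha1 API, so verify against your version):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-<num>-bmc-secret
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64-of-uid>
  password: <base64-of-pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-worker-<num>
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <NIC1-mac-address>
  bmc:
    address: <protocol>://<bmc-ip>
    credentialsName: openshift-worker-<num>-bmc-secret
```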
Create the bare metal node:

$ oc -n openshift-machine-api create -f bmh.yaml

Example output

secret/openshift-worker-<num>-bmc-secret created
baremetalhost.metal3.io/openshift-worker-<num> created

Where <num> is the worker number.

Power up and inspect the bare metal node:

$ oc -n openshift-machine-api get bmh openshift-worker-<num>

Where <num> is the worker node number.

Example output

NAME                     STATUS   PROVISIONING STATUS   CONSUMER   BMC                       HARDWARE PROFILE   ONLINE   ERROR
openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true
8.5.2. Replacing a bare-metal control plane node
Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node.
If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true.
Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section.
Procedure
Ensure that the Bare Metal Operator is available:
$ oc get clusteroperator baremetal

Example output

NAME        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
baremetal   4.8.0     True        False         False      3d15h

Remove the old BareMetalHost and Machine objects:

$ oc delete bmh -n openshift-machine-api <host_name>
$ oc delete machine -n openshift-machine-api <machine_name>

Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field.

After you remove the BareMetalHost and Machine objects, the machine controller automatically deletes the Node object.

Create the new BareMetalHost object and the secret to store the BMC credentials:
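The combined definition might look like the following sketch, using the placeholders explained in the notes that follow (field names per the metal3.io/v1alpha1 API; verify against your version):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-control-plane-<num>-bmc-secret
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64_of_uid>
  password: <base64_of_pwd>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-control-plane-<num>
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <NIC1_mac_address>
  bmc:
    address: <protocol>://<bmc_ip>
    credentialsName: openshift-control-plane-<num>-bmc-secret
```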
1 4 6: Replace <num> with the control plane number of the bare metal node in the name fields and the credentialsName field.
2: Replace <base64_of_uid> with the base64 string of the user name.
3: Replace <base64_of_pwd> with the base64 string of the password.
5: Replace <protocol> with the BMC protocol, such as redfish, redfish-virtualmedia, idrac-virtualmedia, or others. Replace <bmc_ip> with the IP address of the bare metal node's baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section.
7: Replace <NIC1_mac_address> with the MAC address of the bare metal node's first NIC.
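Putting the callouts together, the following is one hedged sketch of assembling the secret and the BareMetalHost manifest on the provisioner. The node number (2), the root/calvin credentials, the Redfish systems path, and the file name are illustrative placeholders, not values from this guide:

```shell
# Sketch only: generate the base64 credential strings and write a minimal
# Secret plus BareMetalHost manifest. All concrete values are placeholders.
BMC_USER_B64=$(printf '%s' 'root' | base64)     # base64 of the BMC user name
BMC_PWD_B64=$(printf '%s' 'calvin' | base64)    # base64 of the BMC password

cat > openshift-control-plane-2.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: openshift-control-plane-2-bmc-secret
  namespace: openshift-machine-api
data:
  username: ${BMC_USER_B64}
  password: ${BMC_PWD_B64}
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-control-plane-2
  namespace: openshift-machine-api
spec:
  bmc:
    address: redfish://<bmc_ip>/redfish/v1/Systems/1
    credentialsName: openshift-control-plane-2-bmc-secret
  bootMACAddress: <NIC1_mac_address>
  online: true
EOF

echo "wrote openshift-control-plane-2.yaml"
```

After substituting real values, applying the file with oc apply -f triggers inspection, and the host should then appear in oc get bmh output.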
After the inspection is complete, the BareMetalHost object is created and available to be provisioned.
View available BareMetalHost objects:
$ oc get bmh -n openshift-machine-api
Example output
There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object.
Create a Machine object:
To view the BareMetalHost objects, run the following command:
$ oc get bmh -A
Example output
After the RHCOS installation, verify that the BareMetalHost is added to the cluster:
$ oc get nodes
Example output
Note: After you replace the control plane node, the etcd pod running on the new node is in CrashLoopBackOff status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information.
8.5.3. Diagnosing a duplicate MAC address when provisioning a new host in the cluster
If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host.
You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace.
Prerequisites
- Install an OpenShift Container Platform cluster on bare metal.
- Install the OpenShift Container Platform CLI oc.
- Log in as a user with cluster-admin privileges.
Procedure
To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following:
Get the bare-metal hosts running in the openshift-machine-api namespace:
$ oc get bmh -n openshift-machine-api
Example output
To see more detailed information about the status of the failing host, run the following command, replacing <bare_metal_host_name> with the name of the host:
$ oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml
Example output
8.5.4. Provisioning the bare metal node
Provisioning the bare metal node requires executing the following procedure from the provisioner node.
Procedure
Ensure the PROVISIONING STATUS is ready before provisioning the bare metal node:
$ oc -n openshift-machine-api get bmh openshift-worker-<num>
Where <num> is the worker node number.
NAME                     STATUS   PROVISIONING STATUS   CONSUMER   BMC                       HARDWARE PROFILE   ONLINE   ERROR
openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true
Get a count of the number of worker nodes:
$ oc get nodes
Get the machine set:
$ oc get machinesets -n openshift-machine-api
NAME                             DESIRED   CURRENT   READY   AVAILABLE   AGE
...
openshift-worker-0.example.com   1         1         1       1           55m
openshift-worker-1.example.com   1         1         1       1           55m
Increase the number of worker nodes by one:
$ oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api
Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the machine set from the previous step.
Check the status of the bare metal node:
$ oc -n openshift-machine-api get bmh openshift-worker-<num>
Where <num> is the worker node number. The status changes from ready to provisioning.
NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
openshift-worker-<num>   OK       provisioning          openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true
The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the status changes to provisioned.
NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
openshift-worker-<num>   OK       provisioned           openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true
After provisioning completes, ensure the bare metal node is ready:
$ oc get nodes
You can also check the kubelet:
$ ssh openshift-worker-<num>
[kni@openshift-worker-<num>]$ journalctl -fu kubelet
8.6. Troubleshooting
8.6.1. Troubleshooting the installer workflow
Prior to troubleshooting the installation environment, it is critical to understand the overall flow of the installer-provisioned installation on bare metal. The diagrams below provide a troubleshooting flow with a step-by-step breakdown for the environment.
Workflow 1 of 4 illustrates a troubleshooting workflow when the install-config.yaml file has errors or the Red Hat Enterprise Linux CoreOS (RHCOS) images are inaccessible. Troubleshooting suggestions can be found at Troubleshooting install-config.yaml.
Workflow 2 of 4 illustrates a troubleshooting workflow for bootstrap VM issues, bootstrap VMs that cannot boot up the cluster nodes, and inspecting logs. When installing an OpenShift Container Platform cluster without the provisioning network, this workflow does not apply.
Workflow 3 of 4 illustrates a troubleshooting workflow for cluster nodes that will not PXE boot. If you install by using Redfish virtual media, each node must meet minimum firmware requirements for the installer to deploy the node. See "Firmware requirements for installing with virtual media" in the Prerequisites section for additional details.
Workflow 4 of 4 illustrates a troubleshooting workflow from a non-accessible API to a validated installation.
8.6.2. Troubleshooting install-config.yaml
The install-config.yaml configuration file represents all of the nodes that are part of the OpenShift Container Platform cluster. The file contains the required options, including, but not limited to, apiVersion, baseDomain, imageContentSources, and the virtual IP addresses. If errors occur early in the deployment of the OpenShift Container Platform cluster, the errors are likely in the install-config.yaml configuration file.
Procedure
- Use the guidelines in YAML-tips.
- Verify the YAML syntax is correct using syntax-check.
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) QEMU images are properly defined and accessible via the URL provided in the install-config.yaml file. For example:
$ curl -s -o /dev/null -I -w "%{http_code}\n" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.x86_64.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7
If the output is 200, there is a valid response from the webserver storing the bootstrap VM image.
8.6.3. Bootstrap VM issues
The OpenShift Container Platform installation program spawns a bootstrap node virtual machine, which handles provisioning the OpenShift Container Platform cluster nodes.
Procedure
About 10 to 15 minutes after triggering the installation program, use the virsh command to check that the bootstrap VM is operational:
$ sudo virsh list
 Id    Name                           State
 --------------------------------------------
 12    openshift-xf6fq-bootstrap      running
Note: The name of the bootstrap VM is always the cluster name followed by a random set of characters and ending in the word "bootstrap."
If the bootstrap VM is not running after 10-15 minutes, troubleshoot why it is not running. Possible issues include:
Verify libvirtd is running on the system:
$ systemctl status libvirtd
If the bootstrap VM is operational, log in to it.
Use the virsh console command to find the IP address of the bootstrap VM:
$ sudo virsh console example.com
Important: When deploying an OpenShift Container Platform cluster without the provisioning network, you must use a public IP address and not a private IP address like 172.22.0.2.
After you obtain the IP address, log in to the bootstrap VM using the ssh command:
Note: In the console output of the previous step, you can use the IPv6 IP address provided by ens3 or the IPv4 IP provided by ens4.
$ ssh core@172.22.0.2
If you are not successful logging in to the bootstrap VM, you have likely encountered one of the following scenarios:
- You cannot reach the 172.22.0.0/24 network. Verify the network connectivity between the provisioner and the provisioning network bridge. This issue might occur if you are using a provisioning network.
- You cannot reach the bootstrap VM through the public network. When attempting to SSH via the baremetal network, verify connectivity on the provisioner host, specifically around the baremetal network bridge.
- You encountered Permission denied (publickey,password,keyboard-interactive). When attempting to access the bootstrap VM, a Permission denied error might occur. Verify that the SSH key for the user attempting to log in to the VM is set within the install-config.yaml file.
8.6.3.1. Bootstrap VM cannot boot up the cluster nodes
During the deployment, it is possible for the bootstrap VM to fail to boot the cluster nodes, which prevents the VM from provisioning the nodes with the RHCOS image. This scenario can arise due to:
- A problem with the install-config.yaml file.
- Issues with out-of-band network access when using the baremetal network.
To verify the issue, there are three containers related to ironic:
- ironic-api
- ironic-conductor
- ironic-inspector
Procedure
Log in to the bootstrap VM:
$ ssh core@172.22.0.2
To check the container logs, execute the following:
[core@localhost ~]$ sudo podman logs -f <container-name>
Replace <container-name> with one of ironic-api, ironic-conductor, or ironic-inspector. If you encounter an issue where the control plane nodes are not booting up via PXE, check the ironic-conductor pod. The ironic-conductor pod contains the most detail about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI.
Potential reason
The cluster nodes might be in the ON state when the deployment starts.
Solution
Power off the OpenShift Container Platform cluster nodes before you begin the installation over IPMI:
$ ipmitool -I lanplus -U root -P <password> -H <out-of-band-ip> power off
8.6.3.2. Inspecting logs
When experiencing issues downloading or accessing the RHCOS images, first verify that the URL is correct in the install-config.yaml configuration file.
Example of internal webserver hosting RHCOS images
bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.x86_64.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c
clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.x86_64.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0
The ipa-downloader and coreos-downloader containers download resources from a webserver or the external quay.io registry, whichever the install-config.yaml configuration file specifies. Verify the following two containers are up and running and inspect their logs as needed:
- ipa-downloader
- coreos-downloader
Procedure
Log in to the bootstrap VM:
$ ssh core@172.22.0.2
Check the status of the ipa-downloader and coreos-downloader containers within the bootstrap VM:
[core@localhost ~]$ sudo podman logs -f ipa-downloader
[core@localhost ~]$ sudo podman logs -f coreos-downloader
If the bootstrap VM cannot access the URL to the images, use the curl command to verify that the VM can access the images.
To inspect the bootkube logs, which indicate whether all the containers launched during the deployment phase, execute the following:
[core@localhost ~]$ journalctl -xe
[core@localhost ~]$ journalctl -b -f -u bootkube.service
Verify all the pods, including dnsmasq, mariadb, httpd, and ironic, are running:
[core@localhost ~]$ sudo podman ps
If there are issues with the pods, check the logs of the containers with issues. To check the logs of ironic-api, execute the following:
[core@localhost ~]$ sudo podman logs <ironic-api>
8.6.4. Cluster nodes will not PXE boot
When OpenShift Container Platform cluster nodes do not PXE boot, execute the following checks on the affected cluster nodes. This procedure does not apply when installing an OpenShift Container Platform cluster without the provisioning network.
Procedure
- Check the network connectivity to the provisioning network.
- Ensure PXE is enabled on the NIC for the provisioning network and PXE is disabled for all other NICs.
- Verify that the install-config.yaml configuration file has the proper hardware profile and boot MAC address for the NIC connected to the provisioning network. For example:
Control plane node settings
bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
hardwareProfile: default          # control plane node settings
Worker node settings
bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
hardwareProfile: unknown          # worker node settings
8.6.5. The API is not accessible
When the cluster is running and clients cannot access the API, domain name resolution issues might impede access to the API.
Procedure
Hostname Resolution: Check the cluster nodes to ensure they have a fully qualified domain name, and not just localhost.localdomain. For example:
$ hostname
If a hostname is not set, set the correct hostname. For example:
$ hostnamectl set-hostname <hostname>
Incorrect Name Resolution: Ensure that each node has the correct name resolution in the DNS server using dig and nslookup. For example:
$ dig api.<cluster-name>.example.com
The output in the foregoing example indicates that the appropriate IP address for the api.<cluster-name>.example.com VIP is 10.19.13.86. This IP address should reside on the baremetal network.
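To confirm that the resolved VIP actually falls within the baremetal subnet, a quick check with Python's standard ipaddress module can help. The 10.19.13.0/24 subnet below is an assumed example, not a value from this guide; substitute your actual baremetal CIDR:

```shell
# Check whether the API VIP returned by dig resides on the baremetal network.
# Both values are illustrative placeholders.
api_vip="10.19.13.86"
baremetal_cidr="10.19.13.0/24"
python3 -c "
import ipaddress
vip = ipaddress.ip_address('${api_vip}')
net = ipaddress.ip_network('${baremetal_cidr}')
print('VIP on baremetal network' if vip in net else 'VIP NOT on baremetal network')
"
# → VIP on baremetal network
```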
8.6.6. Cleaning up previous installations
In the event of a previous failed deployment, remove the artifacts from the failed attempt before attempting to deploy OpenShift Container Platform again.
Procedure
Power off all bare metal nodes prior to installing the OpenShift Container Platform cluster:
$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
Remove all old bootstrap resources if any are left over from a previous deployment attempt.
Remove the following from the clusterconfigs directory to prevent Terraform from failing:
$ rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json
8.6.7. Issues with creating the registry
When creating a disconnected registry, you might encounter a "User Not Authorized" error when attempting to mirror the registry. This error might occur if you fail to append the new authentication to the existing pull-secret.txt file.
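The append itself is mechanical: the pull secret gains an auths entry keyed by the new registry's FQDN and port, carrying a base64-encoded user:password string. A hedged sketch using python3; the registry name, credentials, and email are placeholders, not values from this guide:

```shell
# Sketch: merge a disconnected registry's auth into an existing pull secret.
# "registry.example.com:5000" and "user:password" are placeholders.
python3 - <<'EOF'
import base64, json

# Stand-in for the parsed contents of pull-secret.txt
pull_secret = {"auths": {"quay.io": {"auth": "<original_auth>"}}}

# New entry: base64 of "user:password", keyed by the registry FQDN and port
auth = base64.b64encode(b"user:password").decode()
pull_secret["auths"]["registry.example.com:5000"] = {
    "auth": auth,
    "email": "you@example.com",
}

# Existing entries are preserved; only the new registry entry is added
print(json.dumps(pull_secret, indent=2))
EOF
```

The key point is that the original auths entries must survive the merge; overwriting the file with only the new registry's entry is what typically produces the "User Not Authorized" error.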
Procedure
Check to ensure authentication is successful:
$ /usr/local/bin/oc adm release mirror \
  -a pull-secret-update.json \
  --from=$UPSTREAM_REPO \
  --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
  --to=$LOCAL_REG/$LOCAL_REPO
Note: Example of the variables used to mirror the install images:
UPSTREAM_REPO=${RELEASE_IMAGE}
LOCAL_REG=<registry_FQDN>:<registry_port>
LOCAL_REPO='ocp4/openshift4'
The values of RELEASE_IMAGE and VERSION were set during the Retrieving OpenShift Installer step of the Setting up the environment for an OpenShift installation section.
After mirroring the registry, confirm that you can access it in your disconnected environment:
$ curl -k -u <user>:<password> https://registry.example.com:<registry-port>/v2/_catalog
{"repositories":["<Repo-Name>"]}
8.6.8. Miscellaneous issues
8.6.8.1. Addressing the runtime network not ready error
After the deployment of a cluster, you might receive the following error:
`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`
The Cluster Network Operator is responsible for deploying the networking components in response to a special object created by the installer. It runs very early in the installation process, after the control plane (master) nodes have come up, but before the bootstrap control plane has been torn down. This error can be indicative of more subtle installer issues, such as long delays in bringing up control plane (master) nodes or issues with apiserver communication.
Procedure
Inspect the pods in the openshift-network-operator namespace:
$ oc get all -n openshift-network-operator
NAME                                    READY   STATUS              RESTARTS   AGE
pod/network-operator-69dfd7b577-bg89v   0/1     ContainerCreating   0          149m
On the provisioner node, determine that the network configuration exists:
$ kubectl get network.config.openshift.io cluster -oyaml
If it does not exist, the installer did not create it. To determine why the installer did not create it, execute the following:
$ openshift-install create manifests
Check that the network-operator is running:
$ kubectl -n openshift-network-operator get pods
Retrieve the logs:
$ kubectl -n openshift-network-operator logs -l "name=network-operator"
On high availability clusters with three or more control plane (master) nodes, the Operator performs leader election and all other Operators sleep. For additional details, see Troubleshooting.
8.6.8.2. Cluster nodes not getting the correct IPv6 address over DHCP
If the cluster nodes are not getting the correct IPv6 address over DHCP, check the following:
- Ensure the reserved IPv6 addresses reside outside the DHCP range.
In the IP address reservation on the DHCP server, ensure the reservation specifies the correct DHCP Unique Identifier (DUID). For example:
# This is a dnsmasq dhcp reservation; 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC address for the NIC
id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]
- Ensure that route announcements are working.
- Ensure that the DHCP server is listening on the required interfaces serving the IP address ranges.
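For an Ethernet NIC, the DUID-LL client id in the reservation above is a fixed prefix plus the MAC address: 00:03 (DUID type 3, link-layer address) followed by 00:01 (hardware type 1, Ethernet). Building the reservation line can be sketched as follows; the MAC, hostname, and IPv6 address come from the example above:

```shell
# Build a dnsmasq IPv6 reservation line from a NIC's MAC address.
# DUID-LL = DUID type 00:03 + hardware type 00:01 + the MAC address.
mac="18:db:f2:8c:d5:9f"
duid="id:00:03:00:01:${mac}"
echo "${duid},openshift-master-1,[2620:52:0:1302::6]"
# → id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]
```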
8.6.8.3. Cluster nodes not getting the correct hostname over DHCP
During IPv6 deployment, cluster nodes must get their hostname over DHCP. Sometimes the NetworkManager does not assign the hostname immediately. A control plane (master) node might report an error such as:
Failed Units: 2
NetworkManager-wait-online.service
nodeip-configuration.service
This error indicates that the cluster node likely booted without first receiving a hostname from the DHCP server, which causes kubelet to boot with a localhost.localdomain hostname. To address the error, force the node to renew the hostname.
Procedure
Retrieve the hostname:
[core@master-X ~]$ hostname
If the hostname is localhost, proceed with the following steps.
Note: Where X is the control plane node (also known as the master node) number.
Force the cluster node to renew the DHCP lease:
[core@master-X ~]$ sudo nmcli con up "<bare-metal-nic>"
Replace <bare-metal-nic> with the wired connection corresponding to the baremetal network.
Check hostname again:
[core@master-X ~]$ hostname
If the hostname is still localhost.localdomain, restart NetworkManager:
[core@master-X ~]$ sudo systemctl restart NetworkManager
If the hostname is still localhost.localdomain, wait a few minutes and check again. If the hostname remains localhost.localdomain, repeat the previous steps.
Restart the nodeip-configuration service:
[core@master-X ~]$ sudo systemctl restart nodeip-configuration.service
This service reconfigures the kubelet service with the correct hostname references.
Reload the unit files definition since the kubelet changed in the previous step:
[core@master-X ~]$ sudo systemctl daemon-reload
Restart the kubelet service:
[core@master-X ~]$ sudo systemctl restart kubelet.service
Ensure kubelet booted with the correct hostname:
[core@master-X ~]$ sudo journalctl -fu kubelet.service
If the cluster node is not getting the correct hostname over DHCP after the cluster is up and running, such as during a reboot, the cluster will have a pending csr. Do not approve a csr, or other issues might arise.
Addressing a csr
Get CSRs on the cluster:
$ oc get csr
csrcontainsSubject Name: localhost.localdomain:oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text$ oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -textCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove any
csrthat containsSubject Name: localhost.localdomain:oc delete csr <wrong_csr>
$ oc delete csr <wrong_csr>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.6.8.4. Routes do not reach endpoints
During the installation process, you might encounter a Virtual Router Redundancy Protocol (VRRP) conflict. This conflict can occur when a node from a previous cluster deployment that used the same cluster name is still running, but is not part of the current deployment. For example, a cluster was deployed using the cluster name openshift, with three control plane (master) nodes and three worker nodes. Later, a separate installation uses the same cluster name openshift, but the redeployment installed only three control plane (master) nodes, leaving the three worker nodes from the previous deployment in an ON state. This can cause a Virtual Router Identifier (VRID) conflict and a VRRP conflict.
Get the route:
$ oc get route oauth-openshift
Check the service endpoint:
$ oc get svc oauth-openshift
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
oauth-openshift   ClusterIP   172.30.19.162   <none>        443/TCP   59m
Attempt to reach the service from a control plane (master) node:
[core@master0 ~]$ curl -k https://172.30.19.162
Identify the authentication-operator errors from the provisioner node:
$ oc logs deployment/authentication-operator -n openshift-authentication-operator
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"225c5bd5-b368-439b-9155-5fd3c0459d98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting"
Solution
- Ensure that the cluster name for each deployment is unique so that no two deployments conflict.
- Power off any rogue nodes that are not part of the cluster deployment but use the same cluster name. Otherwise, the authentication pod of the OpenShift Container Platform cluster might never start successfully.
8.6.8.5. Failed Ignition during Firstboot
During the first boot, the Ignition configuration may fail.
Procedure
Connect to the node where the Ignition configuration failed:

Failed Units: 1
  machine-config-daemon-firstboot.service

Restart the machine-config-daemon-firstboot service:

[core@worker-X ~]$ sudo systemctl restart machine-config-daemon-firstboot.service
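Before restarting, you can confirm on the node which unit failed. The filter function below is an illustrative sketch (not a product command): it scans the output of `systemctl --failed` for the firstboot service, and the sample input stands in for output from a broken node:

```shell
# Illustrative check: did machine-config-daemon-firstboot.service fail?
# Pipe `systemctl --failed` (run on the node) into this filter.
firstboot_failed() {
  if grep -q 'machine-config-daemon-firstboot'; then
    echo "firstboot failed - inspect: journalctl -u machine-config-daemon-firstboot.service"
  else
    echo "firstboot ok"
  fi
}

# Sample input standing in for `systemctl --failed` output on a broken node:
printf '%s\n' \
  'UNIT                                      LOAD   ACTIVE SUB    DESCRIPTION' \
  'machine-config-daemon-firstboot.service   loaded failed failed Machine Config Daemon Firstboot' \
| firstboot_failed
```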
8.6.8.6. NTP out of sync
The deployment of OpenShift Container Platform clusters depends on NTP-synchronized clocks among the cluster nodes. Without synchronized clocks, the deployment can fail due to clock drift if the time difference between nodes is greater than two seconds.
Procedure
Check for differences in the AGE of the cluster nodes. For example:

$ oc get nodes

NAME                         STATUS   ROLES    AGE    VERSION
master-0.cloud.example.com   Ready    master   145m   v1.16.2
master-1.cloud.example.com   Ready    master   135m   v1.16.2
master-2.cloud.example.com   Ready    master   145m   v1.16.2
worker-2.cloud.example.com   Ready    worker   100m   v1.16.2

Check for inconsistent timing delays due to clock drift. For example:

$ oc get bmh -n openshift-machine-api

master-1   error registering master-1   ipmi://<out-of-band-ip>

$ sudo timedatectl
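The two-second tolerance can also be checked directly by comparing epoch timestamps collected from each node. The helper below is an illustrative sketch, not part of the product tooling:

```shell
# Illustrative helper: given two epoch timestamps (in seconds), report
# whether they differ by more than the 2-second tolerance described above.
drift_exceeded() {
  local a=$1 b=$2 diff
  diff=$(( a > b ? a - b : b - a ))
  if [ "$diff" -gt 2 ]; then
    echo "drift ${diff}s: out of tolerance"
  else
    echo "drift ${diff}s: ok"
  fi
}

# Collect timestamps from the provisioner node, for example:
#   for h in master-0 master-1 master-2; do ssh core@$h date +%s; done
drift_exceeded 1700000000 1700000005   # out of tolerance
drift_exceeded 1700000000 1700000001   # ok
```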
Addressing clock drift in existing clusters

Create a Butane config file that includes the contents of the chrony.conf file to be delivered to the nodes. In the following example, create 99-master-chrony.bu to add the file to the control plane nodes. You can modify the file for worker nodes or repeat this procedure for the worker role.

Note: See "Creating machine configs with Butane" for information about Butane.

Replace <NTP-server> with the IP address of the NTP server.
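The Butane file listing did not survive extraction in this copy of the document. The sketch below shows the typical shape of such a file, based on a standard chrony configuration; the version field must match your OpenShift Container Platform release, and the chrony.conf contents here are an assumption, not the exact listing from the original page:

```yaml
variant: openshift
version: 4.9.0        # assumption: set this to your OpenShift release
metadata:
  name: 99-master-chrony
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        server <NTP-server> iburst   # replace <NTP-server> with your NTP server IP
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
```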
Use Butane to generate a MachineConfig object file, 99-master-chrony.yaml, containing the configuration to be delivered to the nodes:

$ butane 99-master-chrony.bu -o 99-master-chrony.yaml

Apply the MachineConfig object file:

$ oc apply -f 99-master-chrony.yaml

Ensure that the System clock synchronized value is yes:

$ sudo timedatectl

To set up clock synchronization prior to deployment, generate the manifest files and add this file to the openshift directory. For example:

$ cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml

Then, continue to create the cluster.
8.6.9. Reviewing the installation
After installation, ensure the installer deployed the nodes and pods successfully.
Procedure
When the OpenShift Container Platform cluster nodes are installed correctly, the Ready state is displayed within the STATUS column:

$ oc get nodes

NAME                   STATUS   ROLES           AGE   VERSION
master-0.example.com   Ready    master,worker   4h    v1.16.2
master-1.example.com   Ready    master,worker   4h    v1.16.2
master-2.example.com   Ready    master,worker   4h    v1.16.2

Confirm the installer deployed all pods successfully. The following command filters out pods that are still running or have completed, so only problematic pods remain in the output:

$ oc get pods --all-namespaces | grep -iv running | grep -iv complete
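The grep pipeline above can be wrapped into a reusable filter. The function and sample data below are illustrative only (the pod names are invented); the point is that any line surviving the filter deserves investigation:

```shell
# Illustrative wrapper around the check above: feed it `oc get pods
# --all-namespaces` output and it prints only pods that are neither
# Running nor Completed (the header line contains neither word, so it
# is dropped explicitly).
unhealthy_pods() {
  grep -iv running | grep -iv complete | grep -v '^NAMESPACE'
}

# Invented sample output standing in for `oc get pods --all-namespaces`:
printf '%s\n' \
  'NAMESPACE     NAME    READY   STATUS             RESTARTS   AGE' \
  'openshift-a   pod-1   1/1     Running            0          4h' \
  'openshift-b   pod-2   0/1     CrashLoopBackOff   12         4h' \
| unhealthy_pods
```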