
Chapter 8. Deploying installer-provisioned clusters on bare metal


8.1. Overview

Installer-provisioned installation on bare metal nodes deploys and configures the infrastructure that an OpenShift Container Platform cluster runs on. This guide provides a methodology for achieving a successful installer-provisioned bare-metal installation. The following diagram illustrates the installation environment in phase 1 of deployment:

Deployment phase one

The provisioning node can be removed after the installation.

  • Provisioner: A physical machine that runs the installation program and hosts the bootstrap VM that deploys the control plane of a new OpenShift Container Platform cluster.
  • Bootstrap VM: A virtual machine used in the process of deploying an OpenShift Container Platform cluster.
  • Network bridges: The bootstrap VM connects to the bare metal network and to the provisioning network, if present, via network bridges eno1 and eno2.

In phase 2 of the deployment, the provisioner destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.

The following diagram illustrates phase 2 of deployment:

Deployment phase two
Important

The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia.

8.2. Prerequisites

Installer-provisioned installation of OpenShift Container Platform requires:

  1. One provisioner node with Red Hat Enterprise Linux (RHEL) 8.x installed. The provisioning node can be removed after installation.
  2. Three control plane nodes.
  3. Baseboard Management Controller (BMC) access to each node.
  4. At least one network:

    1. One required routable network
    2. One optional provisioning network for provisioning nodes.
    3. One optional management network.

Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements.

8.2.1. Node requirements

Installer-provisioned installation involves a number of hardware node requirements:

  • CPU architecture: All nodes must use the x86_64 CPU architecture.
  • Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.
  • Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol.
  • Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and RHCOS 8 for the control plane and worker nodes.
  • Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.
  • Provisioner node: Installer-provisioned installation requires one provisioner node.
  • Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.
  • Worker nodes: While not required, a typical production cluster has two or more worker nodes.

    Important

    Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state.

  • Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Network interface naming must be consistent across control plane nodes for the provisioning network. For example, if a control plane node uses the eth0 NIC for the provisioning network, the other control plane nodes must use it as well.
  • Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement.
  • Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed.

    1. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details.
    2. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details.

      Note

      Red Hat does not support Secure Boot with self-generated keys.

8.2.2. Planning a bare metal cluster for OpenShift Virtualization

If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster.

  • If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation. This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster.

    Note

    You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability.

  • Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode, as shown in the sketch after this list.
  • If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform.
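
The following is a minimal PersistentVolumeClaim sketch illustrating the RWX access mode requirement. The claim name, namespace, and storage class name are hypothetical placeholders; the storage class must be backed by shared storage that supports ReadWriteMany.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx        # hypothetical claim name
  namespace: vm-workloads  # hypothetical namespace
spec:
  accessModes:
    - ReadWriteMany        # RWX is required for live migration
  resources:
    requests:
      storage: 30Gi
  storageClassName: shared-rwx-storage  # assumption: an RWX-capable storage class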

8.2.3. Firmware requirements for installing with virtual media

The installer for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media.

Table 8.1. Firmware compatibility for Redfish virtual media

Hardware | Model | Management | Firmware versions
HP | 10th Generation | iLO5 | 2.63 or later
Dell | 14th Generation | iDRAC 9 | v4.20.20.20 - v4.40.00.00 only
Dell | 13th Generation | iDRAC 8 | v2.75.75.75 or later

Note

Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy.

See the hardware documentation for the nodes or contact the hardware vendor for information about updating the firmware.

For HP servers, Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.

For Dell servers, ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plugin defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

Important

The installer will not initiate installation on a node if the node firmware is earlier than the foregoing versions when installing with virtual media.

8.2.4. Network requirements

Installer-provisioned installation of OpenShift Container Platform involves several network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable baremetal network.

Installer-provisioned networking

8.2.4.1. Increase the network MTU

Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
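
For example, you can check and raise the MTU with ip and nmcli. The connection name eno1 below is an illustrative assumption; substitute the NIC that carries provisioning traffic:

# Check the current MTU on the NIC (eno1 is an example name).
$ ip link show eno1

# Raise the MTU to 1500 on the corresponding NetworkManager connection,
# then reactivate the connection so the change takes effect.
$ sudo nmcli connection modify eno1 802-3-ethernet.mtu 1500
$ sudo nmcli connection up eno1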

8.2.4.2. Configuring NICs

OpenShift Container Platform deploys with two networks:

  • provisioning: The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot.

    The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network.

    The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia.

  • baremetal: The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network.
Important

When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.

8.2.4.3. DNS requirements

Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.

<cluster_name>.<base_domain>

For example:

test-cluster.example.com

OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.

In OpenShift Container Platform deployments, DNS name resolution is required for the following components:

  • The Kubernetes API
  • The OpenShift Container Platform application wildcard ingress API

A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes.

Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.

Table 8.2. Required DNS records

Component: Kubernetes API
Record: api.<cluster_name>.<base_domain>.
Description: An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Component: Routes
Record: *.apps.<cluster_name>.<base_domain>.
Description: The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console.

Tip

You can use the dig command to verify DNS resolution.
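
For example, the following queries check the API record, the wildcard ingress record, and reverse resolution; the names reuse the test-cluster.example.com example above, and <node_ip> is a placeholder:

# Verify the API record resolves.
$ dig api.test-cluster.example.com +short

# Verify the wildcard ingress record by querying an arbitrary name under *.apps.
$ dig foo.apps.test-cluster.example.com +short

# Verify reverse (PTR) resolution for a node IP address.
$ dig -x <node_ip> +short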

8.2.4.4. Dynamic Host Configuration Protocol (DHCP) requirements

By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to Managed, which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to Unmanaged in the install-config.yaml file.
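
For example, a minimal install-config.yaml fragment that keeps the provisioning network but leaves DHCP to an existing server might look like the following sketch:

platform:
  baremetal:
    # An external DHCP server manages the provisioning network.
    provisioningNetwork: "Unmanaged"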

Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server.

8.2.4.5. Reserving IP addresses for nodes with the DHCP server

For the baremetal network, a network administrator must reserve a number of IP addresses, including:

  1. Two unique virtual IP addresses.

    • One virtual IP address for the API endpoint.
    • One virtual IP address for the wildcard ingress endpoint.
  2. One IP address for the provisioner node.
  3. One IP address for each control plane (master) node.
  4. One IP address for each worker node, if applicable.
Reserving IP addresses so they become static IP addresses

Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses with an infinite lease. During deployment, the installer will reconfigure the NICs from DHCP assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP.

Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.

Ensuring that your DHCP server can provide infinite leases

Your DHCP server must provide a DHCP expiration time of 4294967295 seconds to properly set an infinite lease as specified by RFC 2131. If a lesser value is returned for the DHCP infinite lease time, the node reports an error and a permanent IP is not set for the node. In RHEL 8, dhcpd does not provide infinite leases. If you want to use the provisioner node to serve dynamic IP addresses with infinite lease times, use dnsmasq rather than dhcpd.
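
The following dnsmasq configuration snippet is an illustrative sketch of serving infinite leases from the provisioner node; the file name, interface name, MAC addresses, and IP addresses are placeholders for your environment:

# /etc/dnsmasq.d/baremetal.conf (hypothetical file name)
# Serve DHCP on the baremetal bridge with infinite lease times.
interface=baremetal
dhcp-range=192.168.1.20,192.168.1.60,infinite

# Pin each cluster node to a reserved address with an infinite lease.
dhcp-host=<master_0_mac>,192.168.1.30,infinite
dhcp-host=<worker_0_mac>,192.168.1.40,infinite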

Networking between external load balancers and control plane nodes

External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.

Do not change IP addresses manually after deployment

Do not change a worker node’s IP address manually after deployment. To change the IP address of a worker node after deployment, you must mark the worker node unschedulable, evacuate the pods, delete the node, and recreate it with the new IP address. See "Working with nodes" for additional details. To change the IP address of a control plane node after deployment, contact support.

The storage interface requires a DHCP reservation.

The following table provides examples of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer.

Usage | Host Name | IP
API | api.<cluster_name>.<base_domain> | <ip>
Ingress LB (apps) | *.apps.<cluster_name>.<base_domain> | <ip>
Provisioner node | provisioner.<cluster_name>.<base_domain> | <ip>
Master-0 | openshift-master-0.<cluster_name>.<base_domain> | <ip>
Master-1 | openshift-master-1.<cluster_name>.<base_domain> | <ip>
Master-2 | openshift-master-2.<cluster_name>.<base_domain> | <ip>
Worker-0 | openshift-worker-0.<cluster_name>.<base_domain> | <ip>
Worker-1 | openshift-worker-1.<cluster_name>.<base_domain> | <ip>
Worker-n | openshift-worker-n.<cluster_name>.<base_domain> | <ip>

Note

If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.

8.2.4.6. Network Time Protocol (NTP)

Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.

Important

Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.

You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
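
As a quick verification, RHCOS nodes use chrony for time synchronization, so you can typically confirm synchronization on a node with the following commands:

# Confirm chronyd is synchronizing against a reachable NTP source.
$ sudo chronyc sources -v

# Confirm the system clock reports as synchronized.
$ timedatectl status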

8.2.4.7. State-driven network configuration

OpenShift Container Platform supports additional post-installation state-driven network configuration on the secondary network interfaces of cluster nodes using kubernetes-nmstate. For example, system administrators might configure a secondary network interface on cluster nodes after installation for a storage network.

Note

Configuration must occur before scheduling pods.

State-driven network configuration requires installing kubernetes-nmstate, and also requires Network Manager running on the cluster nodes. See OpenShift Virtualization > Kubernetes NMState (Tech Preview) for additional details.

8.2.4.8. Port access for the out-of-band management IP address

The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the baremetal node during installation, the out-of-band management IP address must be granted access to TCP port 6180.
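
For example, from a host on the out-of-band management network you can confirm that the port is reachable; <provisioning_host_ip> is a placeholder for the address hosting the provisioning services:

# Verify that TCP port 6180 is reachable from the out-of-band management network.
$ nc -zv <provisioning_host_ip> 6180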

8.2.5. Configuring nodes

Configuring nodes when using the provisioning network

Each node in the cluster requires the following configuration for proper installation.

Warning

A mismatch between nodes will cause an installation failure.

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:

NIC | Network | VLAN
NIC1 | provisioning | <provisioning_vlan>
NIC2 | baremetal | <baremetal_vlan>

NIC1 is a non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster.

The Red Hat Enterprise Linux (RHEL) 8.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 8.x using a local Satellite server or a PXE server, PXE-enable NIC2.

PXE | Boot order
NIC1 PXE-enabled (provisioning network) | 1
NIC2 (baremetal network; PXE-enabled is optional) | 2

Note

Ensure PXE is disabled on all other NICs.

Configure the control plane and worker nodes as follows:

PXE | Boot order
NIC1 PXE-enabled (provisioning network) | 1

Configuring nodes without the provisioning network

The installation process requires one NIC:

NIC | Network | VLAN
NICx | baremetal | <baremetal_vlan>

NICx is a routable network (baremetal) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet.

Important

The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia.

Configuring nodes for Secure Boot manually

Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system.

Note

Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media.

To enable Secure Boot manually, refer to the hardware guide for the node and execute the following:

Procedure

  1. Boot the node and enter the BIOS menu.
  2. Set the node’s boot mode to UEFI Enabled.
  3. Enable Secure Boot.
Important

Red Hat does not support Secure Boot with self-generated keys.

Configuring the Compatibility Support Module for Fujitsu iRMC

The Compatibility Support Module (CSM) configuration provides support for legacy BIOS backward compatibility with UEFI systems. You must configure the CSM when you deploy a cluster with Fujitsu iRMC, otherwise the installation might fail.

Note

For information about configuring the CSM for your specific node type, refer to the hardware guide for the node.

Prerequisites

  • Ensure that you have disabled Secure Boot Control. You can disable the feature under Security → Secure Boot Configuration → Secure Boot Control.

Procedure

  1. Boot the node and select the BIOS menu.
  2. Under the Advanced tab, select CSM Configuration from the list.
  3. Enable the Launch CSM option and set the following values:

    Item | Value
    Boot option filter | UEFI and Legacy
    Launch PXE OpROM Policy | UEFI only
    Launch Storage OpROM policy | UEFI only
    Other PCI device ROM priority | UEFI only

8.2.6. Out-of-band management

Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.

Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation.

The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network is a valid option.
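
For example, you can confirm BMC reachability from the provisioner node with ipmitool, which this guide uses elsewhere for power management; all values are placeholders:

# From the provisioner node, confirm each BMC responds over IPMI.
$ ipmitool -I lanplus -H <out_of_band_ip> -U <user> -P <password> power status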

8.2.7. Required data for installation

Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes:

  • Out-of-band management IP

    • Examples

      • Dell (iDRAC) IP
      • HP (iLO) IP
      • Fujitsu (iRMC) IP

When using the provisioning network

  • NIC (provisioning) MAC address
  • NIC (baremetal) MAC address

When omitting the provisioning network

  • NIC (baremetal) MAC address

8.2.8. Validation checklist for nodes

When using the provisioning network

  • ❏ NIC1 VLAN is configured for the provisioning network.
  • ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane (master), and worker nodes.
  • ❏ NIC2 VLAN is configured for the baremetal network.
  • ❏ PXE has been disabled on all other NICs.
  • ❏ DNS is configured with API and Ingress endpoints.
  • ❏ Control plane and worker nodes are configured.
  • ❏ All nodes accessible via out-of-band management.
  • ❏ (Optional) A separate management network has been created.
  • ❏ Required data for installation.

When omitting the provisioning network

  • ❏ NIC1 VLAN is configured for the baremetal network.
  • ❏ DNS is configured with API and Ingress endpoints.
  • ❏ Control plane and worker nodes are configured.
  • ❏ All nodes accessible via out-of-band management.
  • ❏ (Optional) A separate management network has been created.
  • ❏ Required data for installation.

8.3. Setting up the environment for an OpenShift installation

8.3.1. Installing RHEL on the provisioner node

With the networking configuration complete, the next step is to install RHEL 8.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.

8.3.2. Preparing the provisioner node for OpenShift Container Platform installation

Perform the following steps to prepare the environment.

Procedure

  1. Log in to the provisioner node via ssh.
  2. Create a non-root user (kni) and provide that user with sudo privileges:

    # useradd kni
    # passwd kni
    # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
    # chmod 0440 /etc/sudoers.d/kni
  3. Create an ssh key for the new user:

    # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
  4. Log in as the new user on the provisioner node:

    # su - kni
    $
  5. Use Red Hat Subscription Manager to register the provisioner node:

    $ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
    $ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms
    Note

    For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.

  6. Install the following packages:

    $ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
  7. Modify the user to add the libvirt group to the newly created user:

    $ sudo usermod --append --groups libvirt <user>
  8. Restart firewalld and enable the http service:

    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --zone=public --add-service=http --permanent
    $ sudo firewall-cmd --reload
  9. Start and enable the libvirtd service:

    $ sudo systemctl enable libvirtd --now
  10. Create the default storage pool and start it:

    $ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
    $ sudo virsh pool-start default
    $ sudo virsh pool-autostart default
  11. Configure networking.

    Note

    You can also configure networking from the web console.

    Export the baremetal network NIC name:

    $ export PUB_CONN=<baremetal_nic_name>

    Configure the baremetal network:

    $ sudo nohup bash -c "
        nmcli con down \"$PUB_CONN\"
        nmcli con delete \"$PUB_CONN\"
        # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
        nmcli con down \"System $PUB_CONN\"
        nmcli con delete \"System $PUB_CONN\"
        nmcli connection add ifname baremetal type bridge con-name baremetal
        nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
        pkill dhclient;dhclient baremetal
    "

    If you are deploying with a provisioning network, export the provisioning network NIC name:

    $ export PROV_CONN=<prov_nic_name>

    If you are deploying with a provisioning network, configure the provisioning network:

    $ sudo nohup bash -c "
        nmcli con down \"$PROV_CONN\"
        nmcli con delete \"$PROV_CONN\"
        nmcli connection add ifname provisioning type bridge con-name provisioning
        nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
        nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
        nmcli con down provisioning
        nmcli con up provisioning
    "
    Note

    The ssh connection might disconnect after executing these steps.

    The IPv6 address can be any address as long as it is not routable via the baremetal network.

    Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.

  12. Configure the IPv4 address on the provisioning network connection:

    $ nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual
  13. ssh back into the provisioner node (if required):

    # ssh kni@provisioner.<cluster-name>.<domain>
  14. Verify the connection bridges have been properly created.

    $ sudo nmcli con show
    NAME               UUID                                  TYPE      DEVICE
    baremetal          4d5133a5-8351-4bb9-bfd4-3af264801530  bridge    baremetal
    provisioning       43942805-017f-4d7d-a2c2-7cb3324482ed  bridge    provisioning
    virbr0             d9bca40f-eee1-410b-8879-a2d4bb0465e7  bridge    virbr0
    bridge-slave-eno1  76a8ed50-c7e5-4999-b4f6-6d9014dd0812  ethernet  eno1
    bridge-slave-eno2  f31c3353-54b7-48de-893a-02d2b34c4736  ethernet  eno2
  15. Create a pull-secret.txt file:

    $ vim pull-secret.txt

    In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure, and scroll down to the Downloads section. Click Copy pull secret. Paste the contents into the pull-secret.txt file and save the contents in the kni user’s home directory.
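
    As an optional sanity check, because jq was installed earlier in this procedure, you can verify that the saved pull secret is well-formed JSON:

    # Pretty-print the pull secret; jq exits non-zero if the JSON is malformed.
    $ jq . /home/kni/pull-secret.txt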

8.3.3. Retrieving the OpenShift Container Platform installer

Use the latest-4.x version of the installer to deploy the latest generally available version of OpenShift Container Platform:

$ export VERSION=latest-4.8
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')

8.3.4. Extracting the OpenShift Container Platform installer

After retrieving the installer, the next step is to extract it.

Procedure

  1. Set the environment variables:

    $ export cmd=openshift-baremetal-install
    $ export pullsecret_file=~/pull-secret.txt
    $ export extract_dir=$(pwd)
  2. Get the oc binary:

    $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
  3. Extract the installer:

    $ sudo cp oc /usr/local/bin
    $ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
    $ sudo cp openshift-baremetal-install /usr/local/bin

8.3.5. Creating an RHCOS image cache (optional)

To employ image caching, you must download two images: the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM and the RHCOS image used by the installer to provision the different nodes. Image caching is optional, but it is especially useful when running the installer on a network with limited bandwidth.

If you are running the installer on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installer will time out. Caching images on a web server will help in such scenarios.

Install a container that contains the images.

Procedure

  1. Install podman:

    $ sudo dnf install -y podman
  2. Open firewall port 8080 to be used for RHCOS image caching:

    $ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
    $ sudo firewall-cmd --reload
  3. Create a directory to store the bootstraposimage and clusterosimage:

    $ mkdir /home/kni/rhcos_image_cache
  4. Set the appropriate SELinux context for the newly created directory:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
    $ sudo restorecon -Rv rhcos_image_cache/
  5. Get the commit ID from the installer:

    $ export COMMIT_ID=$(/usr/local/bin/openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}')

    The ID determines which images the installer needs to download.

  6. Get the URI for the RHCOS image that the installer will deploy on the nodes:

    $ export RHCOS_OPENSTACK_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq .images.openstack.path | sed 's/"//g')
  7. Get the URI for the RHCOS image that the installer will deploy on the bootstrap VM:

    $ export RHCOS_QEMU_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq .images.qemu.path | sed 's/"//g')
  8. Get the path where the images are published:

    $ export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g')
  9. Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM:

    $ export RHCOS_QEMU_SHA_UNCOMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq -r '.images.qemu["uncompressed-sha256"]')
  10. Get the SHA hash for the RHCOS image that will be deployed on the nodes:

    $ export RHCOS_OPENSTACK_SHA_COMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq -r '.images.openstack.sha256')
  11. Download the images and place them in the

    /home/kni/rhcos_image_cache
    directory:

    $ curl -L ${RHCOS_PATH}${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI}
    $ curl -L ${RHCOS_PATH}${RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI}
  12. Confirm the SELinux type is httpd_sys_content_t for the newly created files:

    $ ls -Z /home/kni/rhcos_image_cache
  13. Create the pod:

    $ podman run -d --name rhcos_image_cache \
    -v /home/kni/rhcos_image_cache:/var/www/html \
    -p 8080:8080/tcp \
    quay.io/centos7/httpd-24-centos7:latest

    The above command creates a caching webserver with the name rhcos_image_cache, which serves the images for deployment. The first image ${RHCOS_PATH}${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA_UNCOMPRESSED} is the bootstrapOSImage and the second image ${RHCOS_PATH}${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA_COMPRESSED} is the clusterOSImage in the install-config.yaml file.

  14. Generate the bootstrapOSImage and clusterOSImage configuration:

    $ export BAREMETAL_IP=$(ip addr show dev baremetal | awk '/inet /{print $2}' | cut -d"/" -f1)
    $ export RHCOS_OPENSTACK_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI} | sha256sum | awk '{print $1}')
    $ export RHCOS_QEMU_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI} | sha256sum | awk '{print $1}')
    $ export CLUSTER_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA256}"
    $ export BOOTSTRAP_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA256}"
    $ echo "${RHCOS_OPENSTACK_SHA256}  ${RHCOS_OPENSTACK_URI}" > /home/kni/rhcos_image_cache/rhcos-ootpa-latest.qcow2.sha256sum
    $ echo "    bootstrapOSImage=${BOOTSTRAP_OS_IMAGE}"
    $ echo "    clusterOSImage=${CLUSTER_OS_IMAGE}"
  15. Add the required configuration to the install-config.yaml file under platform.baremetal:

    platform:
      baremetal:
        bootstrapOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_QEMU_URI>?sha256=<RHCOS_QEMU_SHA256>
        clusterOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_OPENSTACK_URI>?sha256=<RHCOS_OPENSTACK_SHA256>

    See the "Configuration files" section for additional details.

8.3.6. Configuration files

8.3.6.1. Configuring the install-config.yaml file

The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available hardware so that it is able to fully manage it.

  1. Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey:

    apiVersion: v1
    baseDomain: <domain>
    metadata:
      name: <cluster-name>
    networking:
      machineCIDR: <public-cidr>
      networkType: OVNKubernetes
    compute:
    - name: worker
      replicas: 2 # (1)
    controlPlane:
      name: master
      replicas: 3
      platform:
        baremetal: {}
    platform:
      baremetal:
        apiVIP: <api-ip>
        ingressVIP: <wildcard-ip>
        provisioningNetworkCIDR: <CIDR>
        hosts:
          - name: openshift-master-0
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> # (2)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # (3)
          - name: <openshift-master-1>
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> # (4)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # (5)
          - name: <openshift-master-2>
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> # (6)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # (7)
          - name: <openshift-worker-0>
            role: worker
            bmc:
              address: ipmi://<out-of-band-ip> # (8)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
          - name: <openshift-worker-1>
            role: worker
            bmc:
              address: ipmi://<out-of-band-ip>
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # (9)
    pullSecret: '<pull_secret>'
    sshKey: '<ssh_pub_key>'

    (1) Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster.
    (2) (4) (6) (8) See the BMC addressing sections for more options.
    (3) (5) (7) (9) Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2.
  2. Create a directory to store cluster configs.

    $ mkdir ~/clusterconfigs
    $ cp install-config.yaml ~/clusterconfigs
  3. Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster.

    $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
  4. Remove old bootstrap resources if any are left over from a previous deployment attempt.

    for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk '{print $2}');
    do
      sudo virsh destroy $i;
      sudo virsh undefine $i;
      sudo virsh vol-delete $i --pool $i;
      sudo virsh vol-delete $i.ign --pool $i;
      sudo virsh pool-destroy $i;
      sudo virsh pool-undefine $i;
    done

8.3.6.2. Setting proxy settings within the install-config.yaml file (optional)

To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file:

apiVersion: v1
baseDomain: <domain>
proxy:
  httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
  httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
  noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>

The following is an example of noProxy with values:

noProxy: .example.com,172.22.0.0/24,10.10.0.0/24

With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair.

Key considerations:

  • If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http://.
  • If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail.
  • Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.
Note

When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately.

8.3.6.3. Modifying the install-config.yaml file for no provisioning network (optional)

To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file:

platform:
  baremetal:
    apiVIP: <apiVIP>
    ingressVIP: <ingress/wildcard VIP>
    provisioningNetwork: "Disabled" # (1)

(1) Add the provisioningNetwork configuration setting, if needed, and set it to Disabled.
Important

The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.

8.3.6.4. Modifying the install-config.yaml file for dual-stack network (optional)

To deploy an OpenShift Container Platform cluster with dual-stack networking, edit the machineNetwork, clusterNetwork, and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries. Ensure the first CIDR entry is the IPv4 setting and the second CIDR entry is the IPv6 setting.

machineNetwork:
- cidr: {{ extcidrnet }}
- cidr: {{ extcidrnet6 }}
clusterNetwork:
- cidr: 10.128.0.0/14
  hostPrefix: 23
- cidr: fd02::/48
  hostPrefix: 64
serviceNetwork:
- 172.30.0.0/16
- fd03::/112
Important

The API VIP IP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, the IPv4 entries must go before the IPv6 entries.

8.3.6.5. Configuring managed Secure Boot in the install-config.yaml file (optional)

You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish, redfish-virtualmedia, or idrac-virtualmedia. To enable managed Secure Boot, add the bootMode configuration setting to each node:

Example

hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: redfish://<out_of_band_ip> # (1)
      username: <user>
      password: <password>
    bootMACAddress: <NIC1_mac_address>
    rootDeviceHints:
      deviceName: "/dev/sda"
    bootMode: UEFISecureBoot # (2)

(1) Ensure the bmc.address setting uses redfish, redfish-virtualmedia, or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details.
(2) The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot.
Note

See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media.

Note

Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities.

8.3.6.6. Additional install-config parameters

See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.

Table 8.3. Required parameters

baseDomain
The domain name for the cluster. For example, example.com.

bootMode (default: UEFI)
The boot mode for a node. Options are legacy, UEFI, and UEFISecureBoot. If bootMode is not set, Ironic sets it while inspecting the node.

sshKey
The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node.

pullSecret
The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node.

metadata: name:
The name to be given to the OpenShift Container Platform cluster. For example, openshift.

networking: machineCIDR:
The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24.

compute: - name: worker
The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes.

compute: replicas: 2
Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster.

controlPlane: name: master
The OpenShift Container Platform cluster requires a name for control plane (master) nodes.

controlPlane: replicas: 3
Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster.

provisioningNetworkInterface
The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC.

defaultMachinePlatform
The default configuration used for machine pools without a platform configuration.

apiVIP
(Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual-stack networking. If not set, the installer uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS.

disableCertificateVerification (default: False)
redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses.

ingressVIP
(Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual-stack networking. If not set, the installer uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS.

Table 8.4. Optional Parameters

provisioningDHCPRange (default: 172.22.0.10,172.22.0.100)
Defines the IP range for nodes on the provisioning network.

provisioningNetworkCIDR (default: 172.22.0.0/24)
The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network.

clusterProvisioningIP (default: the third IP address of the provisioningNetworkCIDR)
The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3.

bootstrapProvisioningIP (default: the second IP address of the provisioningNetworkCIDR)
The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2.

externalBridge (default: baremetal)
The name of the baremetal bridge of the hypervisor attached to the baremetal network.

provisioningBridge (default: provisioning)
The name of the provisioning bridge on the provisioner host attached to the provisioning network.

defaultMachinePlatform
The default configuration used for machine pools without a platform configuration.

bootstrapOSImage
A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256>.

clusterOSImage
A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. For example, https://mirror.openshift.com/images/rhcos-<version>-openstack.qcow2.gz?sha256=<compressed_sha256>.

provisioningNetwork
The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network.

Disabled: Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled, you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the baremetal network. If Disabled, you must provide two IP addresses on the baremetal network that are used for the provisioning services.

Managed: Set this parameter to Managed, which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on.

Unmanaged: Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required.

httpProxy
Set this parameter to the appropriate HTTP proxy used within your environment.

httpsProxy
Set this parameter to the appropriate HTTPS proxy used within your environment.

noProxy
Set this parameter to the appropriate list of exclusions for proxy usage within your environment.

Hosts

The hosts parameter is a list of separate bare metal assets used to build the cluster.

Table 8.5. Hosts

name
The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0.

role
The role of the bare metal node. Either master or worker.

bmc
Connection details for the baseboard management controller. See the BMC addressing section for additional details.

bootMACAddress
The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host.

Note

You must provide a valid MAC address from the host if you disabled the provisioning network.

8.3.6.7. BMC addressing

Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.

IPMI

Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
Important

The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.

Redfish network boot

To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

8.3.6.8. BMC addressing for Dell iDRAC

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address> # (1)
          username: <user>
          password: <password>

(1) The address configuration setting specifies the protocol.

For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI.

Table 8.6. BMC address formats for Dell iDRAC

Protocol | Address Format
iDRAC virtual media | idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
Redfish network boot | redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
IPMI | ipmi://<out-of-band-ip>

Important

Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell’s idrac-virtualmedia uses the Redfish standard with Dell’s OEM extensions.

See the following sections for additional details.

Redfish virtual media for Dell iDRAC

For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work.

The following example demonstrates using iDRAC virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

Currently, Redfish is only supported on Dell hardware with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plugin defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.

Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell’s idrac-virtualmedia:// protocol uses the Redfish standard with Dell’s OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware.

Redfish network boot for iDRAC

To enable Redfish, use redfish://. To disable transport layer security (TLS), use redfish+http:// instead. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate signed by a certificate authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

Currently, Redfish is only supported on Dell hardware with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plugin defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.

The redfish:// URL protocol corresponds to the redfish hardware type in Ironic.

8.3.6.9. BMC addressing for HPE iLO

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address> 1
          username: <user>
          password: <password>

1 The address configuration setting specifies the protocol.

For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.

Table 8.7. BMC address formats for HPE iLO

Protocol               Address format
Redfish virtual media  redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
Redfish network boot   redfish://<out-of-band-ip>/redfish/v1/Systems/1
IPMI                   ipmi://<out-of-band-ip>

See the following sections for additional details.

Redfish virtual media for HPE iLO

To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate signed by a certificate authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.

Redfish network boot for HPE iLO

To enable Redfish, use redfish://. To disable transport layer security (TLS), use redfish+http:// instead. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate signed by a certificate authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

8.3.6.10. BMC addressing for Fujitsu iRMC

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address> 1
          username: <user>
          password: <password>

1 The address configuration setting specifies the protocol.

For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI.

Table 8.8. BMC address formats for Fujitsu iRMC

Protocol   Address format
iRMC       irmc://<out-of-band-ip>
IPMI       ipmi://<out-of-band-ip>

iRMC

Fujitsu nodes can use irmc://<out-of-band-ip>, which defaults to port 443. The following example demonstrates an iRMC configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: irmc://<out-of-band-ip>
          username: <user>
          password: <password>
Note

Currently, Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal.

8.3.6.11. Root device hints

The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.

Table 8.9. Subfields

deviceName
    A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly.

hctl
    A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.

model
    A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.

vendor
    A string containing the name of the vendor or manufacturer of the device. The hint can be a substring of the actual value.

serialNumber
    A string containing the device serial number. The hint must match the actual value exactly.

minSizeGigabytes
    An integer representing the minimum size of the device in gigabytes.

wwn
    A string containing the unique storage identifier. The hint must match the actual value exactly.

wwnWithExtension
    A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly.

wwnVendorExtension
    A string containing the unique vendor storage identifier. The hint must match the actual value exactly.

rotational
    A boolean indicating whether the device should be a rotating disk (true) or not (false).

Example usage

     - name: master-0
       role: master
       bmc:
         address: ipmi://10.10.0.3:6203
         username: admin
         password: redhat
       bootMACAddress: de:ad:be:ef:00:40
       rootDeviceHints:
         deviceName: "/dev/sda"
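
Because a device must match every hint to be selected, hints can be combined to narrow the match. The following fragment is an illustrative sketch with hypothetical values, selecting the first non-rotational disk of at least 100 gigabytes:

       rootDeviceHints:
         minSizeGigabytes: 100
         rotational: false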

8.3.6.12. Creating the OpenShift Container Platform manifests

  1. Create the OpenShift Container Platform manifests.

    $ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
    WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
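
You can optionally verify the result; assuming the default ~/clusterconfigs directory used throughout this guide, the generated manifests and openshift subdirectories should now exist:

    $ ls ~/clusterconfigs/manifests ~/clusterconfigs/openshift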

8.3.6.13. Configuring NTP for disconnected clusters (optional)

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes.

OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server.

Procedure

  1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example

    variant: openshift
    version: 4.8.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all worker nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
  3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

    Butane config example

    variant: openshift
    version: 4.8.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml

8.3.6.14. Configuring network components to run on the control plane (optional)

You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes. When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes.

Procedure

  1. Change to the directory storing the install-config.yaml file:

    $ cd ~/clusterconfigs
  2. Switch to the manifests subdirectory:

    $ cd manifests
  3. Create a file named cluster-network-avoid-workers-99-config.yaml:

    $ touch cluster-network-avoid-workers-99-config.yaml
  4. Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 50-worker-fix-ipi-rwn
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/kubernetes/manifests/keepalived.yaml
              mode: 0644
              contents:
                source: data:,

    This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:

    • openshift-ingress-operator
    • keepalived
  5. Save the cluster-network-avoid-workers-99-config.yaml file.
  6. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/master: ""
  7. Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster.
  8. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true. Control plane nodes are not schedulable by default. For example:

    $ sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" clusterconfigs/manifests/cluster-scheduler-02-config.yml
    Note

    If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail.

8.3.7. Creating a disconnected registry (optional)

In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet.

A local, or mirrored, copy of the registry requires the following:

  • A certificate for the registry node. This can be a self-signed certificate.
  • A web server, served by a container running on a system.
  • An updated pull secret that contains the certificate and local repository information.
Note

Creating a disconnected registry on a registry node is optional. The subsequent sections are labeled "(optional)" because they contain steps you need to execute only when creating a disconnected registry on a registry node. Execute all of the subsections labeled "(optional)" when creating a disconnected registry on a registry node.

8.3.7.1. Preparing the registry node to host the mirrored registry (optional)

Make the following changes to the registry node.

Procedure

  1. Open the firewall port on the registry node.

    $ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt  --permanent
    $ sudo firewall-cmd --add-port=5000/tcp --zone=public   --permanent
    $ sudo firewall-cmd --reload
  2. Install the required packages for the registry node.

    $ sudo yum -y install python3 podman httpd httpd-tools jq
  3. Create the directory structure where the repository information will be held.

    $ sudo mkdir -p /opt/registry/{auth,certs,data}

8.3.7.2. Generating the self-signed certificate (optional)

Generate a self-signed certificate for the registry node and put it in the /opt/registry/certs directory.

Procedure

  1. Adjust the certificate information as appropriate.

    $ host_fqdn=$( hostname --long )
    $ cert_c="<Country Name>"   # Country Name (C, 2 letter code)
    $ cert_s="<State>"          # Certificate State (S)
    $ cert_l="<Locality>"       # Certificate Locality (L)
    $ cert_o="<Organization>"   # Certificate Organization (O)
    $ cert_ou="<Org Unit>"      # Certificate Organizational Unit (OU)
    $ cert_cn="${host_fqdn}"    # Certificate Common Name (CN)
    
    $ openssl req \
        -newkey rsa:4096 \
        -nodes \
        -sha256 \
        -keyout /opt/registry/certs/domain.key \
        -x509 \
        -days 365 \
        -out /opt/registry/certs/domain.crt \
        -addext "subjectAltName = DNS:${host_fqdn}" \
        -subj "/C=${cert_c}/ST=${cert_s}/L=${cert_l}/O=${cert_o}/OU=${cert_ou}/CN=${cert_cn}"
    Note

    When replacing <Country Name>, ensure that it only contains two letters. For example, US.

  2. Update the registry node’s ca-trust with the new certificate.

    $ sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
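
    As an optional sanity check, you can inspect the subject and validity period of the generated certificate with openssl:

    $ openssl x509 -in /opt/registry/certs/domain.crt -noout -subject -enddate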

8.3.7.3. Creating the registry podman container (optional)

The registry container uses the /opt/registry directory for certificates, authentication files, and to store its data files.

The registry container uses httpd and needs an htpasswd file for authentication.

Procedure

  1. Create an htpasswd file in /opt/registry/auth for the container to use.

    $ htpasswd -bBc /opt/registry/auth/htpasswd <user> <passwd>

    Replace <user> with the user name and <passwd> with the password.

  2. Create and start the registry container.

    $ podman create \
      --name ocpdiscon-registry \
      -p 5000:5000 \
      -e "REGISTRY_AUTH=htpasswd" \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
      -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \
      -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
      -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \
      -e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \
      -e "REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true" \
      -v /opt/registry/data:/var/lib/registry:z \
      -v /opt/registry/auth:/auth:z \
      -v /opt/registry/certs:/certs:z \
      docker.io/library/registry:2
    $ podman start ocpdiscon-registry
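
    As an optional check, query the registry’s v2 catalog endpoint with the htpasswd credentials created earlier; an empty repository list is the expected response at this point:

    $ curl -u <user>:<passwd> https://$host_fqdn:5000/v2/_catalog
    {"repositories":[]}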

8.3.7.4. Copy and update the pull-secret (optional)

Copy the pull secret file from the provisioner node to the registry node and modify it to include the authentication information for the new registry node.

Procedure

  1. Copy the pull-secret.txt file.

    $ scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt
  2. Update the host_fqdn environment variable with the fully qualified domain name of the registry node.

    $ host_fqdn=$( hostname --long )
  3. Update the b64auth environment variable with the base64 encoding of the http credentials used to create the htpasswd file.

    $ b64auth=$( echo -n '<username>:<passwd>' | openssl base64 )

    Replace <username> with the user name and <passwd> with the password.

  4. Set the AUTHSTRING environment variable to use the base64 authorization string. The $USER variable is an environment variable containing the name of the current user.

    $ AUTHSTRING="{\"$host_fqdn:5000\": {\"auth\": \"$b64auth\",\"email\": \"$USER@redhat.com\"}}"
  5. Update the pull-secret.txt file.

    $ jq ".auths += $AUTHSTRING" < pull-secret.txt > pull-secret-update.txt
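
    As an optional check, you can confirm that the new registry entry was merged into the pull secret by listing the authentication keys in the updated file:

    $ jq '.auths | keys' pull-secret-update.txt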

8.3.7.5. Mirroring the repository (optional)

Procedure

  1. Copy the oc binary from the provisioner node to the registry node.

    $ sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin
  2. Set the required environment variables.

    1. Set the release version:

      $ VERSION=<release_version>

      For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.

    2. Set the local registry name and host port:

      $ LOCAL_REG='<local_registry_host_name>:<local_registry_host_port>'

      For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

    3. Set the local repository name:

      $ LOCAL_REPO='<local_repository_name>'

      For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

  3. Mirror the remote install images to the local repository.

    $ /usr/local/bin/oc adm release mirror \
      -a pull-secret-update.txt \
      --from=$UPSTREAM_REPO \
      --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
      --to=$LOCAL_REG/$LOCAL_REPO

8.3.7.6. Modify the install-config.yaml file to use the disconnected registry (optional)

On the provisioner node, the install-config.yaml file should use the newly created pull secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node’s certificate and registry information.

Procedure

  1. Add the disconnected registry node’s certificate to the install-config.yaml file. The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces.

    $ echo "additionalTrustBundle: |" >> install-config.yaml
    $ sed -e 's/^/  /' /opt/registry/certs/domain.crt >> install-config.yaml
  2. Add the mirror information for the registry to the install-config.yaml file.

    $ echo "imageContentSources:" >> install-config.yaml
    $ echo "- mirrors:" >> install-config.yaml
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    $ echo "  source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml
    $ echo "- mirrors:" >> install-config.yaml
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    $ echo "  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml
    Note

    Replace registry.example.com with the registry’s fully qualified domain name.
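
    After running these commands, the end of the install-config.yaml file should resemble the following sketch, with your own certificate body and registry fully qualified domain name in place of the placeholders:

    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <certificate-body>
      -----END CERTIFICATE-----
    imageContentSources:
    - mirrors:
      - registry.example.com:5000/ocp4/openshift4
      source: quay.io/openshift-release-dev/ocp-release
    - mirrors:
      - registry.example.com:5000/ocp4/openshift4
      source: quay.io/openshift-release-dev/ocp-v4.0-art-dev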

8.3.8. Deploying routers on worker nodes

During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If the initial cluster has only one worker node, or if a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas.

Note

By default, the installer deploys two routers. If the cluster has at least two worker nodes, you can skip this section.

Note

If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. If the cluster has no worker nodes, you can skip this section.

Procedure

  1. Create a router-replicas.yaml file.

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: <num-of-router-pods>
      endpointPublishingStrategy:
        type: HostNetwork
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
    Note

    Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1. If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate.

  2. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory.

    $ cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml

8.3.9. Validation checklist for installation

  • ❏ OpenShift Container Platform installer has been retrieved.
  • ❏ OpenShift Container Platform installer has been extracted.
  • ❏ Required parameters for the install-config.yaml have been configured.
  • ❏ The hosts parameter for the install-config.yaml has been configured.
  • ❏ The bmc parameter for the install-config.yaml has been configured.
  • ❏ Conventions for the values configured in the bmc address field have been applied.
  • ❏ (optional) Created a disconnected registry.
  • ❏ (optional) Validated disconnected registry settings if in use.
  • ❏ (optional) Deployed routers on worker nodes.

8.3.10. Deploying the cluster via the OpenShift Container Platform installer

Run the OpenShift Container Platform installer:

$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster

8.3.11. Following the installation

During the deployment process, you can check the installation’s overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder.

$ tail -f /path/to/install-dir/.openshift_install.log

8.3.12. Verifying static IP address configuration

If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node’s network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address.

Note

The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing.

Verify the network configuration is working properly.

Procedure

  1. Check the network interface configuration on the node.
  2. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly.
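
One way to perform the first check without logging in to the node over SSH is to run nmcli through oc debug. This is a hedged sketch, assuming a node name taken from oc get nodes; the static connection created by the dispatcher script should appear in the output:

$ oc debug node/<node_name> -- chroot /host nmcli connection show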

8.3.13. Preparing to reinstall a cluster on bare metal

Before you reinstall a cluster on bare metal, you must perform cleanup operations.

Procedure

  1. Remove or reformat the disks for the bootstrap, control plane (also known as master) node, and worker nodes. If you are working in a hypervisor environment, you must add any disks you removed.
  2. Delete the artifacts that the previous installation generated:

    $ cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \
    .openshift_install.log .openshift_install_state.json
  3. Generate new manifests and Ignition config files. See "Creating the Kubernetes manifest and Ignition config files" for more information.
  4. Upload the new bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. This will overwrite the previous Ignition files.

8.4. Installer-provisioned post-installation configuration

After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.

8.4.1. Configuring NTP for disconnected clusters (optional)

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment.

OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server.

Procedure

  1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example

    variant: openshift
    version: 4.8.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all worker nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
  3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

    Butane config example

    variant: openshift
    version: 4.8.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
  5. Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes.

    $ oc apply -f 99-master-chrony-conf-override.yaml

    Example output

    machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created

  6. Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes.

    $ oc apply -f 99-worker-chrony-conf-override.yaml

    Example output

    machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created

  7. Check the status of the applied NTP settings.

    $ oc describe machineconfigpool
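
    To confirm on an individual node that chrony picked up the override, you can optionally run chronyc through oc debug; a sketch, substituting a real node name:

    $ oc debug node/<node_name> -- chroot /host chronyc sources

    On a worker node, the three control plane NTP servers should be listed as time sources.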

8.4.2. Enabling a provisioning network after installation

The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.

In OpenShift Container Platform 4.8 and later, you can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).

Prerequisites

  • A dedicated physical network must exist, connected to all worker and control plane nodes.
  • You must isolate the native, untagged physical network.
  • The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
  • You can omit the provisioningInterface setting in OpenShift Container Platform 4.9 to use the bootMACAddress configuration setting.

Procedure

  1. When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1.
  2. Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.
  3. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

    $ oc get provisioning -o yaml > enable-provisioning-nw.yaml
  4. Modify the provisioning CR file:

    $ vim ~/enable-provisioning-nw.yaml

    Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningOSDownloadURL, provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

    apiVersion: v1
    items:
    - apiVersion: metal3.io/v1alpha1
      kind: Provisioning
      metadata:
        name: provisioning-configuration
      spec:
        provisioningNetwork: 1
        provisioningOSDownloadURL: 2
        provisioningIP: 3
        provisioningNetworkCIDR: 4
        provisioningDHCPRange: 5
        provisioningInterface: 6
        watchAllNameSpaces: 7

    1 The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
    2 The provisioningOSDownloadURL is a valid HTTPS URL with a valid sha256 checksum that enables the Metal3 pod to download a qcow2 operating system image ending in .qcow2.gz or .qcow2.xz. This field is required whether the provisioning network is Managed, Unmanaged, or Disabled. For example: http://192.168.0.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha>.
    3 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
    4 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
    5 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
    6 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled. Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead.
    7 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
  5. Save the changes to the provisioning CR file.
  6. Apply the provisioning CR file to the cluster:

    $ oc apply -f enable-provisioning-nw.yaml
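
    You can optionally confirm that the change was accepted by reading the setting back from the cluster:

    $ oc get provisioning provisioning-configuration -o jsonpath='{.spec.provisioningNetwork}{"\n"}'
    Managed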

8.4.3. Configuring an external load balancer

You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.

Prerequisites

  • On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system.
  • Load balance the API port, 6443, between each of the control plane nodes.
  • Load balance the application ports, 443 and 80, between all of the compute nodes.
  • On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster.
  • Your load balancer must be able to access every machine in your cluster. Methods to allow this access include:

    • Attaching the load balancer to the cluster’s machine subnet.
    • Attaching floating IP addresses to machines that use the load balancer.
Important

External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.

Procedure

  1. Enable access to the cluster from your load balancer on ports 6443, 443, and 80.

    As an example, note this HAProxy configuration:

    A section of a sample HAProxy configuration

    ...
    listen my-cluster-api-6443
        bind 0.0.0.0:6443
        mode tcp
        balance roundrobin
        server my-cluster-master-2 192.0.2.2:6443 check
        server my-cluster-master-0 192.0.2.3:6443 check
        server my-cluster-master-1 192.0.2.1:6443 check
    listen my-cluster-apps-443
        bind 0.0.0.0:443
        mode tcp
        balance roundrobin
        server my-cluster-worker-0 192.0.2.6:443 check
        server my-cluster-worker-1 192.0.2.5:443 check
        server my-cluster-worker-2 192.0.2.4:443 check
    listen my-cluster-apps-80
        bind 0.0.0.0:80
        mode tcp
        balance roundrobin
        server my-cluster-worker-0 192.0.2.7:80 check
        server my-cluster-worker-1 192.0.2.9:80 check
        server my-cluster-worker-2 192.0.2.8:80 check

  2. Add records to your DNS server for the cluster API and apps over the load balancer. For example:

    <load_balancer_ip_address> api.<cluster_name>.<base_domain>
    <load_balancer_ip_address> apps.<cluster_name>.<base_domain>
  3. From a command line, use curl to verify that the external load balancer and DNS configuration are operational.

    1. Verify that the cluster API is accessible:

      $ curl https://<loadbalancer_ip_address>:6443/version --insecure

      If the configuration is correct, you receive a JSON object in response:

      {
        "major": "1",
        "minor": "11+",
        "gitVersion": "v1.11.0+ad103ed",
        "gitCommit": "ad103ed",
        "gitTreeState": "clean",
        "buildDate": "2019-01-09T06:44:10Z",
        "goVersion": "go1.10.3",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    2. Verify that cluster applications are accessible:

      Note

      You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser.

      $ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

      If the configuration is correct, you receive an HTTP response:

      HTTP/1.1 302 Found
      content-length: 0
      location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
      cache-control: no-cache

      HTTP/1.1 200 OK
      referrer-policy: strict-origin-when-cross-origin
      set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
      x-content-type-options: nosniff
      x-dns-prefetch-control: off
      x-frame-options: DENY
      x-xss-protection: 1; mode=block
      date: Tue, 17 Nov 2020 08:42:10 GMT
      content-type: text/html; charset=utf-8
      set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
      cache-control: private

8.5. Expanding the cluster

After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites.

Note

Expanding the cluster using Redfish virtual media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using Redfish virtual media.

8.5.1. Preparing the bare metal node

Expanding the cluster requires a DHCP server. Each node must have a DHCP reservation.

Reserving IP addresses so they become static IP addresses

Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses in the DHCP server with an infinite lease. After the installer provisions the node successfully, the dispatcher script will check the node’s network configuration. If the dispatcher script finds that the network configuration contains a DHCP infinite lease, it will recreate the connection as a static IP connection using the IP address from the DHCP infinite lease. NICs without DHCP infinite leases will remain unmodified.

Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.

Preparing the bare metal node requires executing the following procedure from the provisioner node.

Procedure

  1. Get the oc binary, if needed. It should already exist on the provisioner node.

    $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
    $ sudo cp oc /usr/local/bin
  2. Power off the bare metal node by using the baseboard management controller, and ensure it is off.
  3. Retrieve the user name and password of the bare metal node’s baseboard management controller. Then, create base64 strings from the user name and password:

    $ echo -ne "root" | base64
    $ echo -ne "password" | base64
  4. Create a configuration file for the bare metal node.

    $ vim bmh.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: openshift-worker-<num>-bmc-secret
    type: Opaque
    data:
      username: <base64-of-uid>
      password: <base64-of-pwd>
    ---
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: openshift-worker-<num>
    spec:
      online: true
      bootMACAddress: <NIC1-mac-address>
      bmc:
        address: <protocol>://<bmc-ip>
        credentialsName: openshift-worker-<num>-bmc-secret

    Replace <num> with the worker number of the bare metal node in the two name fields and the credentialsName field. Replace <base64-of-uid> with the base64 string of the user name. Replace <base64-of-pwd> with the base64 string of the password. Replace <NIC1-mac-address> with the MAC address of the bare metal node’s first NIC.

    See the BMC addressing section for additional BMC configuration options. Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc-ip> with the IP address of the bare metal node’s baseboard management controller.

    Note

    If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See Diagnosing a host duplicate MAC address for more information.

  5. Create the bare metal node.

    $ oc -n openshift-machine-api create -f bmh.yaml
    secret/openshift-worker-<num>-bmc-secret created
    baremetalhost.metal3.io/openshift-worker-<num> created

    Where <num> is the worker number.

  6. Power up and inspect the bare metal node.

    $ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number.

    NAME                 STATUS   PROVISIONING STATUS   CONSUMER   BMC                 HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true

8.5.2. Replacing a bare-metal control plane node

Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node.

Important

If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true.

Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have taken an etcd backup.

    Important

    Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section.

Procedure

  1. Ensure that the Bare Metal Operator is available:

    $ oc get clusteroperator baremetal

    Example output

    NAME        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    baremetal   4.8.0     True        False         False      3d15h

  2. Remove the old BareMetalHost and Machine objects:

    $ oc delete bmh -n openshift-machine-api <host_name>
    $ oc delete machine -n openshift-machine-api <machine_name>

    Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field.

    After you remove the BareMetalHost and Machine objects, the machine controller automatically deletes the Node object.

  3. Create the new BareMetalHost object and the secret to store the BMC credentials:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: control-plane-<num>-bmc-secret 1
      namespace: openshift-machine-api
    data:
      username: <base64_of_uid> 2
      password: <base64_of_pwd> 3
    type: Opaque
    ---
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: control-plane-<num> 4
      namespace: openshift-machine-api
    spec:
      automatedCleaningMode: disabled
      bmc:
        address: <protocol>://<bmc_ip> 5
        credentialsName: control-plane-<num>-bmc-secret 6
      bootMACAddress: <NIC1_mac_address> 7
      bootMode: UEFI
      externallyProvisioned: false
      hardwareProfile: unknown
      online: true
    EOF
    1 4 6 Replace <num> with the control plane number of the bare metal node in the name fields and the credentialsName field.
    2 Replace <base64_of_uid> with the base64 string of the user name.
    3 Replace <base64_of_pwd> with the base64 string of the password.
    5 Replace <protocol> with the BMC protocol, such as redfish, redfish-virtualmedia, idrac-virtualmedia, or others. Replace <bmc_ip> with the IP address of the bare metal node’s baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section.
    7 Replace <NIC1_mac_address> with the MAC address of the bare metal node’s first NIC.

    After the inspection is complete, the BareMetalHost object is created and available to be provisioned.

  4. View available BareMetalHost objects:

    $ oc get bmh -n openshift-machine-api

    Example output

    NAME                          STATE                    CONSUMER                   ONLINE   ERROR   AGE
    control-plane-1.example.com   available                control-plane-1            true             1h10m
    control-plane-2.example.com   externally provisioned   control-plane-2            true             4h53m
    control-plane-3.example.com   externally provisioned   control-plane-3            true             4h53m
    compute-1.example.com         provisioned              compute-1-ktmmx            true             4h53m
    compute-2.example.com         provisioned              compute-2-l2zmb            true             4h53m

    There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object.

  5. Create a Machine object:

    $ cat <<EOF | oc apply -f -
    apiVersion: machine.openshift.io/v1beta1
    kind: Machine
    metadata:
      annotations:
        metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1
      labels:
        machine.openshift.io/cluster-api-cluster: control-plane-<num> 2
        machine.openshift.io/cluster-api-machine-role: master
        machine.openshift.io/cluster-api-machine-type: master
      name: control-plane-<num> 3
      namespace: openshift-machine-api
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: baremetal.cluster.k8s.io/v1alpha1
          customDeploy:
            method: install_coreos
          hostSelector: {}
          image:
            checksum: ""
            url: ""
          kind: BareMetalMachineProviderSpec
          metadata:
            creationTimestamp: null
          userData:
            name: master-user-data-managed
    EOF
    1 2 3 Replace <num> with the control plane number of the bare metal node in the name, labels, and annotations fields.
  6. To view the BareMetalHost objects, run the following command:

    $ oc get bmh -A

    Example output

    NAME                          STATE                    CONSUMER                   ONLINE   ERROR   AGE
    control-plane-1.example.com   provisioned              control-plane-1            true             2h53m
    control-plane-2.example.com   externally provisioned   control-plane-2            true             5h53m
    control-plane-3.example.com   externally provisioned   control-plane-3            true             5h53m
    compute-1.example.com         provisioned              compute-1-ktmmx            true             5h53m
    compute-2.example.com         provisioned              compute-2-l2zmb            true             5h53m

  7. After the RHCOS installation, verify that the BareMetalHost is added to the cluster:

    $ oc get nodes

    Example output

    NAME                           STATUS    ROLES     AGE   VERSION
    control-plane-1.example.com    Ready     master    4m2s  v1.18.2
    control-plane-2.example.com    Ready     master    141m  v1.18.2
    control-plane-3.example.com    Ready     master    141m  v1.18.2
    compute-1.example.com          Ready     worker    87m   v1.18.2
    compute-2.example.com          Ready     worker    87m   v1.18.2

    Note

    After replacing the control plane node, the etcd pod running on the new node might be in CrashLoopBackOff status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information.

8.5.3. Diagnosing a duplicate MAC address when provisioning a new host in the cluster

If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host.

You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace.

Prerequisites

  • Install an OpenShift Container Platform cluster on bare metal.
  • Install the OpenShift Container Platform CLI oc.
  • Log in as a user with cluster-admin privileges.

Procedure

To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following:

  1. Get the bare-metal hosts running in the openshift-machine-api namespace:

    $ oc get bmh -n openshift-machine-api

    Example output

    NAME                 STATUS   PROVISIONING STATUS      CONSUMER
    openshift-master-0   OK       externally provisioned   openshift-zpwpq-master-0
    openshift-master-1   OK       externally provisioned   openshift-zpwpq-master-1
    openshift-master-2   OK       externally provisioned   openshift-zpwpq-master-2
    openshift-worker-0   OK       provisioned              openshift-zpwpq-worker-0-lv84n
    openshift-worker-1   OK       provisioned              openshift-zpwpq-worker-0-zd8lm
    openshift-worker-2   error    registering

  2. To see more detailed information about the status of the failing host, run the following command, replacing <bare_metal_host_name> with the name of the host:

    $ oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml

    Example output

    ...
    status:
      errorCount: 12
      errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1
      errorType: registration error
    ...

8.5.4. Provisioning the bare metal node

Provisioning the bare metal node requires executing the following procedure from the provisioner node.

Procedure

  1. Ensure the PROVISIONING STATUS is ready before provisioning the bare metal node.

    $ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER   BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true
  2. Get a count of the number of worker nodes.

    $ oc get nodes
    NAME                                       STATUS   ROLES    AGE   VERSION
    provisioner.openshift.example.com          Ready    master   30h   v1.16.2
    openshift-master-1.openshift.example.com   Ready    master   30h   v1.16.2
    openshift-master-2.openshift.example.com   Ready    master   30h   v1.16.2
    openshift-master-3.openshift.example.com   Ready    master   30h   v1.16.2
    openshift-worker-0.openshift.example.com   Ready    worker   30h   v1.16.2
    openshift-worker-1.openshift.example.com   Ready    worker   30h   v1.16.2
  3. Get the machine set.

    $ oc get machinesets -n openshift-machine-api
    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    ...
    openshift-worker-0.example.com      1         1         1       1           55m
    openshift-worker-1.example.com      1         1         1       1           55m
  4. Increase the number of worker nodes by one.

    $ oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api

    Replace <num> with the new number of worker nodes and <machineset> with the name of the machine set from the previous step. A concrete example follows this procedure.

  5. Check the status of the bare metal node.

    $ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number. The status changes from ready to provisioning.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       provisioning          openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true

    The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the status changes to provisioned.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       provisioned           openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true
  6. After provisioning completes, ensure the bare metal node is ready.

    $ oc get nodes
    NAME                                           STATUS   ROLES    AGE     VERSION
    provisioner.openshift.example.com              Ready    master   30h     v1.16.2
    openshift-master-1.openshift.example.com       Ready    master   30h     v1.16.2
    openshift-master-2.openshift.example.com       Ready    master   30h     v1.16.2
    openshift-master-3.openshift.example.com       Ready    master   30h     v1.16.2
    openshift-worker-0.openshift.example.com       Ready    worker   30h     v1.16.2
    openshift-worker-1.openshift.example.com       Ready    worker   30h     v1.16.2
    openshift-worker-<num>.openshift.example.com   Ready    worker   3m27s   v1.16.2

    You can also check the kubelet.

    $ ssh openshift-worker-<num>
    [kni@openshift-worker-<num>]$ journalctl -fu kubelet
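
For example, assuming the machine set names shown in step 3, the following command scales openshift-worker-0.example.com to two replicas, which provisions one additional worker node. The machine set name and replica count are illustrative:

$ oc scale --replicas=2 machineset openshift-worker-0.example.com -n openshift-machine-api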

8.6. Troubleshooting

8.6.1. Troubleshooting the installer workflow

Prior to troubleshooting the installation environment, it is critical to understand the overall flow of the installer-provisioned installation on bare metal. The diagrams below provide a troubleshooting flow with a step-by-step breakdown for the environment.

Flow-Diagram-1

Workflow 1 of 4 illustrates a troubleshooting workflow for when the install-config.yaml file has errors or the Red Hat Enterprise Linux CoreOS (RHCOS) images are inaccessible. Troubleshooting suggestions can be found at Troubleshooting install-config.yaml.

Flow-Diagram-2

Workflow 2 of 4 illustrates a troubleshooting workflow for bootstrap VM issues, bootstrap VMs that cannot boot up the cluster nodes, and inspecting logs. When installing an OpenShift Container Platform cluster without the provisioning network, this workflow does not apply.

Flow-Diagram-3

Workflow 3 of 4 illustrates a troubleshooting workflow for cluster nodes that will not PXE boot. If installing using Redfish virtual media, each node must meet minimum firmware requirements for the installer to deploy the node. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details.

Flow-Diagram-4

Workflow 4 of 4 illustrates a troubleshooting workflow from a non-accessible API to a validated installation.

8.6.2. Troubleshooting install-config.yaml

The install-config.yaml configuration file represents all of the nodes that are part of the OpenShift Container Platform cluster. The file contains the necessary options, including but not limited to apiVersion, baseDomain, imageContentSources, and the virtual IP addresses. If errors occur early in the deployment of the OpenShift Container Platform cluster, the errors are likely in the install-config.yaml configuration file.
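
For orientation, the following minimal excerpt shows the kinds of fields worth double-checking. The values are illustrative placeholders, not a complete or validated configuration:

apiVersion: v1
baseDomain: example.com
metadata:
  name: <cluster-name>
platform:
  baremetal:
    apiVIP: <api-vip>
    ingressVIP: <ingress-vip>
    hosts:
      - name: openshift-master-0
        bootMACAddress: <nic1-mac-address>
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>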

Procedure

  1. Use the guidelines in YAML-tips.
  2. Verify the YAML syntax is correct using syntax-check.
  3. Verify the Red Hat Enterprise Linux CoreOS (RHCOS) QEMU images are properly defined and accessible via the URL provided in the install-config.yaml file. For example:

    $ curl -s -o /dev/null -I -w "%{http_code}\n" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.x86_64.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7

    If the output is 200, there is a valid response from the webserver storing the bootstrap VM image.

8.6.3. Bootstrap VM issues

The OpenShift Container Platform installation program spawns a bootstrap node virtual machine, which handles provisioning the OpenShift Container Platform cluster nodes.

Procedure

  1. About 10 to 15 minutes after triggering the installation program, check that the bootstrap VM is operational using the virsh command:

    $ sudo virsh list
     Id    Name                           State
     --------------------------------------------
     12    openshift-xf6fq-bootstrap      running
    Note

    The name of the bootstrap VM is always the cluster name followed by a random set of characters and ending in the word "bootstrap."

    If the bootstrap VM is not running after 10-15 minutes, troubleshoot why it is not running. Possible issues include:

  2. Verify libvirtd is running on the system:

    $ systemctl status libvirtd
    ● libvirtd.service - Virtualization daemon
       Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago
         Docs: man:libvirtd(8)
               https://libvirt.org
     Main PID: 9850 (libvirtd)
        Tasks: 20 (limit: 32768)
       Memory: 74.8M
       CGroup: /system.slice/libvirtd.service
               ├─ 9850 /usr/sbin/libvirtd

    If the bootstrap VM is operational, log in to it.

  3. Use the virsh console command to find the IP address of the bootstrap VM:

    $ sudo virsh console example.com
    Connected to domain example.com
    Escape character is ^]
    Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3
    SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519)
    SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA)
    SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA)
    ens3:  fd35:919d:4042:2:c7ed:9a9f:a9ec:7
    ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f
    localhost login:
    Important

    When deploying an OpenShift Container Platform cluster without the provisioning network, you must use a public IP address and not a private IP address like 172.22.0.2.

  4. After you obtain the IP address, log in to the bootstrap VM using the ssh command:

    Note

    In the console output of the previous step, you can use the IPv6 IP address provided by ens3 or the IPv4 IP address provided by ens4.

    $ ssh core@172.22.0.2

If you are not successful logging in to the bootstrap VM, you have likely encountered one of the following scenarios:

  • You cannot reach the 172.22.0.0/24 network. Verify the network connectivity between the provisioner and the provisioning network bridge. This issue might occur if you are using a provisioning network; see the sketch after this list for a quick check.
  • You cannot reach the bootstrap VM through the public network. When attempting to SSH via the baremetal network, verify connectivity on the provisioner host, specifically around the baremetal network bridge.
  • You encountered Permission denied (publickey,password,keyboard-interactive). When attempting to access the bootstrap VM, a Permission denied error might occur. Verify that the SSH key for the user attempting to log in to the VM is set within the install-config.yaml file.
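
As a quick connectivity check on the provisioner host, confirm that the expected bridges exist and hold addresses. The bridge names provisioning and baremetal are the defaults and are assumed here:

$ ip -br addr show dev provisioning
$ ip -br addr show dev baremetal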

8.6.3.1. Bootstrap VM cannot boot up the cluster nodes

During the deployment, it is possible for the bootstrap VM to fail to boot the cluster nodes, which prevents the VM from provisioning the nodes with the RHCOS image. This scenario can arise due to:

  • A problem with the install-config.yaml file.
  • Issues with out-of-band network access when using the baremetal network.

To verify the issue, there are three containers related to ironic:

  • ironic-api
  • ironic-conductor
  • ironic-inspector

Procedure

  1. Log in to the bootstrap VM:

    $ ssh core@172.22.0.2
  2. To check the container logs, execute the following:

    [core@localhost ~]$ sudo podman logs -f <container-name>

    Replace <container-name> with one of ironic-api, ironic-conductor, or ironic-inspector. If you encounter an issue where the control plane nodes are not booting up via PXE, check the ironic-conductor pod. The ironic-conductor pod contains the most detail about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI.

Potential reason

The cluster nodes might be in the ON state when deployment started.

Solution

Power off the OpenShift Container Platform cluster nodes before you begin the installation over IPMI:

$ ipmitool -I lanplus -U root -P <password> -H <out-of-band-ip> power off

8.6.3.2. Inspecting logs

When experiencing issues downloading or accessing the RHCOS images, first verify that the URL is correct in the install-config.yaml configuration file.

Example of internal webserver hosting RHCOS images

bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.x86_64.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c
clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.x86_64.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0

The ipa-downloader and coreos-downloader containers download resources from a webserver or from the external quay.io registry, whichever the install-config.yaml configuration file specifies. Verify that the following two containers are up and running, and inspect their logs as needed:

  • ipa-downloader
  • coreos-downloader

Procedure

  1. Log in to the bootstrap VM:

    $ ssh core@172.22.0.2
  2. Check the status of the ipa-downloader and coreos-downloader containers within the bootstrap VM:

    [core@localhost ~]$ sudo podman logs -f ipa-downloader
    [core@localhost ~]$ sudo podman logs -f coreos-downloader

    If the bootstrap VM cannot access the URL to the images, use the curl command to verify that the VM can access the images.

  3. To inspect the bootkube logs, which indicate whether all the containers launched during the deployment phase, execute the following:

    [core@localhost ~]$ journalctl -xe
    [core@localhost ~]$ journalctl -b -f -u bootkube.service
  4. Verify all the pods, including dnsmasq, mariadb, httpd, and ironic, are running:

    [core@localhost ~]$ sudo podman ps
  5. If there are issues with the pods, check the logs of the containers with issues. To check the logs of the ironic-api container, execute the following:

    [core@localhost ~]$ sudo podman logs ironic-api

8.6.4. Cluster nodes will not PXE boot

When OpenShift Container Platform cluster nodes do not PXE boot, execute the following checks on the cluster nodes that fail to PXE boot. This procedure does not apply when installing an OpenShift Container Platform cluster without the provisioning network.

Procedure

  1. Check the network connectivity to the provisioning network.
  2. Ensure PXE is enabled on the NIC for the provisioning network and PXE is disabled for all other NICs.
  3. Verify that the install-config.yaml configuration file has the proper hardware profile and boot MAC address for the NIC connected to the provisioning network. For example:

    Control plane node settings

    bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
    hardwareProfile: default          #control plane node settings

    Worker node settings

    bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC
    hardwareProfile: unknown          #worker node settings

8.6.5. The API is not accessible

When the cluster is running and clients cannot access the API, domain name resolution issues might impede access to the API.

Procedure

  1. Hostname Resolution: Check the cluster nodes to ensure they have a fully qualified domain name, and not just localhost.localdomain. For example:

    $ hostname

    If a hostname is not set, set the correct hostname. For example:

    $ hostnamectl set-hostname <hostname>
  2. Incorrect Name Resolution: Ensure that each node has the correct name resolution in the DNS server using dig and nslookup. For example:

    $ dig api.<cluster-name>.example.com
    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster-name>.example.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good)
    ;; QUESTION SECTION:
    ;api.<cluster-name>.example.com. IN A
    
    ;; ANSWER SECTION:
    api.<cluster-name>.example.com. 10800 IN	A 10.19.13.86
    
    ;; AUTHORITY SECTION:
    <cluster-name>.example.com. 10800 IN NS	<cluster-name>.example.com.
    
    ;; ADDITIONAL SECTION:
    <cluster-name>.example.com. 10800 IN A	10.19.14.247
    
    ;; Query time: 0 msec
    ;; SERVER: 10.19.14.247#53(10.19.14.247)
    ;; WHEN: Tue May 19 20:30:59 UTC 2020
    ;; MSG SIZE  rcvd: 140

    The output in the foregoing example indicates that the appropriate IP address for the api.<cluster-name>.example.com VIP is 10.19.13.86. This IP address should reside on the baremetal network.

8.6.6. Cleaning up previous installations

In the event of a previous failed deployment, remove the artifacts from the failed attempt before attempting to deploy OpenShift Container Platform again.

Procedure

  1. Power off all bare metal nodes prior to installing the OpenShift Container Platform cluster:

    $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
  2. Remove all old bootstrap resources if any are left over from a previous deployment attempt:

    for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk '{print $2}');
    do
      sudo virsh destroy $i;
      sudo virsh undefine $i;
      sudo virsh vol-delete $i --pool $i;
      sudo virsh vol-delete $i.ign --pool $i;
      sudo virsh pool-destroy $i;
      sudo virsh pool-undefine $i;
    done
  3. Remove the following from the clusterconfigs directory to prevent Terraform from failing:

    $ rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json

8.6.7. Issues with creating the registry

When creating a disconnected registry, you might encounter a "User Not Authorized" error when attempting to mirror the registry. This error might occur if you fail to append the new authentication to the existing pull-secret.txt file.

Procedure

  1. Check to ensure authentication is successful:

    $ /usr/local/bin/oc adm release mirror \
      -a pull-secret-update.json \
      --from=$UPSTREAM_REPO \
      --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
      --to=$LOCAL_REG/$LOCAL_REPO
    Note

    Example values of the variables used to mirror the install images:

    UPSTREAM_REPO=${RELEASE_IMAGE}
    LOCAL_REG=<registry_FQDN>:<registry_port>
    LOCAL_REPO='ocp4/openshift4'

    The values of RELEASE_IMAGE and VERSION were set during the Retrieving OpenShift Installer step of the Setting up the environment for an OpenShift installation section.

  2. After mirroring the registry, confirm that you can access it in your disconnected environment:

    $ curl -k -u <user>:<password> https://registry.example.com:<registry-port>/v2/_catalog
    {"repositories":["<Repo-Name>"]}

8.6.8. Miscellaneous issues

8.6.8.1. Addressing the runtime network not ready error

After the deployment of a cluster you might receive the following error:

`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`

The Cluster Network Operator is responsible for deploying the networking components in response to a special object created by the installer. It runs very early in the installation process, after the control plane (master) nodes have come up, but before the bootstrap control plane has been torn down. This error can be indicative of more subtle installer issues, such as long delays in bringing up control plane (master) nodes or issues with apiserver communication.
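
Before inspecting individual pods, a quick look at the network cluster Operator can show whether it has progressed at all. This supplementary check is not part of the documented procedure:

$ oc get clusteroperator network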

Procedure

  1. Inspect the pods in the openshift-network-operator namespace:

    $ oc get all -n openshift-network-operator
    NAME                                    READY STATUS            RESTARTS   AGE
    pod/network-operator-69dfd7b577-bg89v   0/1   ContainerCreating 0          149m
  2. On the provisioner node, determine that the network configuration exists:

    $ kubectl get network.config.openshift.io cluster -oyaml
    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      serviceNetwork:
      - 172.30.0.0/16
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN

    If it does not exist, the installer did not create it. To determine why the installer did not create it, execute the following:

    $ openshift-install create manifests
  3. Check that the network-operator is running:

    $ kubectl -n openshift-network-operator get pods
  4. Retrieve the logs:

    $ kubectl -n openshift-network-operator logs -l "name=network-operator"

    On high availability clusters with three or more control plane (master) nodes, the Operator will perform leader election and all other Operators will sleep. For additional details, see Troubleshooting.

8.6.8.2. Cluster nodes not getting the correct IPv6 address over DHCP

If the cluster nodes are not getting the correct IPv6 address over DHCP, check the following:

  1. Ensure the reserved IPv6 addresses reside outside the DHCP range.
  2. In the IP address reservation on the DHCP server, ensure the reservation specifies the correct DHCP Unique Identifier (DUID). For example:

    # This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC
    id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]
  3. Ensure that route announcements are working.
  4. Ensure that the DHCP server is listening on the required interfaces serving the IP address ranges.

8.6.8.3. Cluster nodes not getting the correct hostname over DHCP

During IPv6 deployment, cluster nodes must get their hostname over DHCP. Sometimes NetworkManager does not assign the hostname immediately. A control plane (master) node might report an error such as:

Failed Units: 2
  NetworkManager-wait-online.service
  nodeip-configuration.service

This error indicates that the cluster node likely booted without first receiving a hostname from the DHCP server, which causes kubelet to boot with a localhost.localdomain hostname. To address the error, force the node to renew the hostname.

Procedure

  1. Retrieve the hostname:

    [core@master-X ~]$ hostname

    If the hostname is localhost, proceed with the following steps.

    Note

    Where X is the control plane node (also known as the master node) number.

  2. Force the cluster node to renew the DHCP lease:

    [core@master-X ~]$ sudo nmcli con up "<bare-metal-nic>"

    Replace <bare-metal-nic> with the wired connection corresponding to the baremetal network.

  3. Check the hostname again:

    [core@master-X ~]$ hostname
  4. If the hostname is still localhost.localdomain, restart NetworkManager:

    [core@master-X ~]$ sudo systemctl restart NetworkManager
  5. If the hostname is still localhost.localdomain, wait a few minutes and check again. If the hostname remains localhost.localdomain, repeat the previous steps.
  6. Restart the nodeip-configuration service:

    [core@master-X ~]$ sudo systemctl restart nodeip-configuration.service

    This service reconfigures the kubelet service with the correct hostname references.

  7. Reload the unit file definitions, because the kubelet unit changed in the previous step:

    [core@master-X ~]$ sudo systemctl daemon-reload
  8. Restart the kubelet service:

    [core@master-X ~]$ sudo systemctl restart kubelet.service
  9. Ensure kubelet booted with the correct hostname:

    [core@master-X ~]$ sudo journalctl -fu kubelet.service

If the cluster node is not getting the correct hostname over DHCP after the cluster is up and running, such as during a reboot, the cluster will have a pending csr. Do not approve a csr, or other issues might arise.

Addressing a csr

  1. Get CSRs on the cluster:

    $ oc get csr
  2. Verify if a pending csr contains Subject Name: localhost.localdomain:

    $ oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text
  3. Remove any csr that contains Subject Name: localhost.localdomain:

    $ oc delete csr <wrong_csr>

8.6.8.4. Routes do not reach endpoints

During the installation process, it is possible to encounter a Virtual Router Redundancy Protocol (VRRP) conflict. This conflict might occur if a previously used OpenShift Container Platform node that was once part of a cluster deployment using a specific cluster name is still running, but is not part of the current OpenShift Container Platform cluster deployment using that same cluster name. For example, a cluster was deployed using the cluster name openshift, deploying three control plane (master) nodes and three worker nodes. Later, a separate install used the same cluster name openshift, but this redeployment installed only three control plane (master) nodes, leaving the three worker nodes from the previous deployment in an ON state. This might cause a Virtual Router Identifier (VRID) conflict and a VRRP conflict.

  1. Get the route:

    $ oc get route oauth-openshift
  2. Check the service endpoint:

    $ oc get svc oauth-openshift
    NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    oauth-openshift   ClusterIP   172.30.19.162   <none>        443/TCP   59m
  3. Attempt to reach the service from a control plane (master) node:

    [core@master0 ~]$ curl -k https://172.30.19.162
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {
      },
      "status": "Failure",
      "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
      "reason": "Forbidden",
      "details": {
      },
      "code": 403
  4. Identify the authentication-operator errors from the provisioner node:

    $ oc logs deployment/authentication-operator -n openshift-authentication-operator
    Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"225c5bd5-b368-439b-9155-5fd3c0459d98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting"

Solution

  1. Ensure that the cluster name for every deployment is unique so that no conflict occurs.
  2. Turn off all rogue nodes that are not part of the current cluster deployment but use the same cluster name. Otherwise, the authentication pod of the OpenShift Container Platform cluster might never start successfully.

8.6.8.5. Failed Ignition during Firstboot

During the first boot, the Ignition configuration may fail.

Procedure

  1. Connect to the node where the Ignition configuration failed:

    Failed Units: 1
      machine-config-daemon-firstboot.service
  2. Restart the machine-config-daemon-firstboot service:

    [core@worker-X ~]$ sudo systemctl restart machine-config-daemon-firstboot.service

8.6.8.6. NTP out of sync

The deployment of OpenShift Container Platform clusters depends on NTP synchronized clocks among the cluster nodes. Without synchronized clocks, the deployment may fail due to clock drift if the time difference is greater than two seconds.
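
To inspect time synchronization on a node, the chrony client tool reports the current sources and offset. This supplementary check assumes chronyd is the configured NTP service:

$ sudo chronyc tracking
$ sudo chronyc sources -v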

Procedure

  1. Check for differences in the AGE of the cluster nodes. For example:

    $ oc get nodes
    NAME                         STATUS   ROLES    AGE    VERSION
    master-0.cloud.example.com   Ready    master   145m   v1.16.2
    master-1.cloud.example.com   Ready    master   135m   v1.16.2
    master-2.cloud.example.com   Ready    master   145m   v1.16.2
    worker-2.cloud.example.com   Ready    worker   100m   v1.16.2
  2. Check for inconsistent timing delays due to clock drift. For example:

    $ oc get bmh -n openshift-machine-api
    master-1   error registering master-1  ipmi://<out-of-band-ip>
    $ sudo timedatectl
                   Local time: Tue 2020-03-10 18:20:02 UTC
               Universal time: Tue 2020-03-10 18:20:02 UTC
                     RTC time: Tue 2020-03-10 18:36:53
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: no
                  NTP service: active
              RTC in local TZ: no

Addressing clock drift in existing clusters

  1. Create a Butane config file that includes the contents of the chrony.conf file to be delivered to the nodes. In the following example, create 99-master-chrony.bu to add the file to the control plane nodes. You can modify the file for worker nodes or repeat this procedure for the worker role.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    variant: openshift
    version: 4.8.0
    metadata:
      name: 99-master-chrony
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
      - path: /etc/chrony.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            server <NTP-server> iburst
            stratumweight 0
            driftfile /var/lib/chrony/drift
            rtcsync
            makestep 10 3
            bindcmdaddress 127.0.0.1
            bindcmdaddress ::1
            keyfile /etc/chrony.keys
            commandkey 1
            generatecommandkey
            noclientlog
            logchange 0.5
            logdir /var/log/chrony
    Replace <NTP-server> with the IP address of the NTP server.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony.yaml, containing the configuration to be delivered to the nodes:

    $ butane 99-master-chrony.bu -o 99-master-chrony.yaml
  3. Apply the MachineConfig object file. See the rollout check after this procedure to watch the change propagate:

    $ oc apply -f 99-master-chrony.yaml
  4. Ensure the System clock synchronized value is yes:

    $ sudo timedatectl
                   Local time: Tue 2020-03-10 19:10:02 UTC
               Universal time: Tue 2020-03-10 19:10:02 UTC
                     RTC time: Tue 2020-03-10 19:36:53
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: active
              RTC in local TZ: no

    To set up clock synchronization prior to deployment, generate the manifest files and add the 99-master-chrony.yaml file to the openshift directory. For example:

    $ cp 99-master-chrony.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml

    Then, continue to create the cluster.
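
After you apply the MachineConfig object in step 3, the Machine Config Operator rolls the change out to the targeted pool. You can watch the rollout complete; the pool name master matches the role label used in the Butane config:

$ oc get machineconfigpool master --watch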

8.6.9. Reviewing the installation

After installation, ensure the installer deployed the nodes and pods successfully.

Procedure

  1. When the OpenShift Container Platform cluster nodes are installed appropriately, the Ready state is seen within the STATUS column:

    $ oc get nodes
    NAME                   STATUS   ROLES           AGE  VERSION
    master-0.example.com   Ready    master,worker   4h   v1.16.2
    master-1.example.com   Ready    master,worker   4h   v1.16.2
    master-2.example.com   Ready    master,worker   4h   v1.16.2
  2. Confirm the installer deployed all pods successfully. The following command filters running and completed pods out of the output, so any pods that remain require attention.

    $ oc get pods --all-namespaces | grep -iv running | grep -iv complete