16.3. Setting up the environment for an OpenShift installation


16.3.1. Installing RHEL on the provisioner node

With the configuration of the prerequisites complete, the next step is to install RHEL 8.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.

Perform the following steps to prepare the environment.

Procedure

  1. Log in to the provisioner node via ssh.
  2. Create a non-root user (kni) and provide that user with sudo privileges:

    # useradd kni
    # passwd kni
    # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
    # chmod 0440 /etc/sudoers.d/kni
  3. Create an ssh key for the new user:

    # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
  4. Log in as the new user on the provisioner node:

    # su - kni
  5. Use Red Hat Subscription Manager to register the provisioner node:

    $ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
    $ sudo subscription-manager repos --enable=rhel-8-for-<architecture>-appstream-rpms --enable=rhel-8-for-<architecture>-baseos-rpms
    Note

    For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.

  6. Install the following packages:

    $ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
  7. Add the newly created user to the libvirt group:

    $ sudo usermod --append --groups libvirt <user>
  8. Start firewalld and enable the http service:

    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --zone=public --add-service=http --permanent
    $ sudo firewall-cmd --reload
  9. Start and enable the libvirtd service:

    $ sudo systemctl enable libvirtd --now
  10. Create the default storage pool and start it:

    $ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
    $ sudo virsh pool-start default
    $ sudo virsh pool-autostart default
  11. Create a pull-secret.txt file:

    $ vim pull-secret.txt

    In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure. Click Copy pull secret. Paste the contents into the pull-secret.txt file and save the contents in the kni user’s home directory.
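
    Because jq was installed in an earlier step, an optional quick check, assuming the file was saved as ~/pull-secret.txt, is to confirm that the pull secret parses as valid JSON:

    $ jq . ~/pull-secret.txt > /dev/null && echo "pull-secret.txt contains valid JSON"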

16.3.3. Configuring networking

Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a baremetal bridge and network, and an optional provisioning bridge and network.

Note

You can also configure networking from the web console.

Procedure

  1. Export the baremetal network NIC name:

    $ export PUB_CONN=<baremetal_nic_name>
  2. Configure the baremetal network:

    Note

    The SSH connection might disconnect after executing these steps.

    $ sudo nohup bash -c "
        nmcli con down \"$PUB_CONN\"
        nmcli con delete \"$PUB_CONN\"
        # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
        nmcli con down \"System $PUB_CONN\"
        nmcli con delete \"System $PUB_CONN\"
        nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no
        nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
        pkill dhclient;dhclient baremetal
    "
  3. Optional: If you are deploying with a provisioning network, export the provisioning network NIC name:

    $ export PROV_CONN=<prov_nic_name>
  4. Optional: If you are deploying with a provisioning network, configure the provisioning network:

    $ sudo nohup bash -c "
        nmcli con down \"$PROV_CONN\"
        nmcli con delete \"$PROV_CONN\"
        nmcli connection add ifname provisioning type bridge con-name provisioning
        nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
        nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
        nmcli con down provisioning
        nmcli con up provisioning
    "
    Note

    The ssh connection might disconnect after executing these steps.

    The IPv6 address can be any address as long as it is not routable via the baremetal network.

    Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.

  5. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection:

    $ nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual
  6. ssh back into the provisioner node (if required):

    # ssh kni@provisioner.<cluster-name>.<domain>
  7. Verify the connection bridges have been properly created:

    $ sudo nmcli con show
    NAME               UUID                                  TYPE      DEVICE
    baremetal          4d5133a5-8351-4bb9-bfd4-3af264801530  bridge    baremetal
    provisioning       43942805-017f-4d7d-a2c2-7cb3324482ed  bridge    provisioning
    virbr0             d9bca40f-eee1-410b-8879-a2d4bb0465e7  bridge    virbr0
    bridge-slave-eno1  76a8ed50-c7e5-4999-b4f6-6d9014dd0812  ethernet  eno1
    bridge-slave-eno2  f31c3353-54b7-48de-893a-02d2b34c4736  ethernet  eno2
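
As an optional check, assuming the baremetal bridge obtains its address over DHCP through the dhclient call above, and that the provisioning bridge, if configured, uses the static addresses you set, you can confirm that the bridges received addresses before continuing:

$ ip -4 addr show baremetal
$ ip addr show provisioning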

16.3.4. Retrieving the OpenShift Container Platform installer

Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform:

$ export VERSION=stable-4.12
$ export RELEASE_ARCH=<architecture>
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/$RELEASE_ARCH/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
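
If the curl command succeeded, the RELEASE_IMAGE variable contains the release image pull specification; as an optional check, you can print it to confirm before continuing:

$ echo "${RELEASE_IMAGE}"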

16.3.5. Extracting the OpenShift Container Platform installer

After retrieving the installer, the next step is to extract it.

Procedure

  1. Set the environment variables:

    $ export cmd=openshift-baremetal-install
    $ export pullsecret_file=~/pull-secret.txt
    $ export extract_dir=$(pwd)
  2. Get the oc binary:

    $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
  3. Extract the installer:

    $ sudo cp oc /usr/local/bin
    $ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
    $ sudo cp openshift-baremetal-install /usr/local/bin
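
As an optional sanity check, you can confirm that both binaries are on the PATH and report their versions:

$ oc version --client
$ openshift-baremetal-install version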

16.3.6. Optional: Creating an RHCOS image cache

To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth.

Note

The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload.

If you are running the installation program on a network with limited bandwidth and the RHCOS image download takes more than 15 to 20 minutes, the installation program will time out. Caching images on a web server helps in such scenarios.

Warning

If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.

Run a container that serves the images.

Procedure

  1. Install podman:

    $ sudo dnf install -y podman
  2. Open firewall port 8080 to be used for RHCOS image caching:

    $ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
    $ sudo firewall-cmd --reload
  3. Create a directory to store the bootstrapOSImage:

    $ mkdir /home/kni/rhcos_image_cache
  4. Set the appropriate SELinux context for the newly created directory:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
    $ sudo restorecon -Rv /home/kni/rhcos_image_cache/
  5. Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM:

    $ export RHCOS_QEMU_URI=$(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "$(arch)" '.architectures[$ARCH].artifacts.qemu.formats["qcow2.gz"].disk.location')
  6. Get the name of the image that the installation program will deploy on the bootstrap VM:

    $ export RHCOS_QEMU_NAME=${RHCOS_QEMU_URI##*/}
  7. Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM:

    $ export RHCOS_QEMU_UNCOMPRESSED_SHA256=$(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "$(arch)" '.architectures[$ARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]')
  8. Download the image and place it in the /home/kni/rhcos_image_cache directory:

    $ curl -L ${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_NAME}
  9. Confirm that the SELinux type of the new file is httpd_sys_content_t:

    $ ls -Z /home/kni/rhcos_image_cache
  10. Create the pod:

    $ podman run -d --name rhcos_image_cache \
        -v /home/kni/rhcos_image_cache:/var/www/html \
        -p 8080:8080/tcp \
        quay.io/centos7/httpd-24-centos7:latest

    This command creates a caching webserver with the name rhcos_image_cache. The pod serves the bootstrapOSImage image in the install-config.yaml file for deployment.
  11. Generate the bootstrapOSImage configuration:

    $ export BAREMETAL_IP=$(ip addr show dev baremetal | awk '/inet /{print $2}' | cut -d"/" -f1)
    $ export BOOTSTRAP_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_QEMU_NAME}?sha256=${RHCOS_QEMU_UNCOMPRESSED_SHA256}"
    $ echo "    bootstrapOSImage=${BOOTSTRAP_OS_IMAGE}"
  12. Add the required configuration to the install-config.yaml file under platform.baremetal:

    platform:
      baremetal:
        bootstrapOSImage: <bootstrap_os_image>

    Replace <bootstrap_os_image> with the value of $BOOTSTRAP_OS_IMAGE.

    See the "Configuring the install-config.yaml file" section for additional details.

16.3.7. Configuring the install-config.yaml file

16.3.7.1. Configuring the install-config.yaml file

The install-config.yaml file requires some additional details. Most of the information describes the available hardware in enough detail that the installation program and the resulting cluster can fully manage it.

Note

The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload.

  1. Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey:

    apiVersion: v1
    baseDomain: <domain>
    metadata:
      name: <cluster_name>
    networking:
      machineNetwork:
      - cidr: <public_cidr>
      networkType: OVNKubernetes
    compute:
    - name: worker
      replicas: 2 # 1
    controlPlane:
      name: master
      replicas: 3
      platform:
        baremetal: {}
    platform:
      baremetal:
        apiVIPs:
          - <api_ip>
        ingressVIPs:
          - <wildcard_ip>
        provisioningNetworkCIDR: <CIDR>
        bootstrapExternalStaticIP: <bootstrap_static_ip_address> # 2
        bootstrapExternalStaticGateway: <bootstrap_static_gateway> # 3
        hosts:
          - name: openshift-master-0
            role: master
            bmc:
              address: ipmi://<out_of_band_ip> # 4
              username: <user>
              password: <password>
            bootMACAddress: <NIC1_mac_address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # 5
          - name: <openshift_master_1>
            role: master
            bmc:
              address: ipmi://<out_of_band_ip> # 6
              username: <user>
              password: <password>
            bootMACAddress: <NIC1_mac_address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # 7
          - name: <openshift_master_2>
            role: master
            bmc:
              address: ipmi://<out_of_band_ip> # 8
              username: <user>
              password: <password>
            bootMACAddress: <NIC1_mac_address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # 9
          - name: <openshift_worker_0>
            role: worker
            bmc:
              address: ipmi://<out_of_band_ip> # 10
              username: <user>
              password: <password>
            bootMACAddress: <NIC1_mac_address>
          - name: <openshift_worker_1>
            role: worker
            bmc:
              address: ipmi://<out_of_band_ip>
              username: <user>
              password: <password>
            bootMACAddress: <NIC1_mac_address>
            rootDeviceHints:
              deviceName: "/dev/disk/by-id/<disk_id>" # 11
    pullSecret: '<pull_secret>'
    sshKey: '<ssh_pub_key>'

    1
    Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2. Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker.
    2
    When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the baremetal network.
    3
    When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the baremetal network.
    4 6 8 10
    See the BMC addressing sections for more options.
    5 7 9 11
    Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2.
Note

Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats.

  1. Create a directory to store the cluster configuration:

    $ mkdir ~/clusterconfigs
  2. Copy the install-config.yaml file to the new directory:

    $ cp install-config.yaml ~/clusterconfigs
  3. Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster. To power off several hosts at once, see the loop sketch after this procedure:

    $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
  4. Remove old bootstrap resources if any are left over from a previous deployment attempt:

    for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
    do
      sudo virsh destroy $i;
      sudo virsh undefine $i;
      sudo virsh vol-delete $i --pool $i;
      sudo virsh vol-delete $i.ign --pool $i;
      sudo virsh pool-destroy $i;
      sudo virsh pool-undefine $i;
    done
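
If several hosts share the same BMC credentials, a small loop such as the following sketch, with placeholder BMC addresses that you must replace, can power them all off in one pass:

for bmc in <out_of_band_ip_1> <out_of_band_ip_2> <out_of_band_ip_3>; do
  ipmitool -I lanplus -U <user> -P <password> -H "$bmc" power off
done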

16.3.7.2. Additional install-config parameters

See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.

Table 16.4. Required parameters

baseDomain
    The domain name for the cluster. For example, example.com.

bootMode
    Default: UEFI. The boot mode for a node. Options are legacy, UEFI, and UEFISecureBoot. If bootMode is not set, Ironic sets it while inspecting the node.

bootstrapExternalStaticIP
    The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the baremetal network.

bootstrapExternalStaticGateway
    The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the baremetal network.

sshKey
    The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node.

pullSecret
    The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node.

metadata.name
    The name to be given to the OpenShift Container Platform cluster. For example, openshift.

networking.machineNetwork.cidr
    The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24.

compute.name
    The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. For example, worker.

compute.replicas
    Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster.

controlPlane.name
    The OpenShift Container Platform cluster requires a name for control plane (master) nodes. For example, master.

controlPlane.replicas
    Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster.

provisioningNetworkInterface
    The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC.

defaultMachinePlatform
    The default configuration used for machine pools without a platform configuration.

apiVIPs
    (Optional) The virtual IP address for Kubernetes API communication.

    This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS.

    Note
    Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address, or both IP address formats.

disableCertificateVerification
    Default: False. redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses.

ingressVIPs
    (Optional) The virtual IP address for ingress traffic.

    This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS.

    Note
    Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 address, an IPv6 address, or both IP address formats.

Table 16.5. Optional Parameters

provisioningDHCPRange
    Default: 172.22.0.10,172.22.0.100. Defines the IP range for nodes on the provisioning network.

provisioningNetworkCIDR
    Default: 172.22.0.0/24. The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network.

clusterProvisioningIP
    The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3.

bootstrapProvisioningIP
    The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2.

externalBridge
    Default: baremetal. The name of the baremetal bridge of the hypervisor attached to the baremetal network.

provisioningBridge
    Default: provisioning. The name of the provisioning bridge on the provisioner host attached to the provisioning network.

architecture
    Defines the host architecture for your cluster. Valid values are amd64 or arm64.

defaultMachinePlatform
    The default configuration used for machine pools without a platform configuration.

bootstrapOSImage
    A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256>.

provisioningNetwork
    The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network.

    Disabled: Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled, you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the baremetal network. If Disabled, you must provide two IP addresses on the baremetal network that are used for the provisioning services.

    Managed: Set this parameter to Managed, which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on.

    Unmanaged: Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required.

httpProxy
    Set this parameter to the appropriate HTTP proxy used within your environment.

httpsProxy
    Set this parameter to the appropriate HTTPS proxy used within your environment.

noProxy
    Set this parameter to the appropriate list of exclusions for proxy usage within your environment.

Hosts

The hosts parameter is a list of separate bare metal assets used to build the cluster.

Table 16.6. Hosts

name
    The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0.

role
    The role of the bare metal node. Either master or worker.

bmc
    Connection details for the baseboard management controller. See the BMC addressing section for additional details.

bootMACAddress
    The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host.

    Note
    You must provide a valid MAC address from the host if you disabled the provisioning network.

networkConfig
    Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details.

16.3.7.3. BMC addressing

Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.

IPMI

Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
Important

The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.

Redfish network boot

To enable Redfish, use redfish://, or use redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Redfish APIs

Several Redfish API endpoints are called on your BMC when using the bare-metal installer-provisioned infrastructure.

Important

You need to ensure that your BMC supports all of the redfish APIs before installation.

List of redfish APIs
  • Power on

    curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "On"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset
  • Power off

    curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "ForceOff"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset
  • Temporary boot using pxe

    curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}}'
  • Set BIOS boot mode using Legacy or UEFI

    curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}}'
List of redfish-virtualmedia APIs
  • Set temporary boot device using cd or dvd

    curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}'
  • Mount virtual media

    curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://$Server/redfish/v1/Managers/$ManagerID/VirtualMedia/$VmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}'
Note

The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs.

Important

HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes.
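
If you are unsure of the system ID path for your BMC, one way to discover it, shown here as a sketch that assumes curl and jq are available and uses -k to tolerate a self-signed certificate, is to list the members of the standard Redfish Systems collection:

$ curl -sk -u $USER:$PASS https://$SERVER/redfish/v1/Systems/ | jq '.Members'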

16.3.7.4. BMC addressing for Dell iDRAC

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address>
          username: <user>
          password: <password>

The address configuration setting specifies the protocol.

For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI.

BMC address formats for Dell iDRAC
Protocol               Address Format
iDRAC virtual media    idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
Redfish network boot   redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
IPMI                   ipmi://<out-of-band-ip>

Important

Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell’s idrac-virtualmedia uses the Redfish standard with Dell’s OEM extensions.

See the following sections for additional details.

Redfish virtual media for Dell iDRAC

For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work.

The following example demonstrates using iDRAC virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 or later for installer-provisioned installations on bare metal deployments. The Virtual Console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is Configuration → Virtual Media → Attach Mode → AutoAttach.

Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell’s idrac-virtualmedia:// protocol uses the Redfish standard with Dell’s OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware.

Redfish network boot for iDRAC

To enable Redfish, use redfish://, or use redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 or later for installer-provisioned installations on bare metal deployments. The Virtual Console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is Configuration → Virtual Media → Attach Mode → AutoAttach.

The redfish:// URL protocol corresponds to the redfish hardware type in Ironic.

16.3.7.5. BMC addressing for HPE iLO

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address>
          username: <user>
          password: <password>

The address configuration setting specifies the protocol.

For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.

Table 16.7. BMC address formats for HPE iLO

Protocol               Address Format
Redfish virtual media  redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
Redfish network boot   redfish://<out-of-band-ip>/redfish/v1/Systems/1
IPMI                   ipmi://<out-of-band-ip>

See the following sections for additional details.

Redfish virtual media for HPE iLO

To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True
Note

Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.

Redfish network boot for HPE iLO

To enable Redfish, use redfish://, or use redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

16.3.7.6. BMC addressing for Fujitsu iRMC

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address>
          username: <user>
          password: <password>

The address configuration setting specifies the protocol.

For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI.

Table 16.8. BMC address formats for Fujitsu iRMC

Protocol   Address Format
iRMC       irmc://<out-of-band-ip>
IPMI       ipmi://<out-of-band-ip>

iRMC

Fujitsu nodes can use irmc://<out-of-band-ip>, which defaults to port 443. The following example demonstrates an iRMC configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: irmc://<out-of-band-ip>
          username: <user>
          password: <password>
Note

Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal.

16.3.7.7. Root device hints

The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.

Table 16.9. Subfields

deviceName
    A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly.

hctl
    A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.

model
    A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.

vendor
    A string containing the name of the vendor or manufacturer of the device. The hint can be a substring of the actual value.

serialNumber
    A string containing the device serial number. The hint must match the actual value exactly.

minSizeGigabytes
    An integer representing the minimum size of the device in gigabytes.

wwn
    A string containing the unique storage identifier. The hint must match the actual value exactly.

wwnWithExtension
    A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly.

wwnVendorExtension
    A string containing the unique vendor storage identifier. The hint must match the actual value exactly.

rotational
    A boolean indicating whether the device should be a rotating disk (true) or not (false).

Example usage

     - name: master-0
       role: master
       bmc:
         address: ipmi://10.10.0.3:6203
         username: admin
         password: redhat
       bootMACAddress: de:ad:be:ef:00:40
       rootDeviceHints:
         deviceName: "/dev/sda"
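
To find stable identifiers to use in rootDeviceHints, you can inspect the disks on the target host, for example from an existing operating system or a RHEL live image; the following commands are a sketch of that inspection:

$ lsblk -o NAME,SIZE,SERIAL,WWN,ROTA
$ ls -l /dev/disk/by-id/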

16.3.7.8. Optional: Setting proxy settings

To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file.

apiVersion: v1
baseDomain: <domain>
proxy:
  httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
  httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
  noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>

The following is an example of noProxy with values.

noProxy: .example.com,172.22.0.0/24,10.10.0.0/24

With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair.

Key considerations:

  • If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http://.
  • If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail.
  • Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY, HTTPS_PROXY, and NO_PROXY; see the sketch after the note below.
Note

When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately.
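
As a sketch of the environment-variable step mentioned above, reusing the placeholder values from the proxy example, you might export the following on the provisioner node:

$ export HTTP_PROXY=http://USERNAME:PASSWORD@proxy.example.com:PORT
$ export HTTPS_PROXY=https://USERNAME:PASSWORD@proxy.example.com:PORT
$ export NO_PROXY=.example.com,172.22.0.0/24,10.10.0.0/24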

16.3.7.9. Optional: Deploying with no provisioning network

To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file.

platform:
  baremetal:
    apiVIPs:
      - <api_VIP>
    ingressVIPs:
      - <ingress_VIP>
    provisioningNetwork: "Disabled"

Add the provisioningNetwork configuration setting, if needed, and set it to Disabled.
Important

The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.

16.3.7.10. Optional: Deploying with dual-stack networking

For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork, clusterNetwork, and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries. Ensure the first CIDR entry is the IPv4 setting and the second CIDR entry is the IPv6 setting.

machineNetwork:
- cidr: {{ extcidrnet }}
- cidr: {{ extcidrnet6 }}
clusterNetwork:
- cidr: 10.128.0.0/14
  hostPrefix: 23
- cidr: fd02::/48
  hostPrefix: 64
serviceNetwork:
- 172.30.0.0/16
- fd03::/112
Important

The API VIP IP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, the IPv4 entries must go before the IPv6 entries.

To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file. The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service.

platform:
  baremetal:
    apiVIPs:
      - <api_ipv4>
      - <api_ipv6>
    ingressVIPs:
      - <wildcard_ipv4>
      - <wildcard_ipv6>

16.3.7.11. Optional: Configuring host network interfaces

Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState.

The most common use case for this functionality is to specify a static IP address on the baremetal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings.

Prerequisites

  • Configure a PTR DNS record with a valid hostname for each node with a static IP address.
  • Install the NMState CLI (nmstate).

Procedure

  1. Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax.

    Note

    Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster.

    1. Create an NMState YAML file:

      interfaces:
      - name: <nic1_name> # 1
        type: ethernet
        state: up
        ipv4:
          address:
          - ip: <ip_address> # 2
            prefix-length: 24
          enabled: true
      dns-resolver:
        config:
          server:
          - <dns_ip_address> # 3
      routes:
        config:
        - destination: 0.0.0.0/0
          next-hop-address: <next_hop_ip_address> # 4
          next-hop-interface: <next_hop_nic1_name> # 5

      1 2 3 4 5
      Replace <nic1_name>, <ip_address>, <dns_ip_address>, <next_hop_ip_address>, and <next_hop_nic1_name> with appropriate values.
    2. Test the configuration file by running the following command:

      $ nmstatectl gc <nmstate_yaml_file>

      Replace <nmstate_yaml_file> with the configuration file name.

  2. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file:

        hosts:
          - name: openshift-master-0
            role: master
            bmc:
              address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/
              username: <user>
              password: <password>
              disableCertificateVerification: null
            bootMACAddress: <NIC1_mac_address>
            bootMode: UEFI
            rootDeviceHints:
              deviceName: "/dev/sda"
            networkConfig: # 1
              interfaces:
              - name: <nic1_name> # 2
                type: ethernet
                state: up
                ipv4:
                  address:
                  - ip: <ip_address> # 3
                    prefix-length: 24
                  enabled: true
              dns-resolver:
                config:
                  server:
                  - <dns_ip_address> # 4
              routes:
                config:
                - destination: 0.0.0.0/0
                  next-hop-address: <next_hop_ip_address> # 5
                  next-hop-interface: <next_hop_nic1_name> # 6

    1
    Add the NMState YAML syntax to configure the host interfaces.
    2 3 4 5 6
    Replace <nic1_name>, <ip_address>, <dns_ip_address>, <next_hop_ip_address>, and <next_hop_nic1_name> with appropriate values.
    Important

    After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment.

16.3.7.12. Configuring multiple cluster nodes

You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster.

Compute nodes are configured separately from the control plane nodes. However, configurations for both node types use the networkConfig parameter in the install-config.yaml file to enable multi-node configuration. Define the shared network configuration once as a YAML anchor (&BOND in the following example) and reuse it on the other hosts with a reference (*BOND):

hosts:
- name: ostest-master-0
 [...]
 networkConfig: &BOND
   interfaces:
   - name: bond0
     type: bond
     state: up
     ipv4:
       dhcp: true
       enabled: true
     link-aggregation:
       mode: active-backup
       port:
       - enp2s0
       - enp3s0
- name: ostest-master-1
 [...]
 networkConfig: *BOND
- name: ostest-master-2
 [...]
 networkConfig: *BOND
Note

Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure.

16.3.7.13. Optional: Configuring managed Secure Boot

You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish, redfish-virtualmedia, or idrac-virtualmedia. To enable managed Secure Boot, add the bootMode configuration setting to each node:

Example

hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: redfish://<out_of_band_ip> # 1
      username: <user>
      password: <password>
    bootMACAddress: <NIC1_mac_address>
    rootDeviceHints:
      deviceName: "/dev/sda"
    bootMode: UEFISecureBoot # 2

1
Ensure the bmc.address setting uses redfish, redfish-virtualmedia, or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details.
2
The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot.
Note

See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media.

Note

Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities.

16.3.8. Manifest configuration files

16.3.8.1. Creating the OpenShift Container Platform manifests

  1. Create the OpenShift Container Platform manifests.

    $ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
    WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
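
As an optional check, you can list the manifest directories that the command generates under the cluster configuration directory:

$ ls ~/clusterconfigs/manifests/ ~/clusterconfigs/openshift/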

16.3.8.2. Optional: Configuring NTP for disconnected clusters

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes.

OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server.

Procedure

  1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example

    variant: openshift
    version: 4.12.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all worker nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan
    Copy to Clipboard Toggle word wrap

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
    Copy to Clipboard Toggle word wrap
  3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

    Butane config example

    variant: openshift
    version: 4.12.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst 1
    
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    Copy to Clipboard Toggle word wrap

    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
    Copy to Clipboard Toggle word wrap
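
Both generated files are standard MachineConfig objects. As a rough sketch (the base64-encoded data URL is abbreviated here), 99-master-chrony-conf-override.yaml has the following shape:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-chrony-conf-override
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<base64_encoded_chrony_conf>
          mode: 420
          overwrite: true
          path: /etc/chrony.conf

To make the installer pick up both MachineConfig objects, one option (an assumption about your directory layout, similar to how the router-replicas.yaml file is copied later in this document) is to copy them into the clusterconfigs/openshift directory before creating the cluster:

$ cp 99-master-chrony-conf-override.yaml 99-worker-chrony-conf-override.yaml ~/clusterconfigs/openshift/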

16.3.8.3. Optional: Configuring network components to run on the control plane

You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes.

Important

When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes.

Procedure

  1. Change to the directory storing the install-config.yaml file:

    $ cd ~/clusterconfigs
    Copy to Clipboard Toggle word wrap
  2. Switch to the manifests subdirectory:

    $ cd manifests
    Copy to Clipboard Toggle word wrap
  3. Create a file named cluster-network-avoid-workers-99-config.yaml:

    $ touch cluster-network-avoid-workers-99-config.yaml
    Copy to Clipboard Toggle word wrap
  4. Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 50-worker-fix-ipi-rwn
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/kubernetes/manifests/keepalived.yaml
              mode: 0644
              contents:
                source: data:,
    Copy to Clipboard Toggle word wrap

    This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:

    • openshift-ingress-operator
    • keepalived
  5. Save the cluster-network-avoid-workers-99-config.yaml file.
  6. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/master: ""
    Copy to Clipboard Toggle word wrap
  7. Consider backing up the manifests directory; the installer deletes the manifests/ directory when creating the cluster. A minimal backup command is sketched after this procedure.
  8. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true. Control plane nodes are not schedulable by default. For example:

    $ sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" ~/clusterconfigs/manifests/cluster-scheduler-02-config.yml
    Copy to Clipboard Toggle word wrap
    Note

    If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail.
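
Step 7 recommends backing up the manifests directory. A minimal sketch of such a backup, assuming the ~/clusterconfigs directory used throughout this section:

$ cp -r ~/clusterconfigs/manifests ~/clusterconfigs-manifests-backup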

16.3.8.4. Optional: Deploying routers on worker nodes

During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a YAML file to set an appropriate number of router replicas.

Important

Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments.

Note

By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default.

Procedure

  1. Create a router-replicas.yaml file:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: <num-of-router-pods>
      endpointPublishingStrategy:
        type: HostNetwork
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
    Copy to Clipboard Toggle word wrap
    Note

    Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1. If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate.

  2. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory:

    $ cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml
    Copy to Clipboard Toggle word wrap
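
After the cluster is running, you can confirm the router replica count and placement. A sketch, assuming oc is authenticated against the new cluster:

$ oc -n openshift-ingress get pods -o wide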

16.3.8.5. Optional: Configuring the BIOS

The following procedure configures the BIOS during the installation process.

Procedure

  1. Create the manifests.
  2. Modify the BareMetalHost resource file corresponding to the node:

    $ vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml
    Copy to Clipboard Toggle word wrap
  3. Add the BIOS configuration to the spec section of the BareMetalHost resource:

    spec:
      firmware:
        simultaneousMultithreadingEnabled: true
        sriovEnabled: true
        virtualizationEnabled: true
    Copy to Clipboard Toggle word wrap
    Note

    Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported.

  4. Create the cluster.
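
The BIOS settings are applied while the hosts are provisioned during deployment. To watch provisioning progress once the cluster deployment starts, a sketch, assuming the kubeconfig that the installer writes under the cluster directory:

$ export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
$ oc get bmh -n openshift-machine-api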

16.3.8.6. Optional: Configuring the RAID

The following procedure configures a redundant array of independent disks (RAID) during the installation process.

Note
  1. OpenShift Container Platform supports hardware RAID for baseboard management controllers (BMCs) using the iRMC protocol only. OpenShift Container Platform 4.12 does not support software RAID.
  2. If you want to configure a hardware RAID for the node, verify that the node has a RAID controller.

Procedure

  1. Create the manifests.
  2. Modify the BareMetalHost resource corresponding to the node:

    $ vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml
    Copy to Clipboard Toggle word wrap
    Note

    The following example uses a hardware RAID configuration because OpenShift Container Platform 4.12 does not support software RAID.

    1. If you add a specific RAID configuration to the spec section, the node deletes the original RAID configuration in the preparing phase and applies the specified configuration to the RAID. For example:

      spec:
        raid:
          hardwareRAIDVolumes:
          - level: "0" 1
            name: "sda"
            numberOfPhysicalDisks: 1
            rotational: true
            sizeGibibytes: 0
      Copy to Clipboard Toggle word wrap
      1 level is a required field, and the others are optional fields.
    2. If you add an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but it does not apply a new configuration. For example:

      spec:
        raid:
          hardwareRAIDVolumes: []
      Copy to Clipboard Toggle word wrap
    3. If you do not add a raid field to the spec section, the original RAID configuration is not deleted and no new configuration is applied.
  3. Create the cluster.

16.3.9. Creating a disconnected registry

In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet.

A local, or mirrored, copy of the registry requires the following:

  • A certificate for the registry node. This can be a self-signed certificate.
  • A web server, served by a container on the system.
  • An updated pull secret that contains the certificate and local repository information.
Note

Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections.

Prerequisites

The following steps must be completed prior to hosting a mirrored registry on bare metal.

Procedure

  1. Open the firewall port on the registry node:

    $ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt  --permanent
    Copy to Clipboard Toggle word wrap
    $ sudo firewall-cmd --add-port=5000/tcp --zone=public   --permanent
    Copy to Clipboard Toggle word wrap
    $ sudo firewall-cmd --reload
    Copy to Clipboard Toggle word wrap
  2. Install the required packages for the registry node:

    $ sudo yum -y install python3 podman httpd httpd-tools jq
    Copy to Clipboard Toggle word wrap
  3. Create the directory structure where the repository information will be held:

    $ sudo mkdir -p /opt/registry/{auth,certs,data}
    Copy to Clipboard Toggle word wrap
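
The prerequisites call for a certificate for the registry node, which can be self-signed. As a minimal sketch, assuming a self-signed certificate is acceptable and using the host's fully qualified domain name, you could generate one into the certs directory created above with openssl:

$ host_fqdn=$( hostname --long )
$ sudo openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout /opt/registry/certs/domain.key \
    -x509 -days 365 -out /opt/registry/certs/domain.crt \
    -subj "/CN=${host_fqdn}" \
    -addext "subjectAltName=DNS:${host_fqdn}"

The /opt/registry/certs/domain.crt file is the certificate that is later appended to the install-config.yaml file in this section.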

Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry.

Prerequisites

  • Your mirror host has access to the internet.
  • You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured.
  • You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository.

Procedure

  1. Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install, and determine the corresponding tag on the Repository Tags page.
  2. Set the required environment variables (a consolidated example with illustrative values follows this procedure):

    1. Export the release version:

      $ OCP_RELEASE=<release_version>
      Copy to Clipboard Toggle word wrap

      For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, for example, 4.5.4.

    2. Export the local registry name and host port:

      $ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'
      Copy to Clipboard Toggle word wrap

      For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

    3. Export the local repository name:

      $ LOCAL_REPOSITORY='<local_repository_name>'
      Copy to Clipboard Toggle word wrap

      For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

    4. Export the name of the repository to mirror:

      $ PRODUCT_REPO='openshift-release-dev'
      Copy to Clipboard Toggle word wrap

      For a production release, you must specify openshift-release-dev.

    5. Export the path to your registry pull secret:

      $ LOCAL_SECRET_JSON='<path_to_pull_secret>'
      Copy to Clipboard Toggle word wrap

      For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret for the mirror registry that you created.

    6. Export the release mirror:

      $ RELEASE_NAME="ocp-release"
      Copy to Clipboard Toggle word wrap

      For a production release, you must specify ocp-release.

    7. Export the type of architecture for your server, such as x86_64:

      $ ARCHITECTURE=<server_architecture>
      Copy to Clipboard Toggle word wrap
    8. Export the path to the directory that will host the mirrored images:

      $ REMOVABLE_MEDIA_PATH=<path> 1
      Copy to Clipboard Toggle word wrap
      1 Specify the full path, including the initial forward slash (/) character.
  3. Mirror the release images to the mirror registry:

    • If your mirror host does not have internet access, take the following actions:

      1. Connect the removable media to a system that is connected to the internet.
      2. Review the images and configuration manifests to mirror:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON}  \
             --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
             --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
             --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run
        Copy to Clipboard Toggle word wrap
      3. Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.
      4. Mirror the images to a directory on the removable media:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}
        Copy to Clipboard Toggle word wrap
      5. Take the media to the restricted network environment and upload the images to the local container registry:

        $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} 1
        Copy to Clipboard Toggle word wrap
        1 For REMOVABLE_MEDIA_PATH, you must use the same path that you specified when you mirrored the images.
    • If the local container registry is connected to the mirror host, take the following actions:

      1. Directly push the release images to the local registry by using the following command:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON}  \
             --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
             --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
             --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
        Copy to Clipboard Toggle word wrap

        This command pulls the release information as a digest, and its output includes the imageContentSources data that you need when you install your cluster.

      2. Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.

        Note

        The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine.

  4. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release:

    • If your mirror host does not have internet access, run the following command:

      $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"
      Copy to Clipboard Toggle word wrap
    • If the local container registry is connected to the mirror host, run the following command:

      $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"
      Copy to Clipboard Toggle word wrap
      Important

      To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content.

      You must perform this step on a machine with an active internet connection.

      If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.

  5. For clusters using installer-provisioned infrastructure, run the following command:

    $ openshift-baremetal-install
    Copy to Clipboard Toggle word wrap
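
For illustration only, a consolidated set of example values for the variables exported in step 2 might look like the following. All values are placeholders except openshift-release-dev and ocp-release, which are fixed for production releases; the registry host name and repository match the examples used later in this section:

$ OCP_RELEASE=4.12.0
$ LOCAL_REGISTRY='registry.example.com:5000'
$ LOCAL_REPOSITORY='ocp4/openshift4'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='/home/kni/pull-secret-update.txt'
$ RELEASE_NAME='ocp-release'
$ ARCHITECTURE='x86_64'
$ REMOVABLE_MEDIA_PATH='/mnt'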

On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node’s certificate and registry information.

Procedure

  1. Add the disconnected registry node’s certificate to the install-config.yaml file:

    $ echo "additionalTrustBundle: |" >> install-config.yaml
    Copy to Clipboard Toggle word wrap

    The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces.

    $ sed -e 's/^/  /' /opt/registry/certs/domain.crt >> install-config.yaml
    Copy to Clipboard Toggle word wrap
  2. Add the mirror information for the registry to the install-config.yaml file:

    $ echo "imageContentSources:" >> install-config.yaml
    Copy to Clipboard Toggle word wrap
    $ echo "- mirrors:" >> install-config.yaml
    Copy to Clipboard Toggle word wrap
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    Copy to Clipboard Toggle word wrap

    Replace registry.example.com with the registry’s fully qualified domain name.

    $ echo "  source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml
    Copy to Clipboard Toggle word wrap
    $ echo "- mirrors:" >> install-config.yaml
    Copy to Clipboard Toggle word wrap
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    Copy to Clipboard Toggle word wrap

    Replace registry.example.com with the registry’s fully qualified domain name.

    $ echo "  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml
    Copy to Clipboard Toggle word wrap
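
After these additions, the relevant part of the install-config.yaml file might look similar to the following (certificate body abbreviated):

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <certificate_contents>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev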

16.3.10. Validation checklist for installation

  • ❏ OpenShift Container Platform installer has been retrieved.
  • ❏ OpenShift Container Platform installer has been extracted.
  • ❏ Required parameters for the install-config.yaml have been configured.
  • ❏ The hosts parameter for the install-config.yaml has been configured.
  • ❏ The bmc parameter for the install-config.yaml has been configured.
  • ❏ Conventions for the values configured in the bmc address field have been applied.
  • ❏ Created the OpenShift Container Platform manifests.
  • ❏ (Optional) Deployed routers on worker nodes.
  • ❏ (Optional) Created a disconnected registry.
  • ❏ (Optional) Validated disconnected registry settings if in use.

16.3.11. Deploying the cluster via the OpenShift Container Platform installer

Run the OpenShift Container Platform installer:

$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
Copy to Clipboard Toggle word wrap

16.3.12. Following the installation

During the deployment process, you can check the installation’s overall status by running the tail command against the .openshift_install.log log file in the installation directory:

$ tail -f /path/to/install-dir/.openshift_install.log
Copy to Clipboard Toggle word wrap
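
If you prefer a command that blocks until the deployment finishes and then reports how to access the cluster, the installer also provides a wait-for subcommand. A sketch, assuming the same cluster directory as above:

$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug wait-for install-complete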

16.3.13. Verifying static IP address configuration

If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node’s network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address.

Note

The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing.

Verify the network configuration is working properly.

Procedure

  1. Check the network interface configuration on the node, as shown in the sketch after this procedure.
  2. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly.
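
A minimal sketch of the check in step 1, assuming SSH access to the node as the core user and <node_ip> as a placeholder for the node address:

$ ssh core@<node_ip> sudo nmcli connection show
$ ssh core@<node_ip> sudo nmcli -g ipv4.method connection show '<connection_name>'

A value of manual for ipv4.method indicates that the dispatcher script converted the infinite DHCP lease into a static IP address configuration.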

16.3.14. Preparing to reinstall a cluster on bare metal

Before you reinstall a cluster on bare metal, you must perform cleanup operations.

Procedure

  1. Remove or reformat the disks for the bootstrap, control plane, and worker nodes. If you are working in a hypervisor environment, you must add any disks you removed.
  2. Delete the artifacts that the previous installation generated:

    $ cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \
    .openshift_install.log .openshift_install_state.json
    Copy to Clipboard Toggle word wrap
  3. Generate new manifests and Ignition config files. See "Creating the Kubernetes manifest and Ignition config files" for more information.
  4. Upload the new bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. This will overwrite the previous Ignition files.