Chapter 6. Installer-provisioned postinstallation configuration
After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures.
6.1. Configuring NTP for disconnected clusters
OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure compute nodes as NTP clients of the control plane nodes after a successful deployment.
OpenShift Container Platform nodes must agree on a date and time to run properly. When compute nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server.
Procedure
Install Butane on your installation host by using the following command:
$ sudo dnf -y install butane

Create a Butane config, 99-master-chrony-conf-override.bu, that includes the contents of the chrony.conf file for the control plane nodes.

Note: See "Creating machine configs with Butane" for information about Butane.
Butane config example
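A minimal sketch of such a Butane config, assuming three control plane nodes named openshift-master-0 through openshift-master-2 and a Butane variant version that matches your OpenShift Container Platform release; the chrony.conf contents are illustrative and should be adapted to your environment:

variant: openshift
version: 4.17.0
metadata:
  name: 99-master-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # Use the control plane nodes as NTP servers for the cluster.
          # (1) Replace <cluster-name> and <domain> in the server lines below.
          server openshift-master-0.<cluster-name>.<domain> iburst
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst

          stratumweight 0
          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          bindcmdaddress 127.0.0.1
          bindcmdaddress ::1
          keyfile /etc/chrony.keys
          noclientlog
          logchange 0.5
          logdir /var/log/chrony

          # Serve time even when not synchronized to an upstream source.
          local stratum 3 orphan
          # Allow NTP client access from the local network.
          allow all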
- 1: You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml

Create a Butane config, 99-worker-chrony-conf-override.bu, that includes the contents of the chrony.conf file for the compute nodes and references the NTP servers on the control plane nodes.

Butane config example
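A minimal sketch of such a Butane config for the compute nodes, under the same assumptions as the control plane sketch above; the compute nodes reference only the control plane NTP servers:

variant: openshift
version: 4.17.0
metadata:
  name: 99-worker-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          # The compute nodes use the control plane nodes as NTP servers.
          # (1) Replace <cluster-name> and <domain> in the server lines below.
          server openshift-master-0.<cluster-name>.<domain> iburst
          server openshift-master-1.<cluster-name>.<domain> iburst
          server openshift-master-2.<cluster-name>.<domain> iburst

          stratumweight 0
          driftfile /var/lib/chrony/drift
          rtcsync
          makestep 10 3
          logdir /var/log/chrony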
- 1: You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml

Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes:

$ oc apply -f 99-master-chrony-conf-override.yaml

Example output

machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created

Apply the 99-worker-chrony-conf-override.yaml policy to the compute nodes:

$ oc apply -f 99-worker-chrony-conf-override.yaml

Example output

machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created

Check the status of the applied NTP settings:

$ oc describe machineconfigpool
6.2. Enabling a provisioning network after installation
The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.
You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).
Prerequisites
- A dedicated physical network must exist, connected to all worker and control plane nodes.
- You must isolate the native, untagged physical network.
- The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
- You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting.
Procedure
- When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1.
- Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.
- Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

$ oc get provisioning -o yaml > enable-provisioning-nw.yaml

- Modify the provisioning CR file:

$ vim ~/enable-provisioning-nw.yaml

- Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNamespaces configuration settings after the provisioningNetwork setting, and provide appropriate values for each setting.
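A minimal sketch of the modified provisioning CR, using placeholder addresses and an assumed interface name (eno1); the field names follow the Provisioning CRD, and the numbered comments correspond to the callouts that follow:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed                         # (1)
  provisioningIP: 192.168.0.10                         # (2)
  provisioningNetworkCIDR: 192.168.0.0/24              # (3)
  provisioningDHCPRange: 192.168.0.64,192.168.0.253    # (4)
  provisioningInterface: eno1                          # (5)
  watchAllNamespaces: true                             # (6)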
- 1: The provisioningNetwork setting is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
- 2: The provisioningIP setting is the static IP address that the DHCP server and Ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
- 3: The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
- 4: The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64,192.168.0.253.
- 5: The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled, or omit it to use the bootMACAddress configuration setting instead.
- 6: Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
- Save the changes to the provisioning CR file.
Apply the provisioning CR file to the cluster:
$ oc apply -f enable-provisioning-nw.yaml
6.3. Creating a manifest object that includes a customized br-ex bridge
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that includes an NMState configuration file. The Kubernetes NMState Operator uses the NMState configuration file to create a customized br-ex bridge network configuration on each node in your cluster.
After creating the NodeNetworkConfigurationPolicy CR, copy content from the NMState configuration file that was created during cluster installation into the NNCP CR. An incomplete NNCP CR file means that the network policy described in the file cannot be applied to nodes in the cluster.
This feature supports the following tasks:
- Modifying the maximum transmission unit (MTU) for your cluster.
- Modifying attributes of a different bond interface, such as MIImon (Media Independent Interface Monitor), bonding mode, or Quality of Service (QoS).
- Updating DNS values.
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
- You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge.
- You want to deploy the bridge on a different interface than the interface available on a host or server IP address.
- You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces.
The following interface names are reserved, and you cannot use them with NMState configurations:
- br-ext
- br-int
- br-local
- br-nexthop
- br0
- ext-vxlan
- ext
- genev_sys_*
- int
- k8s-*
- ovn-k8s-*
- patch-br-*
- tun0
- vxlan_sys_*
Prerequisites
- You set a customized br-ex by using the alternative method to configure-ovs.
- You installed the Kubernetes NMState Operator.
Procedure
Create a NodeNetworkConfigurationPolicy (NNCP) CR and define a customized br-ex bridge network configuration. Depending on your needs, ensure that you set a masquerade IP for the ipv4.address.ip parameter, the ipv6.address.ip parameter, or both. Always include a masquerade IP address in the NNCP CR, and this address must match an in-use IP address block.

Important: As a postinstallation task, you can configure most parameters for a customized br-ex bridge that you defined in an existing NNCP CR, except for the primary IP address of the customized br-ex bridge. If you want to convert your single-stack cluster network to a dual-stack cluster network, you can add or change a secondary IPv6 address in the NNCP CR, but the existing primary IP address cannot be changed.
Example of an NNCP CR that sets IPv6 and IPv4 masquerade IP addresses
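A minimal sketch of such an NNCP CR, assuming a hypothetical NIC named eth1, a worker node selector, and the link-local masquerade addresses commonly used by OVN-Kubernetes (169.254.169.2/29 and fd69::2/125); adapt the names, selector, and addresses to your environment. The numbered comments correspond to the callouts that follow:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-ex-policy                  # (1) hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: eth1                      # (2) hypothetical NIC name
      type: ethernet                  # (3)
      state: up                       # (4)
      ipv4:
        enabled: false                # (5)
      ipv6:
        enabled: false
    - name: br-ex
      type: ovs-bridge
      state: up
      bridge:
        port:
        - name: eth1                  # (6)
        - name: br-ex
    - name: br-ex
      type: ovs-interface
      state: up
      copy-mac-from: eth1
      ipv4:
        enabled: true
        dhcp: true
        auto-route-metric: 48         # (7)
        address:
        - ip: 169.254.169.2           # assumed IPv4 masquerade IP
          prefix-length: 29
      ipv6:
        enabled: true
        dhcp: false
        address:
        - ip: fd69::2                 # assumed IPv6 masquerade IP
          prefix-length: 125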
- 1: Name of the policy.
- 2: Name of the interface.
- 3: The type of ethernet.
- 4: The requested state for the interface after creation.
- 5: Disables IPv4 and IPv6 in this example.
- 6: The node NIC to which the bridge is attached.
- 7: Set the parameter to 48 to ensure the br-ex default route always has the highest precedence (lowest metric). This configuration prevents routing conflicts with any other interfaces that are automatically configured by the NetworkManager service.
Next steps
- Scaling compute nodes to apply the manifest object that includes a customized br-ex bridge to each compute node that exists in your cluster. For more information, see "Expanding the cluster" in the Additional resources section.
6.4. Making disruptive changes to a customized br-ex bridge
For certain situations, you might need to make disruptive changes to a br-ex bridge for planned maintenance or network configuration updates. A br-ex bridge is a gateway for all external network traffic from your workloads, so any change to the bridge might temporarily disconnect pods and virtual machines (VMs) from an external network.
The following procedure uses an example to show how to make disruptive changes to a br-ex bridge while minimizing the impact on running cluster workloads.
For all the nodes in your cluster to receive the br-ex bridge changes, you must reboot your cluster. Editing the existing MachineConfig object does not force a reboot operation, so you must create an additional MachineConfig object to force a reboot operation for the cluster.
Red Hat does not support changing IP addresses for nodes as a postinstallation task.
Prerequisites
- You created a manifest object that includes a br-ex bridge.
- You deployed your cluster that has the configured br-ex bridge.
Procedure
Make changes to the NMState configuration file that you created during cluster installation for customizing your br-ex bridge network interface.

Important: Before you save the MachineConfig object, check the changed parameter values. If you enter wrong values and save the file, you cannot recover the file to its original state, and this impacts networking functionality for your cluster.

Use the base64 command to re-encode the contents of the NMState configuration by entering the following command:

$ base64 -w0 <nmstate_configuration>.yml

- 1: Replace <nmstate_configuration> with the name of your NMState resource YAML file.
- Update the MachineConfig manifest file that you created during cluster installation and re-define the customized br-ex bridge network interface.
- Apply the updates from the MachineConfig object to your cluster by entering the following command:

$ oc apply -f <machine_config>.yml

- Create a bare MachineConfig object, but do not make any configuration changes to the file.
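A minimal sketch of such a bare MachineConfig object, using a hypothetical name and the worker role; it defines no files or units, so applying it only triggers a new rendered configuration and the associated reboot:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-trigger-reboot            # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
  # No files, units, or other settings: applying this object only rolls
  # out a new rendered configuration, which reboots each affected node.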
- Start a reboot operation by applying the bare MachineConfig object configuration to your cluster by entering the following command:

$ oc apply -f <bare_machine_config>.yml

- Check that each node in your cluster has the Ready status, which indicates that the nodes finished rebooting, by entering the following command:

$ oc get nodes

- Delete the bare MachineConfig object by entering the following command:

$ oc delete machineconfig <machine_config_name>
Verification
Use the nmstatectl tool to check the configuration for the br-ex bridge interface by running the following command. Run the tool on a node that runs the br-ex bridge interface, not on the host where you deployed the MachineConfig objects.

$ sudo nmstatectl show br-ex
6.5. Services for a user-managed load balancer
You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.
Configuring a user-managed load balancer depends on your vendor’s load balancer.
The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor’s load balancer.
Red Hat supports the following services for a user-managed load balancer:
- Ingress Controller
- OpenShift API
- OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:
Figure 6.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment
Figure 6.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment
Figure 6.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment
The following configuration options are supported for user-managed load balancers:
- Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.
- Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.

Tip: You can list all IP addresses that exist in a network by checking the machine config pool's resources.
Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information:
- For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller’s load balancer, and API load balancer. Check the vendor’s documentation for this capability.
- For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions:
- Assign a static IP address to each control plane node.
- Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
- Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur.
6.5.1. Configuring a user-managed load balancer
You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.
Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.
Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
OpenShift API prerequisites
- You defined a front-end IP address.
- TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
- Port 6443 provides access to the OpenShift API service.
- Port 22623 can provide ignition startup configurations to nodes.
- The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
- The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.
Ingress Controller prerequisites
- You defined a front-end IP address.
- TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
- The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster.
- The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
Prerequisite for health check URL specifications
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples show health check specifications for the previously listed backend services:
Example of a Kubernetes API health check specification
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
Procedure
Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
Example HAProxy configuration with one listed subnet
Example HAProxy configuration with multiple listed subnets
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
curlCLI command to verify that the user-managed load balancer and its resources are operational:Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response:
curl https://<loadbalancer_ip_address>:6443/version --insecure
$ curl https://<loadbalancer_ip_address>:6443/version --insecureCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, you receive a JSON object in response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output:
curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecureCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK Content-Length: 0
HTTP/1.1 200 OK Content-Length: 0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output:
curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache
HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cacheCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output:
curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.
Examples of modified DNS records
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End

<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End

Important: DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file:
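A minimal sketch of the relevant install-config.yaml stanza for the bare-metal platform, using placeholder IP addresses; the numbered comments correspond to the callouts that follow:

platform:
  baremetal:
    loadBalancer:
      type: UserManaged          # (1)
    apiVIPs:
    - <api_ip_address>           # (2)
    ingressVIPs:
    - <ingress_ip_address>       # (3)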
- 1: Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault, which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services.
- 2: Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer.
- 3: Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster.
Verification
Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational:

Verify that you can access the cluster API, by running the following command and observing the output:

$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure

If the configuration is correct, you receive a JSON object in response.

Verify that you can access the cluster machine configuration, by running the following command and observing the output:

$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 200 OK
Content-Length: 0

Verify that you can access each cluster application on port 80, by running the following command and observing the output:

$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

If the configuration is correct, the output from the command shows a response.

Verify that you can access each cluster application on port 443, by running the following command and observing the output:

$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

If the configuration is correct, the output from the command shows a response.
6.6. Configuration using the Bare Metal Operator
When deploying OpenShift Container Platform on bare-metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host's hardware and firmware details. It can also include formatting disks or changing modifiable firmware settings.
You can use the Bare Metal Operator (BMO) to provision, manage, and inspect bare-metal hosts in your cluster. The BMO can complete the following operations:
- Provision bare-metal hosts to the cluster with a specific image.
- Turn on or off a host.
- Inspect hardware details of the host and report them to the bare-metal host.
- Upgrade or downgrade a host’s firmware to a specific version.
- Inspect firmware and configure BIOS settings.
- Clean disk contents for the host before or after provisioning the host.
The BMO uses the following resources to complete these tasks:
- BareMetalHost
- HostFirmwareSettings
- FirmwareSchema
- HostFirmwareComponents
The BMO maintains an inventory of the physical hosts in the cluster by mapping each bare-metal host to an instance of the BareMetalHost custom resource definition. Each BareMetalHost resource features hardware, software, and firmware details. The BMO continually inspects the bare-metal hosts in the cluster to ensure each BareMetalHost resource accurately details the components of the corresponding host.
The BMO also uses the HostFirmwareSettings resource, the FirmwareSchema resource, and the HostFirmwareComponents resource to detail firmware specifications and upgrade or downgrade firmware for the bare-metal host.
The BMO interfaces with bare-metal hosts in the cluster by using the Ironic API service. The Ironic service uses the Baseboard Management Controller (BMC) on the host to interface with the machine.
6.6.1. Bare Metal Operator architecture
The Bare Metal Operator (BMO) uses the following resources to provision, manage, and inspect bare-metal hosts in your cluster. The following diagram illustrates the architecture of these resources:
BareMetalHost
The BareMetalHost resource defines a physical host and its properties. When you provision a bare-metal host to the cluster, you must define a BareMetalHost resource for that host. For ongoing management of the host, you can inspect the information in the BareMetalHost or update this information.
The BareMetalHost resource features provisioning information such as the following:
- Deployment specifications such as the operating system boot image or the custom RAM disk
- Provisioning state
- Baseboard Management Controller (BMC) address
- Desired power state
The BareMetalHost resource features hardware information such as the following:
- Number of CPUs
- MAC address of a NIC
- Size of the host’s storage device
- Current power state
HostFirmwareSettings
You can use the HostFirmwareSettings resource to retrieve and manage the firmware settings for a host. When a host moves to the Available state, the Ironic service reads the host’s firmware settings and creates the HostFirmwareSettings resource. There is a one-to-one mapping between the BareMetalHost resource and the HostFirmwareSettings resource.
You can use the HostFirmwareSettings resource to inspect the firmware specifications for a host or to update a host’s firmware specifications.
You must adhere to the schema specific to the vendor firmware when you edit the spec field of the HostFirmwareSettings resource. This schema is defined in the read-only FirmwareSchema resource.
FirmwareSchema
Firmware settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each firmware setting on each host model. The data comes directly from the BMC by using the Ironic service. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource.
A FirmwareSchema resource can apply to many BareMetalHost resources if the schema is the same.
HostFirmwareComponents
Metal3 provides the HostFirmwareComponents resource, which describes BIOS and baseboard management controller (BMC) firmware versions. You can upgrade or downgrade the host’s firmware to a specific version by editing the spec field of the HostFirmwareComponents resource. This is useful when deploying with validated patterns that have been tested against specific firmware versions.
6.6.2. About the BareMetalHost resource
Metal3 introduces the concept of the BareMetalHost resource, which defines a physical host and its properties. The BareMetalHost resource contains two sections:
- The BareMetalHost spec
- The BareMetalHost status
6.6.2.1. The BareMetalHost spec
The spec section of the BareMetalHost resource defines the desired state of the host.
| Parameters | Description |
|---|---|
| automatedCleaningMode | An interface to enable or disable automated cleaning during provisioning and de-provisioning. When set to disabled, automated cleaning is skipped. When set to metadata, automated cleaning is enabled. The default setting is metadata. |
| bmc: address: credentialsName: disableCertificateVerification: | The bmc configuration setting contains the connection information for the baseboard management controller (BMC) of the host. The address field is the URL for communicating with the host's BMC. The credentialsName field is a reference to a secret that contains the user name and password for the BMC. The disableCertificateVerification field is a boolean that skips certificate validation when set to true. |
| bootMACAddress | The MAC address of the NIC used for provisioning the host. |
| bootMode | The boot mode of the host. It defaults to UEFI, but it can also be set to legacy for BIOS boot, or UEFISecureBoot. |
| consumerRef | A reference to another resource that is using the host. It could be empty if another resource is not currently using the host. For example, a Machine resource might use the host when the machine-api is using the host. |
| description | A human-provided string to help identify the host. |
| externallyProvisioned | A boolean indicating whether the host provisioning and deprovisioning are managed externally. When set, the power status can still be managed by using the online field, and hardware inventory is monitored, but no provisioning or deprovisioning operations are performed on the host. |
| firmware | Contains information about the BIOS configuration of bare metal hosts. Currently, firmware is only supported by certain BMC drivers, such as iRMC and iDRAC. |
| image: url: checksum: checksumType: format: | The image configuration setting holds the details for the image to be deployed on the host: the image url, its checksum and checksumType, and the image format. |
| networkData | A reference to the secret containing the network configuration data and its namespace, so that it can be attached to the host before the host boots to set up the network. |
| online | A boolean indicating whether the host should be powered on (true) or off (false). Changing this value triggers a change in the power state of the physical host. |
| raid: hardwareRAIDVolumes: softwareRAIDVolumes: | (Optional) Contains the information about the RAID configuration for bare metal hosts. If not specified, it retains the current configuration. Note: OpenShift Container Platform 4.17 supports hardware RAID on the installation drive for certain BMCs, but does not support software RAID on the installation drive. The hardwareRAIDVolumes field contains the list of logical drives for hardware RAID, and the softwareRAIDVolumes field contains the list of logical disks for software RAID. You can set spec: raid: hardwareRAIDVolume: [] to clear an existing hardware RAID configuration. If you receive an error message indicating that the driver does not support RAID, set the raid, hardwareRAIDVolumes, or softwareRAIDVolumes setting to nil. |
| rootDeviceHints | The rootDeviceHints setting enables provisioning of the RHCOS image to a particular device. It examines the devices discovered during inspection and chooses the first device that matches the hint values. |
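As an orientation to these fields, the following is a minimal sketch of a BareMetalHost resource with hypothetical values; real resources carry many more fields in both the spec and status sections:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-3                              # hypothetical host name
  namespace: openshift-machine-api
spec:
  online: true
  bootMode: UEFI
  bootMACAddress: 00:11:22:33:44:55           # hypothetical MAC address
  automatedCleaningMode: metadata
  bmc:
    address: redfish-virtualmedia://192.168.111.1:8000/redfish/v1/Systems/1   # hypothetical BMC address
    credentialsName: worker-3-bmc-secret      # hypothetical secret name
    disableCertificateVerification: false
  rootDeviceHints:
    deviceName: /dev/sda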
6.6.2.2. The BareMetalHost status
The BareMetalHost status represents the host’s current state, and includes tested credentials, current hardware details, and other information.
| Parameters | Description |
|---|---|
| goodCredentials | A reference to the secret and its namespace holding the last set of baseboard management controller (BMC) credentials the system was able to validate as working. |
| errorMessage | Details of the last error reported by the provisioning backend, if any. |
| errorType | Indicates the class of problem that has caused the host to enter an error state. The error types relate to registration, inspection, preparation, provisioning, and power management operations. |
| hardware: cpu | The hardware: cpu field lists details of the CPUs in the system, such as the architecture, model, clock speed, flags, and count. |
| hardware: firmware | Contains BIOS firmware information. For example, the hardware vendor and version. |
| hardware: nics | The hardware: nics field contains the list of network interfaces for the host, including details such as the name, model, MAC address, and IP address of each interface. |
| hardware: ramMebibytes | The host's amount of memory in Mebibytes (MiB). |
| hardware: storage | The hardware: storage field contains the list of storage devices attached to the host, including details such as the device name, size, and serial number. |
| hardware: systemVendor: manufacturer: productName: serialNumber: | Contains information about the host's manufacturer, productName, and serialNumber. |
| lastUpdated | The timestamp of the last time the status of the host was updated. |
| operationalStatus | The status of the server. The status is one of the following: OK, discovered, error, delayed, or detached. |
| poweredOn | Boolean indicating whether the host is powered on. |
| provisioning: state: id: image: | The provisioning field contains values related to deploying an image to the host, including the state of any ongoing provisioning operation, the id of the host in the underlying provisioning tool, and the image most recently provisioned to the host. |
| triedCredentials | A reference to the secret and its namespace holding the last set of BMC credentials that were sent to the provisioning backend. |
6.6.3. Getting the BareMetalHost resource
The BareMetalHost resource contains the properties of a physical host. You must get the BareMetalHost resource for a physical host to review its properties.
Procedure
Get the list of BareMetalHost resources:

$ oc get bmh -n openshift-machine-api -o yaml

Note: You can use baremetalhost as the long form of bmh with the oc get command.

Get the list of hosts:

$ oc get bmh -n openshift-machine-api

Get the BareMetalHost resource for a specific host:

$ oc get bmh <host_name> -n openshift-machine-api -o yaml

Where <host_name> is the name of the host.
6.6.4. Editing a BareMetalHost resource
After you deploy an OpenShift Container Platform cluster on bare metal, you might need to edit a node’s BareMetalHost resource. Consider the following examples:
- You deploy a cluster with the Assisted Installer and need to add or edit the baseboard management controller (BMC) host name or IP address.
- You want to move a node from one cluster to another without deprovisioning it.
Prerequisites
- Ensure the node is in the Provisioned, ExternallyProvisioned, or Available state.
Procedure
Get the list of nodes:
$ oc get bmh -n openshift-machine-api

Before editing the node's BareMetalHost resource, detach the node from Ironic by running the following command:

$ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached=true'

Where <node_name> is the name of the node.

Edit the BareMetalHost resource by running the following command:

$ oc edit bmh <node_name> -n openshift-machine-api

Reattach the node to Ironic by running the following command:

$ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached'-
6.6.5. Troubleshooting latency when deleting a BareMetalHost resource
When the Bare Metal Operator (BMO) deletes a BareMetalHost resource, Ironic deprovisions the bare-metal host with a process called cleaning. When cleaning fails, Ironic retries the cleaning process three times, which is the source of the latency. The cleaning process might not succeed, causing the provisioning status of the bare-metal host to remain in the deleting state indefinitely. When this occurs, use the following procedure to disable the cleaning process.
Do not remove finalizers from the BareMetalHost resource.
Procedure
- If the cleaning process fails and restarts, wait for it to finish. This might take about 5 minutes.
- If the provisioning status remains in the deleting state, disable the cleaning process by modifying the BareMetalHost resource and setting the automatedCleaningMode field to disabled.
See "Editing a BareMetalHost resource" for additional details.
6.6.6. Attaching a non-bootable ISO to a bare-metal node
You can attach a generic, non-bootable ISO virtual media image to a provisioned node by using the DataImage resource. After you apply the resource, the ISO image becomes accessible to the operating system after it has booted. This is useful for configuring a node after provisioning the operating system and before the node boots for the first time.
Prerequisites
- The node must use Redfish or drivers derived from it to support this feature.
- The node must be in the Provisioned or ExternallyProvisioned state.
- The name must be the same as the name of the node defined in its BareMetalHost resource.
- You have a valid url to the ISO image.
Procedure
Create a DataImage resource:
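A minimal sketch of such a DataImage resource, using a hypothetical ISO URL; the name and namespace must match the BareMetalHost resource:

apiVersion: metal3.io/v1alpha1
kind: DataImage
metadata:
  name: <node_name>                  # must match the BareMetalHost name
  namespace: openshift-machine-api   # must match the BareMetalHost namespace
spec:
  url: "http://dataimage.example.com/non-bootable.iso"   # hypothetical ISO URL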
Save the DataImage resource to a file by running the following command:

$ vim <node_name>-dataimage.yaml

Apply the DataImage resource by running the following command:

$ oc apply -f <node_name>-dataimage.yaml -n <node_namespace>

- 1: Replace <node_namespace> so that the namespace matches the namespace for the BareMetalHost resource. For example, openshift-machine-api.
Reboot the node.
Note: To reboot the node, attach the reboot.metal3.io annotation, or reset the online status in the BareMetalHost resource. A forced reboot of the bare-metal node changes the state of the node to NotReady for a while, for example, 5 minutes or more.

View the DataImage resource by running the following command:

$ oc get dataimage <node_name> -n openshift-machine-api -o yaml
6.6.7. About the HostFirmwareSettings resource
You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. When a host moves to the Available state, Ironic reads the host's BIOS settings and creates the HostFirmwareSettings resource. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host.
The HostFirmwareSettings resource contains two sections:
- The HostFirmwareSettings spec.
- The HostFirmwareSettings status.
6.6.7.1. The HostFirmwareSettings spec
The spec section of the HostFirmwareSettings resource defines the desired state of the host’s BIOS, and it is empty by default. Ironic uses the settings in the spec.settings section to update the baseboard management controller (BMC) when the host is in the Preparing state. Use the FirmwareSchema resource to ensure that you do not send invalid name/value pairs to hosts. See "About the FirmwareSchema resource" for additional details.
Example
spec:
  settings:
    ProcTurboMode: Disabled

- 1: In the foregoing example, the spec.settings section contains a name/value pair that will set the ProcTurboMode BIOS setting to Disabled.
Integer parameters listed in the status section appear as strings. For example, "1". When setting integers in the spec.settings section, the values should be set as integers without quotes. For example, 1.
6.6.7.2. The HostFirmwareSettings status
The status represents the current state of the host’s BIOS.
| Parameters | Description |
|---|---|
| status: conditions: | The conditions field contains a list of state changes for the resource, for example, whether the values in the spec settings are valid and whether a change has been detected that is not yet applied to the host. |
| status: schema: name: namespace: lastUpdated: | The FirmwareSchema for the firmware settings. The name and namespace fields identify the FirmwareSchema resource that describes the valid settings and limits for this host, and lastUpdated is the time the schema was last updated. |
| status: settings: | The settings field contains a list of name/value pairs of a host's current BIOS settings. |
6.6.8. Getting the HostFirmwareSettings resource
The HostFirmwareSettings resource contains the vendor-specific BIOS properties of a physical host. You must get the HostFirmwareSettings resource for a physical host to review its BIOS properties.
Procedure
Get the detailed list of HostFirmwareSettings resources:

$ oc get hfs -n openshift-machine-api -o yaml

Note: You can use hostfirmwaresettings as the long form of hfs with the oc get command.

Get the list of HostFirmwareSettings resources:

$ oc get hfs -n openshift-machine-api

Get the HostFirmwareSettings resource for a particular host:

$ oc get hfs <host_name> -n openshift-machine-api -o yaml

Where <host_name> is the name of the host.
6.6.9. Editing the HostFirmwareSettings resource
You can edit the HostFirmwareSettings of provisioned hosts.
You can only edit hosts when they are in the provisioned state, excluding read-only values. You cannot edit hosts in the externally provisioned state.
Procedure
Get the list of HostFirmwareSettings resources:

$ oc get hfs -n openshift-machine-api

Edit a host's HostFirmwareSettings resource:

$ oc edit hfs <host_name> -n openshift-machine-api

Where <host_name> is the name of a provisioned host. The HostFirmwareSettings resource will open in the default editor for your terminal.

Add name/value pairs to the spec.settings section:

Example

spec:
  settings:
    name: value

- 1: Use the FirmwareSchema resource to identify the available settings for the host. You cannot set values that are read-only.
- Save the changes and exit the editor.
Get the host’s machine name:
oc get bmh <host_name> -n openshift-machine name
$ oc get bmh <host_name> -n openshift-machine nameCopy to Clipboard Copied! Toggle word wrap Toggle overflow Where
<host_name>is the name of the host. The machine name appears under theCONSUMERfield.Annotate the machine to delete it from the machineset:
oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api
$ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-apiCopy to Clipboard Copied! Toggle word wrap Toggle overflow Where
<machine_name>is the name of the machine to delete.Get a list of nodes and count the number of worker nodes:
oc get nodes
$ oc get nodesCopy to Clipboard Copied! Toggle word wrap Toggle overflow Get the machineset:
oc get machinesets -n openshift-machine-api
$ oc get machinesets -n openshift-machine-apiCopy to Clipboard Copied! Toggle word wrap Toggle overflow Scale the machineset:
oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Where
<machineset_name>is the name of the machineset and<n-1>is the decremented number of worker nodes.When the host enters the
Availablestate, scale up the machineset to make theHostFirmwareSettingsresource changes take effect:oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Where
<machineset_name>is the name of the machineset and<n>is the number of worker nodes.
6.6.10. Verifying the HostFirmwareSettings resource is valid
When the user edits the spec.settings section to make a change to the HostFirmwareSettings (HFS) resource, the Bare Metal Operator (BMO) validates the change against the FirmwareSchema resource, which is a read-only resource. If the setting is invalid, the BMO sets the Type value of the status.Condition setting to False and also generates an event and stores it in the HFS resource. Use the following procedure to verify that the resource is valid.
Procedure
Get a list of HostFirmwareSettings resources:

$ oc get hfs -n openshift-machine-api

Verify that the HostFirmwareSettings resource for a particular host is valid:

$ oc describe hfs <host_name> -n openshift-machine-api

Where <host_name> is the name of the host.

Example output

Events:
  Type    Reason            Age    From                                    Message
  ----    ------            ----   ----                                    -------
  Normal  ValidationFailed  2m49s  metal3-hostfirmwaresettings-controller  Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo

Important: If the response returns ValidationFailed, there is an error in the resource configuration and you must update the values to conform to the FirmwareSchema resource.
6.6.11. About the FirmwareSchema resource
BIOS settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each BIOS setting on each host model. The data comes directly from the BMC through Ironic. The FirmwareSchema enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource. The FirmwareSchema resource has a unique identifier derived from its settings and limits. Identical host models use the same FirmwareSchema identifier. It is likely that multiple instances of HostFirmwareSettings use the same FirmwareSchema.
| Parameters | Description |
|---|---|
| <BIOS_setting_name> | The name of a BIOS setting and its limits, such as the type of the setting, its allowable values, its lower and upper bounds or minimum and maximum lengths, and whether the setting is read-only or must be unique. |
6.6.12. Getting the FirmwareSchema resource
Each host model from each vendor has different BIOS settings. When editing the HostFirmwareSettings resource’s spec section, the name/value pairs you set must conform to that host’s firmware schema. To ensure you are setting valid name/value pairs, get the FirmwareSchema for the host and review it.
Procedure
To get a list of FirmwareSchema resource instances, execute the following:

$ oc get firmwareschema -n openshift-machine-api

To get a particular FirmwareSchema instance, execute:

$ oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml

Where <instance_name> is the name of the schema instance stated in the HostFirmwareSettings resource (see Table 3).
6.6.13. About the HostFirmwareComponents resource
Metal3 provides the HostFirmwareComponents resource, which describes BIOS and baseboard management controller (BMC) firmware versions. The HostFirmwareComponents resource contains two sections:
- The HostFirmwareComponents spec
- The HostFirmwareComponents status
6.6.13.1. HostFirmwareComponents spec
The spec section of the HostFirmwareComponents resource defines the desired state of the host’s BIOS and BMC versions.
| Parameters | Description |
|---|---|
| updates: component: url: | The updates configuration setting contains the list of firmware components to update. The component field specifies the type of component, which is either bios or bmc, and the url field specifies the URL of the firmware image for that component. |
6.6.13.2. HostFirmwareComponents status
The status section of the HostFirmwareComponents resource returns the current status of the host’s BIOS and BMC versions.
| Parameters | Description |
|---|---|
| components | The components field contains the current firmware information for the host, including the component name (bios or bmc), the currentVersion, and the initialVersion of the firmware. |
| updates: component: url: | The updates field reflects the updates that were requested in the spec section, showing the component and the url of the firmware used for the update. |
6.6.14. Getting the HostFirmwareComponents resource
The HostFirmwareComponents resource contains the specific firmware version of the BIOS and baseboard management controller (BMC) of a physical host. You must get the HostFirmwareComponents resource for a physical host to review the firmware version and status.
Procedure
Get the detailed list of HostFirmwareComponents resources:

$ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml

Get the list of HostFirmwareComponents resources:

$ oc get hostfirmwarecomponents -n openshift-machine-api

Get the HostFirmwareComponents resource for a particular host:

$ oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o yaml

Where <host_name> is the name of the host.
6.6.15. Editing the HostFirmwareComponents resource
You can edit the HostFirmwareComponents resource of a node.
Procedure
Get the detailed list of HostFirmwareComponents resources:

$ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml

Edit a host's HostFirmwareComponents resource:

$ oc edit hostfirmwarecomponents <host_name> -n openshift-machine-api

Where <host_name> is the name of the host. The HostFirmwareComponents resource will open in the default editor for your terminal.
Example output
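The resource opens in your editor; the following is a minimal sketch of the spec section after adding updates, using a placeholder host name and hypothetical firmware URLs. The numbered comments correspond to the callouts below:

apiVersion: metal3.io/v1alpha1
kind: HostFirmwareComponents
metadata:
  name: <host_name>                   # matches the BareMetalHost name
  namespace: openshift-machine-api
spec:
  updates:
    - component: bios                                   # (1)
      url: http://fw.example.com/bios-v1.2.3.exe        # (2) hypothetical URL
    - component: bmc                                    # (3)
      url: http://fw.example.com/bmc-v4.5.6.exe         # (4) hypothetical URL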
- 1: To set a BIOS version, set the component attribute to bios.
- 2: To set a BIOS version, set the url attribute to the URL for the firmware version of the BIOS.
- 3: To set a BMC version, set the component attribute to bmc.
- 4: To set a BMC version, set the url attribute to the URL for the firmware version of the BMC.
- Save the changes and exit the editor.
Get the host’s machine name:
$ oc get bmh <host_name> -n openshift-machine-api

Where <host_name> is the name of the host. The machine name appears under the CONSUMER field.

Annotate the machine to delete it from the machine set:

$ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api

Where <machine_name> is the name of the machine to delete.

Get a list of nodes and count the number of worker nodes:

$ oc get nodes

Get the machine set:

$ oc get machinesets -n openshift-machine-api

Scale the machine set:

$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>

Where <machineset_name> is the name of the machine set and <n-1> is the decremented number of worker nodes.

When the host enters the Available state, scale up the machine set to make the HostFirmwareComponents resource changes take effect:

$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>

Where <machineset_name> is the name of the machine set and <n> is the number of worker nodes.