Postinstallation configuration
Day 2 operations for OpenShift Container Platform
Abstract
Chapter 1. Postinstallation configuration overview
After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components:
- Machine
- Bare metal
- Cluster
- Node
- Network
- Storage
- Users
- Alerts and notifications
1.1. Post-installation configuration tasks
You can perform post-installation configuration tasks to configure your environment to meet your needs.
The following list details these configurations:
- Configure operating system features: The Machine Config Operator (MCO) manages MachineConfig objects. By using the MCO, you can configure nodes and custom resources.
- Configure bare metal nodes: You can use the Bare Metal Operator (BMO) to manage bare metal hosts. The BMO can complete the following operations:
- Inspect hardware details of the host and report them to the bare metal host.
- Inspect firmware and configure BIOS settings.
- Provision hosts with a desired image.
- Clean disk contents for the host before or after provisioning the host.
- Configure cluster features: You can modify the following features of an OpenShift Container Platform cluster:
- Image registry
- Networking configuration
- Image build behavior
- Identity provider
- The etcd configuration
- Machine set creation to handle the workloads
- Cloud provider credential management
- Configure a private cluster: By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. To make your cluster accessible only from within an internal network, configure the following components to make them private:
- DNS
- Ingress Controller
- API server
- Perform node operations: By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. You can perform the following node operations:
- Add and remove compute machines.
- Add and remove taints and tolerations.
- Configure the maximum number of pods per node.
- Enable Device Manager.
- Configure users: OAuth access tokens allow users to authenticate themselves to the API. You can configure OAuth to perform the following tasks:
- Specify an identity provider
- Use role-based access control to define and supply permissions to users
- Install an Operator from OperatorHub
- Configure alert notifications: By default, firing alerts are displayed on the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems.
Chapter 2. Configuring a private cluster
After you install an OpenShift Container Platform version 4.16 cluster, you can set some of its core components to be private.
2.1. About private clusters
By default, OpenShift Container Platform is provisioned by using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
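For example, on AWS a LoadBalancer-type service can be kept internal by adding the cloud provider annotation shown in the following sketch. The service name, selector, and ports are placeholders, and other cloud providers use their own internal load balancer annotations:
apiVersion: v1
kind: Service
metadata:
  name: example-internal-lb           # placeholder name
  annotations:
    # AWS-specific annotation; check your provider's documentation for the equivalent
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: example                       # placeholder selector
  ports:
  - port: 443
    targetPort: 8443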
DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster’s own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
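For example, you can spot-check these records with the dig command; the cluster name, base domain, and console route host in this sketch are placeholders:
$ dig +short api.<cluster_name>.<base_domain>
$ dig +short console-openshift-console.apps.<cluster_name>.<base_domain>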
Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets.
The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates.
API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster’s access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you retain both load balancers in a private cluster.
2.2. Configuring DNS records to be published in a private zone
For all OpenShift Container Platform clusters, whether public or private, DNS records are published in a public zone by default.
You can remove the public zone from the cluster DNS configuration to avoid exposing DNS records to the public. You might want to avoid exposing sensitive information, such as internal domain names, internal IP addresses, or the number of clusters at an organization, or you might simply have no need to publish records publicly. If all the clients that should be able to connect to services within the cluster use a private DNS service that has the DNS records from the private zone, then there is no need to have a public DNS record for the cluster.
After you deploy a cluster, you can modify its DNS to use only a private zone by modifying the DNS custom resource (CR). Modifying the DNS CR in this way means that any DNS records that are subsequently created are not published to public DNS servers, which keeps knowledge of the DNS records isolated to internal users. This can be done when you configure the cluster to be private, or if you never want DNS records to be publicly resolvable.
Alternatively, even in a private cluster, you might keep the public zone for DNS records because it allows clients to resolve DNS names for applications running on that cluster. For example, an organization can have machines that connect to the public internet and then establish VPN connections for certain private IP ranges in order to connect to private IP addresses. The DNS lookups from these machines use the public DNS to determine the private addresses of those services, and then connect to the private addresses over the VPN.
Procedure
Review the DNS CR for your cluster by running the following command and observing the output:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}
Note that the spec section contains both a private and a public zone.
Patch the DNS CR to remove the public zone by running the following command:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Example output
dns.config.openshift.io/cluster patched
The Ingress Operator consults the DNS CR definition when it creates DNS records for IngressController objects. If only private zones are specified, only private records are created.
Important
Existing DNS records are not modified when you remove the public zone. You must manually delete previously published public DNS records if you no longer want them to be published publicly.
Verification
Review the DNS CR for your cluster and confirm that the public zone was removed, by running the following command and observing the output:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
2.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
Example output
ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced
The public DNS entry is removed, and the private zone entry is updated.
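To confirm the change, you can check the scope that the Ingress Controller reports. This is a spot check that assumes the default controller in the openshift-ingress-operator namespace and the current status field layout:
$ oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.endpointPublishingStrategy.loadBalancer.scope}'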
2.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for your cloud provider, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
- For Azure, delete the api-internal-v4 rule for the public load balancer.
- For Azure, configure the Ingress Controller endpoint publishing scope to Internal. For more information, see "Configuring the Ingress Controller endpoint publishing scope to Internal".
- For the Azure public load balancer, if you configure the Ingress Controller endpoint publishing scope to Internal and there are no existing inbound rules in the public load balancer, you must create an outbound rule explicitly to provide outbound traffic for the backend address pool. For more information, see the Microsoft Azure documentation about adding outbound rules.
- Delete the api.$clustername.$yourdomain or api.$clustername DNS entry in the public zone.
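For example, on AWS you might remove the public api record with the Route 53 CLI. This is a minimal sketch only; the hosted zone ID, record name, and alias target are placeholders and must match the existing record exactly:
$ aws route53 change-resource-record-sets \
  --hosted-zone-id <public_hosted_zone_id> \
  --change-batch '{
    "Changes": [{
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "api.<cluster_name>.<base_domain>.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb_hosted_zone_id>",
          "DNSName": "<external_lb_dns_name>.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'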
AWS clusters: Remove the external load balancers:
Important
You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.
If your cluster uses a control plane machine set, delete the lines in the control plane machine set custom resource that configure your public or external load balancer:
# ...
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-ext 1
      type: network 2
    - name: lk4pj-int
      type: network
# ...
If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine.
From your terminal, list the cluster machines by running the following command:
$ oc get machine -n openshift-machine-api
Example output
NAME                            STATE     TYPE        REGION      ZONE         AGE
lk4pj-master-0                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-master-1                  running   m4.xlarge   us-east-1   us-east-1b   17m
lk4pj-master-2                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge   us-east-1   us-east-1b   15m
The control plane machines contain master in the name.
Remove the external load balancer from each control plane machine:
Edit a control plane machine object by running the following command:
$ oc edit machines -n openshift-machine-api <control_plane_name> 1
- 1
- Specify the name of the control plane machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example:
# ...
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-ext 1
      type: network 2
    - name: lk4pj-int
      type: network
# ...
- Save your changes and exit the object specification.
- Repeat this process for each of the control plane machines.
2.5. Configuring a private storage endpoint on Azure
You can use the Image Registry Operator to configure private storage endpoints on Azure, which enables the use of private storage accounts when OpenShift Container Platform is deployed on private Azure clusters. This allows you to deploy the image registry without exposing public-facing storage endpoints.
Do not configure a private storage endpoint on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state.
You can configure the Image Registry Operator to use private storage endpoints on Azure in one of two ways:
- By configuring the Image Registry Operator to discover the VNet and subnet names
- With user-provided Azure Virtual Network (VNet) and subnet names
2.5.1. Limitations for configuring a private storage endpoint on Azure
The following limitations apply when configuring a private storage endpoint on Azure:
- When configuring the Image Registry Operator to use a private storage endpoint, public network access to the storage account is disabled. Consequently, pulling images from the registry outside of OpenShift Container Platform only works by setting disableRedirect: true in the registry Operator configuration. With redirect enabled, the registry redirects the client to pull images directly from the storage account, which will no longer work due to disabled public network access. For more information, see "Disabling redirect when using a private storage endpoint on Azure".
- This operation cannot be undone by the Image Registry Operator.
2.5.2. Configuring a private storage endpoint on Azure by enabling the Image Registry Operator to discover VNet and subnet names
The following procedure shows you how to set up a private storage endpoint on Azure by configuring the Image Registry Operator to discover VNet and subnet names.
Prerequisites
- You have configured the image registry to run on Azure.
- Your network has been set up by using the installer-provisioned infrastructure installation method.
For users with a custom network setup, see "Configuring a private storage endpoint on Azure with user-provided VNet and subnet names".
Procedure
Edit the Image Registry Operator config object and set networkAccess.type to Internal:
$ oc edit configs.imageregistry/cluster
# ...
spec:
  # ...
  storage:
    azure:
      # ...
      networkAccess:
        type: Internal
# ...
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w
Optional: If the registry is exposed by a route, and you are configuring your storage account to be private, you must disable redirect if you want pulls external to the cluster to continue to work. Enter the following command to disable redirect on the Image Operator configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'
Note
When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc registry info --internal=true
Example output
image-registry.openshift-image-registry.svc:5000
Enter debug mode by running the following command:
$ oc debug node/<node_name>
Run the suggested chroot command. For example:
$ chroot /host
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
Example output
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools...
Getting image source signatures
Copying blob 6b245f040973 done
Copying config 22667f5368 done
Writing manifest to image destination
Storing signatures
22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9
2.5.3. Configuring a private storage endpoint on Azure with user-provided VNet and subnet names
Use the following procedure to configure a storage account that has public network access disabled and is exposed behind a private storage endpoint on Azure.
Prerequisites
- You have configured the image registry to run on Azure.
- You must know the VNet and subnet names used for your Azure environment.
- If your network was configured in a separate resource group in Azure, you must also know its name.
Procedure
Edit the Image Registry Operator config object and configure the private endpoint by using your VNet and subnet names:
$ oc edit configs.imageregistry/cluster
# ...
spec:
  # ...
  storage:
    azure:
      # ...
      networkAccess:
        type: Internal
        internal:
          subnetName: <subnet_name>
          vnetName: <vnet_name>
          networkResourceGroupName: <network_resource_group_name>
# ...
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w
Note
When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc registry info --internal=true
Example output
image-registry.openshift-image-registry.svc:5000
Enter debug mode by running the following command:
$ oc debug node/<node_name>
Run the suggested chroot command. For example:
$ chroot /host
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
Example output
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools...
Getting image source signatures
Copying blob 6b245f040973 done
Copying config 22667f5368 done
Writing manifest to image destination
Storing signatures
22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9
2.5.4. Optional: Disabling redirect when using a private storage endpoint on Azure
By default, redirect is enabled when using the image registry. Redirect allows off-loading of traffic from the registry pods into the object storage, which makes pulls faster. When redirect is enabled and the storage account is private, users from outside of the cluster are unable to pull images from the registry.
In some cases, users might want to disable redirect so that users from outside of the cluster can pull images from the registry.
Use the following procedure to disable redirect.
Prerequisites
- You have configured the image registry to run on Azure.
- You have configured a route.
Procedure
Enter the following command to disable redirect on the image registry configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'
Verification
Fetch the registry service name by running the following command:
$ oc registry info
Example output
default-route-openshift-image-registry.<cluster_dns>
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns>/openshift/tools
Example output
Trying to pull default-route-openshift-image-registry.<cluster_dns>/openshift/tools...
Getting image source signatures
Copying blob 6b245f040973 done
Copying config 22667f5368 done
Writing manifest to image destination
Storing signatures
22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9
Chapter 3. Bare-metal configuration
When deploying OpenShift Container Platform on bare-metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host’s hardware and firmware details. It can also include formatting disks or changing modifiable firmware settings.
3.1. Services for a user-managed load balancer
You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.
Configuring a user-managed load balancer depends on your vendor’s load balancer.
The information and examples in this section are for guidance only. Consult the vendor documentation for specific information about the vendor’s load balancer.
Red Hat supports the following services for a user-managed load balancer:
- Ingress Controller
- OpenShift API
- OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:
Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment

Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment

Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment

The following configuration options are supported for user-managed load balancers:
- Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.
- Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.
Tip
You can list all IP addresses that exist in a network by checking the machine config pool’s resources.
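Alternatively, a quick way to list the node IP addresses that a load balancer could target is to query the nodes directly. This is a sketch that assumes the standard InternalIP address type:
$ oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'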
Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information:
- For a front-end IP address, you can use the same IP address for the Ingress Controller’s load balancer and the API load balancer. Check the vendor’s documentation for this capability.
- For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions:
- Assign a static IP address to each control plane node.
- Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
- Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.
3.1.1. Configuring a user-managed load balancer
You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.
Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.
Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
OpenShift API prerequisites
- You defined a front-end IP address.
- TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
- Port 6443 provides access to the OpenShift API service.
- Port 22623 can provide Ignition startup configurations to nodes.
- The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
- The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.
Ingress Controller prerequisites
- You defined a front-end IP address.
- TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
- The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
- The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
Prerequisite for health check URL specifications
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples show health check specifications for the previously listed backend services:
Example of a Kubernetes API health check specification
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
Procedure
Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
Example HAProxy configuration with one listed subnet
# ...
listen my-cluster-api-6443
    bind 192.168.1.100:6443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /readyz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2

listen my-cluster-machine-config-api-22623
    bind 192.168.1.100:22623
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2

listen my-cluster-apps-443
    bind 192.168.1.100:443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2

listen my-cluster-apps-80
    bind 192.168.1.100:80
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
Example HAProxy configuration with multiple listed subnets
# ...
listen api-server-6443
    bind *:6443
    mode tcp
    server master-00 192.168.83.89:6443 check inter 1s
    server master-01 192.168.84.90:6443 check inter 1s
    server master-02 192.168.85.99:6443 check inter 1s
    server bootstrap 192.168.80.89:6443 check inter 1s

listen machine-config-server-22623
    bind *:22623
    mode tcp
    server master-00 192.168.83.89:22623 check inter 1s
    server master-01 192.168.84.90:22623 check inter 1s
    server master-02 192.168.85.99:22623 check inter 1s
    server bootstrap 192.168.80.89:22623 check inter 1s

listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server worker-00 192.168.83.100:80 check inter 1s
    server worker-01 192.168.83.101:80 check inter 1s

listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server worker-00 192.168.83.100:443 check inter 1s
    server worker-01 192.168.83.101:443 check inter 1s

listen ironic-api-6385
    bind *:6385
    mode tcp
    balance source
    server master-00 192.168.83.89:6385 check inter 1s
    server master-01 192.168.84.90:6385 check inter 1s
    server master-02 192.168.85.99:6385 check inter 1s
    server bootstrap 192.168.80.89:6385 check inter 1s

listen inspector-api-5050
    bind *:5050
    mode tcp
    balance source
    server master-00 192.168.83.89:5050 check inter 1s
    server master-01 192.168.84.90:5050 check inter 1s
    server master-02 192.168.85.99:5050 check inter 1s
    server bootstrap 192.168.80.89:5050 check inter 1s
# ...
Use the curl CLI command to verify that the user-managed load balancer and its resources are operational:
Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that the machine config server resource is accessible, by running the following command and observing the output:
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output:
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output:
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<load_balancer_front_end_IP_address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.
Examples of modified DNS records
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
Important
DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster’s install-config.yaml file:
# ...
platform:
  loadBalancer:
    type: UserManaged 1
  apiVIPs:
  - <api_ip> 2
  ingressVIPs:
  - <ingress_ip> 3
# ...
- 1
- Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault, which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services.
- 2
- Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the Kubernetes API can communicate with the user-managed load balancer.
- 3
- Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster.
Verification
Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational:
Verify that you can access the cluster API, by running the following command and observing the output:
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that you can access the cluster machine configuration, by running the following command and observing the output:
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that you can access each cluster application on port 80, by running the following command and observing the output:
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Verify that you can access each cluster application on port 443, by running the following command and observing the output:
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
3.2. About the Bare Metal Operator
Use the Bare Metal Operator (BMO) to provision, manage, and inspect bare-metal hosts in your cluster.
The BMO uses the following resources to complete these tasks:
- BareMetalHost
- HostFirmwareSettings
- FirmwareSchema
- HostFirmwareComponents
The BMO maintains an inventory of the physical hosts in the cluster by mapping each bare-metal host to an instance of the BareMetalHost custom resource definition. Each BareMetalHost resource features hardware, software, and firmware details. The BMO continually inspects the bare-metal hosts in the cluster to ensure each BareMetalHost resource accurately details the components of the corresponding host.
The BMO also uses the HostFirmwareSettings resource, the FirmwareSchema resource, and the HostFirmwareComponents resource to detail firmware specifications and upgrade or downgrade firmware for the bare-metal host.
The BMO interfaces with bare-metal hosts in the cluster by using the Ironic API service. The Ironic service uses the Baseboard Management Controller (BMC) on the host to interface with the machine.
Some common tasks you can complete by using the BMO include the following:
- Provision bare-metal hosts to the cluster with a specific image
- Format a host’s disk contents before provisioning or after deprovisioning
- Turn on or off a host
- Change firmware settings
- View the host’s hardware details
- Upgrade or downgrade a host’s firmware to a specific version
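For example, a minimal sketch of powering a host off by patching its BareMetalHost resource; the host name is a placeholder:
$ oc patch bmh <host_name> -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'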
3.2.1. Bare Metal Operator architecture
The Bare Metal Operator (BMO) uses the following resources to provision, manage, and inspect bare-metal hosts in your cluster. The following diagram illustrates the architecture of these resources:

BareMetalHost
The BareMetalHost resource defines a physical host and its properties. When you provision a bare-metal host to the cluster, you must define a BareMetalHost resource for that host. For ongoing management of the host, you can inspect the information in the BareMetalHost or update this information.
The BareMetalHost resource features provisioning information such as the following:
- Deployment specifications such as the operating system boot image or the custom RAM disk
- Provisioning state
- Baseboard Management Controller (BMC) address
- Desired power state
The BareMetalHost resource features hardware information such as the following:
- Number of CPUs
- MAC address of a NIC
- Size of the host’s storage device
- Current power state
HostFirmwareSettings
You can use the HostFirmwareSettings resource to retrieve and manage the firmware settings for a host. When a host moves to the Available state, the Ironic service reads the host’s firmware settings and creates the HostFirmwareSettings resource. There is a one-to-one mapping between the BareMetalHost resource and the HostFirmwareSettings resource.
You can use the HostFirmwareSettings resource to inspect the firmware specifications for a host or to update a host’s firmware specifications.
You must adhere to the schema specific to the vendor firmware when you edit the spec field of the HostFirmwareSettings resource. This schema is defined in the read-only FirmwareSchema resource.
FirmwareSchema
Firmware settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each firmware setting on each host model. The data comes directly from the BMC by using the Ironic service. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource.
A FirmwareSchema resource can apply to many BareMetalHost resources if the schema is the same.
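For example, you might list the schema resources and inspect one to see which settings a host model accepts. This sketch assumes the standard Metal3 deployment in the openshift-machine-api namespace:
$ oc get firmwareschema -n openshift-machine-api
$ oc get firmwareschema <schema_name> -n openshift-machine-api -o yaml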
HostFirmwareComponents
Metal3 provides the HostFirmwareComponents resource, which describes BIOS and baseboard management controller (BMC) firmware versions. You can upgrade or downgrade the host’s firmware to a specific version by editing the spec field of the HostFirmwareComponents resource. This is useful when deploying with validated patterns that have been tested against specific firmware versions.
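For illustration, a minimal sketch of a HostFirmwareComponents spec that requests a BIOS update. The component name and URL are placeholders, and the components and image formats that are accepted depend on your BMC vendor:
spec:
  updates:
  - component: bios                                     # typically "bios" or "bmc"
    url: http://firmware.example.com/bios-v1.2.3.bin    # placeholder firmware image URL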
3.2.2. Optional: Creating a manifest object that includes a customized br-ex bridge
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a NodeNetworkConfigurationPolicy custom resource (CR) that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster.
Creating a NodeNetworkConfigurationPolicy CR that includes a customized br-ex bridge is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This feature supports the following tasks:
- Modifying the maximum transmission unit (MTU) for your cluster.
- Modifying attributes of a different bond interface, such as MIImon (Media Independent Interface Monitor), bonding mode, or Quality of Service (QoS).
- Updating DNS values.
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
- You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge.
- You want to deploy the bridge on a different interface than the interface available on a host or server IP address.
- You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces.
Prerequisites
- You set a customized br-ex by using the alternative method to configure-ovs.
- You installed the Kubernetes NMState Operator.
Procedure
Create a NodeNetworkConfigurationPolicy (NNCP) CR and define a customized br-ex bridge network configuration. Depending on your needs, ensure that you set a masquerade IP for either the ipv4.address.ip, ipv6.address.ip, or both parameters. A masquerade IP address must match an in-use IP address block.
Important
As a post-installation task, you can configure most parameters for a customized br-ex bridge that you defined in an existing NNCP CR, except for the IP address.
Example of an NNCP CR that sets IPv6 and IPv4 masquerade IP addresses
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-0-br-ex 1
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-0
  desiredState:
    interfaces:
    - name: enp2s0 2
      type: ethernet 3
      state: up 4
      ipv4:
        enabled: false 5
      ipv6:
        enabled: false
    - name: br-ex
      type: ovs-bridge
      state: up
      ipv4:
        enabled: false
        dhcp: false
      ipv6:
        enabled: false
        dhcp: false
      bridge:
        port:
        - name: enp2s0 6
        - name: br-ex
    - name: br-ex
      type: ovs-interface
      state: up
      copy-mac-from: enp2s0
      ipv4:
        enabled: true
        dhcp: true
        address:
        - ip: "169.254.169.2"
          prefix-length: 29
      ipv6:
        enabled: false
        dhcp: false
        address:
        - ip: "fd69::2"
          prefix-length: 125
3.3. About the BareMetalHost resource
Metal3 introduces the concept of the BareMetalHost resource, which defines a physical host and its properties. The BareMetalHost resource contains two sections:
- The BareMetalHost spec
- The BareMetalHost status
3.3.1. The BareMetalHost spec
The spec section of the BareMetalHost resource defines the desired state of the host.
Parameters | Description |
---|---|
|
An interface to enable or disable automated cleaning during provisioning and de-provisioning. When set to |
bmc: address: credentialsName: disableCertificateVerification: |
The
|
| The MAC address of the NIC used for provisioning the host. |
|
The boot mode of the host. It defaults to |
|
A reference to another resource that is using the host. It could be empty if another resource is not currently using the host. For example, a |
| A human-provided string to help identify the host. |
| A boolean indicating whether the host provisioning and deprovisioning are managed externally. When set:
|
|
Contains information about the BIOS configuration of bare metal hosts. Currently,
|
image: url: checksum: checksumType: format: |
The
|
| A reference to the secret containing the network configuration data and its namespace, so that it can be attached to the host before the host boots to set up the network. |
|
A boolean indicating whether the host should be powered on ( |
raid: hardwareRAIDVolumes: softwareRAIDVolumes: | (Optional) Contains the information about the RAID configuration for bare metal hosts. If not specified, it retains the current configuration. Note OpenShift Container Platform 4.16 supports hardware RAID on the installation drive for BMCs, including:
OpenShift Container Platform 4.16 does not support software RAID on the installation drive. See the following configuration settings:
You can set the spec: raid: hardwareRAIDVolume: []
If you receive an error message indicating that the driver does not support RAID, set the |
rootDeviceHints: deviceName: hctl: model: vendor: serialNumber: minSizeGigabytes: wwn: wwnWithExtension: wwnVendorExtension: rotational: |
The
|
3.3.2. The BareMetalHost status
The BareMetalHost status represents the host’s current state, and includes tested credentials, current hardware details, and other information.
Parameters | Description |
---|---|
| A reference to the secret and its namespace holding the last set of baseboard management controller (BMC) credentials the system was able to validate as working. |
| Details of the last error reported by the provisioning backend, if any. |
| Indicates the class of problem that has caused the host to enter an error state. The error types are:
|
hardware: cpu arch: model: clockMegahertz: flags: count: |
The
|
hardware: firmware: | Contains BIOS firmware information. For example, the hardware vendor and version. |
hardware: nics: - ip: name: mac: speedGbps: vlans: vlanId: pxe: |
The
|
hardware: ramMebibytes: | The host’s amount of memory in Mebibytes (MiB). |
hardware: storage: - name: rotational: sizeBytes: serialNumber: |
The
|
hardware: systemVendor: manufacturer: productName: serialNumber: |
Contains information about the host’s |
| The timestamp of the last time the status of the host was updated. |
| The status of the server. The status is one of the following:
|
| Boolean indicating whether the host is powered on. |
provisioning: state: id: image: raid: firmware: rootDeviceHints: |
The
|
| A reference to the secret and its namespace holding the last set of BMC credentials that were sent to the provisioning backend. |
3.4. Getting the BareMetalHost resource
The BareMetalHost resource contains the properties of a physical host. You must get the BareMetalHost resource for a physical host to review its properties.
Procedure
Get the list of BareMetalHost resources:
$ oc get bmh -n openshift-machine-api -o yaml
Note
You can use baremetalhost as the long form of bmh with the oc get command.
Get the list of hosts:
$ oc get bmh -n openshift-machine-api
Get the BareMetalHost resource for a specific host:
$ oc get bmh <host_name> -n openshift-machine-api -o yaml
Where <host_name> is the name of the host.
Example output
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  creationTimestamp: "2022-06-16T10:48:33Z"
  finalizers:
  - baremetalhost.metal3.io
  generation: 2
  name: openshift-worker-0
  namespace: openshift-machine-api
  resourceVersion: "30099"
  uid: 1513ae9b-e092-409d-be1b-ad08edeb1271
spec:
  automatedCleaningMode: metadata
  bmc:
    address: redfish://10.46.61.19:443/redfish/v1/Systems/1
    credentialsName: openshift-worker-0-bmc-secret
    disableCertificateVerification: true
  bootMACAddress: 48:df:37:c7:f7:b0
  bootMode: UEFI
  consumerRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: Machine
    name: ocp-edge-958fk-worker-0-nrfcg
    namespace: openshift-machine-api
  customDeploy:
    method: install_coreos
  online: true
  rootDeviceHints:
    deviceName: /dev/disk/by-id/scsi-<serial_number>
  userData:
    name: worker-user-data-managed
    namespace: openshift-machine-api
status:
  errorCount: 0
  errorMessage: ""
  goodCredentials:
    credentials:
      name: openshift-worker-0-bmc-secret
      namespace: openshift-machine-api
    credentialsVersion: "16120"
  hardware:
    cpu:
      arch: x86_64
      clockMegahertz: 2300
      count: 64
      flags:
      - 3dnowprefetch
      - abm
      - acpi
      - adx
      - aes
      model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
    firmware:
      bios:
        date: 10/26/2020
        vendor: HPE
        version: U30
    hostname: openshift-worker-0
    nics:
    - mac: 48:df:37:c7:f7:b3
      model: 0x8086 0x1572
      name: ens1f3
    ramMebibytes: 262144
    storage:
    - hctl: "0:0:0:0"
      model: VK000960GWTTB
      name: /dev/disk/by-id/scsi-<serial_number>
      sizeBytes: 960197124096
      type: SSD
      vendor: ATA
    systemVendor:
      manufacturer: HPE
      productName: ProLiant DL380 Gen10 (868703-B21)
      serialNumber: CZ200606M3
  lastUpdated: "2022-06-16T11:41:42Z"
  operationalStatus: OK
  poweredOn: true
  provisioning:
    ID: 217baa14-cfcf-4196-b764-744e184a3413
    bootMode: UEFI
    customDeploy:
      method: install_coreos
    image:
      url: ""
    raid:
      hardwareRAIDVolumes: null
      softwareRAIDVolumes: []
    rootDeviceHints:
      deviceName: /dev/disk/by-id/scsi-<serial_number>
    state: provisioned
  triedCredentials:
    credentials:
      name: openshift-worker-0-bmc-secret
      namespace: openshift-machine-api
    credentialsVersion: "16120"
3.5. Editing a BareMetalHost resource
After you deploy an OpenShift Container Platform cluster on bare metal, you might need to edit a node’s BareMetalHost resource. Consider the following examples:
- You deploy a cluster with the Assisted Installer and need to add or edit the baseboard management controller (BMC) host name or IP address.
- You want to move a node from one cluster to another without deprovisioning it.
Prerequisites
- Ensure the node is in the Provisioned, ExternallyProvisioned, or Available state.
Procedure
Get the list of nodes:
$ oc get bmh -n openshift-machine-api
Before editing the node’s BareMetalHost resource, detach the node from Ironic by running the following command:
$ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached=true' 1
- 1
- Replace <node_name> with the name of the node.
Edit the BareMetalHost resource by running the following command:
$ oc edit bmh <node_name> -n openshift-machine-api
Reattach the node to Ironic by running the following command:
$ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached'-
3.6. Troubleshooting latency when deleting a BareMetalHost resource
When the Bare Metal Operator (BMO) deletes a BareMetalHost resource, Ironic deprovisions the bare-metal host. For example, this might happen when scaling down a machine set. Deprovisioning involves a process known as "cleaning", which performs the following steps:
- Powering off the bare-metal host
- Booting a service RAM disk on the bare-metal host
- Removing partitioning metadata from all disks
- Powering off the bare-metal host again
If cleaning does not succeed, the deletion of the BareMetalHost resource will take a long time and might not finish.
Do not remove the finalizers to force deletion of a BareMetalHost resource. The provisioning back-end has its own database, which maintains a host record. Running actions will continue to run even if you try to force the deletion by removing the finalizers. You might face unexpected issues when attempting to add the bare-metal host later.
Procedure
- If the cleaning process can recover, wait for it to finish.
- If cleaning cannot recover, disable the cleaning process by modifying the BareMetalHost resource and setting the automatedCleaningMode field to disabled.
See "Editing a BareMetalHost resource" for additional details.
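For example, a minimal sketch of disabling automated cleaning with a patch; the host name is a placeholder:
$ oc patch bmh <host_name> -n openshift-machine-api --type merge -p '{"spec":{"automatedCleaningMode":"disabled"}}'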
3.7. Attaching a non-bootable ISO to a bare-metal node
You can attach a generic, non-bootable ISO virtual media image to a provisioned node by using the DataImage resource. After you apply the resource, the ISO image becomes accessible to the operating system after it has booted. This is useful for configuring a node after provisioning the operating system and before the node boots for the first time.
Prerequisites
- The node must use Redfish or drivers derived from it to support this feature.
- The node must be in the Provisioned or ExternallyProvisioned state.
- The name must be the same as the name of the node defined in its BareMetalHost resource.
- You have a valid url to the ISO image.
Procedure
Create a DataImage resource:
apiVersion: metal3.io/v1alpha1
kind: DataImage
metadata:
  name: <node_name> 1
spec:
  url: "http://dataimage.example.com/non-bootable.iso" 2
Save the DataImage resource to a file by running the following command:
$ vim <node_name>-dataimage.yaml
Apply the DataImage resource by running the following command:
$ oc apply -f <node_name>-dataimage.yaml -n <node_namespace> 1
- 1
- Replace <node_namespace> so that the namespace matches the namespace for the BareMetalHost resource. For example, openshift-machine-api.
Reboot the node.
Note
To reboot the node, attach the reboot.metal3.io annotation, or reset the online status in the BareMetalHost resource. A forced reboot of the bare-metal node changes the state of the node to NotReady for a while, for example, 5 minutes or more.
View the DataImage resource by running the following command:
$ oc get dataimage <node_name> -n openshift-machine-api -o yaml
Example output
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: DataImage
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"metal3.io/v1alpha1","kind":"DataImage","metadata":{"annotations":{},"name":"bmh-node-1","namespace":"openshift-machine-api"},"spec":{"url":"http://dataimage.example.com/non-bootable.iso"}}
    creationTimestamp: "2024-06-10T12:00:00Z"
    finalizers:
    - dataimage.metal3.io
    generation: 1
    name: bmh-node-1
    namespace: openshift-machine-api
    ownerReferences:
    - apiVersion: metal3.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: BareMetalHost
      name: bmh-node-1
      uid: 046cdf8e-0e97-485a-8866-e62d20e0f0b3
    resourceVersion: "21695581"
    uid: c5718f50-44b6-4a22-a6b7-71197e4b7b69
  spec:
    url: http://dataimage.example.com/non-bootable.iso
  status:
    attachedImage:
      url: http://dataimage.example.com/non-bootable.iso
    error:
      count: 0
      message: ""
    lastReconciled: "2024-06-10T12:05:00Z"
3.8. About the HostFirmwareSettings resource
You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. When a host moves to the Available state, Ironic reads the host’s BIOS settings and creates the HostFirmwareSettings resource. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host.
The HostFirmwareSettings
resource contains two sections:
-
The
HostFirmwareSettings
spec. -
The
HostFirmwareSettings
status.
3.8.1. The HostFirmwareSettings
spec
The spec
section of the HostFirmwareSettings
resource defines the desired state of the host’s BIOS, and it is empty by default. Ironic uses the settings in the spec.settings
section to update the baseboard management controller (BMC) when the host is in the Preparing
state. Use the FirmwareSchema
resource to ensure that you do not send invalid name/value pairs to hosts. See "About the FirmwareSchema resource" for additional details.
Example
spec:
settings:
ProcTurboMode: Disabled 1
- 1
- In the foregoing example, the
spec.settings
section contains a name/value pair that will set theProcTurboMode
BIOS setting toDisabled
.
Integer parameters listed in the status
section appear as strings. For example, "1"
. When setting integers in the spec.settings
section, the values should be set as integers without quotes. For example, 1
.
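For example, the following sketch sets a hypothetical integer BIOS setting; the name BootRetryDelay is illustrative only, and you should take valid setting names and ranges from the host's FirmwareSchema resource:
Example
spec:
  settings:
    BootRetryDelay: 30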
3.8.2. The HostFirmwareSettings
status
The status
represents the current state of the host’s BIOS.
Parameters | Description |
---|---|
status: conditions: - lastTransitionTime: message: observedGeneration: reason: status: type: |
The
|
status: schema: name: namespace: lastUpdated: |
The
|
status: settings: |
The |
3.9. Getting the HostFirmwareSettings resource
The HostFirmwareSettings
resource contains the vendor-specific BIOS properties of a physical host. You must get the HostFirmwareSettings
resource for a physical host to review its BIOS properties.
Procedure
Get the detailed list of
HostFirmwareSettings
resources:$ oc get hfs -n openshift-machine-api -o yaml
NoteYou can use
hostfirmwaresettings
as the long form ofhfs
with theoc get
command.Get the list of
HostFirmwareSettings
resources:$ oc get hfs -n openshift-machine-api
Get the
HostFirmwareSettings
resource for a particular host:$ oc get hfs <host_name> -n openshift-machine-api -o yaml
Where
<host_name>
is the name of the host.
3.10. Editing the HostFirmwareSettings resource
You can edit the HostFirmwareSettings
of provisioned hosts.
You can only edit hosts when they are in the provisioned
state, excluding read-only values. You cannot edit hosts in the externally provisioned
state.
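Before you edit the resource, you can confirm the provisioning state of a host, for example by reading the provisioning state from its BareMetalHost resource (a sketch; <host_name> is a placeholder):
$ oc get bmh <host_name> -n openshift-machine-api -o jsonpath='{.status.provisioning.state}'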
Procedure
Get the list of
HostFirmwareSettings
resources:$ oc get hfs -n openshift-machine-api
Edit a host’s
HostFirmwareSettings
resource:$ oc edit hfs <host_name> -n openshift-machine-api
Where
<host_name>
is the name of a provisioned host. TheHostFirmwareSettings
resource will open in the default editor for your terminal.Add name/value pairs to the
spec.settings
section:Example
spec: settings: name: value 1
- 1
- Use the
FirmwareSchema
resource to identify the available settings for the host. You cannot set values that are read-only.
- Save the changes and exit the editor.
Get the host’s machine name:
$ oc get bmh <host_name> -n openshift-machine-api
Where
<host_name>
is the name of the host. The machine name appears under theCONSUMER
field.Annotate the machine to delete it from the machineset:
$ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api
Where
<machine_name>
is the name of the machine to delete.Get a list of nodes and count the number of worker nodes:
$ oc get nodes
Get the machineset:
$ oc get machinesets -n openshift-machine-api
Scale the machineset:
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>
Where
<machineset_name>
is the name of the machineset and<n-1>
is the decremented number of worker nodes.When the host enters the
Available
state, scale up the machineset to make theHostFirmwareSettings
resource changes take effect:$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>
Where
<machineset_name>
is the name of the machineset and<n>
is the number of worker nodes.
3.11. Verifying the HostFirmwareSettings resource is valid
When the user edits the spec.settings
section to make a change to the HostFirmwareSettings
(HFS) resource, the Bare Metal Operator (BMO) validates the change against the FirmwareSchema
resource, which is a read-only resource. If the setting is invalid, the BMO will set the Type
value of the status.Condition
setting to False
and also generate an event and store it in the HFS resource. Use the following procedure to verify that the resource is valid.
Procedure
Get a list of
HostFirmwareSettings
resources:$ oc get hfs -n openshift-machine-api
Verify that the
HostFirmwareSettings
resource for a particular host is valid:$ oc describe hfs <host_name> -n openshift-machine-api
Where
<host_name>
is the name of the host.Example output
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo
ImportantIf the response returns
ValidationFailed
, there is an error in the resource configuration and you must update the values to conform to theFirmwareSchema
resource.
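Alternatively, if you only want the validation result, you can read the status of the Valid condition directly (a sketch; <host_name> is a placeholder, and the jsonpath filter assumes the condition type is named Valid, as shown in the HostFirmwareComponents example output later in this document):
$ oc get hfs <host_name> -n openshift-machine-api -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'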
3.12. About the FirmwareSchema resource
BIOS settings vary among hardware vendors and host models. A FirmwareSchema
resource is a read-only resource that contains the types and limits for each BIOS setting on each host model. The data comes directly from the BMC through Ironic. The FirmwareSchema
enables you to identify valid values you can specify in the spec
field of the HostFirmwareSettings
resource. The FirmwareSchema
resource has a unique identifier derived from its settings and limits. Identical host models use the same FirmwareSchema
identifier. It is likely that multiple instances of HostFirmwareSettings
use the same FirmwareSchema
.
Parameters | Description |
---|---|
<BIOS_setting_name> attribute_type: allowable_values: lower_bound: upper_bound: min_length: max_length: read_only: unique: |
The
|
3.13. Getting the FirmwareSchema resource
Each host model from each vendor has different BIOS settings. When editing the HostFirmwareSettings
resource’s spec
section, the name/value pairs you set must conform to that host’s firmware schema. To ensure you are setting valid name/value pairs, get the FirmwareSchema
for the host and review it.
Procedure
To get a list of
FirmwareSchema
resource instances, execute the following:$ oc get firmwareschema -n openshift-machine-api
To get a particular
FirmwareSchema
instance, execute:$ oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml
Where
<instance_name>
is the name of the schema instance stated in theHostFirmwareSettings
resource (see Table 3).
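If you are unsure which FirmwareSchema instance a host references, you can read the schema name and namespace from the host's HostFirmwareSettings status (a sketch; <host_name> is a placeholder):
$ oc get hfs <host_name> -n openshift-machine-api -o jsonpath='{.status.schema.name}{"\n"}{.status.schema.namespace}{"\n"}'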
3.14. About the HostFirmwareComponents resource
Metal3 provides the HostFirmwareComponents
resource, which describes BIOS and baseboard management controller (BMC) firmware versions. The HostFirmwareComponents
resource contains two sections:
-
The
HostFirmwareComponents
spec -
The
HostFirmwareComponents
status
3.14.1. HostFirmwareComponents spec
The spec
section of the HostFirmwareComponents
resource defines the desired state of the host’s BIOS and BMC versions.
Parameters | Description |
---|---|
updates: component: url: |
The
|
3.14.2. HostFirmwareComponents status
The status
section of the HostFirmwareComponents
resource returns the current status of the host’s BIOS and BMC versions.
Parameters | Description |
---|---|
components: component: initialVersion: currentVersion: lastVersionFlashed: updatedAt: |
The
|
updates: component: url: |
The
|
3.15. Getting the HostFirmwareComponents resource
The HostFirmwareComponents
resource contains the specific firmware version of the BIOS and baseboard management controller (BMC) of a physical host. You must get the HostFirmwareComponents
resource for a physical host to review the firmware version and status.
Procedure
Get the detailed list of
HostFirmwareComponents
resources:$ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml
Get the list of
HostFirmwareComponents
resources:$ oc get hostfirmwarecomponents -n openshift-machine-api
Get the
HostFirmwareComponents
resource for a particular host:$ oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o yaml
Where
<host_name>
is the name of the host.Example output
--- apiVersion: metal3.io/v1alpha1 kind: HostFirmwareComponents metadata: creationTimestamp: 2024-04-25T20:32:06Z" generation: 1 name: ostest-master-2 namespace: openshift-machine-api ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: ostest-master-2 uid: 16022566-7850-4dc8-9e7d-f216211d4195 resourceVersion: "2437" uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c spec: updates: [] status: components: - component: bios currentVersion: 1.0.0 initialVersion: 1.0.0 - component: bmc currentVersion: "1.00" initialVersion: "1.00" conditions: - lastTransitionTime: "2024-04-25T20:32:06Z" message: "" observedGeneration: 1 reason: OK status: "True" type: Valid - lastTransitionTime: "2024-04-25T20:32:06Z" message: "" observedGeneration: 1 reason: OK status: "False" type: ChangeDetected lastUpdated: "2024-04-25T20:32:06Z" updates: []
3.16. Editing the HostFirmwareComponents resource
You can edit the HostFirmwareComponents
resource of a node.
Procedure
Get the detailed list of
HostFirmwareComponents
resources:$ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml
Edit a host’s
HostFirmwareComponents
resource:$ oc edit <host_name> hostfirmwarecomponents -n openshift-machine-api 1
- 1
- Where
<host_name>
is the name of the host. TheHostFirmwareComponents
resource will open in the default editor for your terminal.
Example output
--- apiVersion: metal3.io/v1alpha1 kind: HostFirmwareComponents metadata: creationTimestamp: 2024-04-25T20:32:06Z" generation: 1 name: ostest-master-2 namespace: openshift-machine-api ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: ostest-master-2 uid: 16022566-7850-4dc8-9e7d-f216211d4195 resourceVersion: "2437" uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c spec: updates: - name: bios 1 url: https://myurl.with.firmware.for.bios 2 - name: bmc 3 url: https://myurl.with.firmware.for.bmc 4 status: components: - component: bios currentVersion: 1.0.0 initialVersion: 1.0.0 - component: bmc currentVersion: "1.00" initialVersion: "1.00" conditions: - lastTransitionTime: "2024-04-25T20:32:06Z" message: "" observedGeneration: 1 reason: OK status: "True" type: Valid - lastTransitionTime: "2024-04-25T20:32:06Z" message: "" observedGeneration: 1 reason: OK status: "False" type: ChangeDetected lastUpdated: "2024-04-25T20:32:06Z"
- 1
- To set a BIOS version, set the
name
attribute tobios
. - 2
- To set a BIOS version, set the
url
attribute to the URL for the firmware version of the BIOS. - 3
- To set a BMC version, set the
name
attribute tobmc
. - 4
- To set a BMC version, set the
url
attribute to the URL for the firmware version of the BMC.
- Save the changes and exit the editor.
Get the host’s machine name:
$ oc get bmh <host_name> -n openshift-machine-api 1
- 1
- Where
<host_name>
is the name of the host. The machine name appears under theCONSUMER
field.
Annotate the machine to delete it from the machine set:
$ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api 1
- 1
- Where
<machine_name>
is the name of the machine to delete.
Get a list of nodes and count the number of worker nodes:
$ oc get nodes
Get the machine set:
$ oc get machinesets -n openshift-machine-api
Scale the machine set:
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1> 1
- 1
- Where
<machineset_name>
is the name of the machine set and<n-1>
is the decremented number of worker nodes.
When the host enters the
Available
state, scale up the machine set to make theHostFirmwareComponents
resource changes take effect:$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n> 1
- 1
- Where
<machineset_name>
is the name of the machine set and<n>
is the number of worker nodes.
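After the host returns to service, you can confirm the firmware versions that are now reported for the host by reading the status.components field (a sketch; <host_name> is a placeholder):
$ oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o jsonpath='{.status.components}'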
Chapter 4. Configuring multi-architecture compute machines on an OpenShift cluster
4.1. About clusters with multi-architecture compute machines
An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures.
When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You need to ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes.
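For example, one common way to keep a pod on nodes of a matching architecture is a node selector on the standard kubernetes.io/arch label. The following is a minimal sketch, not a required configuration; the pod name and image reference are placeholders:
Example
apiVersion: v1
kind: Pod
metadata:
  name: example-arm64-pod
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
  - name: app
    image: <arm64_or_multi_arch_image>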
The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Cluster capabilities.
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines.
4.1.1. Configuring your cluster with multi-architecture compute machines
To create a cluster with multi-architecture compute machines with different installation options and platforms, you can use the documentation in the following table:
Documentation section | Platform | User-provisioned installation | Installer-provisioned installation | Control Plane | Compute node |
---|---|---|---|---|---|
Creating a cluster with multi-architecture compute machines on Azure | Microsoft Azure | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on AWS | Amazon Web Services (AWS) | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on GCP | Google Cloud Platform (GCP) | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z | Bare metal | ✓ |
|
| |
IBM Power | ✓ |
|
| ||
IBM Z | ✓ |
|
| ||
Creating a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE with z/VM | IBM Z® and IBM® LinuxONE | ✓ |
|
| |
IBM Z® and IBM® LinuxONE | ✓ |
|
| ||
Creating a cluster with multi-architecture compute machines on IBM Power® | IBM Power® | ✓ |
|
|
Autoscaling from zero is currently not supported on Google Cloud Platform (GCP).
4.2. Creating a cluster with multi-architecture compute machines on Azure
To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations.
You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines.
After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster.
4.2.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.2.2. Creating a 64-bit ARM boot image using the Azure image gallery
The following procedure describes how to manually generate a 64-bit ARM boot image.
Prerequisites
-
You installed the Azure CLI (
az
). - You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Procedure
Log in to your Azure account:
$ az login
Create a storage account and upload the
aarch64
virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group:$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS 1
- 1
- The
westus
object is an example region.
Create a storage container using the storage account you generated:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
You must use the OpenShift Container Platform installation program JSON file to extract the URL and
aarch64
VHD name:Extract the
URL
field and set it toRHCOS_VHD_ORIGIN_URL
as the file name by running the following command:$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')
Extract the
aarch64
VHD name and set it toBLOB_NAME
as the file name by running the following command:$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the RHCOS VHD into the storage container:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \ --source-uri "${RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process with the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
Example output
{ "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null }
- 1
- If the status parameter displays the
success
object, the copying process is complete.
Create an image gallery using the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition. In the following example command,
rhcos-arm64
is the name of the image definition.$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
To get the URL of the VHD and set it to
RHCOS_VHD_URL
as the file name, run the following command:$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the
RHCOS_VHD_URL
file, your storage account, resource group, and image gallery to create an image version. In the following example,1.0.0
is the image version.$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
Your
arm64
boot image is now generated. You can access the ID of your image with the following command:$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0
The following example image ID is used in the
resourceID
parameter of the compute machine set:Example
resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
4.2.3. Creating a 64-bit x86 boot image using the Azure image gallery
The following procedure describes how to manually generate a 64-bit x86 boot image.
Prerequisites
-
You installed the Azure CLI (
az
). - You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Procedure
Log in to your Azure account by running the following command:
$ az login
Create a storage account and upload the
x86_64
virtual hard disk (VHD) to your storage account by running the following command. The OpenShift Container Platform installation program creates a resource group. However, the boot image can also be uploaded to a custom named resource group:$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS 1
- 1
- The
westus
object is an example region.
Create a storage container using the storage account you generated by running the following command:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
Use the OpenShift Container Platform installation program JSON file to extract the URL and
x86_64
VHD name:Extract the
URL
field and set it toRHCOS_VHD_ORIGIN_URL
as the file name by running the following command:$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url')
Extract the
x86_64
VHD name and set it toBLOB_NAME
as the file name by running the following command:$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".release')-azure.x86_64.vhd
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container by running the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the RHCOS VHD into the storage container by running the following command:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \ --source-uri "${RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process by running the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
Example output
{ "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null }
- 1
- If the
status
parameter displays thesuccess
object, the copying process is complete.
Create an image gallery by running the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition by running the following command:
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2
In this example command,
rhcos-x86_64
is the name of the image definition.To get the URL of the VHD and set it to
RHCOS_VHD_URL
as the file name, run the following command:$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the
RHCOS_VHD_URL
file, your storage account, resource group, and image gallery to create an image version by running the following command:$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
In this example,
1.0.0
is the image version.Optional: Access the ID of the generated
x86_64
boot image by running the following command:$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0
The following example image ID is used in the
resourceID
parameter of the compute machine set:Example
resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0
4.2.4. Adding a multi-architecture compute machine set to your Azure cluster
After creating a multi-architecture cluster, you can add nodes with different architectures.
You can add multi-architecture compute machines to a multi-architecture cluster in the following ways:
- Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture.
- Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture.
To create a custom compute machine set on Azure, see "Creating a compute machine set on Azure".
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator".
Prerequisites
-
You installed the OpenShift CLI (
oc
). - You created a 64-bit ARM or 64-bit x86 boot image.
- You used the installation program to create a 64-bit ARM or 64-bit x86 single-architecture Azure cluster with the multi-architecture installer binary.
Procedure
-
Log in to the OpenShift CLI (
oc
). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster.
Example
MachineSet
object for an Azure 64-bit ARM or 64-bit x86 compute nodeapiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: "" version: "" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: "<zone>"
Create the compute machine set by running the following command:
$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of the YAML file with compute machine set configuration. For example:arm64-machine-set-0.yaml
, oramd64-machine-set-0.yaml
.
Verification
Verify that the new machines are running by running the following command:
$ oc get machineset -n openshift-machine-api
The output must include the machine set that you created.
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m
You can check if the nodes are ready and schedulable by running the following command:
$ oc get nodes
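You can also confirm the CPU architecture of each node by displaying the standard kubernetes.io/arch node label as an extra column (a minimal sketch):
$ oc get nodes -L kubernetes.io/arch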
4.3. Creating a cluster with multi-architecture compute machines on AWS
To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, see Installing a cluster on AWS with customizations.
You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines.
After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster.
4.3.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.3.2. Adding a multi-architecture compute machine set to your AWS cluster
After creating a multi-architecture cluster, you can add nodes with different architectures.
You can add multi-architecture compute machines to a multi-architecture cluster in the following ways:
- Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture.
- Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator".
Prerequisites
-
You installed the OpenShift CLI (
oc
). - You used the installation program to create a 64-bit ARM or 64-bit x86 single-architecture AWS cluster with the multi-architecture installer binary.
Procedure
-
Log in to the OpenShift CLI (
oc
). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster.
Example
MachineSet
object for an AWS 64-bit ARM or x86 compute nodeapiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data
- 1 2 3 9 13 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (
oc
) installed, you can obtain the infrastructure ID by running the following command:$ oc get -o jsonpath=‘{.status.infrastructureName}{“\n”}’ infrastructure cluster
- 4 7
- Specify the infrastructure ID, role node label, and zone.
- 5 6
- Specify the role node label to add.
- 8
- Specify a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for the nodes. The RHCOS AMI must be compatible with the machine architecture.
$ oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.<arch>.images.aws.regions."<region>".image'
- 10
- Specify a machine type that aligns with the CPU architecture of the chosen AMI. For more information, see "Tested instance types for AWS 64-bit ARM".
- 11
- Specify the zone. For example,
us-east-1a
. Ensure that the zone you select has machines with the required architecture. - 12
- Specify the region. For example,
us-east-1
. Ensure that the zone you select has machines with the required architecture.
Create the compute machine set by running the following command:
$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of the YAML file with compute machine set configuration. For example:aws-arm64-machine-set-0.yaml
, oraws-amd64-machine-set-0.yaml
.
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
The output must include the machine set that you created.
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m
You can check if the nodes are ready and schedulable by running the following command:
$ oc get nodes
4.4. Creating a cluster with multi-architecture compute machines on GCP
To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on GCP installations, see Installing a cluster on GCP with customizations.
You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines.
After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster.
Secure booting is currently not supported on 64-bit ARM machines for GCP.
4.4.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.4.2. Adding a multi-architecture compute machine set to your GCP cluster
After creating a multi-architecture cluster, you can add nodes with different architectures.
You can add multi-architecture compute machines to a multi-architecture cluster in the following ways:
- Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture.
- Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator".
Prerequisites
-
You installed the OpenShift CLI (
oc
). - You used the installation program to create a 64-bit x86 or 64-bit ARM single-architecture GCP cluster with the multi-architecture installer binary.
Procedure
-
Log in to the OpenShift CLI (
oc
). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster.
Example
MachineSet
object for a GCP 64-bit ARM or 64-bit x86 compute nodeapiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a
- 1
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2
- Specify the role node label to add.
- 3
- Specify the path to the image that is used in current compute machine sets. You need the project and image name for your path to image.
To access the project and image name, run the following command:
$ oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.aarch64.images.gcp'
Example output
"gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" }
Use the
project
andname
parameters from the output to create the path to image field in your machine set. The path to the image should follow the following format:$ projects/<project>/global/images/<image_name>
- 4
- Optional: Specify custom metadata in the form of a
key:value
pair. For example use cases, see the GCP documentation for setting custom metadata. - 5
- Specify a machine type that aligns with the CPU architecture of the chosen OS image. For more information, see "Tested instance types for GCP on 64-bit ARM infrastructures".
- 6
- Specify the name of the GCP project that you use for your cluster.
- 7
- Specify the region. For example,
us-central1
. Ensure that the zone you select has machines with the required architecture.
Create the compute machine set by running the following command:
$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of the YAML file with compute machine set configuration. For example:gcp-arm64-machine-set-0.yaml
, orgcp-amd64-machine-set-0.yaml
.
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
The output must include the machine set that you created.
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m
You can check if the nodes are ready and schedulable by running the following command:
$ oc get nodes
4.5. Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z
To create a cluster with multi-architecture compute machines on bare metal (x86_64
or aarch64
), IBM Power® (ppc64le
), or IBM Z® (s390x
) you must have an existing single-architecture cluster on one of these platforms. Follow the installations procedures for your platform:
- Installing a user provisioned cluster on bare metal. You can then add 64-bit ARM compute machines to your OpenShift Container Platform cluster on bare metal.
-
Installing a cluster on IBM Power®. You can then add
x86_64
compute machines to your OpenShift Container Platform cluster on IBM Power®. -
Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add
x86_64
compute machines to your OpenShift Container Platform cluster on IBM Z® and IBM® LinuxONE.
The bare metal installer-provisioned infrastructure and the Bare Metal Operator do not support adding secondary architecture nodes during the initial cluster setup. You can add secondary architecture nodes manually only after the initial cluster setup.
Before you can add additional compute nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This allows you to add additional nodes to your cluster and deploy a cluster with multi-architecture compute machines.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
4.5.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.5.2. Creating RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
-
You must have the OpenShift CLI (
oc
) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
-
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
You can access the ISO image for booting your new machine by running the following command:
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
NoteYou can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the
coreos-installer
command as outlined in the following steps, instead of adding kernel arguments.Run the
coreos-installer
command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2
- 1
- You must run the
coreos-installer
command by usingsudo
, because thecore
user does not have the required root privileges to perform the installation. - 2
- The
--ignition-hash
option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node.<digest>
is the Ignition config file SHA512 digest obtained in a preceding step. See the sketch after this procedure for one way to compute it.
NoteIf you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running
coreos-installer
.The following example initializes a bootstrap node installation to the
/dev/sda
device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Monitor the progress of the RHCOS installation on the console of the machine.
ImportantEnsure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
- Continue to create more compute machines for your cluster.
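One way to compute the SHA512 digest that the --ignition-hash option expects is to hash the Ignition config file before you upload it, as in the following sketch that assumes the worker.ign file extracted earlier in this procedure:
$ sha512sum worker.ign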
4.5.3. Creating RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
-
Obtain the URLs of the RHCOS ISO image, compressed metal BIOS,
kernel
, andinitramfs
files that you uploaded to your HTTP server during cluster installation. - You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
-
If you use UEFI, you have access to the
grub.conf
file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2
- 1
- Specify the location of the live
kernel
file that you uploaded to your HTTP server. - 2
- Specify locations of the RHCOS files that you uploaded to your HTTP server. The
initrd
parameter value is the location of the liveinitramfs
file, thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file, and thecoreos.live.rootfs_url
parameter value is the location of the liverootfs
file. Thecoreos.inst.ignition_url
andcoreos.live.rootfs_url
parameters only support HTTP and HTTPS.
NoteThis configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to theAPPEND
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.For iPXE (
x86_64
+aarch64
):kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot
- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel
parameter value is the location of thekernel
file, theinitrd=main
argument is needed for booting on UEFI systems, thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your HTTP server.
NoteThis configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to thekernel
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section.NoteTo network boot the CoreOS
kernel
onaarch64
architecture, you need to use a version of iPXE built with theIMAGE_GZIP
option enabled. SeeIMAGE_GZIP
option in iPXE.For PXE (with UEFI and GRUB as second stage) on
aarch64
:menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }
- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The
kernel
parameter value is the location of thekernel
file on your TFTP server. Thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file on your HTTP Server. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your TFTP server.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
4.5.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4
NoteIt can take a few minutes after approval of the server CSRs for the machines to transition to the
Ready
status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
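The note in the preceding procedure states that clusters on user-provisioned infrastructure need their own method of approving kubelet serving CSRs. The following is a minimal sketch of such an approver, assuming a logged-in oc client with certificate-approval permissions. It is not the supported machine-approver and it performs only a simple requestor check, so treat it as an illustration rather than a production mechanism:
# Hypothetical polling approver: approve pending CSRs whose requestor is a
# system:node identity (kubelet serving CSRs). A real implementation must also
# verify the node identity and the requested names before approving.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | awk '$2 ~ /^system:node:/ {print $1}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done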
4.6. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM
To create a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE (s390x
) with z/VM, you must have an existing single-architecture x86_64
cluster. You can then add s390x
compute machines to your OpenShift Container Platform cluster.
Before you can add s390x
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using a z/VM instance. This will allow you to add s390x
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Z® or IBM® LinuxONE (s390x
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
4.6.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
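If you prefer to script this check rather than read the JSON output by eye, one possible approach is to query the annotation with jq, which is already used elsewhere in these procedures; the fallback string is only illustrative:
$ oc adm release info -o json | jq -r '.metadata.metadata["release.openshift.io/architecture"] // "single-architecture payload"'
The command prints multi when the cluster uses the multi-architecture payload.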
4.6.2. Creating RHCOS machines on IBM Z with z/VM
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z® with z/VM and attach them to your existing cluster.
Prerequisites
- You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
- You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.
Procedure
Disable UDP aggregation.
Currently, UDP aggregation is not supported on IBM Z® and is not automatically deactivated on multi-architecture compute clusters with an
x86_64
control plane and additionals390x
compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.Create a YAML file
udp-aggregation-config.yaml
with the following content:apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: "true" metadata: name: udp-aggregation-config namespace: openshift-network-operator
Create the ConfigMap resource by running the following command:
$ oc create -f udp-aggregation-config.yaml
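Optional: to confirm that the ConfigMap carries the expected value before continuing, you can read it back. This is a convenience check only:
$ oc get configmap udp-aggregation-config -n openshift-network-operator -o jsonpath='{.data.disable-udp-aggregation}{"\n"}'
The command prints true when the setting from the YAML file is in place.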
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
-
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<http_server>/worker.ign
Download the RHEL live
kernel
,initramfs
, androotfs
files by running the following commands:$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
-
Move the downloaded RHEL live
kernel
,initramfs
, androotfs
files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add. Create a parameter file for the z/VM guest. The following parameters are specific for the virtual machine:
Optional: To specify a static IP address, add an
ip=
parameter with the following entries, with each separated by a colon:- The IP address for the machine.
- An empty string.
- The gateway.
- The netmask.
-
The machine host and domain name in the form
hostname.domainname
. Omit this value to let RHCOS decide. - The network interface name. Omit this value to let RHCOS decide.
-
The value
none
.
-
For
coreos.inst.ignition_url=
, specify the URL to theworker.ign
file. Only HTTP and HTTPS protocols are supported. -
For
coreos.live.rootfs_url=
, specify the matching rootfs artifact for thekernel
andinitramfs
you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks:
-
For
coreos.inst.install_dev=
, specify/dev/dasda
. -
Use
rd.dasd=
to specify the DASD where RHCOS is to be installed. You can adjust further parameters if required.
The following is an example parameter file,
additional-worker-dasd.parm
:cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0
Write all options in the parameter file as a single line and make sure that you have no newline characters.
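If you keep the parameter file with backslash line continuations for readability, as in the example above, one way to produce the required single-line form is to strip the backslashes and newlines; the output file name is illustrative:
$ tr -d '\\\n' < additional-worker-dasd.parm > additional-worker-dasd-oneline.parm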
For installations on FCP-type disks, complete the following tasks:
Use
rd.zfcp=<adapter>,<wwpn>,<lun>
to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path.NoteWhen you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, because enabling it later can cause problems.
Set the install device as:
coreos.inst.install_dev=/dev/sda
.NoteIf additional LUNs are configured with NPIV, FCP requires
zfcp.allow_lun_scan=0
. If you must enablezfcp.allow_lun_scan=1
because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.You can adjust further parameters if required.
ImportantAdditional postinstallation steps are required to fully enable multipathing. For more information, see “Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks.
The following is an example parameter file,
additional-worker-fcp.parm
for a worker node with multipathing:cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
Write all options in the parameter file as a single line and make sure that you have no newline characters.
-
Transfer the
initramfs
,kernel
, parameter files, and RHCOS images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM. Punch the files to the virtual reader of the z/VM guest virtual machine.
See PUNCH in IBM® Documentation.
TipYou can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines.
- Log in to CMS on the bootstrap machine.
IPL the bootstrap machine from the reader by running the following command:
$ ipl c
See IPL in IBM® Documentation.
4.6.3. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4
NoteIt can take a few minutes after approval of the server CSRs for the machines to transition to the
Ready
status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.7. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE in an LPAR
To create a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE (s390x
) in an LPAR, you must have an existing single-architecture x86_64
cluster. You can then add s390x
compute machines to your OpenShift Container Platform cluster.
Before you can add s390x
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using an LPAR instance. This will allow you to add s390x
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Z® or IBM® LinuxONE (s390x
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
4.7.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.7.2. Creating RHCOS machines on IBM Z with z/VM
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z® with z/VM and attach them to your existing cluster.
Prerequisites
- You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
- You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.
Procedure
Disable UDP aggregation.
Currently, UDP aggregation is not supported on IBM Z® and is not automatically deactivated on multi-architecture compute clusters with an
x86_64
control plane and additionals390x
compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.Create a YAML file
udp-aggregation-config.yaml
with the following content:apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: "true" metadata: name: udp-aggregation-config namespace: openshift-network-operator
Create the ConfigMap resource by running the following command:
$ oc create -f udp-aggregation-config.yaml
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
-
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<http_server>/worker.ign
Download the RHEL live
kernel
,initramfs
, androotfs
files by running the following commands:$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
-
Move the downloaded RHEL live
kernel
,initramfs
, androotfs
files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add. Create a parameter file for the z/VM guest. The following parameters are specific for the virtual machine:
Optional: To specify a static IP address, add an
ip=
parameter with the following entries, with each separated by a colon:- The IP address for the machine.
- An empty string.
- The gateway.
- The netmask.
-
The machine host and domain name in the form
hostname.domainname
. Omit this value to let RHCOS decide. - The network interface name. Omit this value to let RHCOS decide.
-
The value
none
.
-
For
coreos.inst.ignition_url=
, specify the URL to theworker.ign
file. Only HTTP and HTTPS protocols are supported. -
For
coreos.live.rootfs_url=
, specify the matching rootfs artifact for thekernel
andinitramfs
you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks:
-
For
coreos.inst.install_dev=
, specify/dev/dasda
. -
Use
rd.dasd=
to specify the DASD where RHCOS is to be installed. You can adjust further parameters if required.
The following is an example parameter file,
additional-worker-dasd.parm
:cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0
Write all options in the parameter file as a single line and make sure that you have no newline characters.
For installations on FCP-type disks, complete the following tasks:
Use
rd.zfcp=<adapter>,<wwpn>,<lun>
to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path.NoteWhen you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, because enabling it later can cause problems.
Set the install device as:
coreos.inst.install_dev=/dev/sda
.NoteIf additional LUNs are configured with NPIV, FCP requires
zfcp.allow_lun_scan=0
. If you must enablezfcp.allow_lun_scan=1
because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.You can adjust further parameters if required.
ImportantAdditional postinstallation steps are required to fully enable multipathing. For more information, see “Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks.
The following is an example parameter file,
additional-worker-fcp.parm
for a worker node with multipathing:cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
Write all options in the parameter file as a single line and make sure that you have no newline characters.
-
Transfer the
initramfs
,kernel
, parameter files, and RHCOS images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM. Punch the files to the virtual reader of the z/VM guest virtual machine.
See PUNCH in IBM® Documentation.
TipYou can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines.
- Log in to CMS on the bootstrap machine.
IPL the bootstrap machine from the reader by running the following command:
$ ipl c
See IPL in IBM® Documentation.
4.7.3. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4
NoteIt can take a few minutes after approval of the server CSRs for the machines to transition to the
Ready
status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.8. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
To create a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE (s390x
) with RHEL KVM, you must have an existing single-architecture x86_64
cluster. You can then add s390x
compute machines to your OpenShift Container Platform cluster.
Before you can add s390x
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using a RHEL KVM instance. This will allow you to add s390x
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Z® or IBM® LinuxONE (s390x
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
4.8.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.8.2. Creating RHCOS machines using virt-install
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using virt-install
.
Prerequisites
- You have at least one LPAR running RHEL 8.7 or later with KVM, referred to as the RHEL KVM host in this procedure.
- The KVM/QEMU hypervisor is installed on the RHEL KVM host.
- You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
- An HTTP or HTTPS server is set up.
Procedure
Disable UDP aggregation.
Currently, UDP aggregation is not supported on IBM Z® and is not automatically deactivated on multi-architecture compute clusters with an
x86_64
control plane and additionals390x
compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.Create a YAML file
udp-aggregation-config.yaml
with the following content:apiVersion: v1 kind: ConfigMap data: disable-udp-aggregation: "true" metadata: name: udp-aggregation-config namespace: openshift-network-operator
Create the ConfigMap resource by running the following command:
$ oc create -f udp-aggregation-config.yaml
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
-
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
Download the RHEL live
kernel
,initramfs
, androotfs
files by running the following commands:$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
-
Move the downloaded RHEL live
kernel
,initramfs
androotfs
files to an HTTP or HTTPS server before you launchvirt-install
. Create the new KVM guest nodes using the RHEL
kernel
,initramfs
, and Ignition files; the new disk image; and adjusted parm line arguments.$ virt-install \ --connect qemu:///system \ --name <vm_name> \ --autostart \ --os-variant rhel9.4 \ 1 --cpu host \ --vcpus <vcpus> \ --memory <memory_mb> \ --disk <vm_name>.qcow2,size=<image_size> \ --network network=<virt_network_parm> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 2 --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/vda" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/worker.ign" \ 3 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 4 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none" \ 5 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --noautoconsole \ --wait
- 1
- For
os-variant
, specify the RHEL version for the RHCOS compute machine.rhel9.4
is the recommended version. To query the supported RHEL version of your operating system, run the following command:$ osinfo-query os -f short-id
NoteThe
os-variant
is case sensitive. - 2
- For
--location
, specify the location of the kernel/initrd on the HTTP or HTTPS server. - 3
- Specify the location of the
worker.ign
config file. Only HTTP and HTTPS protocols are supported. - 4
- Specify the location of the
rootfs
artifact for thekernel
andinitramfs
you are booting. Only HTTP and HTTPS protocols are supported. - 5
- Optional: For
hostname
, specify the fully qualified hostname of the client machine.
NoteIf you are using HAProxy as a load balancer, update your HAProxy rules for
ingress-router-443
andingress-router-80
in the/etc/haproxy/haproxy.cfg
configuration file.- Continue to create more compute machines for your cluster.
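If HAProxy is your load balancer, the update mentioned in the note above might look roughly like the following sketch in the /etc/haproxy/haproxy.cfg file. The backend names come from the note, but the mode, balancing options, server names, and addresses are assumptions that must match your existing configuration:
backend ingress-router-443
    balance source
    mode tcp
    server worker-0 192.168.7.11:443 check inter 1s
    server worker-s390x-0 192.168.7.30:443 check inter 1s   # newly added compute machine

backend ingress-router-80
    balance source
    mode tcp
    server worker-0 192.168.7.11:80 check inter 1s
    server worker-s390x-0 192.168.7.30:80 check inter 1s    # newly added compute machine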
4.8.3. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4
NoteIt can take a few minutes after approval of the server CSRs for the machines to transition to the
Ready
status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.9. Creating a cluster with multi-architecture compute machines on IBM Power
To create a cluster with multi-architecture compute machines on IBM Power® (ppc64le
), you must have an existing single-architecture (x86_64
) cluster. You can then add ppc64le
compute machines to your OpenShift Container Platform cluster.
Before you can add ppc64le
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ppc64le
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Power® (ppc64le
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Power®. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig
object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
4.9.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
-
You installed the OpenShift CLI (
oc
).
When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner
.
You should limit the number of network hops between the compute and control plane as much as possible.
Procedure
-
Log in to the OpenShift CLI (
oc
). You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.9.2. Creating RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
-
You must have the OpenShift CLI (
oc
) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
-
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
You can access the ISO image for booting your new machine by running the following command:
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
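The preceding command only stores the download URL in the RHCOS_VHD_ORIGIN_URL variable. Assuming you want a local copy of the ISO image, you can then download it, for example:
$ curl -LO "${RHCOS_VHD_ORIGIN_URL}"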
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
NoteYou can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the
coreos-installer
command as outlined in the following steps, instead of adding kernel arguments.Run the
coreos-installer
command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 12
- 1
- You must run the
coreos-installer
command by usingsudo
, because thecore
user does not have the required root privileges to perform the installation. - 2
- The
--ignition-hash
option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node.<digest>
is the Ignition config file SHA512 digest obtained in a preceding step.
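One way to obtain the <digest> value is to compute the SHA512 checksum of the worker Ignition config file that you uploaded; this sketch assumes the file is reachable at the URL used earlier in this procedure:
$ curl -s http://<HTTP_server>/worker.ign | sha512sum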
NoteIf you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running
coreos-installer
.The following example initializes a bootstrap node installation to the
/dev/sda
device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Monitor the progress of the RHCOS installation on the console of the machine.
ImportantEnsure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
- Continue to create more compute machines for your cluster.
4.9.3. Creating RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
-
Obtain the URLs of the RHCOS ISO image, compressed metal BIOS,
kernel
, andinitramfs
files that you uploaded to your HTTP server during cluster installation. - You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
-
If you use UEFI, you have access to the
grub.conf
file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2
- 1
- Specify the location of the live
kernel
file that you uploaded to your HTTP server. - 2
- Specify locations of the RHCOS files that you uploaded to your HTTP server. The
initrd
parameter value is the location of the liveinitramfs
file, thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file, and thecoreos.live.rootfs_url
parameter value is the location of the liverootfs
file. Thecoreos.inst.ignition_url
andcoreos.live.rootfs_url
parameters only support HTTP and HTTPS.
NoteThis configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to theAPPEND
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.For iPXE (
x86_64
+ppc64le
):kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot
- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel
parameter value is the location of thekernel
file, theinitrd=main
argument is needed for booting on UEFI systems, thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your HTTP server.
NoteThis configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to thekernel
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section.NoteTo network boot the CoreOS
kernel
onppc64le
architecture, you need to use a version of iPXE built with theIMAGE_GZIP
option enabled. SeeIMAGE_GZIP
option in iPXE.For PXE (with UEFI and GRUB as second stage) on
ppc64le
:menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }
- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The
kernel
parameter value is the location of thekernel
file on your TFTP server. Thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file on your HTTP Server. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your TFTP server.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
4.9.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:$ oc get nodes -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.29.5 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.29.5 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.29.5 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.29.5 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.29.5 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.29.5 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.29.5 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.10. Managing a cluster with multi-architecture compute machines
4.10.1. Scheduling workloads on clusters with multi-architecture compute machines
Deploying a workload on a cluster with compute nodes of different architectures requires attention and monitoring of your cluster. There might be further actions you need to take in order to successfully place pods in the nodes of your cluster.
You can use the Multiarch Tuning Operator to enable architecture-aware scheduling of workloads on clusters with multi-architecture compute machines. The Multiarch Tuning Operator implements additional scheduler predicates in the pods specifications based on the architectures that the pods can support at creation time. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
For more information on node affinity, scheduling, taints and tolerations, see the following documentation:
4.10.1.1. Sample multi-architecture node workload deployments
Before you schedule workloads on a cluster with compute nodes of different architectures, consider the following use cases:
- Using node affinity to schedule workloads on a node
To allow a workload to be scheduled only on nodes with architectures that its images support, set the spec.affinity.nodeAffinity field in your pod’s template specification.

Example deployment with the nodeAffinity set to certain architectures

apiVersion: apps/v1
kind: Deployment
metadata:
  # ...
spec:
  # ...
  template:
    # ...
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values: 1
                - amd64
                - arm64
- 1
- Specify the supported architectures. Valid values include
amd64
,arm64
, or both values.
- Tainting every node for a specific architecture
You can taint a node to prevent workloads that are not compatible with its architecture from being scheduled on that node. If your cluster uses a MachineSet object, you can add parameters to the .spec.template.spec.taints field to avoid scheduling workloads on nodes with unsupported architectures.

Before you can taint a node, you must scale down the MachineSet object or remove available machines. You can scale down the machine set by using one of the following commands:

$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
For more information on scaling machine sets, see "Modifying a compute machine set".
Example MachineSet with a taint set

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  # ...
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      taints:
      - effect: NoSchedule
        key: multi-arch.openshift.io/arch
        value: arm64
You can also set a taint on a specific node by running the following command:
$ oc adm taint nodes <node-name> multi-arch.openshift.io/arch=arm64:NoSchedule
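You can confirm that the taint is present by inspecting the node specification; the node name is a placeholder:

$ oc get node <node-name> -o jsonpath='{.spec.taints}'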
- Creating a default toleration
You can annotate a namespace so all of the workloads get the same default toleration by running the following command:
$ oc annotate namespace my-namespace \ 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Exists", "effect": "NoSchedule", "key": "multi-arch.openshift.io/arch"}]'
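To confirm the annotation, you can inspect the namespace. This is an optional check, and my-namespace is an example name:

$ oc get namespace my-namespace -o yaml | grep defaultTolerations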
- Tolerating architecture taints in workloads
Workloads are not scheduled on a node that has a taint defined on it. However, you can allow them to be scheduled by setting a toleration in the pod’s specification.
Example deployment with a toleration
apiVersion: apps/v1
kind: Deployment
metadata:
  # ...
spec:
  # ...
  template:
    # ...
    spec:
      tolerations:
      - key: "multi-arch.openshift.io/arch"
        value: "arm64"
        operator: "Equal"
        effect: "NoSchedule"
With this toleration, the example deployment can also be scheduled on nodes that have the multi-arch.openshift.io/arch=arm64 taint.
- Using node affinity with taints and tolerations
When a scheduler computes the set of nodes to schedule a pod, tolerations can broaden the set while node affinity restricts the set. If you set a taint to the nodes of a specific architecture, the following example toleration is required for scheduling pods.
Example deployment with a node affinity and toleration set
apiVersion: apps/v1
kind: Deployment
metadata:
  # ...
spec:
  # ...
  template:
    # ...
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      tolerations:
      - key: "multi-arch.openshift.io/arch"
        value: "arm64"
        operator: "Equal"
        effect: "NoSchedule"
Additional resources
4.10.2. Enabling 64k pages on the Red Hat Enterprise Linux CoreOS (RHCOS) kernel
You can enable 64k memory pages in the Red Hat Enterprise Linux CoreOS (RHCOS) kernel on the 64-bit ARM compute machines in your cluster. The 64k page size kernel is intended for large GPU or high-memory workloads. The Machine Config Operator (MCO) updates the kernel through a machine config pool, so to enable 64k page sizes you must dedicate a machine config pool to the 64-bit ARM nodes that run this kernel.
Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or clusters installed on 64-bit ARM machines. If you configure the 64k pages kernel on a machine config pool using 64-bit x86 machines, the machine config pool and MCO will degrade.
Prerequisites
- You installed the OpenShift CLI (oc).
- You created a cluster with compute nodes of different architectures on one of the supported platforms.
Procedure
Label the nodes where you want to run the 64k page size kernel:
$ oc label node <node_name> <label>
Example command
$ oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=
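To confirm that the label is applied before you create the machine config pool, you can list the labeled nodes. This is an optional check:

$ oc get nodes -l node-role.kubernetes.io/worker-64k-pages=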
Create a machine config pool that contains the worker role for the ARM64 architecture and the worker-64k-pages role:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-64k-pages
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
        - worker
        - worker-64k-pages
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-64k-pages: ""
      kubernetes.io/arch: arm64
Create a machine config on your compute node to enable 64k-pages with the 64k-pages parameter:

$ oc create -f <filename>.yaml

Example MachineConfig

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker-64k-pages" 1
  name: 99-worker-64kpages
spec:
  kernelType: 64k-pages 2
Note: The 64k-pages kernel type is supported only on 64-bit ARM architecture based compute nodes. The realtime kernel type is supported only on 64-bit x86 architecture based compute nodes.
Verification
To view your new worker-64k-pages machine config pool, run the following command:

$ oc get mcp
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m
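As an additional check, once a node in the worker-64k-pages pool has finished updating, you can confirm the kernel page size directly on that node. The node name is a placeholder; a value of 65536 indicates that 64k pages are active:

$ oc debug node/<node_name> -- chroot /host getconf PAGESIZE

Example output

65536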
4.10.3. Importing manifest lists in image streams on your multi-architecture compute machines
On an OpenShift Container Platform 4.16 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode
option to the PreserveOriginal
option in order to import the manifest list.
Prerequisites
-
You installed the OpenShift Container Platform CLI (
oc
).
Procedure
The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list:

$ oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
Verification
You can check that the manifest lists imported properly by inspecting the image stream tag. The following command will list the individual architecture manifests for a particular tag.
$ oc get istag cli-artifacts:latest -n openshift -oyaml
If the dockerImageManifests object is present, then the manifest list import was successful.

Example output of the dockerImageManifests object

dockerImageManifests:
- architecture: amd64
  digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77
  manifestSize: 1252
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
- architecture: arm64
  digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626
  manifestSize: 1252
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
- architecture: ppc64le
  digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a
  manifestSize: 1252
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
- architecture: s390x
  digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719
  manifestSize: 1252
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
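To list just the imported architectures, you can filter the same output. This is an optional convenience check:

$ oc get istag cli-artifacts:latest -n openshift -o yaml | grep 'architecture:'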
4.11. Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator
The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments.
Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images.
By default, the scheduler does not consider the architecture of a pod’s container images when determining the placement of new pods onto nodes.
To enable architecture-aware workload scheduling, you must create the ClusterPodPlacementConfig
object. When you create the ClusterPodPlacementConfig
object, the Multiarch Tuning Operator deploys the necessary operands to support architecture-aware workload scheduling.
When a pod is created, the operands perform the following actions:
- Add the multiarch.openshift.io/scheduling-gate scheduling gate that prevents the scheduling of the pod.
- Compute a scheduling predicate that includes the supported architecture values for the kubernetes.io/arch label.
- Integrate the scheduling predicate as a nodeAffinity requirement in the pod specification.
- Remove the scheduling gate from the pod.
Note the following operand behaviors:
- If the nodeSelector field is already configured with the kubernetes.io/arch label for a workload, the operand does not update the nodeAffinity field for that workload.
- If the nodeSelector field is not configured with the kubernetes.io/arch label for a workload, the operand updates the nodeAffinity field for that workload. However, in that nodeAffinity field, the operand updates only the node selector terms that are not configured with the kubernetes.io/arch label.
- If the nodeName field is already set, the Multiarch Tuning Operator does not process the pod.
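For illustration, a pod whose container images support both amd64 and arm64 might be mutated to carry a node affinity requirement similar to the following. This is a simplified sketch: the pod name and image are hypothetical, and the exact node selector terms depend on the manifests of the images in the pod.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - arm64
  containers:
  - name: app
    image: quay.io/example/app:latest   # hypothetical multi-arch image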
4.11.1. Installing the Multiarch Tuning Operator by using the CLI
You can install the Multiarch Tuning Operator by using the OpenShift CLI (oc
).
Prerequisites
-
You have installed
oc
. -
You have logged in to
oc
as a user withcluster-admin
privileges.
Procedure
Create a new project named openshift-multiarch-tuning-operator by running the following command:

$ oc create ns openshift-multiarch-tuning-operator
Create an OperatorGroup object:

Create a YAML file with the configuration for creating an OperatorGroup object.

Example YAML configuration for creating an OperatorGroup object

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-multiarch-tuning-operator
  namespace: openshift-multiarch-tuning-operator
spec: {}
Create the OperatorGroup object by running the following command:

$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of the YAML file that contains theOperatorGroup
object configuration.
Create a Subscription object:

Create a YAML file with the configuration for creating a Subscription object.

Example YAML configuration for creating a Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-multiarch-tuning-operator
  namespace: openshift-multiarch-tuning-operator
spec:
  channel: stable
  name: multiarch-tuning-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
  startingCSV: multiarch-tuning-operator.v1.0.0
Create the Subscription object by running the following command:

$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of the YAML file that contains theSubscription
object configuration.
For more details about configuring the Subscription
object and OperatorGroup
object, see "Installing from OperatorHub using the CLI".
Verification
To verify that the Multiarch Tuning Operator is installed, run the following command:
$ oc get csv -n openshift-multiarch-tuning-operator
Example output
NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.v1.0.0 Multiarch Tuning Operator 1.0.0 multiarch-tuning-operator.v0.9.0 Succeeded
The installation is successful if the Operator is in the Succeeded phase.

Optional: To verify that the OperatorGroup object is created, run the following command:

$ oc get operatorgroup -n openshift-multiarch-tuning-operator
Example output
NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m
Optional: To verify that the Subscription object is created, run the following command:

$ oc get subscription -n openshift-multiarch-tuning-operator
Example output
NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable
Additional resources
4.11.2. Installing the Multiarch Tuning Operator by using the web console
You can install the Multiarch Tuning Operator by using the OpenShift Container Platform web console.
Prerequisites
-
You have access to the cluster with
cluster-admin
privileges. - You have access to the OpenShift Container Platform web console.
Procedure
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → OperatorHub.
- Enter Multiarch Tuning Operator in the search field.
- Click Multiarch Tuning Operator.
- Select the Multiarch Tuning Operator version from the Version list.
- Click Install.
Set the following options on the Operator Installation page:
- Set Update Channel to stable.
- Set Installation Mode to All namespaces on the cluster.
Set Installed Namespace to Operator recommended Namespace or Select a Namespace.

The recommended Operator namespace is openshift-multiarch-tuning-operator. If the openshift-multiarch-tuning-operator namespace does not exist, it is created during the Operator installation.

If you select Select a namespace, you must select a namespace for the Operator from the Select Project list.
Set Update approval to Automatic or Manual.
If you select Automatic updates, Operator Lifecycle Manager (OLM) automatically updates the running instance of the Multiarch Tuning Operator without any intervention.
If you select Manual updates, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Multiarch Tuning Operator to a newer version.
- Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
- Click Install.
Verification
- Navigate to Operators → Installed Operators.
-
Verify that the Multiarch Tuning Operator is listed with the Status field as Succeeded in the
openshift-multiarch-tuning-operator
namespace.
4.11.3. Multiarch Tuning Operator pod labels and architecture support overview
After installing the Multiarch Tuning Operator, you can verify the multi-architecture support for workloads in your cluster. You can identify and manage pods based on their architecture compatibility by using the pod labels. These labels are automatically set on the newly created pods to provide insights into their architecture support.
The following table describes the labels that the Multiarch Tuning Operator adds when you create a pod:
Label | Description |
---|---|
| The pod supports multiple architectures. |
| The pod supports only a single architecture. |
|
The pod supports the |
|
The pod supports the |
|
The pod supports the |
|
The pod supports the |
| The Operator has set the node affinity requirement for the architecture. |
| The Operator did not set the node affinity requirement. For example, when the pod already has a node affinity for the architecture, the Multiarch Tuning Operator adds this label to the pod. |
| The pod is gated. |
| The pod gate has been removed. |
| An error has occurred while building the node affinity requirements. |
4.11.4. Creating the ClusterPodPlacementConfig object
After installing the Multiarch Tuning Operator, you must create the ClusterPodPlacementConfig
object. When you create this object, the Multiarch Tuning Operator deploys an operand that enables architecture-aware workload scheduling.
You can create only one instance of the ClusterPodPlacementConfig
object.
Example ClusterPodPlacementConfig object configuration

apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster 1
spec:
  logVerbosityLevel: Normal 2
  namespaceSelector: 3
    matchExpressions:
      - key: multiarch.openshift.io/exclude-pod-placement
        operator: DoesNotExist
- 1
- You must set this field value to
cluster
. - 2
- Optional: You can set the field value to
Normal
,Debug
,Trace
, orTraceAll
. The value is set toNormal
by default. - 3
- Optional: You can configure the
namespaceSelector
to select the namespaces in which the Multiarch Tuning Operator’s pod placement operand must process thenodeAffinity
of the pods. All namespaces are considered by default.
In this example, the operator
field value is set to DoesNotExist
. Therefore, if the key
field value (multiarch.openshift.io/exclude-pod-placement
) is set as a label in a namespace, the operand does not process the nodeAffinity
of the pods in that namespace. Instead, the operand processes the nodeAffinity
of the pods in namespaces that do not contain the label.
If you want the operand to process the nodeAffinity
of the pods only in specific namespaces, you can configure the namespaceSelector
as follows:
namespaceSelector:
  matchExpressions:
    - key: multiarch.openshift.io/include-pod-placement
      operator: Exists
In this example, the operator
field value is set to Exists
. Therefore, the operand processes the nodeAffinity
of the pods only in namespaces that contain the multiarch.openshift.io/include-pod-placement
label.
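For example, with the Exists-based selector shown above, you could opt a namespace in by adding the label. The namespace name is an example, and the label value can be left empty because only the presence of the key is evaluated:

$ oc label namespace my-namespace multiarch.openshift.io/include-pod-placement=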
This Operator excludes pods in namespaces starting with kube-
. It also excludes pods that are expected to be scheduled on control plane nodes.
4.11.4.1. Creating the ClusterPodPlacementConfig object by using the CLI
To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig
object by using the OpenShift CLI (oc
).
Prerequisites
-
You have installed
oc
. -
You have logged in to
oc
as a user withcluster-admin
privileges. - You have installed the Multiarch Tuning Operator.
Procedure
Create a ClusterPodPlacementConfig object YAML file:

Example ClusterPodPlacementConfig object configuration

apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  logVerbosityLevel: Normal
  namespaceSelector:
    matchExpressions:
      - key: multiarch.openshift.io/exclude-pod-placement
        operator: DoesNotExist
Create the ClusterPodPlacementConfig object by running the following command:

$ oc create -f <file_name> 1
- 1
- Replace
<file_name>
with the name of theClusterPodPlacementConfig
object YAML file.
Verification
To check that the ClusterPodPlacementConfig object is created, run the following command:

$ oc get clusterpodplacementconfig
Example output
NAME AGE cluster 29s
4.11.4.2. Creating the ClusterPodPlacementConfig object by using the web console
To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig
object by using the OpenShift Container Platform web console.
Prerequisites
-
You have access to the cluster with
cluster-admin
privileges. - You have access to the OpenShift Container Platform web console.
- You have installed the Multiarch Tuning Operator.
Procedure
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- On the Installed Operators page, click Multiarch Tuning Operator.
- Click the Cluster Pod Placement Config tab.
- Select either Form view or YAML view.
-
Configure the
ClusterPodPlacementConfig
object parameters. - Click Create.
Optional: If you want to edit the ClusterPodPlacementConfig object, perform the following actions:
- Click the Cluster Pod Placement Config tab.
- Select Edit ClusterPodPlacementConfig from the options menu.
-
Click YAML and edit the
ClusterPodPlacementConfig
object parameters. - Click Save.
Verification
-
On the Cluster Pod Placement Config page, check that the
ClusterPodPlacementConfig
object is in theReady
state.
4.11.5. Deleting the ClusterPodPlacementConfig object by using the CLI
You can create only one instance of the ClusterPodPlacementConfig
object. If you want to re-create this object, you must first delete the existing instance.
You can delete this object by using the OpenShift CLI (oc
).
Prerequisites
-
You have installed
oc
. -
You have logged in to
oc
as a user withcluster-admin
privileges.
Procedure
-
Log in to the OpenShift CLI (
oc
Delete the ClusterPodPlacementConfig object by running the following command:

$ oc delete clusterpodplacementconfig cluster
Verification
To check that the ClusterPodPlacementConfig object is deleted, run the following command:

$ oc get clusterpodplacementconfig
Example output
No resources found
4.11.6. Deleting the ClusterPodPlacementConfig object by using the web console
You can create only one instance of the ClusterPodPlacementConfig
object. If you want to re-create this object, you must first delete the existing instance.
You can delete this object by using the OpenShift Container Platform web console.
Prerequisites
-
You have access to the cluster with
cluster-admin
privileges. - You have access to the OpenShift Container Platform web console.
-
You have created the
ClusterPodPlacementConfig
object.
Procedure
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- On the Installed Operators page, click Multiarch Tuning Operator.
- Click the Cluster Pod Placement Config tab.
- Select Delete ClusterPodPlacementConfig from the options menu.
- Click Delete.
Verification
-
On the Cluster Pod Placement Config page, check that the
ClusterPodPlacementConfig
object has been deleted.
4.11.7. Uninstalling the Multiarch Tuning Operator by using the CLI
You can uninstall the Multiarch Tuning Operator by using the OpenShift CLI (oc
).
Prerequisites
-
You have installed
oc
. -
You have logged in to
oc
as a user withcluster-admin
privileges. You deleted the
ClusterPodPlacementConfig
object.ImportantYou must delete the
ClusterPodPlacementConfig
object before uninstalling the Multiarch Tuning Operator. Uninstalling the Operator without deleting theClusterPodPlacementConfig
object can lead to unexpected behavior.
Procedure
Get the Subscription object name for the Multiarch Tuning Operator by running the following command:

$ oc get subscription.operators.coreos.com -n <namespace> 1
- 1
- Replace
<namespace>
with the name of the namespace where you want to uninstall the Multiarch Tuning Operator.
Example output
NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable
Get the currentCSV value for the Multiarch Tuning Operator by running the following command:

$ oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1
- 1
- Replace
<subscription_name>
with theSubscription
object name. For example:openshift-multiarch-tuning-operator
. Replace<namespace>
with the name of the namespace where you want to uninstall the Multiarch Tuning Operator.
Example output
currentCSV: multiarch-tuning-operator.v1.0.0
Delete the Subscription object by running the following command:

$ oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1
- 1
- Replace
<subscription_name>
with theSubscription
object name. Replace<namespace>
with the name of the namespace where you want to uninstall the Multiarch Tuning Operator.
Example output
subscription.operators.coreos.com "openshift-multiarch-tuning-operator" deleted
Delete the CSV for the Multiarch Tuning Operator in the target namespace using the currentCSV value by running the following command:

$ oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1
- 1
- Replace
<currentCSV>
with thecurrentCSV
value for the Multiarch Tuning Operator. For example:multiarch-tuning-operator.v1.0.0
. Replace<namespace>
with the name of the namespace where you want to uninstall the Multiarch Tuning Operator.
Example output
clusterserviceversion.operators.coreos.com "multiarch-tuning-operator.v1.0.0" deleted
Verification
To verify that the Multiarch Tuning Operator is uninstalled, run the following command:
$ oc get csv -n <namespace> 1
- 1
- Replace
<namespace>
with the name of the namespace where you have uninstalled the Multiarch Tuning Operator.
Example output
No resources found in openshift-multiarch-tuning-operator namespace.
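Optional: If you created the openshift-multiarch-tuning-operator namespace only for this Operator and no longer need it, you can also remove the namespace:

$ oc delete namespace openshift-multiarch-tuning-operator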
4.11.8. Uninstalling the Multiarch Tuning Operator by using the web console
You can uninstall the Multiarch Tuning Operator by using the OpenShift Container Platform web console.
Prerequisites
-
You have access to the cluster with
cluster-admin
permissions. You deleted the
ClusterPodPlacementConfig
object.ImportantYou must delete the
ClusterPodPlacementConfig
object before uninstalling the Multiarch Tuning Operator. Uninstalling the Operator without deleting theClusterPodPlacementConfig
object can lead to unexpected behavior.
Procedure
- Log in to the OpenShift Container Platform web console.
- Navigate to Operators → OperatorHub.
- Enter Multiarch Tuning Operator in the search field.
- Click Multiarch Tuning Operator.
- Click the Details tab.
- From the Actions menu, select Uninstall Operator.
- When prompted, click Uninstall.
Verification
- Navigate to Operators → Installed Operators.
- On the Installed Operators page, verify that the Multiarch Tuning Operator is not listed.
4.12. Multiarch Tuning Operator release notes
The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments.
These release notes track the development of the Multiarch Tuning Operator.
For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator.
4.12.1. Release notes for the Multiarch Tuning Operator 1.0.0
Issued: 31 October 2024
4.12.1.1. New features and enhancements
- With this release, the Multiarch Tuning Operator supports custom network scenarios and cluster-wide custom registry configurations.
- With this release, you can identify pods based on their architecture compatibility by using the pod labels that the Multiarch Tuning Operator adds to newly created pods.
- With this release, you can monitor the behavior of the Multiarch Tuning Operator by using the metrics and alerts that are registered in the Cluster Monitoring Operator.
Chapter 5. Postinstallation cluster tasks
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements.
5.1. Available cluster customizations
You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available.
If you install your cluster on IBM Z®, not all features and functions are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1.
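You can also drill into individual fields with the same command, for example:

$ oc explain builds.spec --api-version=config.openshift.io/v1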
5.1.1. Cluster configuration resources
All cluster configuration resources are globally scoped (not namespaced) and named cluster
.
Resource name | Description |
---|---|
| Provides API server configuration such as certificates and certificate authorities. |
| Controls the identity provider and authentication configuration for the cluster. |
| Controls default and enforced configuration for all builds on the cluster. |
| Configures the behavior of the web console interface, including the logout behavior. |
| Enables FeatureGates so that you can use Tech Preview features. |
| Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
| Configuration details related to routing such as the default domain for routes. |
| Configures identity providers and other behavior related to internal OAuth server flows. |
| Configures how projects are created including the project template. |
| Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
| Configures scheduler behavior such as profiles and default node selectors. |
5.1.2. Operator configuration resources
These configuration resources are cluster-scoped instances, named cluster
, which control the behavior of a specific component as owned by a particular Operator.
Resource name | Description |
---|---|
| Controls console appearance such as branding customizations |
| Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
| Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
5.1.3. Additional configuration resources
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
Resource name | Instance name | Namespace | Description |
---|---|---|---|
|
|
| Controls the Alertmanager deployment parameters. |
|
|
| Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
5.1.4. Informational resources
You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.
Resource name | Instance name | Description |
---|---|---|
|
|
In OpenShift Container Platform 4.16, you must not customize the |
|
| You cannot modify the DNS settings for your cluster. You can check the DNS Operator status. |
|
| Configuration details allowing the cluster to interact with its cloud provider. |
|
| You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
5.2. Adding worker nodes
After you deploy your OpenShift Container Platform cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster.
5.2.1. Adding worker nodes to installer-provisioned infrastructure clusters
For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet
object to match the number of available bare-metal hosts.
To add a bare-metal host, you must configure all network prerequisites, configure an associated baremetalhost
object, then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
5.2.2. Adding worker nodes to user-provisioned infrastructure clusters
For user-provisioned infrastructure clusters, you can add worker nodes by using a RHEL or RHCOS ISO image and connecting it to your cluster using cluster Ignition config files. For RHEL worker nodes, the following example uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following example uses an ISO image and network booting to add worker nodes to the cluster.
5.2.3. Adding worker nodes to clusters managed by the Assisted Installer
For clusters managed by the Assisted Installer, you can add worker nodes by using the Red Hat OpenShift Cluster Manager console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.
5.2.4. Adding worker nodes to clusters managed by the multicluster engine for Kubernetes
For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.
5.3. Adjust worker nodes
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scaling them up, and then scaling down the original compute machine set before removing it.
5.3.1. Understanding the difference between compute machine sets and the machine config pool
MachineSet
objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider.
The MachineConfigPool
object allows MachineConfigController
components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool
object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool.
The NodeSelector
object can be replaced with a reference to the MachineSet
object.
5.3.2. Scaling a compute machine set manually
To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets.
Prerequisites
-
Install an OpenShift Container Platform cluster and the
oc
command line. -
Log in to
oc
as a user withcluster-admin
permission.
Procedure
View the compute machine sets that are in the cluster by running the following command:
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
The compute machine sets are listed in the form of
<clusterid>-worker-<aws-region-az>
.View the compute machines that are in the cluster by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api
Set the annotation on the compute machine that you want to delete by running the following command:
$ oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
Scale the compute machine set by running one of the following commands:
$ oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api
Or:
$ oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api
Tip: You can alternatively apply the following YAML to scale the compute machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2
You can scale the compute machine set up or down. It takes several minutes for the new machines to be available.
Important: By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not succeed. If the drain operation fails, the machine controller cannot proceed with removing the machine.
You can skip draining the node by adding the machine.openshift.io/exclude-node-draining annotation to a specific machine.
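For example, assuming a machine name, you could set the annotation with a command like the following. The annotation key is taken from the paragraph above; the command form and the empty value are illustrative:

$ oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""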
Verification
Verify the deletion of the intended machine by running the following command:
$ oc get machines.machine.openshift.io
5.3.3. The compute machine set deletion policy
Random
, Newest
, and Oldest
are the three supported deletion options. The default is Random
, meaning that random machines are chosen and deleted when scaling compute machine sets down. The deletion policy can be set according to the use case by modifying the particular compute machine set:
spec:
  deletePolicy: <delete_policy>
  replicas: <desired_replica_count>
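For example, you could switch a compute machine set to the Oldest deletion policy with a merge patch like the following; the machine set name is a placeholder:

$ oc patch machinesets.machine.openshift.io <machineset> -n openshift-machine-api --type=merge -p '{"spec":{"deletePolicy":"Oldest"}}'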
Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true
to the machine of interest, regardless of the deletion policy.
By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0
unless you first relocate the router pods.
Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption.
5.3.4. Creating default cluster-wide node selectors
You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes.
With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels.
You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
You can add additional key/value pairs to a pod, but you cannot add a different value for a default key.
Procedure
To add a default cluster-wide node selector:
Edit the Scheduler Operator CR to add the default cluster-wide node selectors:
$ oc edit scheduler cluster
Example Scheduler Operator CR with a node selector

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
...
spec:
  defaultNodeSelector: type=user-node,region=east 1
  mastersSchedulable: false
- 1
- Add a node selector with the appropriate
<key>:<value>
pairs.
After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy.

Add labels to a node by using a compute machine set or editing the node directly:
Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created:
Run the following command to add labels to a MachineSet object:

$ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api 1
- 1
- Add a
<key>/<value>
pair for each label.
For example:
$ oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api
Tip: You can alternatively apply the following YAML to add labels to a compute machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          region: "east"
          type: "user-node"
Verify that the labels are added to the
MachineSet
object by using theoc edit
command:For example:
$ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api
Example
MachineSet
objectapiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ...
Redeploy the nodes associated with that compute machine set by scaling down to
0
and scaling up the nodes:For example:
$ oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
$ oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
When the nodes are ready and available, verify that the label is added to the nodes by using the
oc get
command:$ oc get nodes -l <key>=<value>
For example:
$ oc get nodes -l type=user-node
Example output
NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.29.4
Add labels directly to a node:
Edit the
Node
object for the node:$ oc label nodes <name> <key>=<value>
For example, to label a node:
$ oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east
Tip: You can alternatively apply the following YAML to add labels to a node:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    type: "user-node"
    region: "east"
Verify that the labels are added to the node using the oc get command:

$ oc get nodes -l <key>=<value>,<key>=<value>
For example:
$ oc get nodes -l type=user-node,region=east
Example output
NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.29.4
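As a final check of the default cluster-wide node selector, you can create a test pod and confirm that it is scheduled onto one of the labeled nodes. The pod name and image are examples:

$ oc run selector-test --image=registry.access.redhat.com/ubi9/ubi-minimal --restart=Never -- sleep 300
$ oc get pod selector-test -o wide
$ oc get nodes -l type=user-node,region=east

The NODE column for the test pod should match one of the labeled nodes. Remove the test pod afterward with oc delete pod selector-test.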
5.4. Improving cluster stability in high latency environments using worker latency profiles
If the cluster administrator has performed latency tests for platform verification, they might find that they need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters that affect how supervisory processes read status and interpret the health of the cluster. Changing only one parameter provides cluster tuning in an easy, supportable manner.
The Kubelet
process provides the starting point for monitoring cluster health. The Kubelet
sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager (kube controller
) reads the status values every 10 seconds, by default. If the kube controller
cannot read a node status value, it loses contact with that node after a configured period. The default behavior is:
-
The node controller on the control plane updates the node health to
Unhealthy
and marks the nodeReady
condition`Unknown`. - In response, the scheduler stops scheduling pods to that node.
-
The Node Lifecycle Controller adds a
node.kubernetes.io/unreachable
taint with aNoExecute
effect to the node and schedules any pods on the node for eviction after five minutes, by default.
This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet
evicts pods from the node even though the node is healthy.
To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet
and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal.
These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experime