Postinstallation configuration
Day 2 operations for OpenShift Container Platform
Abstract
Chapter 1. Postinstallation configuration overview
After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components:
- Machine
- Bare metal
- Cluster
- Node
- Network
- Storage
- Users
- Alerts and notifications
1.1. Post-installation configuration tasks
You can perform post-installation configuration tasks to configure your environment to meet your needs.
The following list details these configurations:
- Configure operating system features: The Machine Config Operator (MCO) manages MachineConfig objects. By using the MCO, you can configure nodes and custom resources.
- Configure bare metal nodes: You can use the Bare Metal Operator (BMO) to manage bare metal hosts. The BMO can complete the following operations:
- Inspect hardware details of the host and report them to the bare metal host.
- Inspect firmware and configure BIOS settings.
- Provision hosts with a desired image.
- Clean disk contents for the host before or after provisioning the host.
- Configure cluster features: You can modify the following features of an OpenShift Container Platform cluster:
- Image registry
- Networking configuration
- Image build behavior
- Identity provider
- The etcd configuration
- Machine set creation to handle the workloads
- Cloud provider credential management
- Configure a private cluster: By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. To make your cluster accessible only from within an internal network, configure the following components to make them private:
- DNS
- Ingress Controller
- API server
- Perform node operations: By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. You can perform the following node operations:
- Add and remove compute machines.
- Add and remove taints and tolerations.
- Configure the maximum number of pods per node.
- Enable Device Manager.
- Configure users: OAuth access tokens allow users to authenticate themselves to the API. You can configure OAuth to perform the following tasks:
- Specify an identity provider
- Use role-based access control to define and supply permissions to users
- Install an Operator from OperatorHub
- Configure alert notifications: By default, firing alerts are displayed on the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems.
Chapter 2. Configuring a private cluster
After you install an OpenShift Container Platform version 4.15 cluster, you can set some of its core components to be private.
2.1. About private clusters
By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
2.1.1. DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
2.1.2. Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets.
The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure.
2.1.3. API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, retain both load balancers in a private cluster.
2.2. Configuring DNS records to be published in a private zone
For all OpenShift Container Platform clusters, whether public or private, DNS records are published in a public zone by default.
You can remove the public zone from the cluster DNS configuration to avoid exposing DNS records to the public. You might want to avoid exposing sensitive information, such as internal domain names, internal IP addresses, or the number of clusters at an organization, or you might simply have no need to publish records publicly. If all the clients that should be able to connect to services within the cluster use a private DNS service that has the DNS records from the private zone, then there is no need to have a public DNS record for the cluster.
After you deploy a cluster, you can modify its DNS to use only a private zone by modifying the DNS custom resource (CR). Modifying the DNS CR in this way means that any DNS records that are subsequently created are not published to public DNS servers, which keeps knowledge of the DNS records isolated to internal users. This can be done when you configure the cluster to be private, or if you never want DNS records to be publicly resolvable.
Alternatively, even in a private cluster, you might keep the public zone for DNS records because it allows clients to resolve DNS names for applications running on that cluster. For example, an organization can have machines that connect to the public internet and then establish VPN connections for certain private IP ranges in order to connect to private IP addresses. The DNS lookups from these machines use the public DNS to determine the private addresses of those services, and then connect to the private addresses over the VPN.
Procedure
Review the DNS CR for your cluster by running the following command and observing the output:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
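The exact output depends on your platform. The following is an illustrative sketch only, shaped like the DNS CR of an AWS cluster; the base domain, infrastructure ID, and public zone ID are placeholder values:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: <public_zone_id>
status: {}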
Note that the spec section contains both a private and a public zone.
Patch the DNS CR to remove the public zone by running the following command:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Example output
dns.config.openshift.io/cluster patched
The Ingress Operator consults the DNS CR definition when it creates DNS records for IngressController objects. If only private zones are specified, only private records are created.
Important: Existing DNS records are not modified when you remove the public zone. You must manually delete previously published public DNS records if you no longer want them to be published publicly.
Verification
Review the DNS CR for your cluster and confirm that the public zone was removed by running the following command and observing the output:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
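The output again varies by platform; in this illustrative sketch for an AWS cluster, with placeholder values, the spec section now contains only the privateZone entry:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
status: {}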
2.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
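One way to make this change, sketched here under the assumption that the default Ingress Controller uses the LoadBalancerService endpoint publishing strategy, is to force-replace the default IngressController object with a definition whose load balancer scope is Internal:
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF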
Example output
ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced
The public DNS entry is removed, and the private zone entry is updated.
2.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for your cloud provider, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
- For Azure, delete the api-internal-v4 rule for the public load balancer.
- For Azure, configure the Ingress Controller endpoint publishing scope to Internal. For more information, see "Configuring the Ingress Controller endpoint publishing scope to Internal".
- For the Azure public load balancer, if you configure the Ingress Controller endpoint publishing scope to Internal and there are no existing inbound rules in the public load balancer, you must create an outbound rule explicitly to provide outbound traffic for the backend address pool. For more information, see the Microsoft Azure documentation about adding outbound rules.
- Delete the api.$clustername.$yourdomain or api.$clustername DNS entry in the public zone.
AWS clusters: Remove the external load balancers:
Important: You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.
If your cluster uses a control plane machine set, delete the lines in the control plane machine set custom resource that configure your public or external load balancer:
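The exact lines vary by provider. On AWS, for example, they typically appear under providerSpec.value.loadBalancers in the ControlPlaneMachineSet resource, similar to the following sketch with placeholder names, where the entry for the external load balancer is the one to delete:
providerSpec:
  value:
    loadBalancers:
    - name: <infrastructure_id>-ext   # external load balancer: delete this entry
      type: network
    - name: <infrastructure_id>-int   # internal load balancer: keep this entry
      type: network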
If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine.
From your terminal, list the cluster machines by running the following command:
$ oc get machine -n openshift-machine-api
Example output
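The output is similar to the following sketch; the machine names, instance types, regions, zones, and ages shown are placeholders:
NAME                                    PHASE     TYPE         REGION      ZONE         AGE
<infrastructure_id>-master-0            Running   m6i.xlarge   us-east-1   us-east-1a   3h37m
<infrastructure_id>-master-1            Running   m6i.xlarge   us-east-1   us-east-1b   3h37m
<infrastructure_id>-master-2            Running   m6i.xlarge   us-east-1   us-east-1c   3h37m
<infrastructure_id>-worker-a-<suffix>   Running   m6i.xlarge   us-east-1   us-east-1a   3h28m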
The control plane machines contain master in the name.
Remove the external load balancer from each control plane machine:
Edit a control plane machine object by running the following command:
$ oc edit machines -n openshift-machine-api <control_plane_name>
Where <control_plane_name> is the name of the control plane machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example:
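On AWS, for example, the relevant lines appear under providerSpec.value.loadBalancers in the machine object, similar to the following sketch with placeholder names; remove the entry for the external load balancer and keep the internal one:
providerSpec:
  value:
    loadBalancers:
    - name: <infrastructure_id>-ext   # external load balancer: remove this entry
      type: network
    - name: <infrastructure_id>-int
      type: network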
- Save your changes and exit the object specification.
- Repeat this process for each of the control plane machines.
2.5. Configuring a private storage endpoint on Azure
You can use the Image Registry Operator to configure private endpoints on Azure, which enables seamless configuration of private storage accounts when OpenShift Container Platform is deployed on private Azure clusters. This allows you to deploy the image registry without exposing public-facing storage endpoints.
Important: Do not configure a private storage endpoint on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state.
You can configure the Image Registry Operator to use private storage endpoints on Azure in one of two ways:
- By configuring the Image Registry Operator to discover the VNet and subnet names
- With user-provided Azure Virtual Network (VNet) and subnet names
2.5.1. Limitations for configuring a private storage endpoint on Azure
The following limitations apply when configuring a private storage endpoint on Azure:
- When configuring the Image Registry Operator to use a private storage endpoint, public network access to the storage account is disabled. Consequently, pulling images from the registry from outside of OpenShift Container Platform works only if you set disableRedirect: true in the registry Operator configuration. With redirect enabled, the registry redirects the client to pull images directly from the storage account, which no longer works because public network access is disabled. For more information, see "Disabling redirect when using a private storage endpoint on Azure".
- This operation cannot be undone by the Image Registry Operator.
2.5.2. Configuring a private storage endpoint on Azure by enabling the Image Registry Operator to discover VNet and subnet names
The following procedure shows you how to set up a private storage endpoint on Azure by configuring the Image Registry Operator to discover VNet and subnet names.
Prerequisites
- You have configured the image registry to run on Azure.
- Your network has been set up using the Installer Provisioned Infrastructure installation method.
For users with a custom network setup, see "Configuring a private storage endpoint on Azure with user-provided VNet and subnet names".
Procedure
Edit the Image Registry Operator config object and set networkAccess.type to Internal:
$ oc edit configs.imageregistry/cluster
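A minimal sketch of the relevant part of the configuration follows; only the networkAccess stanza under spec.storage.azure is shown, and with type set to Internal the Operator discovers the VNet and subnet names on your behalf:
spec:
  storage:
    azure:
      networkAccess:
        type: Internal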
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w
Optional: If the registry is exposed by a route, and you are configuring your storage account to be private, you must disable redirect if you want pulls external to the cluster to continue to work. Enter the following command to disable redirect on the Image Registry Operator configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'
Note: When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc registry info --internal=true
Example output
image-registry.openshift-image-registry.svc:5000
Enter debug mode by running the following command:
$ oc debug node/<node_name>
Run the suggested chroot command. For example:
$ chroot /host
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
Example output
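The output is similar to the following sketch; the blob, config, and image digests are placeholders:
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools:latest...
Getting image source signatures
Copying blob <blob_digest>
Copying config <config_digest>
Writing manifest to image destination
<image_id>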
2.5.3. Configuring a private storage endpoint on Azure with user-provided VNet and subnet names
Use the following procedure to configure a storage account that has public network access disabled and is exposed behind a private storage endpoint on Azure.
Prerequisites
- You have configured the image registry to run on Azure.
- You must know the VNet and subnet names used for your Azure environment.
- If your network was configured in a separate resource group in Azure, you must also know its name.
Procedure
Edit the Image Registry Operator config object and configure the private endpoint using your VNet and subnet names:
$ oc edit configs.imageregistry/cluster
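A minimal sketch of the relevant part of the configuration follows; the VNet, subnet, and network resource group names are placeholders that you replace with your own values, and networkResourceGroupName is required only if your network was configured in a separate resource group:
spec:
  storage:
    azure:
      networkAccess:
        type: Internal
        internal:
          subnetName: <subnet_name>
          vnetName: <vnet_name>
          networkResourceGroupName: <network_resource_group_name>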
Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes.
$ oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w
Note: When redirect is enabled, pulling images from outside of the cluster will not work.
Verification
Fetch the registry service name by running the following command:
$ oc registry info --internal=true
Example output
image-registry.openshift-image-registry.svc:5000
Enter debug mode by running the following command:
$ oc debug node/<node_name>
Run the suggested chroot command. For example:
$ chroot /host
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools
Example output
2.5.4. Optional: Disabling redirect when using a private storage endpoint on Azure
By default, redirect is enabled when using the image registry. Redirect allows off-loading of traffic from the registry pods into the object storage, which makes pulls faster. When redirect is enabled and the storage account is private, users from outside of the cluster are unable to pull images from the registry.
In some cases, users might want to disable redirect so that users from outside of the cluster can pull images from the registry.
Use the following procedure to disable redirect.
Prerequisites
- You have configured the image registry to run on Azure.
- You have configured a route.
Procedure
Enter the following command to disable redirect on the image registry configuration:
$ oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}'
Verification
Fetch the registry service name by running the following command:
$ oc registry info
Example output
default-route-openshift-image-registry.<cluster_dns>
Enter the following command to log in to your container registry:
$ podman login --tls-verify=false -u unused -p $(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>
Example output
Login Succeeded!
Enter the following command to verify that you can pull an image from the registry:
$ podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns>/openshift/tools
Example output
Chapter 3. Bare metal configuration
When deploying OpenShift Container Platform on bare-metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host's hardware and firmware details. It can also include formatting disks or changing modifiable firmware settings.
3.1. About the Bare Metal Operator
Use the Bare Metal Operator (BMO) to provision, manage, and inspect bare-metal hosts in your cluster.
The BMO uses three resources to complete these tasks:
- BareMetalHost
- HostFirmwareSettings
- FirmwareSchema
The BMO maintains an inventory of the physical hosts in the cluster by mapping each bare-metal host to an instance of the BareMetalHost custom resource definition. Each BareMetalHost resource features hardware, software, and firmware details. The BMO continually inspects the bare-metal hosts in the cluster to ensure each BareMetalHost resource accurately details the components of the corresponding host.
The BMO also uses the HostFirmwareSettings resource and the FirmwareSchema resource to detail firmware specifications for the bare-metal host.
The BMO interfaces with bare-metal hosts in the cluster by using the Ironic API service. The Ironic service uses the Baseboard Management Controller (BMC) on the host to interface with the machine.
Some common tasks you can complete by using the BMO include the following:
- Provision bare-metal hosts to the cluster with a specific image
- Format a host’s disk contents before provisioning or after deprovisioning
- Turn on or off a host
- Change firmware settings
- View the host’s hardware details
3.1.1. Bare Metal Operator architecture
The Bare Metal Operator (BMO) uses three resources to provision, manage, and inspect bare-metal hosts in your cluster. The following diagram illustrates the architecture of these resources:
BareMetalHost
The BareMetalHost resource defines a physical host and its properties. When you provision a bare-metal host to the cluster, you must define a BareMetalHost resource for that host. For ongoing management of the host, you can inspect the information in the BareMetalHost or update this information.
The BareMetalHost resource features provisioning information such as the following:
- Deployment specifications such as the operating system boot image or the custom RAM disk
- Provisioning state
- Baseboard Management Controller (BMC) address
- Desired power state
The BareMetalHost resource features hardware information such as the following:
- Number of CPUs
- MAC address of a NIC
- Size of the host’s storage device
- Current power state
HostFirmwareSettings
You can use the HostFirmwareSettings resource to retrieve and manage the firmware settings for a host. When a host moves to the Available state, the Ironic service reads the host's firmware settings and creates the HostFirmwareSettings resource. There is a one-to-one mapping between the BareMetalHost resource and the HostFirmwareSettings resource.
You can use the HostFirmwareSettings resource to inspect the firmware specifications for a host or to update a host's firmware specifications.
You must adhere to the schema specific to the vendor firmware when you edit the spec field of the HostFirmwareSettings resource. This schema is defined in the read-only FirmwareSchema resource.
FirmwareSchema
Firmware settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each firmware setting on each host model. The data comes directly from the BMC by using the Ironic service. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource.
A FirmwareSchema resource can apply to many BareMetalHost resources if the schema is the same.
3.2. About the BareMetalHost resource
Metal3 introduces the concept of the BareMetalHost resource, which defines a physical host and its properties. The BareMetalHost resource contains two sections:
- The BareMetalHost spec
- The BareMetalHost status
3.2.1. The BareMetalHost spec
The spec section of the BareMetalHost resource defines the desired state of the host.
Parameters | Description |
---|---|
|
An interface to enable or disable automated cleaning during provisioning and de-provisioning. When set to |
bmc: address: credentialsName: disableCertificateVerification:
|
The
|
| The MAC address of the NIC used for provisioning the host. |
|
The boot mode of the host. It defaults to |
|
A reference to another resource that is using the host. It could be empty if another resource is not currently using the host. For example, a |
| A human-provided string to help identify the host. |
| A boolean indicating whether the host provisioning and deprovisioning are managed externally. When set:
|
|
Contains information about the BIOS configuration of bare-metal hosts. Currently,
|
image: url: checksum: checksumType: format:
|
The
|
| A reference to the secret containing the network configuration data and its namespace, so that it can be attached to the host before the host boots to set up the network. |
|
A boolean indicating whether the host should be powered on ( |
raid: hardwareRAIDVolumes: softwareRAIDVolumes:
| (Optional) Contains the information about the RAID configuration for bare-metal hosts. If not specified, it retains the current configuration. Note OpenShift Container Platform 4.15 supports hardware RAID on the installation drive for BMCs, including:
OpenShift Container Platform 4.15 does not support software RAID on the installation drive. See the following configuration settings:
You can set the spec: raid: hardwareRAIDVolume: []
If you receive an error message indicating that the driver does not support RAID, set the |
|
The
|
3.2.2. The BareMetalHost status
The BareMetalHost status represents the host's current state, and includes tested credentials, current hardware details, and other information.
Parameters | Description |
---|---|
| A reference to the secret and its namespace holding the last set of baseboard management controller (BMC) credentials the system was able to validate as working. |
| Details of the last error reported by the provisioning backend, if any. |
| Indicates the class of problem that has caused the host to enter an error state. The error types are:
|
|
The
|
hardware: firmware:
| Contains BIOS firmware information. For example, the hardware vendor and version. |
|
The
|
hardware: ramMebibytes:
| The host’s amount of memory in Mebibytes (MiB). |
|
The
|
hardware: systemVendor: manufacturer: productName: serialNumber:
|
Contains information about the host’s |
| The timestamp of the last time the status of the host was updated. |
| The status of the server. The status is one of the following:
|
| Boolean indicating whether the host is powered on. |
|
The
|
| A reference to the secret and its namespace holding the last set of BMC credentials that were sent to the provisioning backend. |
3.3. Getting the BareMetalHost resource
The BareMetalHost resource contains the properties of a physical host. You must get the BareMetalHost resource for a physical host to review its properties.
Procedure
Get the list of BareMetalHost resources:
$ oc get bmh -n openshift-machine-api -o yaml
Note: You can use baremetalhost as the long form of bmh with the oc get command.
Get the list of hosts:
$ oc get bmh -n openshift-machine-api
Get the BareMetalHost resource for a specific host:
$ oc get bmh <host_name> -n openshift-machine-api -o yaml
Where <host_name> is the name of the host.
Example output
3.4. About the HostFirmwareSettings resource
You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. When a host moves to the Available state, Ironic reads the host's BIOS settings and creates the HostFirmwareSettings resource. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host.
The HostFirmwareSettings resource contains two sections:
- The HostFirmwareSettings spec
- The HostFirmwareSettings status
3.4.1. The HostFirmwareSettings spec
The spec section of the HostFirmwareSettings resource defines the desired state of the host's BIOS, and it is empty by default. Ironic uses the settings in the spec.settings section to update the baseboard management controller (BMC) when the host is in the Preparing state. Use the FirmwareSchema resource to ensure that you do not send invalid name/value pairs to hosts. See "About the FirmwareSchema resource" for additional details.
Example
spec:
  settings:
    ProcTurboMode: Disabled
In the foregoing example, the spec.settings section contains a name/value pair that will set the ProcTurboMode BIOS setting to Disabled.
Integer parameters listed in the status section appear as strings. For example, "1". When setting integers in the spec.settings section, the values should be set as integers without quotes. For example, 1.
3.4.2. The HostFirmwareSettings status
The status represents the current state of the host's BIOS.
Parameters | Description |
---|---|
|
The
|
status: schema: name: namespace: lastUpdated:
|
The
|
status: settings:
|
The |
3.5. Getting the HostFirmwareSettings resource
The HostFirmwareSettings resource contains the vendor-specific BIOS properties of a physical host. You must get the HostFirmwareSettings resource for a physical host to review its BIOS properties.
Procedure
Get the detailed list of HostFirmwareSettings resources:
$ oc get hfs -n openshift-machine-api -o yaml
Note: You can use hostfirmwaresettings as the long form of hfs with the oc get command.
Get the list of HostFirmwareSettings resources:
$ oc get hfs -n openshift-machine-api
Get the HostFirmwareSettings resource for a particular host:
$ oc get hfs <host_name> -n openshift-machine-api -o yaml
Where <host_name> is the name of the host.
3.6. Editing the HostFirmwareSettings resource
You can edit the HostFirmwareSettings of provisioned hosts.
You can only edit hosts when they are in the provisioned state, excluding read-only values. You cannot edit hosts in the externally provisioned state.
Procedure
Get the list of HostFirmwareSettings resources:
$ oc get hfs -n openshift-machine-api
Edit a host's HostFirmwareSettings resource:
$ oc edit hfs <host_name> -n openshift-machine-api
Where <host_name> is the name of a provisioned host. The HostFirmwareSettings resource will open in the default editor for your terminal.
Add name/value pairs to the spec.settings section:
Example
spec:
  settings:
    name: value
Use the FirmwareSchema resource to identify the available settings for the host. You cannot set values that are read-only.
- Save the changes and exit the editor.
Get the host’s machine name:
$ oc get bmh <host_name> -n openshift-machine-api
Where <host_name> is the name of the host. The machine name appears under the CONSUMER field.
Annotate the machine to delete it from the machineset:
$ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api
Where <machine_name> is the name of the machine to delete.
Get a list of nodes and count the number of worker nodes:
$ oc get nodes
Get the machineset:
$ oc get machinesets -n openshift-machine-api
Scale the machineset:
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>
Where <machineset_name> is the name of the machineset and <n-1> is the decremented number of worker nodes.
When the host enters the Available state, scale up the machineset to make the HostFirmwareSettings resource changes take effect:
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>
Where <machineset_name> is the name of the machineset and <n> is the number of worker nodes.
3.7. Verifying the HostFirmwareSettings resource is valid
When the user edits the spec.settings section to make a change to the HostFirmwareSettings (HFS) resource, the Bare Metal Operator (BMO) validates the change against the FirmwareSchema resource, which is a read-only resource. If the setting is invalid, the BMO sets the Type value of the status.Condition setting to False and also generates an event and stores it in the HFS resource. Use the following procedure to verify that the resource is valid.
Procedure
Get a list of HostFirmwareSettings resources:
$ oc get hfs -n openshift-machine-api
Verify that the HostFirmwareSettings resource for a particular host is valid:
$ oc describe hfs <host_name> -n openshift-machine-api
Where <host_name> is the name of the host.
Example output
Events:
  Type    Reason            Age    From                                     Message
  ----    ------            ----   ----                                     -------
  Normal  ValidationFailed  2m49s  metal3-hostfirmwaresettings-controller   Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo
Important: If the response returns ValidationFailed, there is an error in the resource configuration and you must update the values to conform to the FirmwareSchema resource.
3.8. About the FirmwareSchema resource
BIOS settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each BIOS setting on each host model. The data comes directly from the BMC through Ironic. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource. The FirmwareSchema resource has a unique identifier derived from its settings and limits. Identical host models use the same FirmwareSchema identifier. It is likely that multiple instances of HostFirmwareSettings use the same FirmwareSchema.
Parameters | Description |
---|---|
|
The
|
3.9. Getting the FirmwareSchema resource
Each host model from each vendor has different BIOS settings. When editing the HostFirmwareSettings resource's spec section, the name/value pairs you set must conform to that host's firmware schema. To ensure you are setting valid name/value pairs, get the FirmwareSchema for the host and review it.
Procedure
To get a list of FirmwareSchema resource instances, execute the following:
$ oc get firmwareschema -n openshift-machine-api
To get a particular FirmwareSchema instance, execute:
$ oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml
Where <instance_name> is the name of the schema instance stated in the HostFirmwareSettings resource (see Table 3).
Chapter 4. Configuring multi-architecture compute machines on an OpenShift cluster
4.1. About clusters with multi-architecture compute machines
An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on Amazon Web Services (AWS) or Microsoft Azure installer-provisioned infrastructures and bare metal, IBM Power®, and IBM Z® user-provisioned infrastructures with 64-bit x86 control plane machines.
When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You need to ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes.
The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Cluster capabilities.
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines.
4.1.1. Configuring your cluster with multi-architecture compute machines
To create a cluster with multi-architecture compute machines with different installation options and platforms, you can use the documentation in the following table:
Documentation section | Platform | User-provisioned installation | Installer-provisioned installation | Control Plane | Compute node |
---|---|---|---|---|---|
Creating a cluster with multi-architecture compute machines on Azure | Microsoft Azure | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on AWS | Amazon Web Services (AWS) | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on GCP | Google Cloud Platform (GCP) | ✓ |
|
| |
Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z | Bare metal | ✓ |
|
| |
IBM Power | ✓ |
|
| ||
IBM Z | ✓ |
|
| ||
Creating a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE with z/VM | IBM Z® and IBM® LinuxONE | ✓ |
|
| |
IBM Z® and IBM® LinuxONE | ✓ |
|
| ||
Creating a cluster with multi-architecture compute machines on IBM Power® | IBM Power® | ✓ |
|
|
Autoscaling from zero is currently not supported on Google Cloud Platform (GCP).
4.2. Creating a cluster with multi-architecture compute machines on Azure
To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations. You can then add an ARM64 compute machine set to your cluster to create a cluster with multi-architecture compute machines.
The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set that uses the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the amount of ARM64 virtual machines (VM) that you need.
4.2.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, then your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, then your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
{ "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.2.2. Creating an ARM64 boot image using the Azure image gallery
The following procedure describes how to manually generate an ARM64 boot image.
Prerequisites
- You installed the Azure CLI (az).
- You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Procedure
Log in to your Azure account:
$ az login
Create a storage account and upload the arm64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:
$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS
In this command, westus is an example region.
Create a storage container using the storage account you generated:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name:
Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')
Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command:
$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the RHCOS VHD into the storage container:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \ --source-uri "${RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process with the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
Example output
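The output is JSON similar to the following sketch; the completion time, copy ID, source URL, and progress values are placeholders, and the copy is complete when status reports success:
{
  "completionTime": "<completion_time>",
  "id": "<copy_id>",
  "progress": "<copied_bytes>/<total_bytes>",
  "source": "<rhcos_vhd_origin_url>",
  "status": "success",
  "statusDescription": null
}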
If the status parameter displays the success object, the copying process is complete.
Create an image gallery using the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition.
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command:
$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version.
$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
Your arm64 boot image is now generated. You can access the ID of your image with the following command:
$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0
The following example image ID is used in the resourceID parameter of the compute machine set:
Example resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
4.2.3. Adding a multi-architecture compute machine set to your cluster
To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure".
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Create a compute machine set and modify the resourceID and vmSize parameters with the following command. This compute machine set will control the arm64 worker nodes in your cluster:
$ oc create -f arm64-machine-set-0.yaml
Sample YAML compute machine set with arm64 boot image
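The following is a minimal sketch of an Azure MachineSet definition for arm64 nodes. All <...> values, the network and resource group names, and the vmSize shown (Standard_D4ps_v5, an arm64-capable size) are placeholders or assumptions that you adjust to match your cluster; the image.resourceID uses the resourceID generated in the previous procedure:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <infrastructure_id>-arm64-machine-set-0
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
    spec:
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          kind: AzureMachineProviderSpec
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            # Image ID generated in the previous procedure
            resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
          location: <region>
          networkResourceGroup: <infrastructure_id>-rg
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          resourceGroup: <infrastructure_id>-rg
          subnet: <infrastructure_id>-worker-subnet
          userDataSecret:
            name: worker-user-data
          # Choose an arm64-capable VM size that is available in your region
          vmSize: Standard_D4ps_v5
          vnet: <infrastructure_id>-vnet
          zone: "1"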
Verification
Verify that the new ARM64 machines are running by entering the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                      DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-arm64-machine-set-0   2         2         2       2           10m
You can check that the nodes are ready and schedulable with the following command:
$ oc get nodes
4.3. Creating a cluster with multi-architecture compute machines on AWS
To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, refer to Installing a cluster on AWS with customizations. You can then add an ARM64 compute machine set to your AWS cluster.
4.3.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
You can check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, then your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, then your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
{ "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.3.2. Adding an ARM64 compute machine set to your cluster
To configure a cluster with multi-architecture compute machines, you must create an AWS ARM64 compute machine set. This adds ARM64 compute nodes to your cluster so that your cluster has multi-architecture compute machines.
Prerequisites
- You installed the OpenShift CLI (oc).
- You used the installation program to create an AMD64 single-architecture AWS cluster with the multi-architecture installer binary.
Procedure
Create and modify a compute machine set; this compute machine set will control the ARM64 compute nodes in your cluster:
$ oc create -f aws-arm64-machine-set-0.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Sample YAML compute machine set to deploy an ARM64 compute node
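The following is a minimal sketch of such a machine set. The replica count, the m6g.xlarge instance type, the us-east-1 placement, and the subnet and security group filters are placeholder assumptions that you must replace with values for your cluster, as the callouts below describe:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-aws-arm64-machine-set-0
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-aws-arm64-machine-set-0
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-aws-arm64-machine-set-0
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: <arm64_rhcos_ami>       # ARM64 RHCOS AMI from the coreos-bootimages config map
          instanceType: m6g.xlarge      # an ARM64 (Graviton) instance type
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile
          credentialsSecret:
            name: aws-cloud-credentials
          userDataSecret:
            name: worker-user-data
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-us-east-1a
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg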
- 1 2 3 9 13 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

- 4 7
- Specify the infrastructure ID, role node label, and zone.
- 5 6
- Specify the role node label to add.
- 8
- Specify an ARM64 supported Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes.
$ oc get configmap/coreos-bootimages \
    -n openshift-machine-config-operator \
    -o jsonpath='{.data.stream}' | jq \
    -r '.architectures.<arch>.images.aws.regions."<region>".image'

- 10
- Specify an ARM64 supported machine type. For more information, refer to "Tested instance types for AWS 64-bit ARM".
- 11
- Specify the zone, for example, us-east-1a. Ensure that the zone you select offers 64-bit ARM machines.
- 12
- Specify the region, for example, us-east-1. Ensure that the region you select offers 64-bit ARM machines.
Verification
View the list of compute machine sets by entering the following command:
$ oc get machineset -n openshift-machine-api

You can then see your created ARM64 machine set.
Example output
NAME                                          DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-aws-arm64-machine-set-0   2         2         2       2           10m

You can check that the nodes are ready and schedulable with the following command:
$ oc get nodes
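If you later need a different number of ARM64 nodes, you can scale the machine set; the machine set name below is a placeholder for the name shown in the previous output:

$ oc scale machineset <infrastructure_id>-aws-arm64-machine-set-0 -n openshift-machine-api --replicas=3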
4.4. Creating a cluster with multi-architecture compute machines on GCP
To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on GCP installations, refer to Installing a cluster on GCP with customizations. You can then add ARM64 compute machine sets to your GCP cluster.

Secure booting is currently not supported on ARM64 machines for GCP.
4.4.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Check that your cluster uses the multi-architecture payload by running the following command:

$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:

{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }

You can then begin adding multi-architecture compute nodes to your cluster.

If you see the following output, your cluster is not using the multi-architecture payload:

{ "url": "https://access.redhat.com/errata/<errata_version>" }

Important: To migrate your cluster so that it supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.4.2. Adding an ARM64 compute machine set to your GCP cluster
To configure a cluster with multi-architecture compute machines, you must create a GCP ARM64 compute machine set. This adds ARM64 compute nodes to your cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You used the installation program to create an AMD64 single-architecture GCP cluster with the multi-architecture installer binary.
Procedure
Create and modify a compute machine set. This machine set controls the ARM64 compute nodes in your cluster:

$ oc create -f gcp-arm64-machine-set-0.yaml

Sample GCP YAML compute machine set to deploy an ARM64 compute node
- 1
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

- 2
- Specify the role node label to add.
- 3
- Specify the path to the image that is used in current compute machine sets. You need the project and image name to construct the image path.
To access the project and image name, run the following command:
$ oc get configmap/coreos-bootimages \
    -n openshift-machine-config-operator \
    -o jsonpath='{.data.stream}' | jq \
    -r '.architectures.aarch64.images.gcp'

Example output
"gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" }
"gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
project
andname
parameters from the output to create the path to image field in your machine set. The path to the image should follow the following format:projects/<project>/global/images/<image_name>
$ projects/<project>/global/images/<image_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 4
- Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
- 5
- Specify an ARM64 supported machine type. For more information, refer to Tested instance types for GCP on 64-bit ARM infrastructures in "Additional resources".
- 6
- Specify the name of the GCP project that you use for your cluster.
- 7
- Specify the region, for example, us-central1. Ensure that the region you select offers 64-bit ARM machines.
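As an optional convenience that is not part of the original procedure, you can build the image path directly from the boot image metadata in one step; the IMAGE_PATH variable name is arbitrary:

$ IMAGE_PATH=$(oc get configmap/coreos-bootimages \
    -n openshift-machine-config-operator \
    -o jsonpath='{.data.stream}' | jq -r \
    '.architectures.aarch64.images.gcp | "projects/\(.project)/global/images/\(.name)"')
$ echo "${IMAGE_PATH}"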
Verification
View the list of compute machine sets by entering the following command:
$ oc get machineset -n openshift-machine-api

You can then see your created ARM64 machine set.
Example output
NAME                                          DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-gcp-arm64-machine-set-0   2         2         2       2           10m

You can check that the nodes are ready and schedulable with the following command:
$ oc get nodes
Additional resources
4.5. Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z
To create a cluster with multi-architecture compute machines on bare metal (x86_64), IBM Power® (ppc64le), or IBM Z® (s390x), you must have an existing single-architecture cluster on one of these platforms. Follow the installation procedures for your platform:
- Installing a user provisioned cluster on bare metal. You can then add 64-bit ARM compute machines to your OpenShift Container Platform cluster on bare metal.
- Installing a cluster on IBM Power®. You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Power®.
- Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Z® and IBM® LinuxONE.
Before you can add additional compute nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add additional nodes to your cluster and deploy a cluster with multi-architecture compute machines.
4.5.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Check that your cluster uses the multi-architecture payload by running the following command:

$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:

{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }

You can then begin adding multi-architecture compute nodes to your cluster.

If you see the following output, your cluster is not using the multi-architecture payload:

{ "url": "https://access.redhat.com/errata/<errata_version>" }

Important: To migrate your cluster so that it supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.5.2. Creating RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- You must have the OpenShift CLI (oc) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign

- Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition config file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
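The --ignition-hash option used later in this procedure requires the SHA512 digest of this Ignition config file. As a helper step that is not spelled out in the original text, you can compute the digest while the file is available:

$ curl -sk http://<HTTP_server>/worker.ign | sha512sum

Record the digest and use it as the <digest> value in sha512-<digest>.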
You can access the ISO image for booting your new machine by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
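For example, assuming the variable set by the previous command, you can then download the ISO image to your provisioning host; the output file name is arbitrary:

$ curl -L -o rhcos-live.iso "${RHCOS_VHD_ORIGIN_URL}"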
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
Note: You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments.

Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:

$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest>

- 1
- You must run the coreos-installer command by using sudo, because the core user does not have the required root privileges to perform the installation.
- 2
- The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step.
Note: If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer.

The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:

$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b

Monitor the progress of the RHCOS installation on the console of the machine.
Important: Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
- Continue to create more compute machines for your cluster.
4.5.3. Creating RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
- You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
- If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
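A minimal sketch of a PXELINUX menu entry follows; it assumes the file locations described in the callouts after it, and your existing PXE configuration might differ:

DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign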
- 1
- Specify the location of the live kernel file that you uploaded to your HTTP server.
- 2
- Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.

For iPXE (x86_64 + aarch64):

kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot

- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file.
- 2
- If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
- 3
- Specify the location of the initramfs file that you uploaded to your HTTP server.
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section.

Note: To network boot the CoreOS kernel on the aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See the IMAGE_GZIP option in iPXE.

For PXE (with UEFI and GRUB as second stage) on aarch64:

menuentry 'Install CoreOS' {
    linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
    initrd rhcos-<version>-live-initramfs.<architecture>.img
}

- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP server.
- 2
- If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp.
- 3
- Specify the location of the initramfs file that you uploaded to your TFTP server.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
4.5.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.28.5
master-1   Ready    master   63m   v1.28.5
master-2   Ready    master   64m   v1.28.5

The output lists all of the machines that you created.

Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.6. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM
To create a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE (s390x
) with z/VM, you must have an existing single-architecture x86_64
cluster. You can then add s390x
compute machines to your OpenShift Container Platform cluster.
Before you can add s390x
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using a z/VM instance. This will allow you to add s390x
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Z® or IBM® LinuxONE (s390x
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
4.6.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Check that your cluster uses the multi-architecture payload by running the following command:

$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:

{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }

You can then begin adding multi-architecture compute nodes to your cluster.

If you see the following output, your cluster is not using the multi-architecture payload:

{ "url": "https://access.redhat.com/errata/<errata_version>" }

Important: To migrate your cluster so that it supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.6.2. Creating RHCOS machines on IBM Z with z/VM
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z® with z/VM and attach them to your existing cluster.
Prerequisites
- You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
- You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.
Procedure
Disable UDP aggregation.
Currently, UDP aggregation is not supported on IBM Z® and is not automatically deactivated on multi-architecture compute clusters with an x86_64 control plane and additional s390x compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.

Create a YAML file named udp-aggregation-config.yaml with the following content:
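A minimal sketch of this file follows; it assumes the disable-udp-aggregation key that the Cluster Network Operator reads from the openshift-network-operator namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-aggregation-config
  namespace: openshift-network-operator
data:
  disable-udp-aggregation: "true"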
Create the ConfigMap resource by running the following command:

$ oc create -f udp-aggregation-config.yaml
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign

- Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign

Download the RHEL live kernel, initramfs, and rootfs files by running the following commands:

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')

-
Move the downloaded RHEL live
kernel
,initramfs
, androotfs
files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add. Create a parameter file for the z/VM guest. The following parameters are specific for the virtual machine:
Optional: To specify a static IP address, add an
ip=
parameter with the following entries, with each separated by a colon:- The IP address for the machine.
- An empty string.
- The gateway.
- The netmask.
-
The machine host and domain name in the form
hostname.domainname
. If you omit this value, RHCOS obtains the hostname through a reverse DNS lookup. - The network interface name. If you omit this value, RHCOS applies the IP configuration to all available interfaces.
-
The value
none
.
-
For
coreos.inst.ignition_url=
, specify the URL to theworker.ign
file. Only HTTP and HTTPS protocols are supported. -
For
coreos.live.rootfs_url=
, specify the matching rootfs artifact for thekernel
andinitramfs
you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks:
-
For
coreos.inst.install_dev=
, specify/dev/dasda
. -
Use
rd.dasd=
to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged.
The following is an example parameter file,
additional-worker-dasd.parm
:

Write all options in the parameter file as a single line and make sure that you have no newline characters.
-
For
For installations on FCP-type disks, complete the following tasks:
Use
rd.zfcp=<adapter>,<wwpn>,<lun>
to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path.

Note: When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems.
Set the install device as:
coreos.inst.install_dev=/dev/sda
.

Note: If additional LUNs are configured with NPIV, FCP requires
zfcp.allow_lun_scan=0
. If you must enable zfcp.allow_lun_scan=1
because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.

Leave all other parameters unchanged.
Important: Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks.
The following is an example parameter file,
additional-worker-fcp.parm
for a worker node with multipathing:

Write all options in the parameter file as a single line and make sure that you have no newline characters.
-
Transfer the
initramfs
,kernel
, parameter files, and RHCOS images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under z/VM.

Punch the files to the virtual reader of the z/VM guest virtual machine.
See PUNCH in IBM® Documentation.
Tip: You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines.
- Log in to CMS on the bootstrap machine.
IPL the bootstrap machine from the reader by running the following command:
$ ipl c

See IPL in IBM® Documentation.
4.6.3. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.28.5
master-1   Ready    master   63m   v1.28.5
master-2   Ready    master   64m   v1.28.5

The output lists all of the machines that you created.

Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.7. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
To create a cluster with multi-architecture compute machines on IBM Z® and IBM® LinuxONE (s390x
) with RHEL KVM, you must have an existing single-architecture x86_64
cluster. You can then add s390x
compute machines to your OpenShift Container Platform cluster.
Before you can add s390x
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using a RHEL KVM instance. This will allow you to add s390x
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Z® or IBM® LinuxONE (s390x
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Z® and IBM® LinuxONE. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
4.7.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Check that your cluster uses the multi-architecture payload by running the following command:

$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
If you see the following output, your cluster is using the multi-architecture payload:

{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }

You can then begin adding multi-architecture compute nodes to your cluster.

If you see the following output, your cluster is not using the multi-architecture payload:

{ "url": "https://access.redhat.com/errata/<errata_version>" }

Important: To migrate your cluster so that it supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.7.2. Creating RHCOS machines using virt-install
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using virt-install
.
Prerequisites
- You have at least one LPAR running on RHEL 8.7 or later with KVM, referred to as RHEL KVM host in this procedure.
- The KVM/QEMU hypervisor is installed on the RHEL KVM host.
- You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
- An HTTP or HTTPS server is set up.
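As an optional sanity check that is not part of the original prerequisites, you can confirm that the KVM/QEMU hypervisor and libvirt are reachable on the RHEL KVM host before you begin:

$ virsh version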
Procedure
Disable UDP aggregation.
Currently, UDP aggregation is not supported on IBM Z® and is not automatically deactivated on multi-architecture compute clusters with an x86_64 control plane and additional s390x compute machines. To ensure that the additional compute nodes are added to the cluster correctly, you must manually disable UDP aggregation.

Create a YAML file named udp-aggregation-config.yaml with the following content:

Create the ConfigMap resource by running the following command:
$ oc create -f udp-aggregation-config.yaml
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign

- Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign

Download the RHEL live kernel, initramfs, and rootfs files by running the following commands:

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')

$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
  | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')

-
Move the downloaded RHEL live
kernel
,initramfs
androotfs
files to an HTTP or HTTPS server before you launchvirt-install
. Create the new KVM guest nodes using the RHEL
kernel
,initramfs
, and Ignition files; the new disk image; and adjusted parm line arguments.

- 1
- For
os-variant
, specify the RHEL version for the RHCOS compute machine.rhel9.2
is the recommended version. To query the supported RHEL version of your operating system, run the following command:osinfo-query os -f short-id
$ osinfo-query os -f short-id
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe
os-variant
is case sensitive. - 2
- For
--location
, specify the location of the kernel/initrd on the HTTP or HTTPS server. - 3
- For
coreos.inst.ignition_url=
, specify theworker.ign
Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. - 4
- For
coreos.live.rootfs_url=
, specify the matching rootfs artifact for thekernel
andinitramfs
you are booting. Only HTTP and HTTPS protocols are supported. - 5
- Optional: For
hostname
, specify the fully qualified hostname of the client machine.
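A minimal sketch of such a virt-install invocation follows; the memory, vCPU, and disk sizes, the network name, and the /dev/vda install device are placeholder assumptions, and the URLs must point to the files on your HTTP or HTTPS server:

$ virt-install \
    --name <vm_name> \
    --memory 16384 \
    --vcpus 8 \
    --disk size=120 \
    --network network=<virtual_network> \
    --location "<media_url>,kernel=<rhcos_kernel>,initrd=<rhcos_initramfs>" \
    --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=http://<HTTP_server>/<rhcos_rootfs> coreos.inst.ignition_url=http://<HTTP_server>/worker.ign ip=dhcp" \
    --graphics none \
    --noautoconsole \
    --wait=-1 \
    --os-variant rhel9.2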
NoteIf you are using HAProxy as a load balancer, update your HAProxy rules for
ingress-router-443
andingress-router-80
in the/etc/haproxy/haproxy.cfg
configuration file.- Continue to create more compute machines for your cluster.
4.7.3. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.28.5
master-1   Ready    master   63m   v1.28.5
master-2   Ready    master   64m   v1.28.5

The output lists all of the machines that you created.

Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

where <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.8. Creating a cluster with multi-architecture compute machines on IBM Power
To create a cluster with multi-architecture compute machines on IBM Power® (ppc64le
), you must have an existing single-architecture (x86_64
) cluster. You can then add ppc64le
compute machines to your OpenShift Container Platform cluster.
Before you can add ppc64le
nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines.
The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ppc64le
nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Power® (ppc64le
) cluster with multi-architecture compute machines on x86_64
, follow the instructions for Installing a cluster on IBM Power®. You can then add x86_64
compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
4.8.1. Verifying cluster compatibility
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
Prerequisites
- You installed the OpenShift CLI (oc).
When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner
.
You should limit the number of network hops between the compute and control plane as much as possible.
Procedure
You can check that your cluster uses the multi-architecture payload by running the following command:
oc adm release info -o jsonpath="{ .metadata.metadata}"
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
If you see the following output, then your cluster is using the multi-architecture payload:
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
{ "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, then your cluster is not using the multi-architecture payload:
{ "url": "https://access.redhat.com/errata/<errata_version>" }
{ "url": "https://access.redhat.com/errata/<errata_version>" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantTo migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
4.8.2. Creating RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- You must have the OpenShift CLI (oc) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
- Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition config file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
You can access the ISO image for booting your new machine by running the following command:
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
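For example, after you substitute <architecture> with ppc64le in the preceding command, you might download the image with a command similar to the following. The output file name is illustrative and not part of the original procedure:
$ curl -L -o rhcos-live.iso "${RHCOS_VHD_ORIGIN_URL}"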
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options and without interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
Note: You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments.
Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2
- 1
- You must run the
coreos-installer
command by usingsudo
, because thecore
user does not have the required root privileges to perform the installation. - 2
- The
--ignition-hash
option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node.<digest>
is the Ignition config file SHA512 digest obtained in a preceding step.
Note: If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer.
The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Monitor the progress of the RHCOS installation on the console of the machine.
Important: Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
- Continue to create more compute machines for your cluster.
4.8.3. Creating RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
-
Obtain the URLs of the RHCOS ISO image, compressed metal BIOS,
kernel
, andinitramfs
files that you uploaded to your HTTP server during cluster installation. - You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
-
If you use UEFI, you have access to the
grub.conf
file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
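The exact PXE configuration depends on your environment; the following is a minimal sketch of a typical entry, with placeholder URLs that match the callouts below. The label name and timeout are illustrative assumptions:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 2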
- 1: Specify the location of the live kernel file that you uploaded to your HTTP server.
- 2: Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to theAPPEND
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.For iPXE (
x86_64
+ppc64le
):kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img boot
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
3 boot
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel
parameter value is the location of thekernel
file, theinitrd=main
argument is needed for booting on UEFI systems, thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your HTTP server.
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section.
Note: To network boot the CoreOS kernel on ppc64le architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See the IMAGE_GZIP option in iPXE.
For PXE (with UEFI and GRUB as second stage) on
ppc64le
:menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign initrd rhcos-<version>-live-initramfs.<architecture>.img }
menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img
3 }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The
kernel
parameter value is the location of thekernel
file on your TFTP server. Thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file on your HTTP Server. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your TFTP server.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
4.8.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.28.5
master-1   Ready    master   63m   v1.28.5
master-2   Ready    master   64m   v1.28.5
The output lists all of the machines that you created.
Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.
Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1: <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1: <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
$ oc get nodes -o wide
Example output
Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.9. Managing your cluster with multi-architecture compute machines
4.9.1. Scheduling workloads on clusters with multi-architecture compute machines
Deploying a workload on a cluster with compute nodes of different architectures requires attention and monitoring of your cluster. There might be further actions you need to take in order to successfully place pods on the nodes of your cluster.
For more detailed information on node affinity, scheduling, taints and tolerations, see the following documentation:
4.9.1.1. Sample multi-architecture node workload deployments
Before you schedule workloads on a cluster with compute nodes of different architectures, consider the following use cases:
- Using node affinity to schedule workloads on a node
To allow a workload to be scheduled on only a set of nodes with architectures supported by its images, set the spec.affinity.nodeAffinity field in your pod's template specification.
Example deployment with the nodeAffinity set to certain architectures
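A minimal sketch of such a deployment follows. The deployment name, labels, and image are illustrative assumptions; the nodeAffinity stanza shows the technique described above, and the line marked 1 corresponds to the callout note below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-multiarch-app   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-multiarch-app
  template:
    metadata:
      labels:
        app: my-multiarch-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values: # 1
                      - amd64
                      - arm64
      containers:
        - name: app
          image: quay.io/example/my-multiarch-app:latest   # illustrative multi-architecture image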
- 1: Specify the supported architectures. Valid values include amd64, arm64, or both values.
- Tainting every node for a specific architecture
You can taint a node to prevent workloads that are incompatible with its architecture from being scheduled on that node. If your cluster uses a MachineSet object, you can add parameters to the .spec.template.spec.taints field to avoid workloads being scheduled on nodes with unsupported architectures.
Before you can taint a node, you must scale down the MachineSet object or remove available machines. You can scale down the machine set by using one of the following commands:
$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
For more information on scaling machine sets, see "Modifying a compute machine set".
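Example MachineSet with a taint set
A minimal sketch follows; only the relevant .spec.template.spec.taints stanza is shown, the MachineSet name is a placeholder, and the remaining fields are omitted as assumptions about your environment:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  # ... replicas, selector, and providerSpec omitted for brevity
  template:
    spec:
      taints:
        - key: multi-arch.openshift.io/arch
          value: arm64
          effect: NoSchedule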
You can also set a taint on a specific node by running the following command:
$ oc adm taint nodes <node-name> multi-arch.openshift.io/arch=arm64:NoSchedule
- Creating a default toleration
You can annotate a namespace so all of the workloads get the same default toleration by running the following command:
$ oc annotate namespace my-namespace \ 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Exists", "effect": "NoSchedule", "key": "multi-arch.openshift.io/arch"}]'
- Tolerating architecture taints in workloads
On a node with a defined taint, workloads will not be scheduled on that node. However, you can allow them to be scheduled by setting a toleration in the pod’s specification.
Example deployment with a toleration
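A minimal sketch, assuming a deployment named my-tolerated-app with an image that is available for arm64; the tolerations stanza is the relevant part:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-tolerated-app   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-tolerated-app
  template:
    metadata:
      labels:
        app: my-tolerated-app
    spec:
      tolerations:
        - key: "multi-arch.openshift.io/arch"
          value: "arm64"
          operator: "Equal"
          effect: "NoSchedule"
      containers:
        - name: app
          image: quay.io/example/my-tolerated-app:latest   # illustrative arm64-capable image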
This example deployment is also allowed on nodes with the multi-arch.openshift.io/arch=arm64 taint specified.
- Using node affinity with taints and tolerations
When a scheduler computes the set of nodes to schedule a pod, tolerations can broaden the set while node affinity restricts the set. If you set a taint on the nodes of a specific architecture, the following example toleration is required for scheduling pods.
Example deployment with a node affinity and toleration set.
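A minimal sketch combining both settings; the deployment name and image are illustrative assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduled-app   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-scheduled-app
  template:
    metadata:
      labels:
        app: my-scheduled-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      tolerations:
        - key: "multi-arch.openshift.io/arch"
          value: "arm64"
          operator: "Equal"
          effect: "NoSchedule"
      containers:
        - name: app
          image: quay.io/example/my-scheduled-app:latest   # illustrative arm64 image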
Additional resources
4.9.2. Enabling 64k pages on the Red Hat Enterprise Linux CoreOS (RHCOS) kernel
You can enable the 64k memory page size in the Red Hat Enterprise Linux CoreOS (RHCOS) kernel on the 64-bit ARM compute machines in your cluster. The 64k page size kernel specification can be used for large GPU or high memory workloads. This is done by using the Machine Config Operator (MCO), which uses a machine config pool to update the kernel. To enable 64k page sizes, you must dedicate a machine config pool for ARM64 nodes on which to enable the kernel setting.
Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or clusters installed on 64-bit ARM machines. If you configure the 64k pages kernel on a machine config pool using 64-bit x86 machines, the machine config pool and the MCO become degraded.
Prerequisites
- You installed the OpenShift CLI (oc).
- You created a cluster with compute nodes of different architectures on one of the supported platforms.
Procedure
Label the nodes where you want to run the 64k page size kernel:
oc label node <node_name> <label>
$ oc label node <node_name> <label>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example command
oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=
$ oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a machine config pool that contains the worker role that uses the ARM64 architecture and the
worker-64k-pages
role:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a machine config on your compute node to enable
64k-pages
with the64k-pages
parameter.oc create -f <filename>.yaml
$ oc create -f <filename>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example MachineConfig
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe
64k-pages
type is supported on only 64-bit ARM architecture based compute nodes. Therealtime
type is supported on only 64-bit x86 architecture based compute nodes.
Verification
To view your new
worker-64k-pages
machine config pool, run the following command:oc get mcp
$ oc get mcp
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.9.3. Importing manifest lists in image streams on your multi-architecture compute machines
On an OpenShift Container Platform 4.15 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode
option to the PreserveOriginal
option in order to import the manifest list.
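For reference, a minimal sketch of what the resulting tag specification looks like on an ImageStream; the stream name, namespace, and source image are illustrative assumptions:
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-imagestream        # illustrative name
  namespace: my-namespace     # illustrative namespace
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: registry.example.com/my-image:latest   # illustrative source image
      importPolicy:
        importMode: PreserveOriginal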
Prerequisites
-
You installed the OpenShift Container Platform CLI (
oc
).
Procedure
The following example command shows how to patch the
ImageStream
cli-artifacts so that thecli-artifacts:latest
image stream tag is imported as a manifest list.oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
$ oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
You can check that the manifest lists imported properly by inspecting the image stream tag. The following command will list the individual architecture manifests for a particular tag.
oc get istag cli-artifacts:latest -n openshift -oyaml
$ oc get istag cli-artifacts:latest -n openshift -oyaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the
dockerImageManifests
object is present, then the manifest list import was successful.Example output of the
dockerImageManifests
objectCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 5. Postinstallation machine configuration tasks
There are times when you need to make changes to the operating systems running on OpenShift Container Platform nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way.
Aside from a few specialized features, most changes to operating systems on OpenShift Container Platform nodes can be done by creating what are referred to as MachineConfig
objects that are managed by the Machine Config Operator.
Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OpenShift Container Platform nodes.
NetworkManager stores new network configurations to /etc/NetworkManager/system-connections/
in a key file format.
Previously, NetworkManager stored new network configurations to /etc/sysconfig/network-scripts/
in the ifcfg format. Starting with RHEL 9.0, RHEL stores new network configurations at /etc/NetworkManager/system-connections/
in a key file format. The connection configurations stored to /etc/sysconfig/network-scripts/ in the old format still work uninterrupted. Modifications to existing profiles continue to update the older files.
5.1. About the Machine Config Operator
OpenShift Container Platform 4.15 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes, OpenShift Container Platform provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades.
OpenShift Container Platform employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include:
-
The
machine-config-controller
, which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates. -
The
machine-config-daemon
daemon set, which runs on each node in the cluster and updates a machine to the configuration defined by the machine config, as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control the kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OpenShift Container Platform and RHCOS updates together.
The
machine-config-server
daemon set, which provides the Ignition config files to control plane nodes as they join the cluster.
The machine configuration is a subset of the Ignition configuration. The machine-config-daemon
reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OpenShift Container Platform configuration.
When you perform node management operations, you create or modify a KubeletConfig
custom resource (CR).
When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect.
To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the spec.paused
field to true
in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused
field to false
and the nodes have rebooted into the new configuration.
The following modifications do not trigger a node reboot:
When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:
-
Changes to the SSH key in the
spec.config.passwd.users.sshAuthorizedKeys
parameter of a machine config. -
Changes to the global pull secret or pull secret in the
openshift-config
namespace. -
Automatic rotation of the
/etc/kubernetes/kubelet-ca.crt
certificate authority (CA) by the Kubernetes API Server Operator.
-
Changes to the SSH key in the
When the MCO detects changes to the
/etc/containers/registries.conf
file, such as adding or editing anImageDigestMirrorSet
,ImageTagMirrorSet
, orImageContentSourcePolicy
object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:-
The addition of a registry with the
pull-from-mirror = "digest-only"
parameter set for each mirror. -
The addition of a mirror with the
pull-from-mirror = "digest-only"
parameter set in a registry. -
The addition of items to the
unqualified-search-registries
list.
-
The addition of a registry with the
There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded
until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.
5.1.1. Understanding the Machine Config Operator node drain behavior
When you use a machine config to change a system feature, such as adding new config files, modifying systemd units or kernel arguments, or updating SSH keys, the Machine Config Operator (MCO) applies those changes and ensures that each node is in the desired configuration state.
After you make the changes, the MCO generates a new rendered machine config. In the majority of cases, when applying the new rendered machine config, the Operator performs the following steps on each affected node until all of the affected nodes have the updated configuration:
- Cordon. The MCO marks the node as not schedulable for additional workloads.
- Drain. The MCO terminates all running workloads on the node, causing the workloads to be rescheduled onto other nodes.
- Apply. The MCO writes the new configuration to the nodes as needed.
- Reboot. The MCO restarts the node.
- Uncordon. The MCO marks the node as schedulable for workloads.
Throughout this process, the MCO maintains the required number of pods based on the MaxUnavailable
value set in the machine config pool.
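For example, a hedged sketch of adjusting that value on the worker pool; the field is spec.maxUnavailable on the MachineConfigPool resource, and the value 2 is illustrative:
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable":2}}'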
There are conditions which can prevent the MCO from draining a node. If the MCO fails to drain a node, the Operator will be unable to reboot the node, preventing any changes made to the node through a machine config. For more information and mitigation steps, see the MCCDrainError runbook.
If the MCO drains pods on the master node, note the following conditions:
- In single-node OpenShift clusters, the MCO skips the drain operation.
- The MCO does not drain static pods in order to prevent interference with services, such as etcd.
In certain cases the nodes are not drained. For more information, see "About the Machine Config Operator."
You can mitigate the disruption caused by drain and reboot cycles by disabling control plane reboots. For more information, see "Disabling the Machine Config Operator from automatically rebooting."
5.1.2. Understanding configuration drift detection
There might be situations when the on-disk state of a node differs from what is configured in the machine config. This is known as configuration drift. For example, a cluster admin might manually modify a file, a systemd unit file, or a file permission that was configured through a machine config. This causes configuration drift. Configuration drift can cause problems between nodes in a Machine Config Pool or when the machine configs are updated.
The Machine Config Operator (MCO) uses the Machine Config Daemon (MCD) to check nodes for configuration drift on a regular basis. If detected, the MCO sets the node and the machine config pool (MCP) to Degraded
and reports the error. A degraded node is online and operational, but it cannot be updated.
The MCD performs configuration drift detection upon each of the following conditions:
- When a node boots.
- After any of the files (Ignition files and systemd drop-in units) specified in the machine config are modified outside of the machine config.
Before a new machine config is applied.
NoteIf you apply a new machine config to the nodes, the MCD temporarily shuts down configuration drift detection. This shutdown is needed because the new machine config necessarily differs from the machine config on the nodes. After the new machine config is applied, the MCD restarts detecting configuration drift using the new machine config.
When performing configuration drift detection, the MCD validates that the file contents and permissions fully match what the currently-applied machine config specifies. Typically, the MCD detects configuration drift in less than a second after the detection is triggered.
If the MCD detects configuration drift, the MCD performs the following tasks:
- Emits an error to the console logs
- Emits a Kubernetes event
- Stops further detection on the node
-
Sets the node and MCP to
degraded
You can check if you have a degraded node by listing the MCPs:
$ oc get mcp worker
If you have a degraded MCP, the DEGRADEDMACHINECOUNT
field is non-zero, similar to the following output:
Example output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-404caf3180818d8ac1f50c32f14b57c3   False     True       True       2              1                   1                     1                      5h51m
You can determine if the problem is caused by configuration drift by examining the machine config pool:
$ oc describe mcp worker
Example output
Or, if you know which node is degraded, examine that node:
$ oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
Example output
- 1: The error message indicating that configuration drift was detected between the node and the listed machine config. Here the error message indicates that the contents of the /etc/mco-test-file, which was added by the machine config, have changed outside of the machine config.
- 2: The state of the node is Degraded.
You can correct configuration drift and return the node to the Ready
state by performing one of the following remediations:
- Ensure that the contents and file permissions of the files on the node match what is configured in the machine config. You can manually rewrite the file contents or change the file permissions.
Generate a force file on the degraded node. The force file causes the MCD to bypass the usual configuration drift detection and reapplies the current machine config.
Note: Generating a force file on a node causes that node to reboot.
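A sketch of one way to create the force file, assuming the documented path /run/machine-config-daemon-force, from a debug shell on the node:
$ oc debug node/<node_name>
sh-5.1# chroot /host
sh-5.1# touch /run/machine-config-daemon-force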
5.1.3. Checking machine config pool status
To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc
commands:
Procedure
To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command:
$ oc get machineconfigpool
Example output
NAME     CONFIG                    UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-06c9c4…   True      False      False      3              3                   3                     0                      4h42m
worker   rendered-worker-f4b64…    False     True       False      3              2                   2                     0                      4h42m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- UPDATED
-
The
True
status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in theSTATUS
field in theoc get mcp
output. TheFalse
status indicates a node in the MCP is updating. - UPDATING
-
The
True
status indicates that the MCO is applying the desired machine config, as specified in theMachineConfigPool
custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. TheFalse
status indicates that all nodes in the MCP are updated. - DEGRADED
-
A
True
status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. AFalse
status indicates that all nodes in the MCP are ready. - MACHINECOUNT
- Indicates the total number of machines in that MCP.
- READYMACHINECOUNT
-
Indicates the number of machines that are both running the current machine config and are ready for scheduling. This count is always less than or equal to the
UPDATEDMACHINECOUNT
number. - UPDATEDMACHINECOUNT
- Indicates the total number of machines in that MCP that have the current machine config.
- DEGRADEDMACHINECOUNT
- Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable.
In the previous output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the
UPDATEDMACHINECOUNT
being2
. There are no issues, as indicated by theDEGRADEDMACHINECOUNT
being0
andDEGRADED
beingFalse
.While the nodes in the MCP are updating, the machine config listed under
CONFIG
is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to.NoteIf a node is being cordoned, that node is not included in the
READYMACHINECOUNT
, but is included in theMACHINECOUNT
. Also, the MCP status is set toUPDATING
. Because the node has the current machine config, it is counted in theUPDATEDMACHINECOUNT
total:Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4… True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a… False True False 3 2 3 0 4h42m
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4… True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a… False True False 3 2 3 0 4h42m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To check the status of the nodes in an MCP by examining the
MachineConfigPool
custom resource, run the following command: :oc describe mcp worker
$ oc describe mcp worker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf a node is being cordoned, the node is not included in the
Ready Machine Count
. It is included in theUnavailable Machine Count
:Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To see each existing
MachineConfig
object, run the following command:oc get machineconfigs
$ oc get machineconfigs
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note that the
MachineConfig
objects listed asrendered
are not meant to be changed or deleted.To view the contents of a particular machine config (in this case,
01-master-kubelet
), run the following command:oc describe machineconfigs 01-master-kubelet
$ oc describe machineconfigs 01-master-kubelet
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output from the command shows that this
MachineConfig
object contains both configuration files (cloud.conf
andkubelet.conf
) and a systemd service (Kubernetes Kubelet):Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml
to apply a machine config, you could remove that machine config by running the following command:
$ oc delete -f ./myconfig.yaml
If that was the only problem, the nodes in the affected pool should return to a non-degraded state. Removing the machine config causes the rendered configuration to roll back to its previously rendered state.
If you add your own machine configs to your cluster, you can use the commands shown in the previous example to check their status and the related status of the pool to which they are applied.
5.1.4. Checking machine config node status
During updates you might want to monitor the progress of individual nodes in case issues arise and you need to troubleshoot a node.
To see the status of the Machine Config Operator (MCO) updates to your cluster, use the following oc
commands:
Improved MCO state reporting is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
Get a summary of update statuses for all nodes in all machine config pools by running the following command:
$ oc get machineconfignodes
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- UPDATED
-
The
True
status indicates that the MCO has applied the current machine config to that particular node. TheFalse
status indicates that the node is currently updating. TheUnknown
status means the operation is processing. - UPDATEPREPARED
-
The
False
status indicates that the MCO has not started reconciling the new machine configs to be distributed. TheTrue
status indicates that the MCO has completed this phase of the update. TheUnknown
status means the operation is processing. - UPDATEEXECUTED
-
The
False
status indicates that the MCO has not started cordoning and draining the node. It also indicates that the disk state and operating system have not started updating. TheTrue
status indicates that the MCO has completed this phase of the update. TheUnknown
status means the operation is processing. - UPDATEPOSTACTIONCOMPLETED
-
The
False
status indicates that the MCO has not started rebooting the node or closing the daemon. TheTrue
status indicates that the MCO has completed reboot and updating the node status. TheUnknown
status indicates either that an error has occurred during the update process at this phase, or that the MCO is currently applying the update. - UPDATECOMPLETED
-
The
False
status indicates that the MCO has not started uncordoning the node and updating the node state and metrics. TheTrue
status indicates that the MCO has finished updating the node state and available metrics. - RESUMED
The
False
status indicates that the MCO has not started the config drift monitor. TheTrue
status indicates that the node has resumed operation. TheUnknown
status means the operation is processing.NoteWithin the primary phases previously described, you can have secondary phases which you can use to see the update progression in more detail. You can get more information that includes secondary phases of updates by using the
-o wide
option of the preceding command. This provides the additionalUPDATECOMPATIBLE
,UPDATEFILESANDOS
,DRAINEDNODE
,CORDONEDNODE
,REBOOTNODE
,RELOADEDCRIO
andUNCORDONED
columns. These secondary phases do not always occur and depend on the type of update you want to apply.
Check the update status of nodes in a specific machine config pool by running the following command:
oc get machineconfignodes $(oc get machineconfignodes -o json | jq -r '.items[]|select(.spec.pool.name=="<pool_name>")|.metadata.name')
$ oc get machineconfignodes $(oc get machineconfignodes -o json | jq -r '.items[]|select(.spec.pool.name=="<pool_name>")|.metadata.name')
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The name of the pool is the
MachineConfigPool
object name.
Example output
NAME                          UPDATED   UPDATEPREPARED   UPDATEEXECUTED   UPDATEPOSTACTIONCOMPLETE   UPDATECOMPLETE   RESUMED
ip-10-0-48-226.ec2.internal   True      False            False            False                      False            False
ip-10-0-5-241.ec2.internal    True      False            False            False                      False            False
ip-10-0-74-108.ec2.internal   True      False            False            False                      False            False
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the update status of an individual node by running the following command:
oc describe machineconfignode/<node_name>
$ oc describe machineconfignode/<node_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The name of the node is the
MachineConfigNode
object name.
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The desired configuration specified in the
spec.configversion.desired
field updates immediately when a new configuration is detected on the node. - 2
- The desired configuration specified in the
status.configversion.desired
field updates only when the new configuration is validated by the Machine Config Daemon (MCD). The MCD performs validation by checking the current phase of the update. If the update successfully passes theUPDATEPREPARED
phase, then the status adds the new configuration.
5.1.5. Viewing and interacting with certificates
The following certificates are handled in the cluster by the Machine Config Controller (MCC) and can be found in the ControllerConfig
resource:
-
/etc/kubernetes/kubelet-ca.crt
-
/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem
-
/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt
The MCC also handles the image registry certificates and its associated user bundle certificate.
You can get information about the listed certificates, including the underlying bundle the certificate comes from, and the signing and subject data.
Prerequisites
-
This procedure contains optional steps that require that the
python-yq
RPM package is installed.
Procedure
Get detailed certificate information by running the following command:
$ oc get controllerconfig/machine-config-controller -o yaml | yq -y '.status.controllerCertificates'
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get a simpler version of the information found in the
ControllerConfig
resource by checking the machine config pool status using the following command:
$ oc get mcp master -o yaml | yq -y '.status.certExpirys'
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This method is meant for OpenShift Container Platform applications that already consume machine config pool information.
Check which image registry certificates are on the nodes:
Log in to a node:
oc debug node/<node_name>
$ oc debug node/<node_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Set
/host
as the root directory within the debug shell:chroot /host
sh-5.1# chroot /host
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Look at the contents of the
/etc/docker/cert.d
directory:ls /etc/docker/certs.d
sh-5.1# ls /etc/docker/certs.d
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000
image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.2. Using MachineConfig objects to configure nodes
You can use the tasks in this section to create MachineConfig
objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures, enabling SCTP, and configuring iSCSI initiatornames for OpenShift Container Platform.
OpenShift Container Platform supports Ignition specification version 3.4. You should base all new machine configs you create going forward on Ignition specification version 3.4. If you are upgrading your OpenShift Container Platform cluster, any existing machine configs with a previous Ignition specification will be translated automatically to specification version 3.4.
There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded
until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection.
Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes.
5.2.1. Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd
) by modifying the contents of the chrony.conf
file and passing those contents to your nodes as a machine config.
Procedure
Create a Butane config including the contents of the
chrony.conf
file. For example, to configure chrony on worker nodes, create a99-worker-chrony.bu
file.NoteThe Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in
0
. For example,4.15.0
. See "Creating machine configs with Butane" for information about Butane.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1 2
- On control plane nodes, substitute
master
forworker
in both of these locations. - 3
- Specify an octal value mode for the
mode
field in the machine config file. After creating the file and applying the changes, themode
is converted to a decimal value. You can check the YAML file with the commandoc get mc <mc-name> -o yaml
. - 4
- Specify any valid, reachable time source, such as the one provided by your DHCP server.
NoteFor all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port
123
. If an external NTP time server is configured, you must open UDP port123
.Alternatively, you can specify any of the following NTP servers:
1.rhel.pool.ntp.org
,2.rhel.pool.ntp.org
, or3.rhel.pool.ntp.org
. When you use NTP with your DHCP server, you must set thesourcedir /run/chrony-dhcp
parameter in thechrony.conf
file.Use Butane to generate a
MachineConfig
object file,99-worker-chrony.yaml
, containing the configuration to be delivered to the nodes:butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
Apply the configurations in one of two ways:
- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:
$ oc apply -f ./99-worker-chrony.yaml
5.2.2. Disabling the chrony time service Copy linkLink copied to clipboard!
You can disable the chrony time service (chronyd
) for nodes with a specific role by using a MachineConfig
custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create the MachineConfig CR that disables chronyd for the specified node role.
Save the following YAML in the disable-chronyd.yaml file:
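A minimal sketch of such a MachineConfig CR (assuming the standard schema; the full systemd unit contents are omitted for brevity):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: <node_role>   # 1
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: chronyd.service
          enabled: false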
1: Node role where you want to disable chronyd, for example, master.
Create the MachineConfig CR by running the following command:
$ oc create -f disable-chronyd.yaml
5.2.3. Adding kernel arguments to nodes Copy linkLink copied to clipboard!
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set.
Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
- nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
- systemd.unified_cgroup_hierarchy: Enables Linux control group version 2 (cgroup v2). cgroup v2 is the next version of the kernel control group and offers multiple improvements.
- enforcing=0: Configures Security-Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
Warning: Disabling SELinux on RHCOS in production is not supported. After SELinux is disabled on a node, it must be re-provisioned before re-inclusion in a production cluster.
See Kernel.org kernel parameters for a list and descriptions of kernel arguments.
In the following procedure, you create a MachineConfig
object that identifies:
- A set of machines to which you want to add the kernel argument. In this case, machines with a worker role.
- Kernel arguments that are appended to the end of the existing kernel arguments.
- A label that indicates where in the list of machine configs the change is applied.
Prerequisites
- Have administrative privilege to a working OpenShift Container Platform cluster.
Procedure
List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config:
$ oc get MachineConfig
Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml):
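A sketch of such a MachineConfig object (assuming the standard schema; enforcing=0 is the example kernel argument):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-selinuxpermissive
spec:
  config:
    ignition:
      version: 3.4.0
  kernelArguments:
    - enforcing=0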
Create the new machine config:
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Check the nodes:
$ oc get nodes
You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command-line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
You should see the enforcing=0 argument added to the other kernel arguments.
5.2.4. Enabling multipathing with kernel arguments on RHCOS Copy linkLink copied to clipboard!
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Postinstallation support is available by activating multipathing via the machine config.
Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing post installation" in the Installing on bare metal documentation.
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE.
When an OpenShift Container Platform cluster is installed or configured as a postinstallation activity on a single VIOS host with "vSCSI" storage on IBM Power® with multipath configured, the CoreOS nodes with multipath enabled fail to boot. This behavior is expected, as only one path is available to the node.
Prerequisites
- You have a running OpenShift Container Platform cluster.
- You are logged in to the cluster as a user with administrative privileges.
- You have confirmed that the disk is enabled for multipathing. Multipathing is only supported on hosts that are connected to a SAN via an HBA adapter.
Procedure
To enable multipathing postinstallation on control plane nodes:
Create a machine config file, such as 99-master-kargs-mpath.yaml, that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example:
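A sketch of the control plane machine config (assuming the standard schema; the kernel arguments shown are the usual multipath arguments):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
For worker nodes, the analogous file uses the worker role and a name such as 99-worker-kargs-mpath.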
To enable multipathing postinstallation on worker nodes:
Create a machine config file, such as 99-worker-kargs-mpath.yaml, that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example:
Create the new machine config by using either the master or worker YAML file you previously created:
$ oc create -f ./99-worker-kargs-mpath.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Check the nodes:
$ oc get nodes
You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command-line arguments (in /proc/cmdline on the host):
$ oc debug node/ip-10-0-141-105.ec2.internal
You should see the added kernel arguments.
5.2.5. Adding a real-time kernel to nodes Copy linkLink copied to clipboard!
Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics.
If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.15, you can make this switch by using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime, there are a few other considerations before making the change:
- Currently, real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use.
- The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8.
- Real-time support in OpenShift Container Platform is limited to specific subscriptions.
- The following procedure is also supported for use with Google Cloud Platform.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.4 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes:
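A sketch of such a MachineConfig object (assuming the standard schema):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-realtime
spec:
  kernelType: realtime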
Add the machine config to the cluster. Type the following to add the machine config to the cluster:
$ oc create -f 99-worker-realtime.yaml
Check the real-time kernel: After each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.28.5 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.28.5 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.28.5
$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal
The kernel name contains rt and the text “PREEMPT RT” indicates that this is a real-time kernel.
To go back to the regular kernel, delete the MachineConfig object:
$ oc delete -f 99-worker-realtime.yaml
5.2.6. Configuring journald settings Copy linkLink copied to clipboard!
If you need to configure settings for the journald
service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config.
This procedure describes how to modify journald
rate limiting settings in the /etc/systemd/journald.conf
file and apply them to worker nodes. See the journald.conf
man page for information on how to use that file.
Prerequisites
- Have a running OpenShift Container Platform cluster.
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a Butane config file, 40-worker-custom-journald.bu, that includes an /etc/systemd/journald.conf file with the required settings.
Note: The Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in 0. For example, 4.15.0. See "Creating machine configs with Butane" for information about Butane.
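A sketch of such a Butane config (the journald settings shown are illustrative rate-limiting values):
variant: openshift
version: 4.15.0
metadata:
  name: 40-worker-custom-journald
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/systemd/journald.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        # Custom journald rate-limiting settings
        RateLimitInterval=1s
        RateLimitBurst=10000
        Storage=volatile
        Compress=no
        MaxRetentionSec=30s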
Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml, containing the configuration to be delivered to the worker nodes:
$ butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml
Apply the machine config to the pool:
$ oc apply -f 40-worker-custom-journald.yaml
Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool shows the updates in progress, as each node successfully has the new machine config applied:
$ oc get machineconfigpool
NAME     CONFIG               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-35   True      False      False      3              3                   3                     0                      34m
worker   rendered-worker-d8   False     True       False      3              1                   1                     0                      34m
To check that the change was applied, you can log in to a worker node:
5.2.7. Adding extensions to RHCOS Copy linkLink copied to clipboard!
RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions
feature you can use to add a minimal set of features to RHCOS nodes.
Currently, the following extensions are available:
- usbguard: Adding the usbguard extension protects RHCOS systems from attacks from intrusive USB devices. See USBGuard for details.
- kerberos: Adding the kerberos extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. See Using Kerberos for details, including how to set up a Kerberos client and mount a Kerberized NFS share.
The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.6 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension:
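A sketch of such a MachineConfig object (assuming the standard schema):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 80-worker-extensions
spec:
  config:
    ignition:
      version: 3.4.0
  extensions:
    - usbguard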
Add the machine config to the cluster. Type the following to add the machine config to the cluster:
$ oc create -f 80-extensions.yaml
This sets all worker nodes to have rpm packages for usbguard installed.
Check that the extensions were applied:
$ oc get machineconfig 80-worker-extensions
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.4.0 57s
Check that the new machine config is now applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool shows the updates in progress, as each machine successfully has the new machine config applied:
$ oc get machineconfigpool
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m
Check the extensions. To check that the extension was applied, run:
$ oc get node | grep worker
Example output
NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.28.5
$ oc debug node/ip-10-0-169-2.us-east-2.compute.internal
Example output
... To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm
5.2.8. Loading custom firmware blobs in the machine config manifest Copy linkLink copied to clipboard!
Because the default location for firmware blobs in /usr/lib
is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS.
Procedure
Create a Butane config file, 98-worker-firmware-blob.bu, that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware.
Note: The Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in 0. For example, 4.15.0. See "Creating machine configs with Butane" for information about Butane.
Butane config file for custom firmware blob
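A sketch of what this Butane config might look like (assuming the openshift Butane variant; <package_name> is a placeholder for your firmware file, and the numbered callouts below refer to the commented lines):
variant: openshift
version: 4.15.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-worker-firmware-blob
storage:
  files:
  - path: /var/lib/firmware/<package_name>        # 1
    contents:
      local: <package_name>                       # 2
    mode: 0644                                    # 3
openshift:
  kernel_arguments:
    - 'firmware_class.path=/var/lib/firmware'     # 4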
1: Sets the path on the node where the firmware package is copied to.
2: Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step.
3: Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions.
4: The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path.
Run Butane to generate a MachineConfig object file, 98-worker-firmware-blob.yaml, that uses a copy of the firmware blob on your local workstation. The firmware blob contains the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located:
$ butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>
Apply the configurations to the nodes in one of two ways:
- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:
$ oc apply -f 98-worker-firmware-blob.yaml
A MachineConfig object YAML file is created for you to finish configuring your machines.
- Save the Butane config in case you need to update the MachineConfig object in the future.
5.2.9. Changing the core user password for node access Copy linkLink copied to clipboard!
By default, Red Hat Enterprise Linux CoreOS (RHCOS) creates a user named core
on the nodes in your cluster. You can use the core
user to access the node through a cloud provider serial console or a bare metal baseboard management controller (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the oc debug node
command. However, by default, there is no password for this user, so you cannot log in without creating one.
You can create a password for the core
user by using a machine config. The Machine Config Operator (MCO) assigns the password and injects the password into the /etc/shadow
file, allowing you to log in with the core
user. The MCO does not examine the password hash. As such, the MCO cannot report if there is a problem with the password.
- The password works only through a cloud provider serial console or a BMC. It does not work with SSH.
-
If you have a machine config that includes an
/etc/shadow
file or a systemd unit that sets a password, it takes precedence over the password hash.
You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account.
Procedure
Using a tool that is supported by your operating system, create a hashed password. For example, create a hashed password using mkpasswd by running the following command:
$ mkpasswd -m SHA-512 testpass
Example output
$6$CBZwA6s6AVFOtiZe$aUKDWpthhJEyR3nnhM02NM1sKCpHn9XN.NPrJNQ3HYewioaorpwL3mKGLxvW0AOb4pJxqoqP4nFX77y0p00.8.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a machine config file that contains the
core
username and the hashed password:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the machine config by running the following command:
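A sketch of such a machine config (assuming the standard schema; replace <password_hash> with the hash from the previous step and adjust the role as needed):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: set-core-user-password
spec:
  config:
    ignition:
      version: 3.4.0
    passwd:
      users:
      - name: core
        passwordHash: <password_hash>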
Create the machine config by running the following command:
$ oc create -f <file-name>.yaml
The nodes do not reboot and should become available in a few moments. You can use the oc get mcp command to watch for the machine config pools to be updated, as shown in the following example:
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d686a3ffc8fdec47280afec446fce8dd   True      False      False      3              3                   3                     0                      64m
worker   rendered-worker-4605605a5b1f9de1d061e9d350f251e5   False     True       False      3              0                   0                     0                      64m
Verification
After the nodes return to the UPDATED=True state, start a debug session for a node by running the following command:
$ oc debug node/<node_name>
Set /host as the root directory within the debug shell by running the following command:
sh-4.4# chroot /host
Check the contents of the /etc/shadow file:
Example output
... core:$6$2sE/010goDuRSxxv$o18K52wor.wIwZp:19418:0:99999:7::: ...
The hashed password is assigned to the core user.
5.3. Configuring MCO-related custom resources Copy linkLink copied to clipboard!
Besides managing MachineConfig
objects, the MCO manages two custom resources (CRs): KubeletConfig
and ContainerRuntimeConfig
. Those CRs let you change node-level settings impacting how the Kubelet and CRI-O container runtime services behave.
5.3.1. Creating a KubeletConfig CR to edit kubelet parameters Copy linkLink copied to clipboard!
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller
added to the Machine Config Controller (MCC). This lets you use a KubeletConfig
custom resource (CR) to edit the kubelet parameters.
As the fields in the kubeletConfig
object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig
object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation.
Consider the following guidance:
-
Edit an existing
KubeletConfig
CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. -
Create one
KubeletConfig
CR for each machine config pool with all the config changes you want for that pool. -
As needed, create multiple
KubeletConfig
CRs with a limit of 10 per cluster. For the firstKubeletConfig
CR, the Machine Config Operator (MCO) creates a machine config appended withkubelet
. With each subsequent CR, the controller creates anotherkubelet
machine config with a numeric suffix. For example, if you have akubelet
machine config with a-2
suffix, the nextkubelet
machine config is appended with-3
.
If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector must match the name of the custom machine config pool.
For example, because the following custom machine config pool is named infra, the custom role must also be infra:
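A sketch of such a custom pool (assuming the standard MachineConfigPool schema; the selector values are illustrative):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""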
If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3
machine config before deleting the kubelet-2
machine config.
If you have a machine config with a kubelet-9
suffix, and you create another KubeletConfig
CR, a new machine config is not created, even if there are fewer than 10 kubelet
machine configs.
Example KubeletConfig
CR
oc get kubeletconfig
$ oc get kubeletconfig
NAME                 AGE
set-kubelet-config   15m
Example showing a KubeletConfig
machine config
oc get mc | grep kubelet
$ oc get mc | grep kubelet
...
99-worker-generated-kubelet-1   b5c5119de007945b6fe6fb215db3b8e2ceb12511   3.4.0   26m
...
The following procedure is an example that shows how to configure the maximum number of pods per node, the maximum PIDs per node, and the maximum container log size on the worker nodes.
Prerequisites
Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=set-kubelet-config
Procedure
View the available machine configuration objects that you can select:
$ oc get machineconfig
By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.
Check the current value for the maximum pods per node:
$ oc describe node <node_name>
For example:
$ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Look for
value: pods: <value>
in theAllocatable
stanza:Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the worker nodes as needed:
Create a YAML file similar to the following that contains the kubelet configuration; a sketch is shown after this list.
Important: Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool.
- Use podPidsLimit to set the maximum number of PIDs in any pod.
- Use containerLogMaxSize to set the maximum size of the container log file before it is rotated.
- Use maxPods to set the maximum pods per node.
Note: The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node.
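A sketch of such a KubeletConfig CR (assuming the standard schema; the numeric values are illustrative, and the label matches the one applied to the worker pool in this procedure):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-kubelet-config
  kubeletConfig:
    maxPods: 500
    podPidsLimit: 2048
    containerLogMaxSize: 50Mi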
Update the machine config pool for workers with the label:
$ oc label machineconfigpool worker custom-kubelet=set-kubelet-config
Create the KubeletConfig object:
$ oc create -f change-maxPods-cr.yaml
Verification
Verify that the KubeletConfig object is created:
$ oc get kubeletconfig
Example output
NAME AGE set-kubelet-config 15m
Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Verify that the changes are applied to the node:
Check on a worker node that the maxPods value changed:
$ oc describe node <node_name>
Locate the Allocatable stanza. In this example, the pods parameter should report the value you set in the KubeletConfig object.
Verify the change in the KubeletConfig object:
$ oc get kubeletconfigs set-kubelet-config -o yaml
This should show a status of True and type: Success.
5.3.2. Creating a ContainerRuntimeConfig CR to edit CRI-O parameters Copy linkLink copied to clipboard!
You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). Using a ContainerRuntimeConfig
custom resource (CR), you set the configuration values and add a label to match the MCP. The MCO then rebuilds the crio.conf
and storage.conf
configuration files on the associated nodes with the updated values.
To revert the changes implemented by using a ContainerRuntimeConfig
CR, you must delete the CR. Removing the label from the machine config pool does not revert the changes.
You can modify the following settings by using a ContainerRuntimeConfig
CR:
-
Log level: The
logLevel
parameter sets the CRI-Olog_level
parameter, which is the level of verbosity for log messages. The default isinfo
(log_level = info
). Other options includefatal
,panic
,error
,warn
,debug
, andtrace
. -
Overlay size: The
overlaySize
parameter sets the CRI-O Overlay storage driversize
parameter, which is the maximum size of a container image. -
Container runtime: The
defaultRuntime
parameter sets the container runtime to eitherrunc
orcrun
. The default isrunc
.
You should have one ContainerRuntimeConfig
CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all the pools, you only need one ContainerRuntimeConfig
CR for all the pools.
You should edit an existing ContainerRuntimeConfig
CR to modify existing settings or add new settings instead of creating a new CR for each change. It is recommended to create a new ContainerRuntimeConfig
CR only to modify a different machine config pool, or for changes that are intended to be temporary so that you can revert the changes.
You can create multiple ContainerRuntimeConfig
CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig
CR, the MCO creates a machine config appended with containerruntime
. With each subsequent CR, the controller creates a new containerruntime
machine config with a numeric suffix. For example, if you have a containerruntime
machine config with a -2
suffix, the next containerruntime
machine config is appended with -3
.
If you want to delete the machine configs, you should delete them in reverse order to avoid exceeding the limit. For example, you should delete the containerruntime-3
machine config before deleting the containerruntime-2
machine config.
If you have a machine config with a containerruntime-9
suffix, and you create another ContainerRuntimeConfig
CR, a new machine config is not created, even if there are fewer than 10 containerruntime
machine configs.
Example showing multiple ContainerRuntimeConfig
CRs
$ oc get ctrcfg
Example output
NAME AGE
ctr-overlay 15m
ctr-level 5m45s
Example showing multiple containerruntime
machine configs
$ oc get mc | grep container
Example output
The following example sets the log_level field to debug and sets the overlay size to 8 GB:
Example ContainerRuntimeConfig CR
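A sketch of the CR (assuming the standard schema; the pool label in callout 1 is illustrative and the numbered callouts below refer to the commented lines):
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: overlay-size
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ''   # 1
  containerRuntimeConfig:
    logLevel: debug          # 2
    overlaySize: 8G          # 3
    defaultRuntime: "crun"   # 4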
1: Specifies the machine config pool label. For a container runtime config, the role must match the name of the associated machine config pool.
2: Optional: Specifies the level of verbosity for log messages.
3: Optional: Specifies the maximum size of a container image.
4: Optional: Specifies the container runtime to deploy to new containers. The default value is runc.
Procedure
To change CRI-O settings using the ContainerRuntimeConfig
CR:
Create a YAML file for the ContainerRuntimeConfig CR.
Create the ContainerRuntimeConfig CR:
$ oc create -f <file_name>.yaml
Verify that the CR is created:
$ oc get ContainerRuntimeConfig
Example output
NAME AGE overlay-size 3m19s
Check that a new containerruntime machine config is created:
$ oc get machineconfigs | grep containerrun
Example output
99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.4.0 31s
Monitor the machine config pool until all are shown as ready:
$ oc get mcp worker
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h
Verify that the settings were applied in CRI-O:
Open an oc debug session to a node in the machine config pool and run chroot /host.
$ oc debug node/<node_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow chroot /host
sh-4.4# chroot /host
Verify the changes in the crio.conf file:
sh-4.4# crio config | grep 'log_level'
Example output
log_level = "debug"
Verify the changes in the storage.conf file:
sh-4.4# head -n 7 /etc/containers/storage.conf
5.3.3. Setting the default maximum container root partition size for Overlay with CRI-O Copy linkLink copied to clipboard!
The root partition of each container shows all of the available disk space of the underlying host. Follow this guidance to set a maximum partition size for the root disk of all containers.
To configure the maximum Overlay size, as well as other CRI-O options like the log level, you can create the following ContainerRuntimeConfig custom resource (CR):
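A sketch of the CR (assuming the standard schema; the custom-crio label value is illustrative and is the one referenced later in this procedure):
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: overlay-size
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: overlay-size
  containerRuntimeConfig:
    logLevel: debug
    overlaySize: 8G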
Procedure
Create the configuration object:
oc apply -f overlaysize.yml
$ oc apply -f overlaysize.yml
To apply the new CRI-O configuration to your worker nodes, edit the worker machine config pool:
oc edit machineconfigpool worker
$ oc edit machineconfigpool worker
Add the custom-crio label based on the matchLabels name you set in the ContainerRuntimeConfig CR:
Save the changes, then view the machine configs:
oc get machineconfigs
$ oc get machineconfigs
New 99-worker-generated-containerruntime and rendered-worker-xyz objects are created:
Example output
99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.4.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.4.0 7m36s
After those objects are created, monitor the machine config pool for the changes to be applied:
oc get mcp worker
$ oc get mcp worker
The worker nodes show UPDATING as True, as well as the number of machines, the number updated, and other details:
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h
When complete, the worker nodes transition back to UPDATING as False, and the UPDATEDMACHINECOUNT number matches the MACHINECOUNT:
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h
Looking at a worker machine, you see that the new 8 GB max size configuration is applied to all of the workers:
Example output
~ $ df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /
~ $ df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.3.4. Creating a drop-in file for the default capabilities of CRI-O Copy linkLink copied to clipboard!
You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). By using a controller custom resource (CR), you set the configuration values and add a label to match the MCP. The Machine Config Operator (MCO) then rebuilds the crio.conf
and default.conf
configuration files on the associated nodes with the updated values.
Earlier versions of OpenShift Container Platform included specific machine configs by default. If you updated to a later version of OpenShift Container Platform, those machine configs were retained to ensure that clusters running on the same OpenShift Container Platform version have the same machine configs.
You can create multiple ContainerRuntimeConfig
CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig
CR, the MCO creates a machine config appended with containerruntime
. With each subsequent CR, the controller creates a containerruntime
machine config with a numeric suffix. For example, if you have a containerruntime
machine config with a -2
suffix, the next containerruntime
machine config is appended with -3
.
If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, delete the containerruntime-3
machine config before you delete the containerruntime-2
machine config.
If you have a machine config with a containerruntime-9
suffix and you create another ContainerRuntimeConfig
CR, a new machine config is not created, even if there are fewer than 10 containerruntime
machine configs.
Example of multiple ContainerRuntimeConfig
CRs
$ oc get ctrcfg
Example output
NAME AGE
ctr-overlay 15m
ctr-level 5m45s
Example of multiple containerruntime
related system configs
$ cat /proc/1/status | grep Cap
$ capsh --decode=<decode_CapBnd_value>
1: Replace <decode_CapBnd_value> with the specific value you want to decode.
Chapter 6. Postinstallation cluster tasks Copy linkLink copied to clipboard!
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements.
6.1. Available cluster customizations Copy linkLink copied to clipboard!
You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available.
If you install your cluster on IBM Z®, not all features and functions are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1.
6.1.1. Cluster configuration resources Copy linkLink copied to clipboard!
All cluster configuration resources are globally scoped (not namespaced) and named cluster
.
Resource name | Description |
---|---|
| Provides API server configuration such as certificates and certificate authorities. |
| Controls the identity provider and authentication configuration for the cluster. |
| Controls default and enforced configuration for all builds on the cluster. |
| Configures the behavior of the web console interface, including the logout behavior. |
| Enables FeatureGates so that you can use Tech Preview features. |
| Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
| Configuration details related to routing such as the default domain for routes. |
| Configures identity providers and other behavior related to internal OAuth server flows. |
| Configures how projects are created including the project template. |
| Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
| Configures scheduler behavior such as profiles and default node selectors. |
6.1.2. Operator configuration resources Copy linkLink copied to clipboard!
These configuration resources are cluster-scoped instances, named cluster
, which control the behavior of a specific component as owned by a particular Operator.
Resource name | Description |
---|---|
| Controls console appearance such as branding customizations |
| Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
| Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
6.1.3. Additional configuration resources Copy linkLink copied to clipboard!
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
Resource name | Instance name | Namespace | Description |
---|---|---|---|
|
|
| Controls the Alertmanager deployment parameters. |
|
|
| Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
6.1.4. Informational Resources Copy linkLink copied to clipboard!
You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.
Resource name | Instance name | Description |
---|---|---|
|
|
In OpenShift Container Platform 4.15, you must not customize the |
|
| You cannot modify the DNS settings for your cluster. You can view the DNS Operator status. |
|
| Configuration details allowing the cluster to interact with its cloud provider. |
|
| You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
6.2. Adding worker nodes Copy linkLink copied to clipboard!
After you deploy your OpenShift Container Platform cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster.
6.2.1. Adding worker nodes to installer-provisioned infrastructure clusters Copy linkLink copied to clipboard!
For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet
object to match the number of available bare-metal hosts.
To add a bare-metal host, you must configure all network prerequisites, configure an associated baremetalhost
object, then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
6.2.2. Adding worker nodes to user-provisioned infrastructure clusters Copy linkLink copied to clipboard!
For user-provisioned infrastructure clusters, you can add worker nodes by using a RHEL or RHCOS ISO image and connecting it to your cluster using cluster Ignition config files. For RHEL worker nodes, the following example uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following example uses an ISO image and network booting to add worker nodes to the cluster.
6.2.3. Adding worker nodes to clusters managed by the Assisted Installer Copy linkLink copied to clipboard!
For clusters managed by the Assisted Installer, you can add worker nodes by using the Red Hat OpenShift Cluster Manager console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.
6.2.4. Adding worker nodes to clusters managed by the multicluster engine for Kubernetes Copy linkLink copied to clipboard!
For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.
6.3. Adjust worker nodes Copy linkLink copied to clipboard!
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scaling them up, and then scaling the original compute machine set down before removing it.
6.3.1. Understanding the difference between compute machine sets and the machine config pool Copy linkLink copied to clipboard!
MachineSet
objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider.
The MachineConfigPool
object allows MachineConfigController
components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool
object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool.
The NodeSelector
object can be replaced with a reference to the MachineSet
object.
6.3.2. Scaling a compute machine set manually Copy linkLink copied to clipboard!
To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the compute machine sets that are in the cluster by running the following command:
oc get machinesets -n openshift-machine-api
$ oc get machinesets -n openshift-machine-api
The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.
View the compute machines that are in the cluster by running the following command:
oc get machine -n openshift-machine-api
$ oc get machine -n openshift-machine-api
Set the annotation on the compute machine that you want to delete by running the following command:
oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
Scale the compute machine set by running one of the following commands:
oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
oc edit machineset <machineset> -n openshift-machine-api
$ oc edit machineset <machineset> -n openshift-machine-api
Tip: You can alternatively apply the following YAML to scale the compute machine set:
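A minimal sketch of the YAML (assuming the standard MachineSet schema; substitute your machine set name and desired replica count):
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2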
You can scale the compute machine set up or down. It takes several minutes for the new machines to be available.
Important: By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not succeed. If the drain operation fails, the machine controller cannot proceed with removing the machine.
You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine.
Verification
Verify the deletion of the intended machine by running the following command:
oc get machines
$ oc get machines
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
6.3.3. The compute machine set deletion policy Copy linkLink copied to clipboard!
Random
, Newest
, and Oldest
are the three supported deletion options. The default is Random
, meaning that random machines are chosen and deleted when scaling compute machine sets down. The deletion policy can be set according to the use case by modifying the particular compute machine set:
spec:
deletePolicy: <delete_policy>
replicas: <desired_replica_count>
Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true
to the machine of interest, regardless of the deletion policy.
By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0
unless you first relocate the router pods.
Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption.
6.3.4. Creating default cluster-wide node selectors Copy linkLink copied to clipboard!
You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes.
With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels.
You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
You can add additional key/value pairs to a pod. But you cannot add a different value for a default key.
Procedure
To add a default cluster-wide node selector:
Edit the Scheduler Operator CR to add the default cluster-wide node selectors:
oc edit scheduler cluster
$ oc edit scheduler cluster
Example Scheduler Operator CR with a node selector
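A sketch of the Scheduler CR (assuming the standard config.openshift.io/v1 schema; the selector values are illustrative and the numbered callout below refers to the commented line):
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: type=user-node,region=east   # 1
  mastersSchedulable: false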
1: Add a node selector with the appropriate <key>:<value> pairs.
After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy.
Add labels to a node by using a compute machine set or editing the node directly:
Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created:
Run the following command to add labels to a MachineSet object:
$ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api
1 - Add a <key>/<value> pair for each label.
For example:
$ oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api
Tip: You can alternatively apply the following YAML to add labels to a compute machine set:
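A minimal sketch of the relevant part of such a MachineSet object; the machine set name is a placeholder and unrelated fields are omitted:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:   # labels applied to nodes created from this machine set
          region: east
          type: user-node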
Verify that the labels are added to the MachineSet object by using the oc edit command. For example:
$ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api
In the MachineSet object, the labels appear under spec.template.spec.metadata.labels.
Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes. For example:
$ oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
$ oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command:
$ oc get nodes -l <key>=<value>
For example:
$ oc get nodes -l type=user-node
Example output
NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.28.5
Add labels directly to a node:
Edit the Node object for the node:
$ oc label nodes <name> <key>=<value>
For example, to label a node:
$ oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east
Tip: You can alternatively apply the following YAML to add labels to a node:
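A minimal sketch of the corresponding Node object metadata; the node name is a placeholder:
kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    type: user-node
    region: east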
Verify that the labels are added to the node by using the oc get command:
$ oc get nodes -l <key>=<value>,<key>=<value>
For example:
$ oc get nodes -l type=user-node,region=east
Example output
NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.28.5
6.4. Improving cluster stability in high latency environments using worker latency profiles
If the cluster administrator has performed latency tests for platform verification, they might discover that the cluster needs tuning to remain stable in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only one parameter provides cluster tuning in an easy, supportable manner.
The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager (kube controller) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is:
- The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition Unknown.
- In response, the scheduler stops scheduling pods to that node.
- The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default.
This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet then evicts pods from the node even though the node is healthy.
To avoid this problem, you can use worker latency profiles to adjust how frequently the Kubelet reports status and how long the Kubernetes Controller Manager waits for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal.
These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually.
You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network.
6.4.1. Understanding worker latency profiles
Worker latency profiles are four different categories of carefully-tuned parameters. The four parameters that implement these values are node-status-update-frequency, node-monitor-grace-period, default-not-ready-toleration-seconds, and default-unreachable-toleration-seconds. These parameters use values that allow you to control the reaction of the cluster to latency issues without needing to determine the best values manually.
Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability.
All worker latency profiles configure the following parameters:
- node-status-update-frequency: Specifies how often the kubelet posts node status to the API server.
- node-monitor-grace-period: Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node.
- default-not-ready-toleration-seconds: Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node.
- default-unreachable-toleration-seconds: Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node.
The following Operators monitor the changes to the worker latency profiles and respond accordingly:
- The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes.
- The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes.
- The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes.
Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections:
- Default worker latency profile
  With the Default profile, each Kubelet updates its status every 10 seconds (node-status-update-frequency). The Kube Controller Manager checks the statuses of the Kubelet every 5 seconds.
  The Kubernetes Controller Manager waits 40 seconds (node-monitor-grace-period) for a status update from the Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node.
  If a pod is on a node that has the NoExecute taint, the pod runs according to tolerationSeconds. If the pod has no matching toleration, it is evicted in 300 seconds (the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server).
  Default profile values:
  - kubelet: node-status-update-frequency: 10s
  - Kube Controller Manager: node-monitor-grace-period: 40s
  - Kubernetes API Server Operator: default-not-ready-toleration-seconds: 300s
  - Kubernetes API Server Operator: default-unreachable-toleration-seconds: 300s
- Medium worker latency profile
  Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual.
  The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter.
  The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts.
  MediumUpdateAverageReaction profile values:
  - kubelet: node-status-update-frequency: 20s
  - Kube Controller Manager: node-monitor-grace-period: 2m
  - Kubernetes API Server Operator: default-not-ready-toleration-seconds: 60s
  - Kubernetes API Server Operator: default-unreachable-toleration-seconds: 60s
- Low worker latency profile
  Use the LowUpdateSlowReaction profile if the network latency is extremely high.
  The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter.
  The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts.
  LowUpdateSlowReaction profile values:
  - kubelet: node-status-update-frequency: 1m
  - Kube Controller Manager: node-monitor-grace-period: 5m
  - Kubernetes API Server Operator: default-not-ready-toleration-seconds: 60s
  - Kubernetes API Server Operator: default-unreachable-toleration-seconds: 60s
The latency profiles do not support custom machine config pools, only the default worker machine config pools.
6.4.2. Using and changing worker latency profiles
To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases.
You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction. Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default.
You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster.
Procedure
To move from the default worker latency profile:
Move to the medium worker latency profile:
Edit the node.config object:
$ oc edit nodes.config/cluster
Add spec.workerLatencyProfile: MediumUpdateAverageReaction:
Example node.config object
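A minimal sketch, assuming the cluster-scoped resource name cluster that the oc edit command targets:
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  workerLatencyProfile: MediumUpdateAverageReaction   # 1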
- Specifies the medium worker latency policy.
Scheduling on each worker node is disabled as the change is being applied.
Optional: Move to the low worker latency profile:
Edit the node.config object:
$ oc edit nodes.config/cluster
Change the spec.workerLatencyProfile value to LowUpdateSlowReaction:
Example node.config object
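A minimal sketch of the same object with the low profile set:
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  workerLatencyProfile: LowUpdateSlowReaction   # 1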
- Specifies use of the low worker latency policy.
Scheduling on each worker node is disabled as the change is being applied.
Verification
When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure that the profile was applied:
$ oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5
The output includes a status condition that specifies that the profile is applied and active.
To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value.
6.5. Managing control plane machines
Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of OpenShift Container Platform that you installed. For more information, see Getting started with control plane machine sets.
6.6. Creating infrastructure machine sets for production environments
You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.
To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool.
For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.
Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label.
6.6.1. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
    -n openshift-machine-api -o yaml
Example output
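The output resembles the following minimal sketch; the callout numbers below refer to this sketch, and the platform-specific providerSpec values are omitted:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>   # 1
  name: <infrastructure_id>-<role>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>   # 2
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:   # 3 - platform-specific values omitted
        value: {}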
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The cluster infrastructure ID.
- 2
- A default node label.Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create
worker
andinfra
type machines. - 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
6.6.2. Creating an infrastructure node
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.
Requirements of the cluster dictate that infrastructure (infra) nodes be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes through labeling. You can then use taints and tolerations to move appropriate workloads to the infrastructure nodes. For more information, see "Moving resources to infrastructure machine sets".
You can optionally create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces and creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.
If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.
However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.
You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts.
Procedure
Add a label to the worker nodes that you want to act as infrastructure nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Check to see if applicable nodes now have the infra role:
$ oc get nodes
Optional: Create a default cluster-wide node selector:
Edit the Scheduler object:
$ oc edit scheduler cluster
Add the defaultNodeSelector field with the appropriate node selector:
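A minimal sketch of the Scheduler object with the field added:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: node-role.kubernetes.io/infra=""   # 1
  mastersSchedulable: false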
- This example node selector deploys pods on infrastructure nodes by default.
- Save the file to apply the changes.
You can now move infrastructure resources to the new infrastructure nodes. Also, remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OpenShift Container Platform infrastructure components".
6.6.3. Creating a machine config pool for infrastructure machines
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
Creating a custom machine configuration pool overrides default worker pool configurations if they refer to the same file or unit.
Procedure
Add a label to the node you want to assign as the infra node with a specific label:
$ oc label node <node_name> <label>
For example:
$ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
Create a machine config pool that contains both the worker role and your custom role as the machine config selector:
$ cat infra.mcp.yaml
Example output
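A minimal sketch of what infra.mcp.yaml can contain; it selects machine configs for both the worker and infra roles and targets nodes with the infra node role label:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""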
Note: Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
After you have the YAML file, you can create the machine config pool:
$ oc create -f infra.mcp.yaml
Check the machine configs to ensure that the infrastructure configuration rendered successfully:
$ oc get machineconfig
You should see a new machine config, with the rendered-infra-* prefix.
Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and is only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.
Note: After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.
Create a machine config:
$ cat infra.mc.yaml
Example output
1 - Add the label you added to the node as a nodeSelector.
Apply the machine config to the infra-labeled nodes:
$ oc create -f infra.mc.yaml
Confirm that your new machine config pool is available:
$ oc get mcp
Example output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m
In this example, a worker node was changed to an infra node.
6.7. Assigning machine set resources to infrastructure nodes
After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.
6.7.1. Binding infrastructure node workloads using taints and tolerations
If you have an infrastructure node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.
It is recommended that you preserve the dual infra,worker label that is created for infrastructure nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions.
Prerequisites
- Configure additional MachineSet objects in your OpenShift Container Platform cluster.
Procedure
Add a taint to the infrastructure node to prevent scheduling user workloads on it:
Determine if the node has the taint:
$ oc describe nodes <node_name>
Sample output
This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
If you have not configured a taint to prevent scheduling user workloads on the node, add one:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule
Tip: You can alternatively apply the taint in the node specification:
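A minimal sketch of the equivalent taint expressed on the Node object; the node name is a placeholder:
kind: Node
apiVersion: v1
metadata:
  name: <node_name>
spec:
  taints:
    - key: node-role.kubernetes.io/infra
      value: reserved
      effect: NoSchedule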
These examples place a taint on node1 that has the node-role.kubernetes.io/infra key and the NoSchedule taint effect. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
If you added a NoSchedule taint to the infrastructure node, any pods that are controlled by a daemon set on that node are marked as misscheduled. You must either delete the pods or add a toleration to the pods as shown in the Red Hat Knowledgebase solution add toleration on misscheduled DNS pods. Note that you cannot add a toleration to a daemon set object that is managed by an Operator.
Note: If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations to the pods that you want to schedule on the infrastructure node, such as the router, registry, and monitoring workloads. Referencing the previous examples, add the following tolerations to the Pod object specification:
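A minimal sketch of the tolerations stanza that matches the node-role.kubernetes.io/infra=reserved:NoSchedule taint from the previous step:
spec:
  tolerations:
    - key: node-role.kubernetes.io/infra
      value: reserved
      effect: NoSchedule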
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infrastructure node.
Note: Moving pods for an Operator installed via OLM to an infrastructure node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.
- Schedule the pod to the infrastructure node by using a scheduler. See the documentation for "Controlling pod placement using the scheduler" for details.
- Remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OpenShift Container Platform infrastructure components".
6.8. Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
6.8.1. Moving the router
You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure
View the IngressController custom resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator
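A minimal sketch of the spec change, using the nodePlacement field of the IngressController and a toleration that matches the taint from the previous section:
spec:
  nodePlacement:
    nodeSelector:   # 1
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule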
1 - Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
Example output
NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name>
1 - Specify the <node_name> that you obtained from the pod list.
Example output
NAME                           STATUS   ROLES          AGE   VERSION
ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.28.5
Because the role list includes infra, the pod is running on the correct node.
6.8.2. Moving the default registry
You can configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure
View the config/instance object:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
Edit the config/instance object:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
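A minimal sketch of the spec change on the image registry configuration resource; here the nodeSelector is a plain key/value map:
spec:
  nodeSelector:   # 1
    node-role.kubernetes.io/infra: ""
  tolerations:
    - key: node-role.kubernetes.io/infra
      value: reserved
      effect: NoSchedule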
1 - Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
6.8.3. Moving the monitoring solution
The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:
$ oc edit configmap cluster-monitoring-config -n openshift-monitoring
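A minimal sketch of the config map with a nodeSelector and toleration set for one monitoring component; in practice the same stanza is repeated for each component you want to move (for example alertmanagerMain, prometheusK8s, thanosQuerier):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:   # 1
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
        - key: node-role.kubernetes.io/infra
          value: reserved
          effect: NoSchedule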
1 - Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
6.8.4. Moving logging resources
For information about moving logging resources, see:
6.9. Applying autoscaling to your cluster
Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster.
For more information, see Applying autoscaling to an OpenShift Container Platform cluster.
6.10. Configuring Linux cgroup
As of version 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.14 or later does not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later uses cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation.
cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.
You can change between cgroup v1 and cgroup v2, as needed. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster.
In Telco, clusters using PerformanceProfile for low latency, real-time, and Data Plane Development Kit (DPDK) workloads automatically revert to cgroups v1 due to the lack of cgroups v2 support. Enabling cgroup v2 is not supported if you are using PerformanceProfile.
Prerequisites
- You have a running OpenShift Container Platform cluster that uses version 4.12 or later.
- You are logged in to the cluster as a user with administrative privileges.
Procedure
Configure the wanted cgroup version on your nodes:
Edit the node.config object:
$ oc edit nodes.config/cluster
Add spec.cgroupMode: "v1":
Example node.config object
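A minimal sketch of the node.config object with cgroup v1 enabled:
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"   # 1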
- Enables cgroup v1.
Verification
Check the machine configs to see that the new machine configs were added:
$ oc get mc
New machine configs are created, as expected.
Check that the new kernelArguments were added to the new machine configs:
$ oc describe mc <name>
Example output for cgroup v1
Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:
$ oc get nodes
After a node returns to the Ready state, start a debug session for that node:
$ oc debug node/<node_name>
Set /host as the root directory within the debug shell:
sh-4.4# chroot /host
Check which cgroup filesystem is mounted at /sys/fs/cgroup on the node. An output of cgroup2fs indicates cgroup v2; an output of tmpfs indicates cgroup v1:
$ stat -c %T -f /sys/fs/cgroup
Example output
cgroup2fs
6.11. Enabling Technology Preview features using FeatureGates
You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR).
6.11.1. Understanding feature gates
You can use the FeatureGate
custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default.
You can activate the following feature set by using the FeatureGate
CR:
TechPreviewNoUpgrade
. This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters.WarningEnabling the
TechPreviewNoUpgrade
feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.The following Technology Preview features are enabled by this feature set:
- External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. This is an internal feature that most users do not need to interact with. (ExternalCloudProvider)
- Shared Resources CSI Driver in OpenShift Builds. Enables the Container Storage Interface (CSI). (CSIDriverSharedResource)
- Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. (NodeSwap)
- OpenStack Machine API Provider. This gate has no effect and is planned to be removed from this feature set in a future release. (MachineAPIProviderOpenStack)
- Insights Operator. Enables the InsightsDataGather CRD, which allows users to configure some Insights data gathering options. The feature set also enables the DataGather CRD, which allows users to run Insights data gathering on-demand. (InsightsConfigAPI)
- Retroactive Default Storage Class. Enables OpenShift Container Platform to retroactively assign the default storage class to PVCs if there was no default storage class when the PVC was created. (RetroactiveDefaultStorageClass)
- Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. (DynamicResourceAllocation)
- Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. (OpenShiftPodSecurityAdmission)
- StatefulSet pod availability upgrading limits. Enables users to define the maximum number of statefulset pods unavailable during updates, which reduces application downtime. (MaxUnavailableStatefulSet)
- Admin Network Policy and Baseline Admin Network Policy. Enables AdminNetworkPolicy and BaselineAdminNetworkPolicy resources, which are part of the Network Policy V2 API, in clusters running the OVN-Kubernetes CNI plugin. Cluster administrators can apply cluster-scoped policies and safeguards for an entire cluster before namespaces are created. Network administrators can secure clusters by enforcing network traffic controls that cannot be overridden by users. Network administrators can enforce optional baseline network traffic controls that can be overridden by users in the cluster, if necessary. Currently, these APIs support only expressing policies for intra-cluster traffic. (AdminNetworkPolicy)
- MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. (admissionWebhookMatchConditions)
- Gateway API. To enable the OpenShift Container Platform Gateway API, set the value of the enabled field to true in the techPreview.gatewayAPI specification of the ServiceMeshControlPlane resource. (gateGatewayAPI)
- gcpLabelsTags
- vSphereStaticIPs
- routeExternalCertificate
- automatedEtcdBackup
- gcpClusterHostedDNS
- vSphereControlPlaneMachineset
- dnsNameResolver
- machineConfigNodes
- metricsServer
- installAlternateInfrastructureAWS
- sdnLiveMigration
- mixedCPUsAllocation
- managedBootImages
- onClusterBuild
- signatureStores
6.11.2. Enabling feature sets using the web console
You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Procedure
To enable feature sets:
- In the OpenShift Container Platform web console, switch to the Administration → Custom Resource Definitions page.
- On the Custom Resource Definitions page, click FeatureGate.
- On the Custom Resource Definition Details page, click the Instances tab.
- Click the cluster feature gate, then click the YAML tab.
Edit the cluster instance to add specific feature sets:
Warning: Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.
Sample Feature Gate custom resource
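A minimal sketch of such a CR:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade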
After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.
- From the Administrator perspective in the web console, navigate to Compute → Nodes.
- Select a node.
- In the Node details page, click Terminal.
In the terminal window, change your root directory to /host:
sh-4.2# chroot /host
View the kubelet.conf file:
sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
# ...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
# ...
The features that are listed as true are enabled on your cluster.
Note: The features listed vary depending upon the OpenShift Container Platform version.
6.11.3. Enabling feature sets using the CLI
You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To enable feature sets:
Edit the FeatureGate CR named cluster:
$ oc edit featuregate cluster
Warning: Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.
Sample FeatureGate custom resource
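A minimal sketch, identical in shape to the resource shown for the web console procedure:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade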
After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.
- From the Administrator perspective in the web console, navigate to Compute → Nodes.
- Select a node.
- In the Node details page, click Terminal.
In the terminal window, change your root directory to /host:
sh-4.2# chroot /host
View the kubelet.conf file:
sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
# ...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
# ...
The features that are listed as true are enabled on your cluster.
Note: The features listed vary depending upon the OpenShift Container Platform version.
6.12. etcd tasks
Back up etcd, enable or disable etcd encryption, or defragment etcd data.
6.12.1. About etcd encryption
By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
- Secrets
- Config maps
- Routes
- OAuth access tokens
- OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. You must have these keys to restore from an etcd backup.
Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted.
If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a previous state of etcd from the respective etcd snapshot.
6.12.2. Supported encryption types
The following encryption types are supported for encrypting etcd data in OpenShift Container Platform:
- AES-CBC: Uses AES-CBC with PKCS#7 padding and a 32-byte key to perform the encryption. The encryption keys are rotated weekly.
- AES-GCM: Uses AES-GCM with a random nonce and a 32-byte key to perform the encryption. The encryption keys are rotated weekly.
6.12.3. Enabling etcd encryption
You can enable etcd encryption to encrypt sensitive resources in your cluster.
Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted.
After you enable etcd encryption, several changes can occur:
- The etcd encryption might affect the memory consumption of a few resources.
- You might notice a transient effect on backup performance because the leader must serve the backup.
- Disk I/O can affect the node that receives the backup state.
You can encrypt the etcd database using either AES-GCM or AES-CBC encryption.
To migrate your etcd database from one encryption type to the other, you can modify the API server's spec.encryption.type field. Migration of the etcd data to the new encryption type occurs automatically.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the spec.encryption.type field to aesgcm or aescbc:
spec:
  encryption:
    type: aesgcm
1 - Set to aesgcm for AES-GCM encryption or aescbc for AES-CBC encryption.
Save the file to apply the changes.
The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of the etcd database.
Verify that etcd encryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: routes.route.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: secrets, configmaps
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
6.12.4. Disabling etcd encryption
You can disable encryption of etcd data in your cluster.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the encryption field type to identity:
spec:
  encryption:
    type: identity
1 - The identity type is the default value and means that no encryption is performed.
Save the file to apply the changes.
The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.
Verify that etcd decryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
6.12.5. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.

Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
Start a debug session as root for a control plane node:

$ oc debug --as-root node/<node_name>

Change your root directory to /host in the debug shell:

sh-4.4# chroot /host

If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands:

$ export HTTP_PROXY=http://<your_proxy.example.com>:8080

$ export HTTPS_PROXY=https://<your_proxy.example.com>:8080

$ export NO_PROXY=<example.com>

Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to.

Tip: The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.

sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup

Example script output

In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:

- snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

Note: If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.

Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
6.12.6. Defragmenting etcd data
For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
Monitor these key metrics:
- etcd_server_quota_backend_bytes, which is the current quota limit
- etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction
- etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Defragmentation occurs automatically, but you can also trigger it manually.
Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user.
6.12.6.1. Automatic defragmentation
The etcd Operator automatically defragments disks. No manual intervention is needed.
Verify that the defragmentation process is successful by viewing one of these logs:
- etcd logs
- cluster-etcd-operator pod
- operator status error log
Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.
Example log output for successful defragmentation

etcd member has been defragmented: <member_name>, memberID: <member_id>

Example log output for unsuccessful defragmentation

failed defrag on member: <member_name>, memberID: <member_id>: <error_message>
6.12.6.2. Manual defragmentation
A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:
- When etcd uses more than 50% of its available space for more than 10 minutes
- When etcd is actively using less than 50% of its total database size for more than 10 minutes
You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the following PromQL expression:

(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024
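As a rough guide, the two alert conditions described above can also be approximated with expressions over the same metrics. The following PromQL sketches are illustrative only; the 50% thresholds mirror the conditions described above and are not the exact alerting rules that ship with the cluster:

# More than 50% of the backend quota is in use
etcd_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes > 0.50

# Less than 50% of the on-disk database is actively in use (a defragmentation candidate)
etcd_mvcc_db_total_size_in_use_in_bytes / etcd_mvcc_db_total_size_in_bytes < 0.50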
Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
Follow this procedure to defragment etcd data on each etcd member.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide

Example output

etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>

Choose a pod and run the following command to determine which etcd member is the leader:

$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table

Example output

Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com

Unset the ETCDCTL_ENDPOINTS environment variable:

sh-4.4# unset ETCDCTL_ENDPOINTS

Defragment the etcd member:

sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

Example output

Finished defragmenting etcd member[https://localhost:2379]

If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

Verify that the database size was reduced:

sh-4.4# etcdctl endpoint status -w table --cluster

Example output

This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

Check if there are any NOSPACE alarms:

sh-4.4# etcdctl alarm list

Example output

memberID:12345678912345678912 alarm:NOSPACE

Clear the alarms:

sh-4.4# etcdctl alarm disarm
6.12.7. Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or to restore a cluster that has lost the majority of control plane hosts.

If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for an etcd recovery procedure.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Prerequisites
- Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.
- A healthy control plane host to use as the recovery host.
- SSH access to control plane hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.

The kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

Important: If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

Copy the etcd backup directory to the recovery control plane host.

This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.

Stop the static pods on any other control plane nodes.

Note: You do not need to stop the static pods on the recovery host.
- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory by running:

$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp

Verify that the etcd pods are stopped by using:

$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-apiserver file out of the kubelet manifest directory by running:

$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp

Verify that the kube-apiserver containers are stopped by running:

$ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-controller-manager file out of the kubelet manifest directory by using:

$ sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp

Verify that the kube-controller-manager containers are stopped by running:

$ sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-scheduler file out of the kubelet manifest directory by using:

$ sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp

Verify that the kube-scheduler containers are stopped by using:

$ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the etcd data directory to a different location with the following example:

$ sudo mv -v /var/lib/etcd/ /tmp

If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps:

Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory:

$ sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp

Verify that any containers managed by the keepalived daemon are stopped:

$ sudo crictl ps --name keepalived

The output of this command should be empty. If it is not empty, wait a few minutes and check again.

Check if the control plane has any Virtual IPs (VIPs) assigned to it:

$ ip -o address | egrep '<api_vip>|<ingress_vip>'

For each reported VIP, run the following command to remove it:

$ sudo ip address del <reported_vip> dev <reported_vip_device>
- Repeat this step on each of the other control plane hosts that is not the recovery host.
- Access the recovery control plane host.
If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP:

$ ip -o address | grep <api_vip>

The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly.

If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.

Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup
Example script output

The cluster-restore.sh script must show that the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods are stopped and then started at the end of the restore process.

Note: The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.

Check the nodes to ensure they are in the Ready state.

Run the following command:

$ oc get nodes -w

Sample output

It can take several minutes for all nodes to report their state.

If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.

$ ssh -i <ssh-key-path> core@<master-hostname>

Sample pki directory

sh-4.4# pwd
/var/lib/kubelet/pki
sh-4.4# ls
kubelet-client-2022-04-28-11-24-09.pem  kubelet-server-2022-04-28-11-24-15.pem
kubelet-client-current.pem              kubelet-server-current.pem
Restart the kubelet service on all control plane hosts.
From the recovery host, run:
$ sudo systemctl restart kubelet.service

- Repeat this step on all other control plane hosts.

Approve the pending Certificate Signing Requests (CSRs):

Note: Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.

Get the list of current CSRs by running:

$ oc get csr

Example output

Review the details of a CSR to verify that it is valid by running:

$ oc describe csr <csr_name> 1

1 - <csr_name> is the name of a CSR from the list of current CSRs.

Approve each valid node-bootstrapper CSR by running:

$ oc adm certificate approve <csr_name>

For user-provisioned installations, approve each valid kubelet service CSR by running:

$ oc adm certificate approve <csr_name>
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running by using:

$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

Example output

3ad41b7908e32   36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009   About a minute ago   Running   etcd   0   7c05f8af362f0

From the recovery host, verify that the etcd pod is running by using:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s

If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
If you are using the OVNKubernetes network plugin, you must restart the ovnkube-controlplane pods.

Delete all of the ovnkube-controlplane pods by running:

$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane

Verify that all of the ovnkube-controlplane pods were redeployed by using:

$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane
If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. Use the following steps to restart OVN-Kubernetes pods on each node:
Important: Restart OVN-Kubernetes pods in the following order:
- The recovery control plane host
- The other control plane hosts (if available)
- The other nodes
Note: Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.

Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail.
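For example, one way to flip a webhook's failurePolicy temporarily is with a JSON patch. The configuration name below is a placeholder, and the patch assumes you want to change the first webhook entry in that configuration; adapt the index and resource kind (validating or mutating) to your own webhooks:

$ oc patch validatingwebhookconfiguration <webhook_configuration_name> --type=json \
  -p '[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'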
Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run:

$ sudo rm -f /var/lib/ovn-ic/etc/*.db
Restart the OpenVSwitch services. Access the node by using Secure Shell (SSH) and run the following command:

$ sudo systemctl restart ovs-vswitchd ovsdb-server

Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting:

$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>

Check the status of the OVN pods by running the following command:

$ oc get po -n openshift-ovn-kubernetes

If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting:

$ oc delete node <node>

Use SSH to log in to the OVN pod node with the Terminating status by running the following command:

$ ssh -i <ssh-key-path> core@<node>

Move all PEM files from the /var/lib/kubelet/pki directory by running the following command:

$ sudo mv /var/lib/kubelet/pki/* /tmp

Restart the kubelet service by running the following command:

$ sudo systemctl restart kubelet.service

Return to the recovery etcd machines by running the following command:

$ oc get csr

Example output

NAME         AGE    SIGNERNAME                      REQUESTOR                  CONDITION
csr-<uuid>   8m3s   kubernetes.io/kubelet-serving   system:node:<node_name>    Pending

Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR:

$ oc adm certificate approve csr-<uuid>

Verify that the node is back by running the following command:

$ oc get nodes
Verify that the ovnkube-node pod is running again with:

$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>

Note: It might take several minutes for the pods to restart.

Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.

If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".

Warning: Do not delete and re-create the machine for the recovery host.

If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:

Warning: Do not delete and re-create the machine for the recovery host.

For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide

Example output:

1 - This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.

Delete the machine of the lost control plane host by running:

$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1

1 - Specify the name of the control plane machine for the lost control plane host.

A new machine is automatically provisioned after deleting the machine of the lost control plane host.

Verify that a new machine has been created by running:

$ oc get machines -n openshift-machine-api -o wide

Example output:

1 - The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.

- Repeat these steps for each lost control plane host that is not the recovery host.
Turn off the quorum guard by entering:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

This command ensures that you can successfully re-create secrets and roll out the static pods.

In a separate terminal window within the recovery host, export the recovery kubeconfig file by running:

$ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig

Force etcd redeployment.

In the same terminal window where you exported the recovery kubeconfig file, run:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1

1 - The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

Turn the quorum guard back on by entering:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'

You can verify that the unsupportedConfigOverrides section is removed from the object by running:

$ oc get etcd/cluster -oyaml
Verify all nodes are updated to the latest revision.

In a terminal that has access to the cluster as a cluster-admin user, run:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7 1

1 - In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

In a terminal that has access to the cluster as a cluster-admin user, run:

Force a new rollout for kube-apiserver:

$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision.

$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7 1

1 - In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

Force a new rollout for the Kubernetes controller manager by running the following command:

$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision by running:

$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7 1

1 - In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

Force a new rollout for the kube-scheduler by running:

$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision by using:

$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7 1

1 - In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h

To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes.

On completion of the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.

Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates, as opposed to OAuth tokens. You can authenticate with this file by issuing the following command:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Issue the following command to display your authenticated user name:

$ oc whoami
6.12.8. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
- MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
- Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
- Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

To fix this problem, an administrator must:

- Manually remove the PVs with invalid devices.
- Remove symlinks from respective nodes.
- Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
6.13. Pod disruption budgets
Understand and configure pod disruption budgets.
6.13.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance.
A PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures).
A PodDisruptionBudget object's configuration consists of the following key parts:

- A label selector, which is a label query over a set of pods.
- An availability level, which specifies the minimum number of pods that must be available simultaneously, either:
  - minAvailable is the number of pods that must always be available, even during a disruption.
  - maxUnavailable is the number of pods that can be unavailable during a disruption.

Available refers to the number of pods that have the condition Ready=True. Ready=True refers to a pod that is able to serve requests and should be added to the load balancing pools of all matching services.

A maxUnavailable of 0% or 0, or a minAvailable of 100% or equal to the number of replicas, is permitted but can block nodes from being drained.
The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and to update one control plane node at a time. Do not change this value to 3 for the control plane pool.
You can check for pod disruption budgets across all projects with the following:
$ oc get poddisruptionbudget --all-namespaces
The following example contains some values that are specific to OpenShift Container Platform on AWS.
Example output
The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted.
Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements.
6.13.2. Specifying the number of pods that must be up with pod disruption budgets
You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time.
Procedure
To configure a pod disruption budget:
Create a YAML file with an object definition similar to the following:
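A minimal sketch of the minAvailable variant follows; the object name and label values are illustrative placeholders, and the numbered callouts are explained below:

apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2  2
  selector:  3
    matchLabels:
      name: my-pod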
1 - PodDisruptionBudget is part of the policy/v1 API group.
2 - The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
3 - A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {}, to select all pods in the project.
Or:
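For the maxUnavailable variant, an illustrative sketch (again with a placeholder name and labels):

apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  maxUnavailable: 25%  2
  selector:  3
    matchLabels:
      name: my-pod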
1 - PodDisruptionBudget is part of the policy/v1 API group.
2 - The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
3 - A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {}, to select all pods in the project.
Run the following command to add the object to the project:

$ oc create -f </path/to/file> -n <project_name>
6.13.3. Specifying the eviction policy for unhealthy pods
When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction.
You can choose one of the following policies:
- IfHealthyBudget: Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted.
- AlwaysAllow: Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget are met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status.

Note: It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed.
Procedure
Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy:

Example pod-disruption-budget.yaml file
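The following is an illustrative sketch; the object name, selector, and minAvailable value are placeholders, and the numbered callout is explained below:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      name: my-pod
  unhealthyPodEvictionPolicy: AlwaysAllow 1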
1 - Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty.
Create the PodDisruptionBudget object by running the following command:

$ oc create -f pod-disruption-budget.yaml
With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB.
Chapter 7. Postinstallation node tasks
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks.
7.1. Adding RHEL compute machines to an OpenShift Container Platform cluster
Understand and work with RHEL compute nodes.
7.1.1. About adding RHEL compute nodes to a cluster
In OpenShift Container Platform 4.15, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster.
If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks.
For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default.
- Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
- Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
7.1.2. System requirements for RHEL compute nodes
The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements:
- You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
- Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Each system must meet the following hardware requirements:
- Physical or virtual system, or an instance running on a public or private IaaS.
Base operating system: Use RHEL 8.8 or a later version with the minimal installation option.
Important: Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported.
If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
- If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation.
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- NetworkManager 1.0 or later.
- 1 vCPU.
- Minimum 8 GB RAM.
-
Minimum 15 GB hard disk space for the file system containing
/var/
. -
Minimum 1 GB hard disk space for the file system containing
/usr/local/bin/
. Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library.
-
Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the
disk.enableUUID=TRUE
attribute must be set. - Each system must be able to access the cluster’s API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster’s API service endpoints.
-
Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the
7.1.2.1. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
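For example, after you have confirmed that a pending kubelet-serving request is legitimate, you can approve it with the oc adm certificate approve command used elsewhere in this document. Listing the pending requests first is a simple starting point for whatever verification process you implement:

$ oc get csr | grep Pending

$ oc adm certificate approve <csr_name>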
7.1.3. Preparing the machine to run the playbook
Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.15 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it.
Prerequisites
-
Install the OpenShift CLI (
oc
) on the machine that you run the playbook on. -
Log in as a user with
cluster-admin
permission.
Procedure
-
Ensure that the
kubeconfig
file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. - Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
Important: If you use SSH key-based authentication, you must manage the key with an SSH agent.
If you have not already done so, register the machine with RHSM and attach a pool with an
OpenShift
subscription to it:Register the machine with RHSM:
subscription-manager register --username=<user_name> --password=<password>
# subscription-manager register --username=<user_name> --password=<password>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Pull the latest subscription data from RHSM:
subscription-manager refresh
# subscription-manager refresh
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the available subscriptions:
subscription-manager list --available --matches '*OpenShift*'
# subscription-manager list --available --matches '*OpenShift*'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
subscription-manager attach --pool=<pool_id>
# subscription-manager attach --pool=<pool_id>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Enable the repositories required by OpenShift Container Platform 4.15:
subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.15-for-rhel-8-x86_64-rpms"
# subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.15-for-rhel-8-x86_64-rpms"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the required packages, including
openshift-ansible
:yum install openshift-ansible openshift-clients jq
# yum install openshift-ansible openshift-clients jq
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
openshift-ansible
package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. Theopenshift-clients
provides theoc
CLI, and thejq
package improves the display of JSON output on your command line.
7.1.4. Preparing a RHEL compute node
Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
On each host, register with RHSM:
  # subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
  # subscription-manager refresh
List the available subscriptions:
  # subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
  # subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
  Disable all the enabled RHSM repositories:
  # subscription-manager repos --disable="*"
  List the remaining yum repositories and note their names under repo id, if any:
  # yum repolist
  Use yum-config-manager to disable the remaining yum repositories:
  # yum-config-manager --disable <repo_id>
  Alternatively, disable all repositories:
  # yum-config-manager --disable \*
  Note that this might take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 4.15:
  # subscription-manager repos \
      --enable="rhel-8-for-x86_64-baseos-rpms" \
      --enable="rhel-8-for-x86_64-appstream-rpms" \
      --enable="rhocp-4.15-for-rhel-8-x86_64-rpms" \
      --enable="fast-datapath-for-rhel-8-x86_64-rpms"
Stop and disable firewalld on the host:
  # systemctl disable --now firewalld.service
  Note: You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
7.1.5. Adding a RHEL compute machine to your cluster
You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.15 cluster.
Prerequisites
- You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
- You prepared the RHEL hosts for installation.
Procedure
Perform the following steps on the machine that you prepared to run the playbook:
Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables (a minimal example inventory follows this list):
- 1: Specify the user name that runs the Ansible tasks on the remote compute machines.
- 2: If you do not specify root for the ansible_user, you must set ansible_become to True and assign the user sudo permissions.
- 3: Specify the path and file name of the kubeconfig file for your cluster.
- 4: List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine.
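The inventory file itself is not reproduced on this page. The following is a minimal sketch of what it could contain, assuming the [new_workers] group name used by the openshift-ansible scaleup playbook; the host names and kubeconfig path are placeholders:
  [all:vars]
  ansible_user=root
  #ansible_become=True
  openshift_kubeconfig_path="~/.kube/config"

  [new_workers]
  mycluster-rhel8-0.example.com
  mycluster-rhel8-1.example.com
Uncomment ansible_become=True only if ansible_user is not root.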
Navigate to the Ansible playbook directory:
  $ cd /usr/share/ansible/openshift-ansible
Run the playbook:
  $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml
For <path>, specify the path to the Ansible inventory file that you created.
7.1.6. Required parameters for the Ansible hosts file
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
Parameter | Description | Values
---|---|---
ansible_user | The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. | A user name on the system. The default value is root.
ansible_become | If the value of ansible_user is not root, you must set ansible_become to True, and the user that you specify as ansible_user must have passwordless sudo permissions. | True
openshift_kubeconfig_path | Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. | The path and name of the configuration file.
7.1.7. Optional: Removing RHCOS compute machines from a cluster
After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources.
Prerequisites
- You have added RHEL compute machines to your cluster.
Procedure
View the list of machines and record the node names of the RHCOS compute machines:
oc get nodes -o wide
$ oc get nodes -o wide
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For each RHCOS compute machine, delete the node:
Mark the node as unschedulable by running the
oc adm cordon
command:oc adm cordon <node_name>
$ oc adm cordon <node_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the node name of one of the RHCOS compute machines.
Drain all the pods from the node:
oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the node name of the RHCOS compute machine that you isolated.
Delete the node:
oc delete nodes <node_name>
$ oc delete nodes <node_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the node name of the RHCOS compute machine that you drained.
Review the list of compute machines to ensure that only the RHEL nodes remain:
oc get nodes -o wide
$ oc get nodes -o wide
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Remove the RHCOS machines from the load balancer for your cluster’s compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
7.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster
You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal.
Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines.
7.2.1. Prerequisites
- You installed a cluster on bare metal.
- You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure.
7.2.2. Creating RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- You must have the OpenShift CLI (oc) installed.
Procedure
Extract the Ignition config file from the cluster by running the following command:
oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Upload the
worker.ign
Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node:
curl -k http://<HTTP_server>/worker.ign
$ curl -k http://<HTTP_server>/worker.ign
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can access the ISO image for booting your new machine by running to following command:
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
NoteYou can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the
coreos-installer
command as outlined in the following steps, instead of adding kernel arguments.Run the
coreos-installer
command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest>
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest>
1 2 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- You must run the
coreos-installer
command by usingsudo
, because thecore
user does not have the required root privileges to perform the installation. - 2
- The
--ignition-hash
option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node.<digest>
is the Ignition config file SHA512 digest obtained in a preceding step.
NoteIf you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running
coreos-installer
.The following example initializes a bootstrap node installation to the
/dev/sda
device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Monitor the progress of the RHCOS installation on the console of the machine.
ImportantEnsure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
- Continue to create more compute machines for your cluster.
7.2.3. Creating RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
- You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
- If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE (an example PXE configuration follows the callout descriptions):
- 1: Specify the location of the live kernel file that you uploaded to your HTTP server.
- 2: Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
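The PXE configuration itself is not reproduced on this page. The following sketch, modeled on the iPXE entry later in this section, shows where the callouts apply; the label name, TIMEOUT and PROMPT values, and the /dev/sda install device are illustrative assumptions:
  DEFAULT pxeboot
  TIMEOUT 20
  PROMPT 0
  LABEL pxeboot
      KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
      APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign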
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?.
For iPXE (x86_64 + aarch64):
  kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
  initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
  boot
- 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP server. The
kernel
parameter value is the location of thekernel
file, theinitrd=main
argument is needed for booting on UEFI systems, thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your HTTP server.
Note: This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more
console=
arguments to thekernel
line. For example, addconsole=tty0 console=ttyS0
to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section.NoteTo network boot the CoreOS
kernel
onaarch64
architecture, you need to use a version of iPXE build with theIMAGE_GZIP
option enabled. SeeIMAGE_GZIP
option in iPXE.For PXE (with UEFI and GRUB as second stage) on
aarch64
:menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign initrd rhcos-<version>-live-initramfs.<architecture>.img }
menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img
3 }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The
kernel
parameter value is the location of thekernel
file on your TFTP server. Thecoreos.live.rootfs_url
parameter value is the location of therootfs
file, and thecoreos.inst.ignition_url
parameter value is the location of the worker Ignition config file on your HTTP Server. - 2
- If you use multiple NICs, specify a single interface in the
ip
option. For example, to use DHCP on a NIC that is namedeno1
, setip=eno1:dhcp
. - 3
- Specify the location of the
initramfs
file that you uploaded to your TFTP server.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
7.2.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5
NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output lists all of the machines that you created.
NoteThe preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending
orApproved
status for each machine that you added to the cluster:oc get csr
$ oc get csr
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approver
if the Kubelet requests a new certificate with identical parameters.NoteFor clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec
,oc rsh
, andoc logs
commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapper
service account in thesystem:node
orsystem:admin
groups, and confirm the identity of the node.To approve them individually, run the following command for each valid CSR:
oc adm certificate approve <csr_name>
$ oc adm certificate approve <csr_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSome Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
oc get csr
$ oc get csr
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the remaining CSRs are not approved, and are in the
Pending
status, approve the CSRs for your cluster machines:To approve them individually, run the following command for each valid CSR:
oc adm certificate approve <csr_name>
$ oc adm certificate approve <csr_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
After all client and server CSRs have been approved, the machines have the
Ready
status. Verify this by running the following command:oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIt can take a few minutes after approval of the server CSRs for the machines to transition to the
Ready
status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
7.2.5. Adding a new RHCOS worker node with a custom /var partition in AWS
OpenShift Container Platform supports partitioning devices during installation by using machine configs that are processed during the bootstrap. However, if you use /var
partitioning, the device name must be determined at installation and cannot be changed. You cannot add different instance types as nodes if they have a different device naming schema. For example, if you configured the /var
partition with the default AWS device name for m4.large
instances, /dev/xvdb
, you cannot directly add an AWS m5.large
instance, as m5.large
instances use a /dev/nvme1n1
device by default. The device might fail to partition due to the different naming schema.
The procedure in this section shows how to add a new Red Hat Enterprise Linux CoreOS (RHCOS) compute node with an instance that uses a different device name from what was configured at installation. You create a custom user data secret and configure a new compute machine set. These steps are specific to an AWS cluster. The principles apply to other cloud deployments also. However, the device naming schema is different for other deployments and should be determined on a per-case basis.
Procedure
On a command line, change to the
openshift-machine-api
namespace:oc project openshift-machine-api
$ oc project openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a new secret from the
worker-user-data
secret:Export the
userData
section of the secret to a text file:oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt
$ oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the text file to add the
storage
,filesystems
, andsystemd
stanzas for the partitions you want to use for the new node. You can specify any Ignition configuration parameters as needed.NoteDo not change the values in the
ignition
stanza.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specifies an absolute path to the AWS block device.
- 2
- Specifies the size of the data partition in Mebibytes.
- 3
- Specifies the start of the partition in Mebibytes. When adding a data partition to the boot disk, a minimum value of 25000 MB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
- 4
- Specifies an absolute path to the
/var
partition. - 5
- Specifies the filesystem format.
- 6
- Specifies the mount-point of the filesystem while Ignition is running relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same.
- 7
- Defines a systemd mount unit that mounts the
/dev/disk/by-partlabel/var
device to the/var
partition.
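The edited userData.txt content is not reproduced on this page. The following is a sketch of the storage, filesystems, and systemd stanzas described by the callouts, assuming a secondary /dev/nvme1n1 device and an illustrative 50000 MiB partition size; keep the ignition stanza exactly as it was exported from the secret:
  {
    "ignition": { "version": "3.2.0" },
    "storage": {
      "disks": [
        {
          "device": "/dev/nvme1n1",
          "partitions": [
            { "label": "var", "sizeMiB": 50000, "startMiB": 0 }
          ]
        }
      ],
      "filesystems": [
        { "device": "/dev/disk/by-partlabel/var", "format": "xfs", "path": "/var" }
      ]
    },
    "systemd": {
      "units": [
        {
          "name": "var.mount",
          "enabled": true,
          "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var\nWhat=/dev/disk/by-partlabel/var\n[Install]\nWantedBy=local-fs.target"
        }
      ]
    }
  }
If you instead add the data partition to the boot disk, set the partition start to at least the recommended 25000 MiB.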
Extract the disableTemplating section from the worker-user-data secret to a text file:
  $ oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the new user data secret file from the two text files. This user data secret passes the additional node partition information in the
userData.txt
file to the newly created node.oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt
$ oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a new compute machine set for the new node:
Create a new compute machine set YAML file, similar to the following, which is configured for AWS. Add the required partitions and the newly-created user data secret:
TipUse an existing compute machine set as a template and change the parameters as needed for the new node.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the compute machine set:
$ oc create -f <file-name>.yaml
$ oc create -f <file-name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The machines might take a few moments to become available.
Verify that the new partition and nodes are created:
Verify that the compute machine set is created:
oc get machineset
$ oc get machineset
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s
NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This is the new compute machine set.
Verify that the new node is created:
oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This is the new node.
Verify that the custom
/var
partition is created on the new node:oc debug node/<node-name> -- chroot /host lsblk
$ oc debug node/<node-name> -- chroot /host lsblk
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk
$ oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The
nvme1n1
device is mounted to the/var
partition.
7.3. Deploying machine health checks
Understand and deploy machine health checks.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
oc get infrastructure cluster -o jsonpath='{.status.platform}'
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
7.3.1. About machine health checks
You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets.
To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady
status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor.
The controller that observes a MachineHealthCheck
resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and a new machine is created to take its place. When a machine is deleted, you see a machine deleted
event.
To limit the disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops, which allows for manual intervention.
Consider the timeouts carefully, accounting for workloads and requirements.
- Long timeouts can result in long periods of downtime for the workload on the unhealthy machine.
-
Too short timeouts can result in a remediation loop. For example, the timeout for checking the
NotReady
status must be long enough to allow the machine to complete the startup process.
To stop the check, remove the resource.
7.3.1.1. Limitations when deploying machine health checks
There are limitations to consider before deploying a machine health check:
- Only machines owned by a machine set are remediated by a machine health check.
- If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately.
-
If the corresponding node for a machine does not join the cluster after the
nodeStartupTimeout
, the machine is remediated. -
A machine is remediated immediately if the
Machine
resource phase isFailed
.
7.3.2. Sample MachineHealthCheck resource
The MachineHealthCheck resource for all cloud-based installation types other than bare metal resembles the following YAML file (a sample appears after the callout descriptions below):
- 1
- Specify the name of the machine health check to deploy.
- 2 3
- Specify a label for the machine pool that you want to check.
- 4
- Specify the machine set to track in
<cluster_name>-<label>-<zone>
format. For example,prod-node-us-east-1a
. - 5 6
- Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine.
- 7
- Specify the number of machines allowed to be remediated concurrently in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by
maxUnhealthy
, remediation is not performed. - 8
- Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy.
The matchLabels
are examples only; you must map your machine groups based on your specific needs.
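The sample resource is not reproduced on this page. The following sketch is reconstructed from the callouts above; the name, unhealthy-condition timeouts, maxUnhealthy percentage, and nodeStartupTimeout values are illustrative:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineHealthCheck
  metadata:
    name: example
    namespace: openshift-machine-api
  spec:
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone>
    unhealthyConditions:
      - type: "Ready"
        timeout: "300s"
        status: "False"
      - type: "Ready"
        timeout: "300s"
        status: "Unknown"
    maxUnhealthy: "40%"
    nodeStartupTimeout: "10m"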
7.3.2.1. Short-circuiting machine health check remediation
Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy
field in the MachineHealthCheck
resource.
If the user defines a value for the maxUnhealthy
field, before remediating any machines, the MachineHealthCheck
compares the value of maxUnhealthy
with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy
limit.
If maxUnhealthy
is not set, the value defaults to 100%
and the machines are remediated regardless of the state of the cluster.
The appropriate maxUnhealthy
value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck
covers. For example, you can use the maxUnhealthy
value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy
setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
If you configure a MachineHealthCheck
resource for the control plane, set the value of maxUnhealthy
to 1
.
This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress.
If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish.
The maxUnhealthy
field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy
value.
7.3.2.1.1. Setting maxUnhealthy by using an absolute value
If maxUnhealthy
is set to 2
:
- Remediation will be performed if 2 or fewer nodes are unhealthy
- Remediation will not be performed if 3 or more nodes are unhealthy
These values are independent of how many machines are being checked by the machine health check.
7.3.2.1.2. Setting maxUnhealthy by using percentages
If maxUnhealthy
is set to 40%
and there are 25 machines being checked:
- Remediation will be performed if 10 or fewer nodes are unhealthy
- Remediation will not be performed if 11 or more nodes are unhealthy
If maxUnhealthy
is set to 40%
and there are 6 machines being checked:
- Remediation will be performed if 2 or fewer nodes are unhealthy
- Remediation will not be performed if 3 or more nodes are unhealthy
When applying the maxUnhealthy percentage to the number of machines that are checked does not produce a whole number, the allowed number of unhealthy machines is rounded down.
7.3.3. Creating a machine health check resource
You can create a MachineHealthCheck
resource for machine sets in your cluster.
You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets.
Prerequisites
- Install the oc command-line interface.
Procedure
-
Create a
healthcheck.yml
file that contains the definition of your machine health check. Apply the
healthcheck.yml
file to your cluster:oc apply -f healthcheck.yml
$ oc apply -f healthcheck.yml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.3.4. Scaling a compute machine set manually
To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the compute machine sets that are in the cluster by running the following command:
oc get machinesets -n openshift-machine-api
$ oc get machinesets -n openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The compute machine sets are listed in the form of
<clusterid>-worker-<aws-region-az>
.View the compute machines that are in the cluster by running the following command:
oc get machine -n openshift-machine-api
$ oc get machine -n openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Set the annotation on the compute machine that you want to delete by running the following command:
oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale the compute machine set by running one of the following commands:
oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Or:
oc edit machineset <machineset> -n openshift-machine-api
$ oc edit machineset <machineset> -n openshift-machine-api
Tip: You can alternatively apply the following YAML to scale the compute machine set:
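The YAML referenced by the tip is not reproduced on this page. A minimal sketch, assuming only spec.replicas needs to change:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    name: <machineset>
    namespace: openshift-machine-api
  spec:
    replicas: 2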
You can scale the compute machine set up or down. It takes several minutes for the new machines to be available.
ImportantBy default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine.
You can skip draining the node by annotating
machine.openshift.io/exclude-node-draining
in a specific machine.
Verification
Verify the deletion of the intended machine by running the following command:
oc get machines
$ oc get machines
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.3.5. Understanding the difference between compute machine sets and the machine config pool
MachineSet
objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider.
The MachineConfigPool
object allows MachineConfigController
components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool
object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool.
The NodeSelector
object can be replaced with a reference to the MachineSet
object.
7.4. Recommended node host practices
The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore
and maxPods
.
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:
- Increased CPU utilization.
- Slow pod scheduling.
- Potential out-of-memory scenarios, depending on the amount of memory in the node.
- Exhausting the pool of IP addresses.
- Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running.
Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are a large number of I/O-intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload.
The podsPerCore
parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore
is set to 10
on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40
.
kubeletConfig:
  podsPerCore: 10
Setting podsPerCore
to 0
disables this limit. The default is 0
. The value of the podsPerCore
parameter cannot exceed the value of the maxPods
parameter.
The maxPods
parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node.
kubeletConfig:
  maxPods: 250
7.4.1. Creating a KubeletConfig CR to edit kubelet parameters
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller
added to the Machine Config Controller (MCC). This lets you use a KubeletConfig
custom resource (CR) to edit the kubelet parameters.
As the fields in the kubeletConfig
object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig
object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation.
Consider the following guidance:
-
Edit an existing
KubeletConfig
CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. -
Create one
KubeletConfig
CR for each machine config pool with all the config changes you want for that pool. -
As needed, create multiple
KubeletConfig
CRs with a limit of 10 per cluster. For the firstKubeletConfig
CR, the Machine Config Operator (MCO) creates a machine config appended withkubelet
. With each subsequent CR, the controller creates anotherkubelet
machine config with a numeric suffix. For example, if you have akubelet
machine config with a-2
suffix, the nextkubelet
machine config is appended with-3
.
If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector
must match the name of the custom machine config pool.
For example, because the following custom machine config pool is named infra, the custom role must also be infra:
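The pool definition is not reproduced on this page. A sketch of such a custom machine config pool, assuming the usual worker-based custom pool layout; the node selector label is illustrative:
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfigPool
  metadata:
    name: infra
  spec:
    machineConfigSelector:
      matchExpressions:
        - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""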
If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3
machine config before deleting the kubelet-2
machine config.
If you have a machine config with a kubelet-9
suffix, and you create another KubeletConfig
CR, a new machine config is not created, even if there are fewer than 10 kubelet
machine configs.
Example KubeletConfig CR
  $ oc get kubeletconfig
  NAME                 AGE
  set-kubelet-config   15m
Example showing a KubeletConfig machine config
  $ oc get mc | grep kubelet
  ...
  99-worker-generated-kubelet-1   b5c5119de007945b6fe6fb215db3b8e2ceb12511   3.4.0   26m
  ...
The following procedure is an example that shows how to configure the maximum number of pods per node, the maximum PIDs per node, and the maximum container log size on the worker nodes.
Prerequisites
Obtain the label associated with the static
MachineConfigPool
CR for the type of node you want to configure. Perform one of the following steps:View the machine config pool:
oc describe machineconfigpool <name>
$ oc describe machineconfigpool <name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc describe machineconfigpool worker
$ oc describe machineconfigpool worker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- If a label has been added it appears under
labels
.
If the label is not present, add a key/value pair:
oc label machineconfigpool worker custom-kubelet=set-kubelet-config
$ oc label machineconfigpool worker custom-kubelet=set-kubelet-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Procedure
View the available machine configuration objects that you can select:
oc get machineconfig
$ oc get machineconfig
Copy to Clipboard Copied! Toggle word wrap Toggle overflow By default, the two kubelet-related configs are
01-master-kubelet
and01-worker-kubelet
.Check the current value for the maximum pods per node:
oc describe node <node_name>
$ oc describe node <node_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94
$ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Look for
value: pods: <value>
in theAllocatable
stanza:Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the worker nodes as needed:
Create a YAML file similar to the following that contains the kubelet configuration (a sketch of this file appears after this list):
  Important: Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool.
  - Use podPidsLimit to set the maximum number of PIDs in any pod.
  - Use containerLogMaxSize to set the maximum size of the container log file before it is rotated.
  - Use maxPods to set the maximum pods per node.
  Note: The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node.
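The kubelet configuration file is not reproduced on this page. The following sketch, saved for example as change-maxPods-cr.yaml to match the create command later in this procedure, selects the pool labeled custom-kubelet: set-kubelet-config that you add in the next step; the podPidsLimit, containerLogMaxSize, and maxPods values are illustrative assumptions:
  apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    name: set-kubelet-config
  spec:
    machineConfigPoolSelector:
      matchLabels:
        custom-kubelet: set-kubelet-config
    kubeletConfig:
      podPidsLimit: 8192
      containerLogMaxSize: 50Mi
      maxPods: 500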
Update the machine config pool for workers with the label:
oc label machineconfigpool worker custom-kubelet=set-kubelet-config
$ oc label machineconfigpool worker custom-kubelet=set-kubelet-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
KubeletConfig
object:oc create -f change-maxPods-cr.yaml
$ oc create -f change-maxPods-cr.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the
KubeletConfig
object is created:oc get kubeletconfig
$ oc get kubeletconfig
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME AGE set-kubelet-config 15m
NAME AGE set-kubelet-config 15m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Verify that the changes are applied to the node:
Check on a worker node that the
maxPods
value changed:oc describe node <node_name>
$ oc describe node <node_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Locate the
Allocatable
stanza:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- In this example, the
pods
parameter should report the value you set in theKubeletConfig
object.
Verify the change in the
KubeletConfig
object:oc get kubeletconfigs set-kubelet-config -o yaml
$ oc get kubeletconfigs set-kubelet-config -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This should show a status of
True
andtype:Success
, as shown in the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.4.3. Control plane node sizing
The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused test, or Cluster-density. This test creates the following objects across a given number of namespaces:
- 1 image stream
- 1 build
-
5 deployments, with 2 pod replicas in a
sleep
state, mounting 4 secrets, 4 config maps, and 1 downward API volume each - 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the previous deployments
- 1 route pointing to the first of the previous services
- 10 secrets containing 2048 random string characters
- 10 config maps containing 2048 random string characters
Number of worker nodes | Cluster-density (namespaces) | CPU cores | Memory (GB) |
---|---|---|---|
24 | 500 | 4 | 16 |
120 | 1000 | 8 | 32 |
252 | 4000 | 16, but 24 if using the OVN-Kubernetes network plug-in | 64, but 128 if using the OVN-Kubernetes network plug-in |
501, but untested with the OVN-Kubernetes network plug-in | 4000 | 16 | 96 |
The data from the table above is based on an OpenShift Container Platform cluster running on AWS, using r5.4xlarge instances as control plane nodes and m5.2xlarge instances as worker nodes.
On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operator updates. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources.
The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running
phase.
Operator Lifecycle Manager (OLM) runs on the control plane nodes, and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing.
Number of namespaces | OLM memory at idle state (GB) | OLM memory with 5 user operators installed (GB) |
---|---|---|
500 | 0.823 | 1.7 |
1000 | 1.2 | 2.5 |
1500 | 1.7 | 3.2 |
2000 | 2 | 4.4 |
3000 | 2.7 | 5.6 |
4000 | 3.8 | 7.6 |
5000 | 4.2 | 9.02 |
6000 | 5.8 | 11.3 |
7000 | 6.6 | 12.9 |
8000 | 6.9 | 14.8 |
9000 | 8 | 17.7 |
10,000 | 9.9 | 21.6 |
You can modify the control plane node size in a running OpenShift Container Platform 4.15 cluster for the following configurations only:
- Clusters installed with a user-provisioned installation method.
- AWS clusters installed with an installer-provisioned infrastructure installation method.
- Clusters that use a control plane machine set to manage control plane machines.
For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation.
The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin.
In OpenShift Container Platform 4.15, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration.
7.4.4. Setting up CPU Manager
To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes.
Procedure
Label a node by running the following command:
oc label node perf-node.example.com cpumanager=true
# oc label node perf-node.example.com cpumanager=true
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To enable CPU Manager for all compute nodes, edit the CR by running the following command:
oc edit machineconfigpool worker
# oc edit machineconfigpool worker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add the
custom-kubelet: cpumanager-enabled
label tometadata.labels
section.metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled
metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
KubeletConfig
,cpumanager-kubeletconfig.yaml
, custom resource (CR). Refer to the label created in the previous step to have the correct nodes updated with the new kubelet config. See themachineConfigPoolSelector
section:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify a policy:
  - none. This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy.
  - static. This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static, you must use a lowercase s.
- 2
- Optional. Specify the CPU Manager reconcile frequency. The default is 5s.
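The body of the cpumanager-kubeletconfig.yaml CR was not preserved in this extraction. The following is a minimal sketch consistent with the callouts above; the machineConfigPoolSelector matches the custom-kubelet: cpumanager-enabled label added to the worker machine config pool in the previous step.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static        # 1
    cpuManagerReconcilePeriod: 5s   # 2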
Create the dynamic kubelet config by running the following command:
# oc create -f cpumanager-kubeletconfig.yaml

This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. Enabling CPU Manager itself does not require a reboot.

Check for the merged kubelet config by running the following command:

# oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7

Example output
Check the compute node for the updated kubelet.conf file by running the following commands:

# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager

Example output

cpuManagerPolicy: static        1
cpuManagerReconcilePeriod: 5s   2

Create a project by running the following command:
$ oc new-project <project_name>

Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod:

# cat cpumanager-pod.yaml
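The pod spec from the original example was not preserved here. The following is a minimal sketch of a cpumanager-pod.yaml that satisfies the requirements above: integer CPU requests equal to limits (Guaranteed QoS) and a node selector matching the cpumanager=true label applied earlier. The image reference and command are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: cpumanager
spec:
  containers:
  - name: cpumanager
    image: registry.example.com/ubi-minimal:latest   # placeholder image
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: 1
        memory: "1G"
      limits:
        cpu: 1
        memory: "1G"
  nodeSelector:
    cpumanager: "true"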
Create the pod:

# oc create -f cpumanager-pod.yaml
Verification
Verify that the pod is scheduled to the node that you labeled by running the following command:
# oc describe pod cpumanager

Example output

Verify that a CPU has been exclusively assigned to the pod by running the following command:

# oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2

Example output

NAMESPACE   NAME               CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
cpuman      cpumanager-mlrrz   1 (28%)        1 (28%)      1G (13%)          1G (13%)        27m
Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands:

# oc debug node/perf-node.example.com

sh-4.2# systemctl status | grep -B5 pause

Note: If the output returns multiple pause process entries, you must identify the correct pause process.
Example output
Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands:

# cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope

# for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "$i "; cat $i ; done

Note: Pods of other QoS tiers end up in child cgroups of the parent kubepods.

Example output

cpuset.cpus 1
tasks 32706
Check the allowed CPU list for the task by running the following command:

# grep ^Cpus_allowed_list /proc/32706/status

Example output

Cpus_allowed_list: 1

Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands:

# cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus

# oc describe node perf-node.example.com

Example output
This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled:

NAME               READY   STATUS    RESTARTS   AGE
cpumanager-6cqz7   1/1     Running   0          33m
cpumanager-7qc2t   0/1     Pending   0          11s
7.5. Huge pages
Understand and configure huge pages.
7.5.1. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.
7.5.2. How huge pages are consumed by apps
Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size.
Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size>, where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi. Unlike CPU or memory, huge pages do not support over-commitment.
- 1
- Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly.
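The pod spec that callout 1 refers to was not preserved in this extraction. The following is a minimal sketch of a pod consuming 2Mi huge pages through the hugepages-2Mi resource; the image, mount path, and amounts are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: registry.example.com/example/app:latest   # placeholder image
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi   # 1
        memory: "1Gi"
        cpu: "1"
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages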
Allocating huge pages of a specific size
Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size>. The <size> value must be specified in bytes with an optional scale suffix [kKmMgG]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter.
Huge page requirements
- Huge page requests must equal the limits. This is the default if limits are specified, but requests are not.
- Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration.
- EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request.
- Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches /proc/sys/vm/hugetlb_shm_group.
7.5.3. Configuring huge pages at boot time
Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes.
Procedure
To minimize node reboots, the order of the steps below needs to be followed:
Label all nodes that need the same huge pages setting with the same label:

$ oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=

Create a file with the following content and name it hugepages-tuned-boottime.yaml:
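The manifest body was not preserved in this extraction. The following sketch of a Tuned object is consistent with the outcome described below (50 × 2Mi huge pages on nodes in the worker-hp pool); the exact profile text may differ from the original sample.

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: hugepages
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Boot time configuration for hugepages
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50
    name: openshift-node-hugepages
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-hp"
    priority: 30
    profile: openshift-node-hugepages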
Create the Tuned hugepages object:

$ oc create -f hugepages-tuned-boottime.yaml
Create a file with the following content and name it hugepages-mcp.yaml:
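The manifest body was not preserved here. A sketch of a MachineConfigPool that targets the nodes labeled with node-role.kubernetes.io/worker-hp in the first step might look like this:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp
  labels:
    worker-hp: ""
spec:
  machineConfigSelector:
    matchExpressions:
    - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-hp]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-hp: ""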
Create the machine config pool:
$ oc create -f hugepages-mcp.yaml
Given enough non-fragmented memory, all the nodes in the worker-hp
machine config pool should now have 50 2Mi huge pages allocated.
$ oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}"
100Mi
The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes.
7.6. Understanding device plugins
The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them.
OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors.
A device plugin is a gRPC service running on the nodes (external to the kubelet) that is responsible for managing specific hardware resources. Any device plugin must support the following remote procedure calls (RPCs):
7.6.1. Example device plugins
For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go.
7.6.2. Methods for deploying a device plugin
- Daemon sets are the recommended approach for device plugin deployments.
- Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager.
- Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context.
- More specific details regarding deployment steps can be found with each device plugin implementation.
7.6.3. Understanding the Device Manager
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins.
You can advertise specialized hardware without requiring any upstream code changes.
OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors.
Device Manager advertises devices as Extended Resources. User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource.
Upon start, the device plugin registers itself with Device Manager by invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock socket and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests.
Device Manager, while processing a new registration request, invokes ListAndWatch
remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection.
While handling a new pod admission request, the Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and has free allocatable devices according to its local cache, the Allocate RPC is invoked at that particular device plugin.
Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation.
7.6.4. Enabling Device Manager
Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes.
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins.
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command. Perform one of the following steps:

View the machine config:

# oc describe machineconfig <name>

For example:

# oc describe machineconfig 00-worker

Example output

Name:         00-worker
Namespace:
Labels:       machineconfiguration.openshift.io/role=worker   1

- 1
- Label required for the Device Manager.
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a Device Manager CR
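The sample CR body was not preserved in this extraction. The following sketch follows the same KubeletConfig pattern used elsewhere in this chapter, selecting the machine config pool through a hypothetical custom-kubelet: devicemgr-enabled label that you would add to the target pool; the DevicePlugins feature gate mirrors long-standing upstream examples, and Device Manager is enabled by default in current Kubernetes versions.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: devicemgr
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: devicemgr-enabled   # hypothetical label on the target machine config pool
  kubeletConfig:
    feature-gates:
    - DevicePlugins=true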
Create the Device Manager:

$ oc create -f devicemgr.yaml
Example output

kubeletconfig.machineconfiguration.openshift.io/devicemgr created

- Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled.
7.7. Taints and tolerations
Understand and work with taints and tolerations.
7.7.1. Understanding taints and tolerations
A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration.
You apply taints to a node through the Node specification (NodeSpec) and apply tolerations to a pod through the Pod specification (PodSpec). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint.
Example taint in a node specification
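The original YAML was not preserved here. A minimal sketch of a taint in a Node specification, using the key1/value1/NoExecute values referenced later in this section:

apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  - key: key1
    value: value1
    effect: NoExecute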
Example toleration in a Pod spec
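The original YAML was not preserved here. A minimal sketch of a matching toleration in a Pod spec; the pod name and image are placeholders, and tolerationSeconds is optional and only applies to the NoExecute effect.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod   # placeholder name
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600
  containers:
  - name: app
    image: registry.example.com/example/app:latest   # placeholder image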
Taints and tolerations consist of a key, value, and effect.
Parameter | Description |
---|---|
key | The taint key. |
value | The taint value. |
effect | The effect is one of the following: NoSchedule, PreferNoSchedule, or NoExecute. |
operator | Equal or Exists. |
If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.
A toleration matches a taint:
If the operator parameter is set to Equal:
- the key parameters are the same;
- the value parameters are the same;
- the effect parameters are the same.

If the operator parameter is set to Exists:
- the key parameters are the same;
- the effect parameters are the same.
The following taints are built into OpenShift Container Platform:
- node.kubernetes.io/not-ready: The node is not ready. This corresponds to the node condition Ready=False.
- node.kubernetes.io/unreachable: The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown.
- node.kubernetes.io/memory-pressure: The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True.
- node.kubernetes.io/disk-pressure: The node has disk pressure issues. This corresponds to the node condition DiskPressure=True.
- node.kubernetes.io/network-unavailable: The node network is unavailable.
- node.kubernetes.io/unschedulable: The node is unschedulable.
- node.cloudprovider.kubernetes.io/uninitialized: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
- node.kubernetes.io/pid-pressure: The node has pid pressure. This corresponds to the node condition PIDPressure=True.

Important: OpenShift Container Platform does not set a default pid.available evictionHard.
7.7.2. Adding taints and tolerations
You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure
Add a toleration to a pod by editing the Pod spec to include a tolerations stanza:

Sample pod configuration file with an Equal operator

For example:

Sample pod configuration file with an Exists operator

- 1
- The Exists operator does not take a value.
This example places a taint on node1 that has key key1, value value1, and taint effect NoExecute.

Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>

For example:

$ oc adm taint nodes node1 key1=value1:NoExecute

This command places a taint on node1 that has key key1, value value1, and effect NoExecute.

Note: If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.

The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1.
7.7.3. Adding taints and tolerations using a compute machine set
You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet
object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes.
Procedure
Add a toleration to a pod by editing the Pod spec to include a tolerations stanza:

Sample pod configuration file with Equal operator

For example:

Sample pod configuration file with Exists operator

Add the taint to the MachineSet object:

Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object:

$ oc edit machineset <machineset>

Add the taint to the spec.template.spec section:

Example taint in a compute machine set specification
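The original machine set YAML was not preserved here. A sketch of the relevant fields, assuming a hypothetical machine set name; only the taints stanza under spec.template.spec matters for this step.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: my-machineset   # hypothetical name
  namespace: openshift-machine-api
spec:
  # ...
  template:
    # ...
    spec:
      taints:
      - key: key1
        value: value1
        effect: NoExecute
      # ...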
This example places a taint that has the key key1, value value1, and taint effect NoExecute on the nodes.

Scale down the compute machine set to 0:
$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api

Tip: You can alternatively scale the compute machine set by setting the replicas field in the MachineSet YAML.

Wait for the machines to be removed.

Scale up the compute machine set as needed:

$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

Or:

$ oc edit machineset <machineset> -n openshift-machine-api

Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object.
7.7.4. Binding a user to a node using taints and tolerations
If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster.
If you want to ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label.
Procedure
To configure a node so that users can use only that node:
Add a corresponding taint to those nodes:
For example:

$ oc adm taint nodes node1 dedicated=groupName:NoSchedule

Tip: You can alternatively add the taint to the node specification YAML.

- Add a toleration to the pods by writing a custom admission controller.
7.7.5. Controlling nodes with special hardware using taints and tolerations
In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes.
You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware.
Procedure
To ensure nodes with specialized hardware are reserved for specific pods:
Add a toleration to pods that need the special hardware.
For example:
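The original toleration sample was not preserved here. A minimal sketch of a toleration that matches the disktype=ssd taint applied in the next step:

spec:
  tolerations:
  - key: "disktype"
    operator: "Equal"
    value: "ssd"
    effect: "NoSchedule"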
Taint the nodes that have the specialized hardware using one of the following commands:

$ oc adm taint nodes <node-name> disktype=ssd:NoSchedule

Or:

$ oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule

Tip: You can alternatively add the taint to the node specification YAML.
7.7.6. Removing taints and tolerations
You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure
To remove taints and tolerations:
To remove a taint from a node:
$ oc adm taint nodes <node-name> <key>-

For example:

$ oc adm taint nodes ip-10-0-132-248.ec2.internal key1-

Example output

node/ip-10-0-132-248.ec2.internal untainted

To remove a toleration from a pod, edit the Pod spec to remove the toleration:
7.8. Topology Manager
Understand and work with Topology Manager.
7.8.1. Topology Manager policies
Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources.
Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled:
none policy
- This is the default policy and does not perform any topology alignment.

best-effort policy
- For each container in a pod with the best-effort topology management policy, kubelet tries to align all the required resources on a NUMA node according to the preferred NUMA node affinity for that container. Even if the allocation is not possible due to insufficient resources, the Topology Manager still admits the pod, but the allocation is shared with other NUMA nodes.

restricted policy
- For each container in a pod with the restricted topology management policy, kubelet determines the theoretical minimum number of NUMA nodes that can fulfill the request. If the actual allocation requires more than that number of NUMA nodes, the Topology Manager rejects the admission, placing the pod in a Terminated state. If the number of NUMA nodes can fulfill the request, the Topology Manager admits the pod and the pod starts running.

single-numa-node policy
- For each container in a pod with the single-numa-node topology management policy, kubelet admits the pod if all the resources required by the pod can be allocated on the same NUMA node. If a single NUMA node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure.
7.8.2. Setting up Topology Manager
To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled. This CR might already exist if you have set up CPU Manager. If it does not exist, you can create it.
Prerequisites
- Configure the CPU Manager policy to be static.
Procedure
To activate Topology Manager:
Configure the Topology Manager allocation policy in the custom resource.
$ oc edit KubeletConfig cpumanager-enabled
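The edited CR body was not preserved in this extraction. The following is a sketch of the cpumanager-enabled KubeletConfig with a Topology Manager policy added; single-numa-node is shown as an example of one of the four policies described above.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static                   # required; see Prerequisites
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node    # none, best-effort, restricted, or single-numa-node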
7.8.3. Pod interactions with Topology Manager policies
The example Pod specs illustrate pod interactions with Topology Manager.
The following pod runs in the BestEffort QoS class because no resource requests or limits are specified.
spec:
containers:
- name: nginx
image: nginx
The next pod runs in the Burstable QoS class because requests are less than limits.
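The original example was not preserved here; a minimal sketch with requests lower than limits:

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "200m"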
If the selected policy is anything other than none, Topology Manager processes all the pods, but it enforces resource alignment only for pods in the Guaranteed QoS class. When the Topology Manager policy is set to none, the relevant containers are pinned to any available CPU without considering NUMA affinity. This is the default behavior and it does not optimize for performance-sensitive workloads. Other values enable the use of topology awareness information from device plugins and core resources, such as CPU and memory. The Topology Manager attempts to align the CPU, memory, and device allocations according to the topology of the node when the policy is set to a value other than none. For more information about the available values, see Topology Manager policies.
The following example pod runs in the Guaranteed QoS class because requests are equal to limits.
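The original example was not preserved here; a minimal sketch with requests equal to limits. The example.com/device entry stands in for an Extended Resource advertised by a device plugin and is a placeholder.

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"   # placeholder extended resource
      limits:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"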
Topology Manager would consider this pod. The Topology Manager would consult the Hint Providers, which are the CPU Manager, the Device Manager, and the Memory Manager, to get topology hints for the pod.
Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
7.9. Resource requests and overcommitment
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.
The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.
Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted.
7.10. Cluster-level overcommit using the Cluster Resource Override Operator
The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.
The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit.
You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator.
During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example:
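The example CR was not preserved in this extraction. The following sketch is consistent with the callouts below:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster                            # 1
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50      # 2
      cpuRequestToLimitPercent: 25         # 3
      limitCPUToMemoryPercent: 200         # 4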
- 1
- The name must be cluster.
- 2
- Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50.
- 3
- Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25.
- 4
- Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200.
The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply.

When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure overrides so that infrastructure components are not subject to them.
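The label sample was not preserved in this extraction. The following sketch shows the namespace-level enablement label used by the Cluster Resource Override admission webhook:

apiVersion: v1
kind: Namespace
metadata:
  # ...
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
  # ...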
The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the Operator.

For example, a pod has the following resource limits:

The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object.
- 1
- The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core.
- 2
- The CPU request is now 250m because the cpuRequestToLimitPercent is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m.
7.10.1. Installing the Cluster Resource Override Operator using the web console
You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the OpenShift Container Platform web console:
In the OpenShift Container Platform web console, navigate to Home → Projects.
- Click Create Project.
- Specify clusterresourceoverride-operator as the name of the project.
- Click Create.
Navigate to Operators → OperatorHub.
- Choose ClusterResourceOverride Operator from the list of available Operators and click Install.
- On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode.
- Make sure clusterresourceoverride-operator is selected for Installed Namespace.
- Select an Update Channel and Approval Strategy.
- Click Install.
On the Installed Operators page, click ClusterResourceOverride.
- On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride.
On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed:
- 1
- The name must be cluster.
- 2
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 3
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 4
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
- Click Create.
Check the current state of the admission webhook by checking the status of the cluster custom resource:
- On the ClusterResourceOverride Operator page, click cluster.
On the ClusterResourceOverride Details page, click YAML. The mutatingWebhookConfigurationRef section appears when the webhook is called.

- 1
- Reference to the ClusterResourceOverride admission webhook.
7.10.2. Installing the Cluster Resource Override Operator using the CLI
You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the CLI:
Create a namespace for the Cluster Resource Override Operator:
Create a Namespace object YAML file (for example, cro-namespace.yaml) for the Cluster Resource Override Operator:

apiVersion: v1
kind: Namespace
metadata:
  name: clusterresourceoverride-operator

Create the namespace:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f cro-namespace.yaml
Create an Operator group:
Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator:

Create the Operator Group:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f cro-og.yaml
Create a subscription:
Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator:

Create the subscription:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f cro-sub.yaml
Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace:

Change to the clusterresourceoverride-operator namespace:

$ oc project clusterresourceoverride-operator

Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator:

- 1
- The name must be cluster.
- 2
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 3
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 4
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Create the ClusterResourceOverride object:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f cro-cr.yaml
Verify the current state of the admission webhook by checking the status of the cluster custom resource.
$ oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml

The mutatingWebhookConfigurationRef section appears when the webhook is called.

Example output

- 1
- Reference to the ClusterResourceOverride admission webhook.
7.10.3. Configuring cluster-level overcommit
The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To modify cluster-level overcommit:
Edit the ClusterResourceOverride CR:

- 1
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 2
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 3
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:
- 1
- Add this label to each project.
7.11. Node-level overcommit
You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects.
7.11.1. Understanding compute resources and containers
The node-enforced behavior for compute resources is specific to the resource type.
7.11.1.1. Understanding container CPU requests
A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container.
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled.
7.11.1.2. Understanding container memory requests
A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node’s resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount.
7.11.2. Understanding overcommitment and quality of service classes
A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity.
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class.
A pod is designated as one of three QoS classes with decreasing order of priority:
Priority | Class Name | Description |
---|---|---|
1 (highest) | Guaranteed | If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed. |
2 | Burstable | If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable. |
3 (lowest) | BestEffort | If requests and limits are not set for any of the resources, then the pod is classified as BestEffort. |
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first:
- Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.
- Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist.
- BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory.
7.11.2.1. Understanding how to reserve memory across quality of service tiers
You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes.
OpenShift Container Platform uses the qos-reserved parameter as follows:
- A value of qos-reserved=memory=100% prevents the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads.
- A value of qos-reserved=memory=50% allows the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class.
- A value of qos-reserved=memory=0% allows the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature.
7.11.3. Understanding swap memory and QOS
You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, resulting in pods not receiving the memory they made in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event.
If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure.
7.11.4. Understanding nodes overcommitment
In an overcommitted environment, it is important to properly configure your node to provide best system behavior.
When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1, overriding the default operating system setting.
OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ sysctl -a |grep commit

Example output

#...
vm.overcommit_memory = 0
#...

$ sysctl -a |grep panic

Example output

#...
vm.panic_on_oom = 0
#...
The above flags should already be set on nodes, and no further action is required.
You can also perform the following configurations for each node:
- Disable or enforce CPU limits using CPU CFS quotas
- Reserve resources for system processes
- Reserve memory across quality of service tiers
7.11.5. Disabling or enforcing CPU limits using CPU CFS quotas
Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel.
If you disable CPU limit enforcement, it is important to understand the impact on your node:
- If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
- If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel.
- If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:

$ oc edit machineconfigpool <name>

For example:

$ oc edit machineconfigpool worker

Example output

- 1
- The label appears under Labels.

Tip: If the label is not present, add a key/value pair such as:

$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for disabling CPU limits
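The sample CR was not preserved in this extraction. The following sketch disables CPU CFS quota enforcement through the kubelet cpuCfsQuota setting, selecting the pool by the custom-kubelet: small-pods label from the Prerequisites; the CR name is a placeholder.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: disable-cpu-units   # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    cpuCfsQuota: false      # disables enforcement of CPU limits via CFS quota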
Run the following command to create the CR:

$ oc create -f <file_name>.yaml
7.11.6. Reserving resources for system processes
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory.
Procedure
To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes.
7.11.7. Disabling overcommitment for a node
When enabled, overcommitment can be disabled on each node.
Procedure
To disable overcommitment in a node, run the following command on that node:

$ sysctl -w vm.overcommit_memory=0
7.12. Project-level limits
To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
For information on project-level resource limits, see Additional resources.
Alternatively, you can disable overcommitment for specific projects.
7.12.1. Disabling overcommitment for a project
When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure
To disable overcommitment in a project:
- Create or edit the namespace object file.
Add the following annotation:
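The annotation sample was not preserved here. A sketch of a Namespace object carrying the annotation that the callout below describes:

apiVersion: v1
kind: Namespace
metadata:
  # ...
  annotations:
    quota.openshift.io/cluster-resource-override-enabled: "false"   # 1
  # ...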
- 1
- Setting this annotation to false disables overcommit for this namespace.
7.13. Freeing node resources using garbage collection
Understand and use garbage collection.
7.13.1. Understanding how terminated containers are removed through garbage collection
Container garbage collection removes terminated containers by using eviction thresholds.
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs.
- eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period.
- eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action.
The following table lists the eviction thresholds:
Node condition | Eviction signal | Description |
---|---|---|
MemoryPressure | memory.available | The available memory on the node. |
DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, imagefs.inodesFree | The available disk space or inodes on the node root file system, nodefs, or image file system, imagefs. |
For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly.

If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node condition would constantly oscillate between true and false. As a consequence, the scheduler could make poor scheduling decisions.

To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform does not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false.

Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds.
7.13.2. Understanding how images are removed through garbage collection
Image garbage collection removes images that are not referenced by any running pods.
OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor.
The policy for image garbage collection is based on two conditions:
- The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85.
- The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80.
For image garbage collection, you can modify any of the following variables using a custom resource.
Setting | Description |
---|---|
imageMinimumGCAge | The minimum age for an unused image before the image is removed by garbage collection. The default is 2m. |
imageGCHighThresholdPercent | The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85. This value must be greater than the imageGCLowThresholdPercent value. |
imageGCLowThresholdPercent | The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80. This value must be less than the imageGCHighThresholdPercent value. |
Two lists of images are retrieved in each garbage collector run:
- A list of images currently running in at least one pod.
- A list of images available on a host.
As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous runs. All images are then sorted by the time stamp.
Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
7.13.3. Configuring garbage collection for containers and images
As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool.
OpenShift Container Platform supports only one kubeletConfig object for each machine config pool.
You can configure any combination of the following:
- Soft eviction for containers
- Hard eviction for containers
- Eviction for images
Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:
$ oc edit machineconfigpool <name>
For example:
$ oc edit machineconfigpool worker
Example output
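The original output is not reproduced here; the following truncated sketch shows where the label appears, assuming the custom-kubelet=small-pods label has already been added. Only the relevant metadata is shown:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
  labels:
    custom-kubelet: small-pods # 1
# remaining fields omitted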
- 1
- The label appears under Labels.
Tip: If the label is not present, add a key/value pair such as:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Important: If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction.
Sample configuration for a container garbage collection CR:
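The original sample CR is not reproduced here; the following hedged sketch shows a KubeletConfig object of the kind this section describes, with numbered markers matching the descriptions below. The object name, label selector, and threshold values are illustrative only and should be adjusted for your cluster:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubeconfig # 1
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods # 2
  kubeletConfig:
    evictionSoft: # 3
      memory.available: "500Mi" # 4
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "10%"
    evictionSoftGracePeriod: # 5
      memory.available: "1m30s"
      nodefs.available: "1m30s"
      nodefs.inodesFree: "1m30s"
      imagefs.available: "1m30s"
      imagefs.inodesFree: "1m30s"
    evictionHard: # 6
      memory.available: "200Mi"
      nodefs.available: "5%"
      nodefs.inodesFree: "4%"
      imagefs.available: "10%"
      imagefs.inodesFree: "5%"
    evictionPressureTransitionPeriod: 0s # 7
    imageMinimumGCAge: 5m # 8
    imageGCHighThresholdPercent: 80 # 9
    imageGCLowThresholdPercent: 75 # 10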
- 1
- Name for the object.
- 2
- Specify the label from the machine config pool.
- 3
- For container garbage collection: Type of eviction: evictionSoft or evictionHard.
- 4
- For container garbage collection: Eviction thresholds based on a specific eviction trigger signal.
- 5
- For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard.
- 6
- For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly.
- 7
- For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes.
- 8
- For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection.
- 9
- For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value.
- 10
- For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value.
Run the following command to create the CR:
$ oc create -f <file_name>.yaml
For example:
$ oc create -f gc-container.yaml
Example output
kubeletconfig.machineconfiguration.openshift.io/gc-container created
Verification
Verify that garbage collection is active by entering the following command. The machine config pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented:
$ oc get machineconfigpool
Example output
NAME     CONFIG                                   UPDATED   UPDATING
master   rendered-master-546383f80705bd5aeaba93   True      False
worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True
7.14. Using the Node Tuning Operator
Understand and use the Node Tuning Operator.
The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs.
The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal.
The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications.
The cluster administrator configures a performance profile to define node-level settings such as the following:
- Updating the kernel to kernel-rt.
- Choosing CPUs for housekeeping.
- Choosing CPUs for running workloads.
Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.
The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.
In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
7.14.1. Accessing an example Node Tuning Operator specification
Use this process to access an example Node Tuning Operator specification.
Procedure
Run the following command to access an example Node Tuning Operator specification:
$ oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator
The default CR is meant for delivering standard node-level tuning for OpenShift Container Platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities.
While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, the functionality is enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator.
7.14.2. Custom tuning specification
The custom resource (CR) for the Operator has two major sections. The first section, profile:, is a list of TuneD profiles and their names. The second, recommend:, defines the profile selection logic.
Multiple custom tuning specifications can co-exist as multiple CRs in the Operator’s namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated.
Management state
The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:
- Managed: the Operator will update its operands as configuration resources are updated
- Unmanaged: the Operator will ignore changes to the configuration resources
- Removed: the Operator will remove its operands and resources the Operator provisioned
Profile data
The profile: section lists TuneD profiles and their names.
Recommended profiles
The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items that recommend profiles based on selection criteria.
recommend:
<recommend-item-1>
# ...
<recommend-item-n>
The individual items of the list:
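The original CR excerpt is not reproduced here; the following sketch outlines the fields that each recommend: list item can contain, with numbered markers matching the descriptions below. Field placement follows my reading of the Tuned CR schema, so verify it against the default Tuned CR on your cluster:

- machineConfigLabels: # 1
    <mcLabels> # 2
  match: # 3
    <match> # 4
  priority: <priority> # 5
  profile: <tuned_profile_name> # 6
  operand: # 7
    debug: <bool> # 8
    tunedConfig:
      reapply_sysctl: <bool> # 9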
- 1
- Optional.
- 2
- A dictionary of key/value MachineConfig labels. The keys must be unique.
- 3
- If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set.
- 4
- An optional list.
- 5
- Profile ordering priority. Lower numbers mean higher priority (0 is the highest priority).
- 6
- A TuneD profile to apply on a match. For example, tuned_profile_1.
- 7
- Optional operand configuration.
- 8
- Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false.
- 9
- Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off.
<match> is an optional list recursively defined as follows:
- label: <label_name>
  value: <label_value>
  type: <label_type>
  <match>
If <match> is not omitted, all nested <match> sections must also evaluate to true. Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true. Therefore, the list acts as a logical OR operator.
If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name>. This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role.
The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true, the machineConfigLabels item is not considered.
When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool.
Example: Node or pod label based matching
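The original example CR is not reproduced here; the following hedged sketch reconstructs the kind of Tuned CR that the description below refers to, using the profile names, priorities, and labels mentioned in that description. The profile payloads are omitted for brevity and the object name is illustrative:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-custom-example
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  # Profile data ([main], [sysctl], and similar sections) omitted for brevity.
  - data: |
      [main]
      summary=Custom control plane profile for nodes running Elasticsearch
    name: openshift-control-plane-es
  recommend:
  - match:
    - label: tuned.openshift.io/elasticsearch
      match:
      - label: node-role.kubernetes.io/master
      - label: node-role.kubernetes.io/infra
      type: pod
    priority: 10
    profile: openshift-control-plane-es
  - match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
    priority: 20
    profile: openshift-control-plane
  - priority: 30
    profile: openshift-node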
The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority (10) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false. If there is such a pod with the label, in order for the <match> section to evaluate to true, the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra.
If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane) is considered. This profile is applied if the containerized TuneD pod runs on a node with the node-role.kubernetes.io/master or node-role.kubernetes.io/infra label.
Finally, the profile openshift-node has the lowest priority of 30. It lacks the <match> section and, therefore, always matches. It acts as a catch-all that sets the openshift-node profile if no other profile with higher priority matches on a given node.
Example: Machine config pool based matching
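The original example CR is not reproduced here; the following hedged sketch shows a Tuned CR that uses machineConfigLabels-based matching. The profile content, the custom role label worker-custom, and the kernel parameter are illustrative only:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-custom
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift node profile with an additional kernel parameter
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_custom=+skew_tick=1
    name: openshift-node-custom
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-custom"
    priority: 20
    profile: openshift-node-custom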
To minimize node reboots, label the target nodes with a label the machine config pool’s node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself.
Cloud provider-specific TuneD profiles
With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on an OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools.
This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load the provider-<cloud-provider> profile, if such a profile exists.
The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes.
Example GCE Cloud provider profile
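The original example is not reproduced here; the following hedged sketch shows what a provider-gce profile could look like. The sysctl value is purely illustrative:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: provider-gce
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=GCE Cloud provider-specific profile
      # Your tuning for the GCE Cloud provider goes here.
      [sysctl]
      net.ipv4.neigh.default.gc_thresh1=8192
    name: provider-gce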
Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles.
7.14.3. Default profiles set on a cluster
The following are the default profiles set on a cluster.
Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles:
$ oc exec $tuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \;
7.14.4. Supported TuneD daemon plugins
Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR:
- audio
- cpu
- disk
- eeepc_she
- modules
- mounts
- net
- scheduler
- scsi_host
- selinux
- sysctl
- sysfs
- usb
- video
- vm
- bootloader
Some of these plugins provide dynamic tuning functionality that is not supported. The following TuneD plugins are currently not supported:
- script
- systemd
The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes.
Additional resources
7.15. Configuring the maximum number of pods per node
Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods. If you use both options, the lower of the two limits the number of pods on a node.
For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:
$ oc edit machineconfigpool <name>
For example:
$ oc edit machineconfigpool worker
In the example output, the label appears under Labels.
Tip: If the label is not present, add a key/value pair such as:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a max-pods CR
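The original sample CR is not reproduced here; the following hedged sketch shows a KubeletConfig object that sets both parameters. The object name and label selector are illustrative only:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods # label added to the machine config pool
  kubeletConfig:
    podsPerCore: 10
    maxPods: 250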
Note: Setting podsPerCore
to0
disables this limit.In the above example, the default value for
podsPerCore
is10
and the default value formaxPods
is250
. This means that unless the node has 25 cores or more, by default,podsPerCore
will be the limiting factor.Run the following command to create the CR:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
List the
MachineConfigPool
CRDs to see if the change is applied. TheUPDATING
column reportsTrue
if the change is picked up by the Machine Config Controller:oc get machineconfigpools
$ oc get machineconfigpools
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME     CONFIG                        UPDATED   UPDATING   DEGRADED
master   master-9cc2c72f205e103bb534   False     False      False
worker   worker-8cecd1236b33ee3f8a5e   False     True       False
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Once the change is complete, the
UPDATED
column reportsTrue
.oc get machineconfigpools
$ oc get machineconfigpools
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME     CONFIG                        UPDATED   UPDATING   DEGRADED
master   master-9cc2c72f205e103bb534   False     True       False
worker   worker-8cecd1236b33ee3f8a5e   True      False      False
7.16. Machine scaling with static IP addresses
After you deploy your cluster to run nodes with static IP addresses, you can scale an instance of a machine or a machine set to use one of these static IP addresses.
7.16.1. Scaling machines to use static IP addresses
You can scale additional machine sets to use pre-defined static IP addresses on your cluster. For this configuration, you need to create a machine resource YAML file and then define static IP addresses in this file.
Static IP addresses for vSphere nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You included featureSet:TechPreviewNoUpgrade as the initial entry in the install-config.yaml file.
- You deployed a cluster that runs at least one node with a configured static IP address.
Procedure
Create a machine resource YAML file and define static IP address network information in the network parameter.
Example of a machine resource YAML file with static IP address information defined in the network parameter
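The original example is not reproduced here; the following hedged, trimmed sketch shows only the network portion of a vSphere machine resource, with markers matching the descriptions below. The addresses, gateway, and surrounding providerSpec fields are illustrative:

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: <machine_name>
  namespace: openshift-machine-api
spec:
  providerSpec:
    value:
      # Other providerSpec fields omitted for brevity.
      network:
        devices:
        - gateway: 192.168.204.1 # 1
          ipAddrs:
          - 192.168.204.8/24 # 2
          nameservers: # 3
          - 192.168.204.1
          networkName: <network_segment_name>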
- 1
- The IP address for the default gateway for the network interface.
- 2
- Lists IPv4, IPv6, or both IP addresses that the installation program passes to the network interface. Both IP families must use the same network interface for the default network.
- 3
- Lists a DNS nameserver. You can define up to 3 DNS nameservers. Consider defining more than one DNS nameserver so that DNS resolution is still available if one nameserver becomes unreachable.
Create a machine custom resource (CR) by entering the following command in your terminal:
$ oc create -f <file_name>.yaml
7.16.2. Machine set scaling of machines with configured static IP addresses
You can use a machine set to scale machines with configured static IP addresses.
Static IP addresses for vSphere nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
After you configure a machine set to request a static IP address for a machine, the machine controller creates an IPAddressClaim resource in the openshift-machine-api namespace. The external controller then creates an IPAddress resource and binds any static IP addresses to the IPAddressClaim resource.
Your organization might use numerous types of IP address management (IPAM) services. If you want to enable a particular IPAM service on OpenShift Container Platform, you might need to manually create the IPAddressClaim resource in a YAML definition and then bind a static IP address to this resource by entering the following command in your oc CLI:
$ oc create -f <ipaddressclaim_filename>
The following demonstrates an example of an IPAddressClaim resource:
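The original example is not reproduced here; the following hedged sketch shows the general shape of an IPAddressClaim resource. The pool reference group and names are illustrative and depend on your IPAM provider:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddressClaim
metadata:
  name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0
  namespace: openshift-machine-api
spec:
  poolRef:
    apiGroup: ipamcontroller.example.io
    kind: IPPool
    name: static-ci-pool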
The machine controller updates the machine with a status of IPAddressClaimed to indicate that a static IP address has successfully bound to the IPAddressClaim resource. The machine controller applies the same status to a machine with multiple IPAddressClaim resources that each contain a bound static IP address. The machine controller then creates a virtual machine and applies static IP addresses to any nodes listed in the providerSpec of a machine's configuration.
7.16.3. Using a machine set to scale machines with configured static IP addresses
You can use a machine set to scale machines with configured static IP addresses.
Static IP addresses for vSphere nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The example in the procedure demonstrates the use of controllers for scaling machines in a machine set.
Prerequisites
- You included featureSet:TechPreviewNoUpgrade as the initial entry in the install-config.yaml file.
- You deployed a cluster that runs at least one node with a configured static IP address.
Procedure
Configure a machine set by specifying IP pool information in the network.devices.addressesFromPools schema of the machine set's YAML file:
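The original example is not reproduced here; the following hedged, trimmed sketch shows only the relevant part of a machine set's providerSpec, with markers matching the descriptions below. The pool group, pool name, and nameserver address are illustrative:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          # Other providerSpec fields omitted for brevity.
          network:
            devices:
            - addressesFromPools: # 1
              - group: ipamcontroller.example.io
                name: static-ci-pool
                resource: IPPool
              nameservers: # 2
              - "192.168.204.1"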
- 1
- Specifies an IP pool, which lists a static IP address or a range of static IP addresses. The IP pool can either be a reference to a custom resource definition (CRD) or a resource supported by the IPAddressClaims resource handler. The machine controller accesses static IP addresses listed in the machine set's configuration and then allocates each address to each machine.
- 2
- Lists a nameserver. You must specify a nameserver for nodes that receive a static IP address, because the Dynamic Host Configuration Protocol (DHCP) network configuration does not support static IP addresses.
Scale the machine set by entering the following commands in your oc CLI:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
After each machine is scaled up, the machine controller creates an
IPAddressClaim
resource.
Optional: Check that the IPAddressClaim resource exists in the openshift-machine-api namespace by entering the following command:
$ oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api
Example oc CLI output that lists two IP pools in the openshift-machine-api namespace
NAME                                         POOL NAME        POOL KIND
cluster-dev-9n5wg-worker-0-m7529-claim-0-0   static-ci-pool   IPPool
cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0   static-ci-pool   IPPool
Create an IPAddress resource by entering the following command:
$ oc create -f ipaddress.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following example shows an
IPAddress
resource with defined network configuration information and one defined static IP address:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteBy default, the external controller automatically scans any resources in the machine set for recognizable address pool types. When the external controller finds
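The original example is not reproduced here; the following hedged sketch shows the general shape of an IPAddress resource bound to the claim created earlier in this procedure. The address, prefix, gateway, and pool details are illustrative:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddress
metadata:
  name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0
  namespace: openshift-machine-api
spec:
  address: 192.168.204.129
  claimRef:
    name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0
  gateway: 192.168.204.1
  poolRef:
    apiGroup: ipamcontroller.example.io
    kind: IPPool
    name: static-ci-pool
  prefix: 23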
Note: By default, the external controller automatically scans any resources in the machine set for recognizable address pool types. When the external controller finds kind: IPPool defined in the IPAddress resource, the controller binds any static IP addresses to the IPAddressClaim resource.
Update the IPAddressClaim status with a reference to the IPAddress resource:
$ oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{"status":{"addressRef": {"name": "cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0"}}}' -n openshift-machine-api --subresource=status
Chapter 8. Postinstallation network configuration
After installing OpenShift Container Platform, you can further expand and customize your network to your requirements.
8.1. Using the Cluster Network Operator
You can use the Cluster Network Operator (CNO) to deploy and manage cluster network components on an OpenShift Container Platform cluster, including the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform.
8.2. Network configuration tasks
8.2.1. Creating default network policies for a new project
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project.
8.2.1.1. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Log in as a user with cluster-admin privileges.
Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
- Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
Edit the project configuration resource using the web console or CLI.
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template
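The original example is not reproduced here; a minimal sketch of the Project configuration resource follows. Replace <template_name> with the name of your uploaded template (project-request by default):

apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: <template_name>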
- After you save your changes, create a new project to verify that your changes were successfully applied.
8.2.1.2. Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project.
Prerequisites
- Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You must log in to the cluster with a user with cluster-admin privileges.
- You must have created a custom default project template for new projects.
Procedure
Edit the default template for a new project by running the following command:
$ oc edit template <project_template> -n openshift-config
Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.
In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.
In the following example, the objects parameter collection includes several NetworkPolicy objects.
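The original example is not reproduced here; the following hedged, trimmed sketch shows a project request template whose objects list contains the two policies that also appear in the verification output later in this procedure. The rest of the template content is omitted:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: project-request
  namespace: openshift-config
objects:
# ... existing template objects omitted ...
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector: {}
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    podSelector: {}
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    policyTypes:
    - Ingress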
Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:
Create a new project:
$ oc new-project <project>
Replace <project> with the name for the project you are creating.
Confirm that the network policy objects in the new project template exist in the new project:
$ oc get networkpolicy
Example output
NAME                           POD-SELECTOR   AGE
allow-from-openshift-ingress   <none>         7s
allow-from-same-namespace      <none>         7s
Chapter 9. Configuring image streams and image registries
You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images from the registry used during installation. For more information, see Using image pull secrets.
For information about images and configuring image streams or image registries, see the following documentation:
9.1. Configuring image streams for a disconnected cluster
After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream.
9.1.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
-
While the Cluster Samples Operator is set to
Removed
, you can create your mirrored registry, or determine which existing mirrored registry you want to use. - Mirror the samples you want to the mirrored registry using the new config map as your guide.
-
Add any of the image streams you did not mirror to the
skippedImagestreams
list of the Cluster Samples Operator configuration object. -
Set
samplesRegistry
of the Cluster Samples Operator configuration object to the mirrored registry. -
Then set the Cluster Samples Operator to
Managed
to install the image streams you have mirrored.
9.1.2. Using Cluster Samples Operator image streams with alternate or mirrored registries
Most image streams in the openshift
namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io.
The cli
, installer
, must-gather
, and tests
image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.
The Cluster Samples Operator must be set to Managed
in a disconnected environment. To install the image streams, you must have a mirrored registry.
Prerequisites
-
Access to the cluster as a user with the
cluster-admin
role. - Create a pull secret for your mirror registry.
Procedure
Access the images of a specific image stream to mirror, for example:
$ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
Mirror images from registry.redhat.io associated with any image streams you need:
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
Create the cluster's image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Update the
samplesRegistry
field in the Cluster Samples Operator configuration object to contain thehostname
portion of the mirror location defined in the mirror configuration:oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
$ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThis is required because the image stream import process does not use the mirror or search mechanism at this time.
Add any image streams that are not mirrored into the
skippedImagestreams
field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator toRemoved
in the Cluster Samples Operator configuration object.NoteThe Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them.
Many of the templates in the
openshift
namespace reference the image streams. So usingRemoved
to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
9.1.3. Preparing your cluster to gather support data
Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository.
Procedure
If you have not added your mirror registry’s trusted CA to your cluster’s image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:
Create the cluster’s image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Import the default must-gather image from your installation payload:
$ oc import-image is/must-gather -n openshift
When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example:
$ oc adm must-gather --image=$(oc adm release info --image-for must-gather)
9.2. Configuring periodic importing of Cluster Samples Operator image stream tags
You can ensure that you always have access to the latest versions of the Cluster Samples Operator images by periodically importing the image stream tags when new versions become available.
Procedure
Fetch all the imagestreams in the openshift namespace by running the following command:
$ oc get imagestreams -nopenshift
Fetch the tags for every imagestream in the openshift namespace by running the following command:
$ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
$ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
Example output
1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11
1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12
Schedule periodic importing of images for each tag present in the image stream by running the following command:
$ oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift
For example:
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift
This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default.
Verify the scheduling status of the periodic import by running the following command:
$ oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift
For example:
$ oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift
Example output
Tag: 1.11    Scheduled: true
Tag: 1.12    Scheduled: true
Chapter 10. Postinstallation storage configuration
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration.
By default, containers operate using ephemeral storage, or transient local storage. Ephemeral storage has a lifetime limitation. To store data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods:
- Dynamic provisioning
- You can dynamically provision storage on-demand by defining and creating storage classes that control different levels of storage, including storage access.
- Static provisioning
- You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options.
10.1. Dynamic provisioning
Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning.
10.2. Recommended configurable storage technology
The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
Storage type | Block | File | Object |
---|---|---|---|
1 ReadOnlyMany (ROX). | | |
2 ReadWriteMany (RWX). 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. | | |
5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable; do not configure the RWX access mode on any persistent volume claims that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform’s PVs or PVCs. Apps must integrate with the object storage REST API. | | |
ROX1 | Yes4 | Yes4 | Yes |
RWX2 | No | Yes | Yes |
Registry | Configurable | Configurable | Recommended |
Scaled registry | Not configurable | Configurable | Recommended |
Metrics3 | Recommended | Configurable5 | Not configurable |
Elasticsearch Logging | Recommended | Configurable6 | Not supported6 |
Loki Logging | Not configurable | Not configurable | Recommended |
Apps | Recommended | Recommended | Not configurable7 |
A scaled registry is an OpenShift image registry where two or more pod replicas are running.
10.2.1. Specific application storage recommendations
Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.
Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
10.2.1.1. Registry
In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment:
- The storage technology does not have to support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage followed by block storage.
- File storage is not recommended for OpenShift image registry cluster deployment with production workloads.
10.2.1.2. Scaled registry
In a scaled/HA OpenShift image registry cluster deployment:
- The storage technology must support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage.
- Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
- Object storage should be S3 or Swift compliant.
- For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
- Block storage is not configurable.
- The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production?.
10.2.1.3. Metrics
In an OpenShift Container Platform hosted metrics cluster deployment:
- The preferred storage technology is block storage.
- Object storage is not configurable.
It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
10.2.1.4. Logging
In an OpenShift Container Platform hosted logging cluster deployment:
Loki Operator:
- The preferred storage technology is S3 compatible Object storage.
- Block storage is not configurable.
OpenShift Elasticsearch Operator:
- The preferred storage technology is block storage.
- Object storage is not supported.
As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
10.2.1.5. Applications
Application use cases vary from application to application, as described in the following examples:
- Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster.
- Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.
10.2.2. Other specific application storage recommendations
It is not recommended to use RAID configurations on write-intensive workloads, such as etcd. If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads.
- Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
- Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
- The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.
10.3. Deploy Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.
If you are looking for Red Hat OpenShift Data Foundation information about… | See the following Red Hat OpenShift Data Foundation documentation: |
---|---|
What’s new, known issues, notable bug fixes, and Technology Previews | |
Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations | |
Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster | |
Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure | Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure |
Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters | |
Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage | Deploying OpenShift Data Foundation 4.12 using Amazon Web Services |
Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters | Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud |
Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters | Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure |
Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power® infrastructure | |
Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z® infrastructure | Deploying OpenShift Data Foundation on IBM Z® infrastructure |
Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone | |
Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) | |
Safely replacing storage devices for Red Hat OpenShift Data Foundation | |
Safely replacing a node in a Red Hat OpenShift Data Foundation cluster | |
Scaling operations in Red Hat OpenShift Data Foundation | |
Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster | |
Resolve issues encountered during operations | |
Migrating your OpenShift Container Platform cluster from version 3 to version 4 |
Chapter 11. Preparing for users
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users.
11.1. Understanding identity provider configuration Copy linkLink copied to clipboard!
The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.
As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.
11.1.1. About identity providers in OpenShift Container Platform Copy linkLink copied to clipboard!
By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /, :, and % are not supported.
11.1.2. Supported identity providers Copy linkLink copied to clipboard!
You can configure the following types of identity providers:
Identity provider | Description
---|---
htpasswd | Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd.
Keystone | Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database.
LDAP | Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
Basic authentication | Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider.
Request header | Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.
GitHub or GitHub Enterprise | Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server.
GitLab | Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider.
Google | Configure a google identity provider using Google’s OpenID Connect integration.
OpenID Connect | Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
After you define an identity provider, you can use RBAC to define and apply permissions.
11.1.3. Identity provider parameters Copy linkLink copied to clipboard!
The following parameters are common to all identity providers:
Parameter | Description
---|---
name | The provider name is prefixed to provider user names to form an identity name.
mappingMethod | Defines how new identities are mapped to users when they log in. The default value is claim; other supported values include lookup and add.
When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.
11.1.4. Sample identity provider CR Copy linkLink copied to clipboard!
The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.
Sample identity provider CR
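A minimal htpasswd example looks like the following sketch; it assumes a secret named htpass-secret in the openshift-config namespace that contains the generated flat file:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider    # prefixed to provider user names to form an identity name
    mappingMethod: claim          # the default mapping method
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret       # assumed secret that holds the htpasswd file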
11.2. Using RBAC to define and apply permissions Copy linkLink copied to clipboard!
Understand and apply role-based access control.
11.2.1. RBAC overview Copy linkLink copied to clipboard!
Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.
Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform cluster itself and to all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.
Authorization is managed using:
Authorization object | Description
---|---
Rules | Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods.
Roles | Collections of rules. You can associate, or bind, users and groups to multiple roles.
Bindings | Associations between users and/or groups with a role.
There are two levels of RBAC roles and bindings that control authorization:
RBAC level | Description |
---|---|
Cluster RBAC | Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. |
Local RBAC | Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. |
A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used. For example:
- Cluster-wide "allow" rules are checked.
- Locally-bound "allow" rules are checked.
- Deny by default.
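For example, granting a user the view cluster role in a single project is done with a local role binding; a sketch, with placeholder project and user names:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view                      # binding name (placeholder)
  namespace: example-project      # local binding: scoped to one project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole               # references the "view" cluster role
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice                     # placeholder user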
11.2.1.1. Default cluster roles Copy linkLink copied to clipboard!
OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.
It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly.
Default cluster role | Description
---|---
admin | A project manager. If used in a local binding, an admin user has rights to view any resource in the project and modify any resource in the project except for quota.
basic-user | A user that can get basic information about projects and users.
cluster-admin | A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project.
cluster-status | A user that can get basic cluster status information.
cluster-reader | A user that can get or view most of the objects but cannot modify them.
edit | A user that can modify most objects in a project but does not have the power to view or modify roles or bindings.
self-provisioner | A user that can create their own projects.
view | A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.
Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin role to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin.
The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.
The get pods/exec, get pods/*, and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges.
11.2.1.2. Evaluating authorization Copy linkLink copied to clipboard!
OpenShift Container Platform evaluates authorization by using:
- Identity
- The user name and list of groups that the user belongs to.
- Action
The action you perform. In most cases, this consists of:
- Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.
- Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.
- Resource name: The API endpoint that you access.
- Bindings
- The full list of bindings, the associations between users or groups with a role.
OpenShift Container Platform evaluates authorization by using the following steps:
- The identity and the project-scoped action is used to find all bindings that apply to the user or their groups.
- Bindings are used to locate all the roles that apply.
- Roles are used to find all the rules that apply.
- The action is checked against each rule to find a match.
- If no matching rule is found, the action is then denied by default.
Remember that users and groups can be associated with, or bound to, multiple roles at the same time.
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with.
The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin.
Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level.
11.2.1.2.1. Cluster role aggregation Copy linkLink copied to clipboard!
The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
11.2.2. Projects and namespaces Copy linkLink copied to clipboard!
A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.
Namespaces provide a unique scope for:
- Named resources to avoid basic naming collisions.
- Delegated management authority to trusted users.
- The ability to limit community resource consumption.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.
A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.
Projects can have a separate name, displayName, and description.
- The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
- The optional displayName is how the project is displayed in the web console (defaults to name).
- The optional description can be a more detailed description of the project and is also visible in the web console.
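For illustration, the display name and description are carried as annotations on the project; the following sketch assumes the conventional openshift.io/display-name and openshift.io/description annotation keys:

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: example-project                          # mandatory unique identifier, 63 characters maximum
  annotations:
    openshift.io/display-name: Example Project   # optional name shown in the web console
    openshift.io/description: A longer description of the example project.   # optional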
Each project scopes its own set of:
Object | Description
---|---
Objects | Pods, services, replication controllers, etc.
Policies | Rules for which users can or cannot perform actions on objects.
Constraints | Quotas for each kind of object that can be limited.
Service accounts | Service accounts act automatically with designated access to objects in the project.
Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.
Developers and administrators can interact with projects by using the CLI or the web console.
11.2.3. Default projects Copy linkLink copied to clipboard!
OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by kubelet. Pods created for master components in these namespaces are already marked as critical.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
11.2.4. Viewing cluster roles and bindings Copy linkLink copied to clipboard!
You can use the oc
CLI to view cluster roles and bindings by using the oc describe
command.
Prerequisites
- Install the oc CLI.
- Obtain permission to view the cluster roles and bindings.
Users with the cluster-admin
default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.
Procedure
To view the cluster roles and their associated rule sets:
oc describe clusterrole.rbac
$ oc describe clusterrole.rbac
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:
oc describe clusterrolebinding.rbac
$ oc describe clusterrolebinding.rbac
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.2.5. Viewing local roles and bindings Copy linkLink copied to clipboard!
You can use the oc
CLI to view local roles and bindings by using the oc describe
command.
Prerequisites
- Install the oc CLI.
- Obtain permission to view the local roles and bindings:
  - Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.
  - Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.
Procedure
To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project:
oc describe rolebinding.rbac
$ oc describe rolebinding.rbac
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To view the local role bindings for a different project, add the
-n
flag to the command:oc describe rolebinding.rbac -n joe-project
$ oc describe rolebinding.rbac -n joe-project
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.2.6. Adding roles to users Copy linkLink copied to clipboard!
You can use the oc adm
administrator CLI to manage the roles and bindings.
Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy
commands.
You can bind any of the default cluster roles to local users or groups in your project.
Procedure
Add a role to a user in a specific project:
oc adm policy add-role-to-user <role> <user> -n <project>
$ oc adm policy add-role-to-user <role> <user> -n <project>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example, you can add the
admin
role to thealice
user injoe
project by running:oc adm policy add-role-to-user admin alice -n joe
$ oc adm policy add-role-to-user admin alice -n joe
Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipYou can alternatively apply the following YAML to add the role to the user:
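A sketch of that YAML; the binding name is illustrative and the namespace matches the joe project example:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin                     # binding name (illustrative)
  namespace: joe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice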
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the local role bindings and verify the addition in the output:
oc describe rolebinding.rbac -n <project>
$ oc describe rolebinding.rbac -n <project>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example, to view the local role bindings for the
joe
project:oc describe rolebinding.rbac -n joe
$ oc describe rolebinding.rbac -n joe
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The
alice
user has been added to theadmins
RoleBinding
.
11.2.7. Creating a local role Copy linkLink copied to clipboard!
You can create a local role for a project and then bind it to a user.
Procedure
To create a local role for a project, run the following command:
oc create role <name> --verb=<verb> --resource=<resource> -n <project>
$ oc create role <name> --verb=<verb> --resource=<resource> -n <project>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this command, specify:
-
<name>
, the local role’s name -
<verb>
, a comma-separated list of the verbs to apply to the role -
<resource>
, the resources that the role applies to -
<project>
, the project name
For example, to create a local role that allows a user to view pods in the
blue
project, run the following command:oc create role podview --verb=get --resource=pod -n blue
$ oc create role podview --verb=get --resource=pod -n blue
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
To bind the new role to a user, run the following command:
oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
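The preceding commands are equivalent to applying a Role manifest and then binding it; a sketch of the Role for the podview example:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: podview
  namespace: blue
rules:
- apiGroups: [""]        # core API group
  resources: ["pods"]
  verbs: ["get"]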
11.2.8. Creating a cluster role Copy linkLink copied to clipboard!
You can create a cluster role.
Procedure
To create a cluster role, run the following command:
oc create clusterrole <name> --verb=<verb> --resource=<resource>
$ oc create clusterrole <name> --verb=<verb> --resource=<resource>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this command, specify:
-
<name>
, the local role’s name -
<verb>
, a comma-separated list of the verbs to apply to the role -
<resource>
, the resources that the role applies to
For example, to create a cluster role that allows a user to view pods, run the following command:
oc create clusterrole podviewonly --verb=get --resource=pod
$ oc create clusterrole podviewonly --verb=get --resource=pod
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
11.2.9. Local role binding commands Copy linkLink copied to clipboard!
When you manage a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n
flag. If it is not specified, then the current project is used.
You can use the following commands for local RBAC management.
Command | Description
---|---
$ oc adm policy who-can <verb> <resource> | Indicates which users can perform an action on a resource.
$ oc adm policy add-role-to-user <role> <username> | Binds a specified role to specified users in the current project.
$ oc adm policy remove-role-from-user <role> <username> | Removes a given role from specified users in the current project.
$ oc adm policy remove-user <username> | Removes specified users and all of their roles in the current project.
$ oc adm policy add-role-to-group <role> <groupname> | Binds a given role to specified groups in the current project.
$ oc adm policy remove-role-from-group <role> <groupname> | Removes a given role from specified groups in the current project.
$ oc adm policy remove-group <groupname> | Removes specified groups and all of their roles in the current project.
11.2.10. Cluster role binding commands Copy linkLink copied to clipboard!
You can also manage cluster role bindings using the following operations. The -n
flag is not used for these operations because cluster role bindings use non-namespaced resources.
Command | Description
---|---
$ oc adm policy add-cluster-role-to-user <role> <username> | Binds a given role to specified users for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-user <role> <username> | Removes a given role from specified users for all projects in the cluster.
$ oc adm policy add-cluster-role-to-group <role> <groupname> | Binds a given role to specified groups for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-group <role> <groupname> | Removes a given role from specified groups for all projects in the cluster.
11.2.11. Creating a cluster admin Copy linkLink copied to clipboard!
The cluster-admin
role is required to perform administrator level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.
Prerequisites
- You must have created a user to define as the cluster admin.
Procedure
Define the user as a cluster admin:
oc adm policy add-cluster-role-to-user cluster-admin <user>
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.3. The kubeadmin user Copy linkLink copied to clipboard!
OpenShift Container Platform creates a cluster administrator, kubeadmin
, after the installation process completes.
This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program’s output. For example:
INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>
INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>
11.3.1. Removing the kubeadmin user Copy linkLink copied to clipboard!
After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.
If you follow this procedure before another user is a cluster-admin
, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites
- You must have configured at least one identity provider.
- You must have added the cluster-admin role to a user.
- You must be logged in as an administrator.
Procedure
Remove the
kubeadmin
secrets:oc delete secrets kubeadmin -n kube-system
$ oc delete secrets kubeadmin -n kube-system
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.4. Populating OperatorHub from mirrored Operator catalogs Copy linkLink copied to clipboard!
If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy
and CatalogSource
objects.
11.4.1. Prerequisites Copy linkLink copied to clipboard!
11.4.1.1. Creating the ImageContentSourcePolicy object Copy linkLink copied to clipboard!
After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy
(ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry.
Procedure
On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the
imageContentSourcePolicy.yaml
file in your manifests directory:oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml
$ oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where
<path/to/manifests/dir>
is the path to the manifests directory for your mirrored content.You can now create a
CatalogSource
object to reference your mirrored index image and Operator content.
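For reference, the generated imageContentSourcePolicy.yaml typically resembles the following sketch; the mirror registry and source values here are placeholders:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirrored-operators                      # placeholder name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.registry.example.com/olm-mirror    # placeholder mirror registry
    source: registry.redhat.io                  # placeholder source registry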
11.4.1.2. Adding a catalog source to a cluster Copy linkLink copied to clipboard!
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource
object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Prerequisites
- You built and pushed an index image to a registry.
-
You have access to the cluster as a user with the
cluster-admin
role.
Procedure
Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file:
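A representative CatalogSource manifest follows; the catalog name, index image, and publisher are placeholders, and the numbered comments correspond to the callouts below:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog              # 1
  namespace: openshift-marketplace       # 2
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: legacy        # 3
  image: <registry>/<namespace>/redhat-operator-index:v4.15   # 4
  displayName: My Operator Catalog
  publisher: <publisher_name>            # 5
  updateStrategy:
    registryPoll:                        # 6
      interval: 30m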
1. If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object.
2. If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
3. Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
4. Specify your index image. If you specify a tag after the image name, for example :v4.15, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
5. Specify your name or an organization name publishing the catalog.
6. Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the
CatalogSource
object:oc apply -f catalogSource.yaml
$ oc apply -f catalogSource.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verify the following resources are created successfully.
Check the pods:
oc get pods -n openshift-marketplace
$ oc get pods -n openshift-marketplace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h
NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the catalog source:
oc get catalogsource -n openshift-marketplace
$ oc get catalogsource -n openshift-marketplace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s
NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the package manifest:
oc get packagemanifest -n openshift-marketplace
$ oc get packagemanifest -n openshift-marketplace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME CATALOG AGE jaeger-product My Operator Catalog 93s
NAME CATALOG AGE jaeger-product My Operator Catalog 93s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
11.5. About Operator installation with OperatorHub Copy linkLink copied to clipboard!
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
11.5.1. Installing from OperatorHub by using the web console Copy linkLink copied to clipboard!
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Prerequisites
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type
jaeger
to find the Jaeger Operator.You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
NoteChoosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page, configure your Operator installation:
If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.
NoteThe version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel.
Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces.
Confirm the installation mode for the Operator:
-
All namespaces on the cluster (default) installs the Operator in the default
openshift-operators
namespace to watch and be made available to all namespaces in the cluster. This option is not always available. - A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
For clusters on cloud providers with token authentication enabled:
- If the cluster uses AWS STS (STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role’s ARN, follow the procedure described in Preparing AWS account.
- If the cluster uses Microsoft Entra Workload ID (Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate field.
For Update approval, select either the Automatic or Manual approval strategy.
ImportantIf the web console shows that the cluster uses AWS STS or Microsoft Entra Workload ID, you must set Update approval to Manual.
Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster:
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
Verification
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace.
NoteFor the All namespaces… installation mode, the status resolves to Succeeded in the
openshift-operators
namespace, but the status is Copied if you check in other namespaces.If it does not:
-
Check the logs in any pods in the
openshift-operators
project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
-
Check the logs in any pods in the
When the Operator is installed, the metadata indicates which channel and version are installed.
NoteThe Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context.
11.5.2. Installing from OperatorHub by using the CLI Copy linkLink copied to clipboard!
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc
command to create or update a Subscription
object.
For SingleNamespace
install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup
object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup
and Subscription
objects automatically when choosing SingleNamespace
mode.
Prerequisites
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions. -
You have installed the OpenShift CLI (
oc
).
Procedure
View the list of Operators available to the cluster from OperatorHub:
oc get packagemanifests -n openshift-marketplace
$ oc get packagemanifests -n openshift-marketplace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example 11.1. Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
oc describe packagemanifests <operator_name> -n openshift-marketplace
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example 11.2. Example output
TipYou can print an Operator’s version and channel information in YAML format by running the following command:
oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml
$ oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantIf you do not specify the Operator’s catalog, running the
oc get packagemanifest
andoc describe packagemanifest
commands might return a package from an unexpected catalog if the following conditions are met:- Multiple catalogs are installed in the same namespace.
- The catalogs contain the same Operators or Operators with the same name.
If the Operator you intend to install supports the
AllNamespaces
install mode, and you choose to use this mode, skip this step, because theopenshift-operators
namespace already has an appropriate Operator group in place by default, calledglobal-operators
.If the Operator you intend to install supports the
SingleNamespace
install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps: Important: You can only have one Operator group per namespace. For more information, see "Operator groups".
Create an OperatorGroup object YAML file, for example operatorgroup.yaml, for SingleNamespace install mode:
Example OperatorGroup object for SingleNamespace install mode
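A minimal sketch, with placeholder names; targetNamespaces selects the single namespace that the Operator group covers:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>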
Create the OperatorGroup object by running the following command:
$ oc apply -f operatorgroup.yaml
Create a Subscription object to subscribe a namespace to an Operator:
Create a YAML file for the Subscription object, for example subscription.yaml:
Note: If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version".
Example 11.3. Example Subscription object
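A representative Subscription manifest of this shape follows; the operator, catalog, and config values are illustrative, and the numbered comments map to the callouts below:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators    # 1
spec:
  channel: stable                   # 2
  name: my-operator                 # 3
  source: redhat-operators          # 4
  sourceNamespace: openshift-marketplace   # 5
  config:
    env:                            # 6
    - name: ARGS
      value: "-v=10"
    envFrom:                        # 7
    - secretRef:
        name: license-secret
    volumes:                        # 8
    - name: example-volume
      configMap:
        name: example-configmap
    volumeMounts:                   # 9
    - mountPath: /etc/example
      name: example-volume
    tolerations:                    # 10
    - operator: "Exists"
    resources:                      # 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector:                   # 12
      node-role.kubernetes.io/worker: ""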
1. For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6. The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7. The envFrom parameter defines a list of sources to populate environment variables in the container.
8. The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9. The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10. The tolerations parameter defines a list of tolerations for the pod created by OLM.
11. The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12. The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
Example 11.4. Example
Subscription
object with a specific starting Operator versionCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
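A sketch of such a Subscription, with placeholder operator and version values:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Manual               # 1
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: example-operator.v1.2.3      # 2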
- Set the approval strategy to
Manual
in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. - 2
- Set a specific version of an Operator CSV.
For clusters on cloud providers with token authentication enabled, configure your
Subscription
object by following these steps:Ensure the
Subscription
object is set to manual update approvals:kind: Subscription # ... spec: installPlanApproval: Manual
kind: Subscription # ... spec: installPlanApproval: Manual
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
Include the relevant cloud provider-specific fields in the
Subscription
object’sconfig
section:If the cluster is in AWS STS mode, include the following fields:
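For illustration only: Operators that support AWS STS typically read the role ARN from a ROLEARN environment variable set through the Subscription config, along these lines:

kind: Subscription
# ...
spec:
  installPlanApproval: Manual
  config:
    env:
    - name: ROLEARN          # variable name is the common convention; check the Operator's documentation
      value: "<role_arn>"    # placeholder for the AWS IAM role ARN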
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Include the role ARN details.
If the cluster is in Microsoft Entra Workload ID mode, include the following fields:
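Similarly, a sketch for Workload ID; the CLIENTID, TENANTID, and SUBSCRIPTIONID variable names follow the common convention for Operators that support Microsoft Entra Workload ID:

kind: Subscription
# ...
spec:
  installPlanApproval: Manual
  config:
    env:
    - name: CLIENTID
      value: "<client_id>"          # placeholder
    - name: TENANTID
      value: "<tenant_id>"          # placeholder
    - name: SUBSCRIPTIONID
      value: "<subscription_id>"    # placeholder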
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
<client_id>
- Is the client ID.
<tenant_id>
-
Is the tenant ID.
<subscription_id>
Is the subscription ID.
Create the
Subscription
object by running the following command:oc apply -f subscription.yaml
$ oc apply -f subscription.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
-
If you set the
installPlanApproval
field toManual
, manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update".
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Verification
Check the status of the
Subscription
object for your installed Operator by running the following command:oc describe subscription <subscription_name> -n <namespace>
$ oc describe subscription <subscription_name> -n <namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If you created an Operator group for
SingleNamespace
install mode, check the status of theOperatorGroup
object by running the following command:oc describe operatorgroup <operatorgroup_name> -n <namespace>
$ oc describe operatorgroup <operatorgroup_name> -n <namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 12. Changing the cloud provider credentials configuration Copy linkLink copied to clipboard!
For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider.
To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode.
12.1. Rotating or removing cloud provider credentials Copy linkLink copied to clipboard!
After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.
To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
12.1.1. Rotating cloud provider credentials with the Cloud Credential Operator utility Copy linkLink copied to clipboard!
The Cloud Credential Operator (CCO) utility ccoctl
supports updating secrets for clusters installed on IBM Cloud®.
12.1.1.1. Rotating API keys Copy linkLink copied to clipboard!
You can rotate API keys for your existing service IDs and update the corresponding secrets.
Prerequisites
-
You have configured the
ccoctl
binary. - You have existing service IDs in a live OpenShift Container Platform cluster installed.
Procedure
Use the
ccoctl
utility to rotate your API keys for the service IDs and update the secrets:ccoctl <provider_name> refresh-keys \ --kubeconfig <openshift_kubeconfig_file> \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <name>
$ ccoctl <provider_name> refresh-keys \
1 --kubeconfig <openshift_kubeconfig_file> \
2 --credentials-requests-dir <path_to_credential_requests_directory> \
3 --name <name>
4 Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf your cluster uses Technology Preview features that are enabled by the
TechPreviewNoUpgrade
feature set, you must include the--enable-tech-preview
parameter.
12.1.2. Rotating cloud provider credentials manually Copy linkLink copied to clipboard!
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
- For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
- For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported.
- You have changed the credentials that are used to interface with your cloud provider.
- The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform | Secret name
---|---
AWS | aws-creds
Azure | azure-credentials
GCP | gcp-credentials
RHOSP | openstack-credentials
VMware vSphere | vsphere-creds
- Click the Options menu in the same row as the secret and select Edit Secret.
- Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
- Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.
NoteIf the vSphere CSI Driver Operator is enabled, this step is not required.
To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the
cluster-admin
role and run the following command:oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \ --type=merge
$ oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \ --type=merge
Copy to Clipboard Copied! Toggle word wrap Toggle overflow While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports
Progressing=true
. To view the status, run the following command:oc get co kube-controller-manager
$ oc get co kube-controller-manager
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual
CredentialsRequest
objects.-
Log in to the OpenShift Container Platform CLI as a user with the
cluster-admin
role. Get the names and namespaces of all referenced component secrets:
oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'
$ oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where
<provider_spec>
is the corresponding value for your cloud provider:-
AWS:
AWSProviderSpec
-
GCP:
GCPProviderSpec
Partial example output for AWS
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
AWS:
Delete each of the referenced component secrets:
oc delete secret <secret_name> \ -n <secret_namespace>
$ oc delete secret <secret_name> \
1 -n <secret_namespace>
2 Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example deletion of an AWS secret
oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
$ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.
-
Log in to the OpenShift Container Platform CLI as a user with the
Verification
To verify that the credentials have changed:
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
- Verify that the contents of the Value field or fields have changed.
12.1.3. Removing cloud provider credentials Copy linkLink copied to clipboard!
For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system
namespace. The CCO uses the admin
credential to process the CredentialsRequest
objects in the cluster and create users for components with limited permissions.
After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system
namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest
custom resources, such as minor cluster version updates.
Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked.
Prerequisites
- Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform | Secret name
---|---
AWS | aws-creds
GCP | gcp-credentials
- Click the Options menu in the same row as the secret and select Delete Secret.
12.2. Enabling token-based authentication Copy linkLink copied to clipboard!
After installing a Microsoft Azure OpenShift Container Platform cluster, you can enable Microsoft Entra Workload ID to use short-term credentials.
12.2.1. Configuring the Cloud Credential Operator utility Copy linkLink copied to clipboard!
To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility (ccoctl
) binary.
The ccoctl
utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})
$ RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteEnsure that the architecture of the
$RELEASE_IMAGE
matches the architecture of the environment in which you will use theccoctl
tool.Extract the
ccoctl
binary from the CCO container image within the OpenShift Container Platform release image by running the following command:oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Change the permissions to make
ccoctl
executable by running the following command:chmod 775 ccoctl
$ chmod 775 ccoctl
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
To verify that
ccoctl
is ready to use, display the help file. Use a relative file name when you run the command, for example:./ccoctl.rhel9
$ ./ccoctl.rhel9
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
12.2.2. Enabling Microsoft Entra Workload ID on an existing cluster Copy linkLink copied to clipboard!
If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster.
The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
- After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails.
Prerequisites
- You have installed an OpenShift Container Platform cluster on Microsoft Azure.
-
You have access to the cluster using an account with
cluster-admin
permissions. -
You have installed the OpenShift CLI (
oc
). -
You have extracted and prepared the Cloud Credential Operator utility (
ccoctl
) binary. -
You have access to your Azure account by using the Azure CLI (
az
).
Procedure
-
Create an output directory for the manifests that the
ccoctl
utility generates. This procedure uses./output_dir
as an example. Extract the service account public signing key for the cluster to the output directory by running the following command:
oc get configmap \ --namespace openshift-kube-apiserver bound-sa-token-signing-certs \ --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public
$ oc get configmap \ --namespace openshift-kube-apiserver bound-sa-token-signing-certs \ --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This procedure uses a file named
serviceaccount-signer.public
as an example.
Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The value of the
name
parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the--oidc-resource-group-name
argument with the existing group name as its value. - 2
- Specify the region of the existing cluster.
- 3
- Specify the subscription ID of the existing cluster.
- 4
- Specify the file that contains the service account public signing key for the cluster.
Verify that the configuration file for the Azure pod identity webhook was created by running the following command:
ll ./output_dir/manifests
$ ll ./output_dir/manifests
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml
total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml
1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The file
azure-ad-pod-identity-webhook-config.yaml
contains the Azure pod identity webhook configuration.
Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command:

$ OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print $2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`
Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command:

$ oc patch authentication cluster \
    --type=merge \
    -p "{\"spec\":{\"serviceAccountIssuer\":\"${OIDC_ISSUER_URL}\"}}"

Monitor the configuration update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Restart all of the pods in the cluster by running the following command:
$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted
Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command:

$ oc patch cloudcredential cluster \
    --type=merge \
    --patch '{"spec":{"credentialsMode":"Manual"}}'

Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

$ oc adm release extract \
    --credentials-requests \
    --included \
    --to <path_to_directory_for_credentials_requests> \
    --registry-config ~/.pull-secret

Note: This command might take a few moments to run.
Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command:

$ AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`

Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command:

Note: This command does not show all available options. For a complete list of options, including those that might be necessary for your specific use case, run the ccoctl azure create-managed-identities --help command.
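As an illustration only, the invocation might look like the following sketch. The flag set shown here is an assumption based on the values gathered in the previous steps, and the DNS zone resource group name is a placeholder; confirm the options against the --help output:

$ ccoctl azure create-managed-identities \
    --name <azure_infra_name> \
    --output-dir ./output_dir \
    --region <azure_region> \
    --subscription-id <azure_subscription_id> \
    --credentials-requests-dir <path_to_directory_for_credentials_requests> \
    --issuer-url "${OIDC_ISSUER_URL}" \
    --dnszone-resource-group-name <azure_dns_zone_resource_group_name> \
    --installation-resource-group-name "${AZURE_INSTALL_RG}"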
Apply the Azure pod identity webhook configuration for Workload ID by running the following command:
$ oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml

Apply the secrets generated by the ccoctl utility by running the following command:

$ find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {}

This process might take several minutes.
Restart all of the pods in the cluster by running the following command:
$ oc adm reboot-machine-config-pool mcp/worker mcp/master

Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

Monitor the restart and update process by running the following command:

$ oc adm wait-for-node-reboot nodes --all

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All nodes rebooted

Monitor the configuration update progress by running the following command:

$ oc adm wait-for-stable-cluster

This process might take 15 minutes or longer. The following output indicates that the process is complete:

All clusteroperators are stable

Optional: Remove the Azure root credentials secret by running the following command:

$ oc delete secret -n kube-system azure-credentials

12.2.3. Verifying that a cluster uses short-term credentials
You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster.
Prerequisites
- You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility (ccoctl) to implement short-term credentials.
- You installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
Procedure
Verify that the CCO is configured to operate in manual mode by running the following command:
$ oc get cloudcredentials cluster \
    -o=jsonpath={.spec.credentialsMode}

The following output confirms that the CCO is operating in manual mode:

Example output

Manual

Verify that the cluster does not have root credentials by running the following command:

$ oc get secrets \
    -n kube-system <secret_name>

where <secret_name> is the name of the root secret for your cloud provider.

Platform                        Secret name
Amazon Web Services (AWS)       aws-creds
Microsoft Azure                 azure-credentials
Google Cloud Platform (GCP)     gcp-credentials

An error confirms that the root secret is not present on the cluster.
Example output for an AWS cluster
Error from server (NotFound): secrets "aws-creds" not found

Verify that the components are using short-term security credentials for individual components by running the following command:

$ oc get authentication cluster \
    -o jsonpath \
    --template='{ .spec.serviceAccountIssuer }'

This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster.

Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command:

$ oc get secrets \
    -n openshift-image-registry installer-cloud-credentials \
    -o jsonpath='{.data}'

An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID.

Azure clusters: Verify that the pod identity webhook is running by running the following command:
$ oc get pods \
    -n openshift-cloud-credential-operator

Example output

NAME                                         READY   STATUS    RESTARTS   AGE
cloud-credential-operator-59cf744f78-r8pbq   2/2     Running   2          71m
pod-identity-webhook-548f977b4c-859lz        1/1     Running   1          70m

Chapter 13. Configuring alert notifications
In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems.
13.1. Sending notifications to external systems
In OpenShift Container Platform 4.15, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types:
- PagerDuty
- Webhook
- Slack
- Microsoft Teams
Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review.
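For illustration only, a route and receiver pair in an Alertmanager configuration might look like the following sketch; the receiver names and the PagerDuty integration key are placeholder values, and the actual configuration is managed through the web console or the Alertmanager configuration secret:

route:
  receiver: default
  routes:
  - match:
      severity: critical
    receiver: critical-pager      # critical alerts are paged to a response team
receivers:
- name: default
- name: critical-pager
  pagerduty_configs:
  - service_key: <pagerduty_integration_key>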
Checking that alerting is operational by using the watchdog alert
OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider.
Chapter 14. Converting a connected cluster to a disconnected cluster
There might be some scenarios where you need to convert your OpenShift Container Platform cluster from a connected cluster to a disconnected cluster.
A disconnected cluster, also known as a restricted cluster, does not have an active connection to the internet. As such, you must mirror the contents of your registries and installation media. You can create this mirror registry on a host that can access both the internet and your closed network, or copy images to a device that you can move across network boundaries.
This topic describes the general process for converting an existing, connected cluster into a disconnected cluster.
14.1. About the mirror registry
You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OpenShift Container Platform subscriptions.
You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.
When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
For mirrored registries, to view the source of pulled images, you must review the Trying to access
log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images
command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
Red Hat does not test third party registries with OpenShift Container Platform.
14.2. Prerequisites
- The oc client is installed.
- A running cluster.
- An installed mirror registry, which is a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, JFrog Artifactory, Sonatype Nexus Repository, or Harbor.
  If you have a subscription to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator.
- The mirror repository must be configured to share images. For example, a Red Hat Quay repository requires Organizations in order to share images.
- Access to the internet to obtain the necessary container images.
14.3. Preparing the cluster for mirroring
Before disconnecting your cluster, you must mirror, or copy, the images to a mirror registry that is reachable by every node in your disconnected cluster. In order to mirror the images, you must prepare your cluster by:
- Adding the mirror registry certificates to the list of trusted CAs on your host.
- Creating a .dockerconfigjson file that contains your image pull secret, which is from the cloud.openshift.com token.
Procedure
Configuring credentials that allow image mirroring:
Add the CA certificate for the mirror registry, in the simple PEM or DER file formats, to the list of trusted CAs. For example:
$ cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/

where:

</path/to/cert.crt>
- Specifies the path to the certificate on your local file system.

Update the CA trust. For example, in Linux:

$ update-ca-trust
Extract the .dockerconfigjson file from the global pull secret:

$ oc extract secret/pull-secret -n openshift-config --confirm --to=.

Example output

.dockerconfigjson

Edit the .dockerconfigjson file to add your mirror registry and authentication credentials and save it as a new file:

{"auths":{"<local_registry>":{"auth":"<credentials>","email":"you@example.com"},"<registry>:<port>/<namespace>/":{"auth":"<token>"}}}

where:
<local_registry>
- Specifies the registry domain name, and optionally the port, that your mirror registry uses to serve content.
auth
- Specifies the base64-encoded user name and password for your mirror registry.
<registry>:<port>/<namespace>
- Specifies the mirror registry details.
<token>
- Specifies the base64-encoded username:password for your mirror registry.

For example:
{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==","email":"user@example.com"},
$ {"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==","email":"user@example.com"}, "quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==","email":"user@example.com"}, "registry.connect.redhat.com"{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==","email":"user@example.com"}, "registry.redhat.io":{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==","email":"user@example.com"}, "registry.svc.ci.openshift.org":{"auth":"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV"},"my-registry:5000/my-namespace/":{"auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
14.4. Mirroring the images Copy linkLink copied to clipboard!
After the cluster is properly configured, you can mirror the images from your external repositories to the mirror repository.
Procedure
Mirror the Operator Lifecycle Manager (OLM) images:
$ oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>

where:
product-version
- Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.

mirror_registry
- Specifies the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry.

reg_creds
- Specifies the location of your modified .dockerconfigjson file.
For example:
$ oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'

Mirror the content for any other Red Hat-provided Operator:
$ oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>

where:
index_image
- Specifies the index image for the catalog that you want to mirror.
mirror_registry
- Specifies the FQDN for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry.

reg_creds
- Optional: Specifies the location of your registry credentials file, if required.
For example:
$ oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'

Mirror the OpenShift Container Platform image repository:
$ oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>

where:
product-version
- Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.15-x86_64.

architecture
- Specifies the type of architecture for your server, such as x86_64.

local_registry
- Specifies the registry domain name for your mirror repository.

local_repository
- Specifies the name of the repository to create in your registry, such as ocp4/openshift4.
For example:
$ oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64

Mirror any other registries, as needed:
$ oc image mirror <online_registry>/my/image:latest <mirror_registry>
Additional information
- For more information about mirroring Operator catalogs, see Mirroring an Operator catalog.
- For more information about the oc adm catalog mirror command, see the OpenShift CLI administrator command reference.

14.5. Configuring the cluster for the mirror registry
After creating and mirroring the images to the mirror registry, you must modify your cluster so that pods can pull images from the mirror registry.
You must:
- Add the mirror registry credentials to the global pull secret.
- Add the mirror registry server certificate to the cluster.
- Create an ImageContentSourcePolicy custom resource (ICSP), which associates the mirror registry with the source registry.

Add the mirror registry credentials to the cluster global pull secret:

$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>

where <pull_secret_location> specifies the path to the new pull secret file.
For example:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson

Add the CA-signed mirror registry server certificate to the nodes in the cluster:
Create a config map that includes the server certificate for the mirror registry:

$ oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=$path/ca.crt -n openshift-config

For example:

$ oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config

Use the config map to update the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster:

$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"<config_map_name>"}}}' --type=merge

For example:

$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Create an ICSP to redirect container pull requests from the online registries to the mirror registry:
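A minimal sketch of such a manifest, reusing the mirror.registry.com:443 mirror and the mirror-ocp name from the examples in this chapter (the source repositories shown are assumptions for your own cluster), might look like this:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-ocp
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.registry.com:443/ocp/release
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - mirror.registry.com:443/ocp/release
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev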
Save the ImageContentSourcePolicy custom resource to a file, for example registryrepomirror.yaml, and then create the ICSP object:
$ oc create -f registryrepomirror.yaml

Example output

imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created

OpenShift Container Platform applies the changes to this CR to all nodes in the cluster.
Verify that the credentials, CA, and ICSP for the mirror registry were added:
Log into a node:
$ oc debug node/<node_name>

Set /host as the root directory within the debug shell:

sh-4.4# chroot /host

Check the config.json file for the credentials:

sh-4.4# cat /var/lib/kubelet/config.json

Example output

{"auths":{"brew.registry.redhat.io":{"auth":"xx=="},"brewregistry.stage.redhat.io":{"auth":"xxx=="},"mirror.registry.com:443":{"auth":"xx="}}}

Ensure that the mirror registry and credentials are present.
Change to the certs.d directory:

sh-4.4# cd /etc/docker/certs.d/

List the certificates in the certs.d directory:

sh-4.4# ls

Example output

image-registry.openshift-image-registry.svc.cluster.local:5000
image-registry.openshift-image-registry.svc:5000
mirror.registry.com:443

Ensure that the mirror registry is in the list.
Check that the ICSP added the mirror registry to the registries.conf file:

sh-4.4# cat /etc/containers/registries.conf
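As an illustration, with the ImageContentSourcePolicy from this chapter in place, the file contains entries similar to the following sketch; the exact contents vary by cluster and the repository names shown are assumptions:

unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  prefix = ""
  location = "quay.io/openshift-release-dev/ocp-release"
  mirror-by-digest-only = true

  [[registry.mirror]]
    location = "mirror.registry.com:443/ocp/release"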
The registry.mirror parameters indicate that the mirror registry is searched before the original registry.

Exit the node:

sh-4.4# exit

14.6. Ensure applications continue to work

Before disconnecting the cluster from the network, ensure that your cluster and all of your applications are working as expected.
Procedure
Use the following commands to check the status of your cluster:
Ensure your pods are running:
$ oc get pods --all-namespaces

Ensure your nodes are in the READY status:

$ oc get nodes

14.7. Disconnect the cluster from the network
After mirroring all the required repositories and configuring your cluster to work as a disconnected cluster, you can disconnect the cluster from the network.
The Insights Operator is degraded when the cluster loses its Internet connection. You can avoid this problem by temporarily disabling the Insights Operator until you can restore it.
14.8. Restoring a degraded Insights Operator
Disconnecting the cluster from the network necessarily causes the cluster to lose the Internet connection. The Insights Operator becomes degraded because it requires access to Red Hat Insights.
This topic describes how to recover from a degraded Insights Operator.
Procedure
Edit your .dockerconfigjson file to remove the cloud.openshift.com entry, for example:

"cloud.openshift.com":{"auth":"<hash>","email":"user@example.com"}

Save the file.
Update the cluster secret with the edited .dockerconfigjson file:

$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson

Verify that the Insights Operator is no longer degraded:

$ oc get co insights

Example output

NAME       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
insights   4.5.41    True        False         False      3d

14.9. Restoring the network
If you want to reconnect a disconnected cluster and pull images from online registries, delete the cluster’s ImageContentSourcePolicy (ICSP) objects. Without the ICSP, pull requests to external registries are no longer redirected to the mirror registry.
Procedure
View the ICSP objects in your cluster:
$ oc get imagecontentsourcepolicy

Example output

NAME           AGE
mirror-ocp     6d20h
ocp4-index-0   6d18h
qe45-index-0   6d15h

Delete all the ICSP objects you created when disconnecting your cluster:
$ oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>

For example:

$ oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0

Example output

imagecontentsourcepolicy.operator.openshift.io "mirror-ocp" deleted
imagecontentsourcepolicy.operator.openshift.io "ocp4-index-0" deleted
imagecontentsourcepolicy.operator.openshift.io "qe45-index-0" deleted
Wait for all the nodes to restart and return to the READY status and verify that the registries.conf file is pointing to the original registries and not the mirror registries:

Log into a node:
$ oc debug node/<node_name>

Set /host as the root directory within the debug shell:

sh-4.4# chroot /host

Examine the registries.conf file:

sh-4.4# cat /etc/containers/registries.conf

Example output

unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

The registry and registry.mirror entries created by the ICSPs you deleted are removed.

Chapter 15. Configuring additional devices in an IBM Z or IBM LinuxONE environment
After installing OpenShift Container Platform, you can configure additional devices for your cluster in an IBM Z® or IBM® LinuxONE environment, which is installed with z/VM. The following devices can be configured:
- Fibre Channel Protocol (FCP) host
- FCP LUN
- DASD
- qeth
You can configure devices by adding udev rules using the Machine Config Operator (MCO) or you can configure devices manually.
The procedures described here apply only to z/VM installations. If you have installed your cluster with RHEL KVM on IBM Z® or IBM® LinuxONE infrastructure, no additional configuration is needed inside the KVM guest after the devices are added to the KVM guests. However, in both z/VM and RHEL KVM environments, you must apply the next steps to configure the Local Storage Operator and the Kubernetes NMState Operator.

15.1. Configuring additional devices using the Machine Config Operator (MCO)
Tasks in this section describe how to use features of the Machine Config Operator (MCO) to configure additional devices in an IBM Z® or IBM® LinuxONE environment. Configuring devices with the MCO is persistent but only allows specific configurations for compute nodes. MCO does not allow control plane nodes to have different configurations.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- The device must be available to the z/VM guest.
- The device is already attached.
- The device is not included in the cio_ignore list, which can be set in the kernel parameters.
- You have created a MachineConfig object file; a sketch of such a file follows this list.
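A minimal sketch of such a MachineConfig object, assuming a worker role, Ignition version 3.2.0, and a placeholder Base64-encoded udev rule, might look like the following:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-device-configuration
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          # Base64-encoded udev rule from one of the following sections
          source: data:text/plain;base64,<base64_encoded_udev_rule>
        mode: 420
        path: /etc/udev/rules.d/<udev_rule_file_name>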
15.1.1. Configuring a Fibre Channel Protocol (FCP) host
The following is an example of how to configure an FCP host adapter with N_Port Identifier Virtualization (NPIV) by adding a udev rule.
Procedure
Take the following sample udev rule 441-zfcp-host-0.0.8000.rules:

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file/

Copy the following MCO sample profile into a YAML file:

15.1.2. Configuring an FCP LUN
The following is an example of how to configure an FCP LUN by adding a udev rule. You can add new FCP LUNs or add additional paths to LUNs that are already configured with multipathing.
Procedure
Take the following sample udev rule 41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules:

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file/

Copy the following MCO sample profile into a YAML file:

15.1.3. Configuring DASD
The following is an example of how to configure a DASD device by adding a udev rule.
Procedure
Take the following sample udev rule 41-dasd-eckd-0.0.4444.rules:

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file/

Copy the following MCO sample profile into a YAML file:

15.1.4. Configuring qeth
The following is an example of how to configure a qeth device by adding a udev rule.
Procedure
Take the following sample udev rule 41-qeth-0.0.1000.rules:

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file/

Copy the following MCO sample profile into a YAML file:

15.2. Configuring additional devices manually
Tasks in this section describe how to manually configure additional devices in an IBM Z® or IBM® LinuxONE environment. This configuration method is persistent over node restarts, but it is not native to OpenShift Container Platform, and you must redo the steps if you replace the node.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- The device must be available to the node.
- In a z/VM environment, the device must be attached to the z/VM guest.
Procedure
Connect to the node via SSH by running the following command:
$ ssh <user>@<node_ip_address>

You can also start a debug session to the node by running the following command:

$ oc debug node/<node_name>

To enable the devices with the chzdev command, enter the following command:

$ sudo chzdev -e <device>

15.3. RoCE network cards
RoCE (RDMA over Converged Ethernet) network cards do not need to be enabled and their interfaces can be configured with the Kubernetes NMState Operator whenever they are available in the node. For example, RoCE network cards are available if they are attached in a z/VM environment or passed through in a RHEL KVM environment.
15.4. Enabling multipathing for FCP LUNs
Tasks in this section describe how to enable multipathing for FCP LUNs in an IBM Z® or IBM® LinuxONE environment. This configuration method is persistent over node restarts, but it is not native to OpenShift Container Platform, and you must redo the steps if you replace the node.
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- You have configured multiple paths to a LUN with either method explained above.
Procedure
Connect to the node via SSH by running the following command:
$ ssh <user>@<node_ip_address>

You can also start a debug session to the node by running the following command:

$ oc debug node/<node_name>

To enable multipathing, run the following command:

$ sudo /sbin/mpathconf --enable

To start the multipathd daemon, run the following command:

$ sudo multipath

Optional: To format your multipath device with fdisk, run the following command:

$ sudo fdisk /dev/mapper/mpatha
Verification
To verify that the devices have been grouped, run the following command:
$ sudo multipath -ll

Chapter 16. RHCOS image layering
Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to easily extend the functionality of your base RHCOS image by layering additional images onto the base image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster.
You create a custom layered image by using a Containerfile and applying it to nodes by using a MachineConfig
object. The Machine Config Operator overrides the base RHCOS image, as specified by the osImageURL
value in the associated machine config, and boots the new image. You can remove the custom layered image by deleting the machine config. The MCO then reboots the nodes back to the base RHCOS image.
With RHCOS image layering, you can install RPMs into your base image, and your custom content will be booted alongside RHCOS. The Machine Config Operator (MCO) can roll out these custom layered images and monitor these custom containers in the same way it does for the default RHCOS image. RHCOS image layering gives you greater flexibility in how you manage your RHCOS nodes.
Installing realtime kernel and extensions RPMs as custom layered content is not recommended. This is because these RPMs can conflict with RPMs installed by using a machine config. If there is a conflict, the MCO enters a degraded
state when it tries to install the machine config RPM. You need to remove the conflicting extension from your machine config before proceeding.
As soon as you apply the custom layered image to your cluster, you effectively take ownership of your custom layered images and those nodes. While Red Hat remains responsible for maintaining and updating the base RHCOS image on standard nodes, you are responsible for maintaining and updating images on nodes that use a custom layered image. You assume the responsibility for the package you applied with the custom layered image and any issues that might arise with the package.
To apply a custom layered image, you create a Containerfile that references an OpenShift Container Platform image and the RPM that you want to apply. You then push the resulting custom layered image to an image registry. In a non-production OpenShift Container Platform cluster, create a MachineConfig
object for the targeted node pool that points to the new image.
Use the same base RHCOS image installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos
command to obtain the base image used in your cluster.
RHCOS image layering allows you to use the following types of images to create custom layered images:
OpenShift Container Platform Hotfixes. You can work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your RHCOS image. In some instances, you might want a bug fix or enhancement before it is included in an official OpenShift Container Platform release. RHCOS image layering allows you to easily add the Hotfix before it is officially released and remove the Hotfix when the underlying RHCOS image incorporates the fix.
ImportantSome Hotfixes require a Red Hat Support Exception and are outside of the normal scope of OpenShift Container Platform support coverage or life cycle policies.
If you need a Hotfix, it is provided to you based on the Red Hat Hotfix policy. Apply it on top of the base image and test the new custom layered image in a non-production environment. When you are satisfied that the custom layered image is safe to use in production, you can roll it out on your own schedule to specific node pools. You can roll back the custom layered image and return to using the default RHCOS at any time.
Example Containerfile to apply a Hotfix
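A minimal sketch, assuming a placeholder release image digest and a Hotfix RPM supplied by Red Hat support, might look like this:

# Sketch only: the release digest and the Hotfix package name are placeholders.
FROM quay.io/openshift-release-dev/ocp-release@sha256:<release_digest>

COPY <hotfix_package>.rpm /tmp/
RUN rpm-ostree install /tmp/<hotfix_package>.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit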
RHEL packages. You can download Red Hat Enterprise Linux (RHEL) packages from the Red Hat Customer Portal, such as chrony, firewalld, and iputils.
Example Containerfile to apply the firewalld utility
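A minimal sketch, assuming a placeholder release image digest, might look like this:

# Sketch only: replace the digest with the base image used by your cluster.
FROM quay.io/openshift-release-dev/ocp-release@sha256:<release_digest>

RUN rpm-ostree install firewalld && \
    rpm-ostree cleanup -m && \
    ostree container commit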
Example Containerfile to apply the libreswan utility
Because libreswan requires additional RHEL packages, the image must be built on an entitled RHEL host.
Third-party packages. You can download and install RPMs from third-party organizations, such as the following types of packages:
- Bleeding edge drivers and kernel enhancements to improve performance or add capabilities.
- Forensic client tools to investigate possible and actual break-ins.
- Security agents.
- Inventory agents that provide a coherent view of the entire cluster.
- SSH Key management packages.
Example Containerfile to apply a third-party package from EPEL
Example Containerfile to apply a third-party package that has RHEL dependencies
This Containerfile installs the Linux fish program. Because fish requires additional RHEL packages, the image must be built on an entitled RHEL host.
After you create the machine config, the Machine Config Operator (MCO) performs the following steps:
- Renders a new machine config for the specified pool or pools.
- Performs cordon and drain operations on the nodes in the pool or pools.
- Writes the rest of the machine config parameters onto the nodes.
- Applies the custom layered image to the node.
- Reboots the node using the new image.
It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster.
16.1. Applying a RHCOS custom layered image
You can easily configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering on the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the new custom layered image, overriding the base Red Hat Enterprise Linux CoreOS (RHCOS) image.
To apply a custom layered image to your cluster, you must have the custom layered image in a repository that your cluster can access. Then, create a MachineConfig
object that points to the custom layered image. You need a separate MachineConfig
object for each machine config pool that you want to configure.
When you configure a custom layered image, OpenShift Container Platform no longer automatically updates any node that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. If you roll back the custom layer, OpenShift Container Platform will again automatically update the node. See the Additional resources section that follows for important information about updating nodes that use a custom layered image.
Prerequisites
You must create a custom layered image that is based on an OpenShift Container Platform image digest, not a tag.
Note: You should use the same base RHCOS image that is installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image being used in your cluster.

For example, the following Containerfile creates a custom layered image from an OpenShift Container Platform 4.15 image and overrides the kernel package with one from CentOS 9 Stream:
Example Containerfile for a custom layer image
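A minimal sketch of such a Containerfile, assuming a placeholder release image digest and placeholder CentOS Stream kernel package URLs, follows:

# Sketch only: the release digest and the kernel package versions are placeholders.
FROM quay.io/openshift-release-dev/ocp-release@sha256:<release_digest>

# Replace the RHCOS kernel packages with the corresponding CentOS 9 Stream packages.
RUN rpm-ostree override replace \
      https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-<version>.el9.x86_64.rpm \
      https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-core-<version>.el9.x86_64.rpm \
      https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-modules-<version>.el9.x86_64.rpm \
      https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-modules-core-<version>.el9.x86_64.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit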
Note: Instructions on how to create a Containerfile are beyond the scope of this documentation.
- Because the process for building a custom layered image is performed outside of the cluster, you must use the --authfile /path/to/pull-secret option with Podman or Buildah. Alternatively, to have the pull secret read by these tools automatically, you can add it to one of the default file locations: ~/.docker/config.json, $XDG_RUNTIME_DIR/containers/auth.json, or ~/.dockercfg. Refer to the containers-auth.json man page for more information.
- You must push the custom layered image to a repository that your cluster can access.
Procedure
Create a machine config file.
Create a YAML file similar to the following:
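A minimal sketch of such a machine config, assuming a worker pool and a placeholder image digest (the name os-layer-custom matches the removal step later in this chapter), might look like the following:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker machine config pool
  name: os-layer-custom
spec:
  osImageURL: quay.io/my-registry/custom-image@sha256:<digest>   # custom layered image, referenced by digest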
Create the MachineConfig object:

$ oc create -f <file_name>.yaml

Important: It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster.
Verification
You can verify that the custom layered image is applied by performing any of the following checks:
Check that the worker machine config pool has rolled out with the new machine config:
Check that the new machine config is created:
$ oc get mc

Check that the osImageURL value in the new machine config points to the expected image:

$ oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873

Check that the associated machine config pool is updated with the new machine config:

$ oc get mcp

Sample output

NAME     CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-15961f1da260f7be141006404d17d39b   True      False      False      3              3                   3                     0                      39m
worker   rendered-worker-5de4837625b1cbc237de6b22bc0bc873   True      False      False      3              0                   0                     0                      39m

When the UPDATING field is True, the machine config pool is updating with the new machine config. In this case, you will not see the new machine config listed in the output. When the field becomes False, the worker machine config pool has rolled out to the new machine config.
Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:
$ oc get nodes
When the node is back in the Ready state, check that the node is using the custom layered image:

Open an oc debug session to the node. For example:

$ oc debug node/ip-10-0-155-125.us-west-1.compute.internal

Set /host as the root directory within the debug shell:

sh-4.4# chroot /host

Run the rpm-ostree status command to view that the custom layered image is in use:

sh-4.4# sudo rpm-ostree status

Example output

State: idle
Deployments:
* ostree-unverified-registry:quay.io/my-registry/...
    Digest: sha256:...
Additional resources
16.2. Removing a RHCOS custom layered image
You can easily revert Red Hat Enterprise Linux CoreOS (RHCOS) image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base Red Hat Enterprise Linux CoreOS (RHCOS) image, overriding the custom layered image.
To remove a Red Hat Enterprise Linux CoreOS (RHCOS) custom layered image from your cluster, you need to delete the machine config that applied the image.
Procedure
Delete the machine config that applied the custom layered image.
$ oc delete mc os-layer-custom

After deleting the machine config, the nodes reboot.
Verification
You can verify that the custom layered image is removed by performing any of the following checks:
Check that the worker machine config pool is updating with the previous machine config:
$ oc get mcp

Sample output

NAME     CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-6faecdfa1b25c114a58cf178fbaa45e2   True      False      False      3              3                   3                     0                      39m
worker   rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b   False     True       False      3              0                   0                     0                      39m

When the UPDATING field is True, the machine config pool is updating with the previous machine config. When the field becomes False, the worker machine config pool has rolled out to the previous machine config.
Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:
$ oc get nodes

When the node is back in the Ready state, check that the node is using the base image:

Open an oc debug session to the node. For example:

$ oc debug node/ip-10-0-155-125.us-west-1.compute.internal

Set /host as the root directory within the debug shell:

sh-4.4# chroot /host

Run the rpm-ostree status command to view that the node is using the base image:

sh-4.4# sudo rpm-ostree status

Example output

State: idle
Deployments:
* ostree-unverified-registry:quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73
    Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73

16.3. Updating with a RHCOS custom layered image

When you configure Red Hat Enterprise Linux CoreOS (RHCOS) image layering, OpenShift Container Platform no longer automatically updates the node pool that uses the custom layered image. You become responsible for manually updating your nodes as appropriate.
To update a node that uses a custom layered image, follow these general steps:
- The cluster automatically upgrades to version x.y.z+1, except for the nodes that use the custom layered image.
- Create a new Containerfile that references the updated OpenShift Container Platform image and the RPM that you had previously applied.
- Create a new machine config that points to the updated custom layered image.
Updating a node with a custom layered image is not required. However, if that node gets too far behind the current OpenShift Container Platform version, you could experience unexpected results.
Chapter 17. AWS Local Zone or Wavelength Zone tasks
After installing OpenShift Container Platform on Amazon Web Services (AWS), you can further configure AWS Local Zones or Wavelength Zones and an edge compute pool.
17.1. Extend existing clusters to use AWS Local Zones or Wavelength Zones
As a post-installation task, you can extend an existing OpenShift Container Platform cluster on Amazon Web Services (AWS) to use AWS Local Zones or Wavelength Zones.
Extending nodes to Local Zones or Wavelength Zones locations comprises the following steps:
- Adjusting the cluster-network maximum transmission unit (MTU).
- Opting in to the AWS Local Zones or Wavelength Zones zone group.
- Creating a subnet in the existing VPC for a Local Zones or Wavelength Zones location.
Important: Before you extend an existing OpenShift Container Platform cluster on AWS to use Local Zones or Wavelength Zones, check that the existing VPC contains available Classless Inter-Domain Routing (CIDR) blocks. These blocks are needed for creating the subnets.
- Creating the machine set manifest, and then creating a node in each Local Zone or Wavelength Zone location.
- Local Zones only: Adding the permission ec2:ModifyAvailabilityZoneGroup to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. An example policy is shown after this list.
- Wavelength Zones only: Adding the permissions ec2:ModifyAvailabilityZoneGroup, ec2:CreateCarrierGateway, and ec2:DeleteCarrierGateway to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. An example policy is shown after this list.
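Minimal sketches of such policies, granting only the permissions named above, might look like the following.

Example of an additional IAM policy for AWS Local Zones deployments (sketch)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:ModifyAvailabilityZoneGroup"],
      "Resource": "*"
    }
  ]
}

Example of an additional IAM policy for AWS Wavelength Zones deployments (sketch)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifyAvailabilityZoneGroup",
        "ec2:CreateCarrierGateway",
        "ec2:DeleteCarrierGateway"
      ],
      "Resource": "*"
    }
  ]
}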
17.1.1. About edge compute pools
Edge compute nodes are tainted compute nodes that run in AWS Local Zones or Wavelength Zones locations.
When deploying a cluster that uses Local Zones or Wavelength Zones, consider the following points:
- Amazon EC2 instances in the Local Zones or Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones.
- The latency is lower between the applications running in AWS Local Zones or Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones or Wavelength Zones and Availability Zones.
- Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zone or Wavelength Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example, OVN-Kubernetes has an overhead of 100 bytes. The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing.
You can access the following resources to learn more about a respective zone type:
- See How Local Zones work in the AWS documentation.
- See How AWS Wavelength works in the AWS documentation.
OpenShift Container Platform 4.12 introduced a new compute pool, edge, that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones and Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS in Local Zones or Wavelength Zones, the default instance type can vary from the traditional compute pool.
The default Elastic Block Store (EBS) volume type for Local Zones or Wavelength Zones locations is gp2, which differs from the non-edge compute pool. The instance type used for each Local Zone or Wavelength Zone in an edge compute pool also might differ from other compute pools, depending on the instance offerings in the zone.
The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones or Wavelength Zones nodes. The new labels are:
- node-role.kubernetes.io/edge=''
- Local Zones only: machine.openshift.io/zone-type=local-zone
- Wavelength Zones only: machine.openshift.io/zone-type=wavelength-zone
- machine.openshift.io/zone-group=$ZONE_GROUP_NAME
By default, the machine sets for the edge compute pool define a NoSchedule taint to prevent other workloads from spreading onto Local Zones or Wavelength Zones instances. Users can run workloads on these nodes only if they define matching tolerations in the pod specification, as in the sketch that follows.
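For example, a pod specification that should run on these nodes needs a matching toleration and, typically, a node selector. A minimal sketch, assuming the node-role.kubernetes.io/edge taint key used elsewhere in this chapter:

  nodeSelector:
    node-role.kubernetes.io/edge: ""
  tolerations:
    - key: node-role.kubernetes.io/edge
      operator: Exists
      effect: NoSchedule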
17.2. Changing the cluster network MTU to support Local Zones or Wavelength Zones
You might need to change the maximum transmission unit (MTU) value for the cluster network so that your cluster infrastructure can support Local Zones or Wavelength Zones subnets.
17.2.1. About the cluster MTU
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU.
You might want to change the MTU of the cluster network for several reasons:
- The MTU detected during cluster installation is not correct for your infrastructure.
- Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance.
17.2.1.1. Service interruption considerations
When you initiate an MTU change on your cluster, the following effects might impact service availability:
- At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart.
- Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change.
17.2.1.2. MTU value selection
When planning your MTU migration, there are two related but distinct MTU values to consider.
- Hardware MTU: This MTU value is set based on the specifics of your network infrastructure.
- Cluster network MTU: This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes. For OpenShift SDN, the overhead is 50 bytes.
If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.
To avoid selecting an MTU value that is not accepted by a node, verify the maximum MTU value (maxmtu) that is accepted by the network interface by using the ip -d link command.
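For example, assuming the primary interface is named eth0, a quick check might look like this:

$ ip -d link show eth0 | grep -o 'maxmtu [0-9]*'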
17.2.1.3. How the migration process works
The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response.
| User-initiated steps | OpenShift Container Platform activity |
|---|---|
| Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.network.from, spec.migration.mtu.network.to, and spec.migration.mtu.machine.to. | Cluster Network Operator (CNO): Confirms that each field is set to a valid value. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster. |
| Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, such as deploying a new NetworkManager connection profile with the MTU change, changing the MTU through a DHCP server setting, or changing the MTU through kernel boot parameters. | N/A |
| Set the mtu value for the network plugin in the CNO configuration and set spec.migration to null. | Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster with the new MTU configuration. |
17.2.1.4. Changing the cluster network MTU
As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.
You cannot roll back an MTU value for nodes during the MTU migration process, but you can roll back the value after the MTU migration process completes.
The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster using an account with cluster-admin permissions.
- You have identified the target MTU for your cluster:
  - The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster.
  - The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster.
- If your nodes are physical machines, ensure that the cluster network and the connected network switches support jumbo frames.
- If your nodes are virtual machines (VMs), ensure that the hypervisor and the connected network switches support jumbo frames.
Procedure
To obtain the current MTU for the cluster network, enter the following command:
$ oc describe network.config cluster
Example output
To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change.
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'
where:
<overlay_from>
- Specifies the current cluster network MTU value.
<overlay_to>
- Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to>. For OVN-Kubernetes, this value must be 100 less than the value of <machine_to>. For OpenShift SDN, this value must be 50 less than the value of <machine_to>.
<machine_to>
- Specifies the MTU for the primary network interface on the underlying host network.
Example that increases the cluster MTU
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }'
As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Note: By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"
Example output
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Verify that the following statements are true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/mtu-migration.sh
Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to>.
To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'
To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin:
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}'
After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Verification
Verify that the node in your cluster uses the MTU that you specified by entering the following command:
$ oc describe network.config cluster
17.2.2. Opting in to AWS Local Zones or Wavelength Zones
If you plan to create subnets in AWS Local Zones or Wavelength Zones, you must opt in to each zone group separately.
Prerequisites
- You have installed the AWS CLI.
- You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster.
- You have attached a permissive IAM policy to a user or role account that opts in to the zone group.
Procedure
List the zones that are available in your AWS Region by running the following command:
Example command for listing available AWS Local Zones in an AWS Region
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones
Example command for listing available AWS Wavelength Zones in an AWS Region
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones
Depending on the AWS Region, the list of available zones might be long. The command returns the following fields:
ZoneName
- The name of the Local Zones or Wavelength Zones.
GroupName
- The group that comprises the zone. To opt in to the Region, save the name.
Status
- The status of the Local Zones or Wavelength Zones group. If the status is not-opted-in, you must opt in the GroupName as described in the next step.
Opt in to the zone group on your AWS account by running the following command:
$ aws ec2 modify-availability-zone-group \
    --group-name "<value_of_GroupName>" \
    --opt-in-status opted-in
- 1
- Replace <value_of_GroupName> with the name of the group of the Local Zones or Wavelength Zones where you want to create subnets.
17.2.3. Creating network requirements in an existing VPC that uses AWS Local Zones or Wavelength Zones
If you want the Machine API to create an Amazon EC2 instance in a remote zone location, you must create a subnet in a Local Zones or Wavelength Zones location. You can use any provisioning tool, such as Ansible or Terraform, to create subnets in the existing Virtual Private Cloud (VPC).
You can configure the CloudFormation template to meet your requirements. The following subsections include steps that use CloudFormation templates to create the network requirements that extend an existing VPC to use AWS Local Zones or Wavelength Zones.
Extending nodes to Local Zones requires that you create the following resources:
- 2 VPC Subnets: public and private. The public subnet associates to the public route table for the regular Availability Zones in the Region. The private subnet associates to the provided route table ID.
Extending nodes to Wavelength Zones requires that you create the following resources:
- 1 VPC Carrier Gateway associated to the provided VPC ID.
- 1 VPC Route Table for Wavelength Zones with a default route entry to VPC Carrier Gateway.
- 2 VPC Subnets: public and private. The public subnet associates to the public route table for an AWS Wavelength Zone. The private subnet associates to the provided route table ID.
Because of the limitations of NAT gateways in Wavelength Zones, the provided CloudFormation templates support only associating the private subnets with the provided route table ID. The route table ID must be attached to a valid NAT gateway in the AWS Region.
17.2.4. Wavelength Zones only: Creating a VPC carrier gateway
To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes.
To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components:
- A carrier gateway that associates to the existing VPC.
- A carrier route table that lists route entries.
- A subnet that associates to the carrier route table.
Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone.
The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location:
- Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network.
- Performs Network Address Translation (NAT) functions, such as translating IP addresses that are public IP addresses stored in a network border group, from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic.
- Authorizes inbound traffic from a carrier network that is located in a specific location.
- Authorizes outbound traffic to a carrier network and the internet.
No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway.
You can use the provided CloudFormation template to create a stack of the following AWS resources:
- One carrier gateway that associates to the VPC ID in the template.
-
One public route table for the Wavelength Zone named as
<ClusterName>-public-carrier
. - Default IPv4 route entry in the new route table that targets the carrier gateway.
- VPC gateway endpoint for an AWS Simple Storage Service (S3).
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
Procedure
- Go to the next section of the documentation, named "CloudFormation template for the VPC Carrier Gateway", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC:
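A minimal sketch of the deployment command, assuming CloudFormation parameter names (VpcId, ClusterName) that match the callouts that follow:

$ aws cloudformation create-stack \
  --stack-name <stack_name> \
  --template-body file://<template>.yaml \
  --parameters \
    ParameterKey=VpcId,ParameterValue="<VpcId>" \
    ParameterKey=ClusterName,ParameterValue="<ClusterName>"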
- 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw. You need the name of this stack if you remove the cluster.
- 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved.
- 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS".
- 4 <ClusterName> is a custom value that prefixes the resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f
Verification
Confirm that the CloudFormation template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays a value for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create resources for your cluster.
PublicRouteTableId
- The ID of the route table in the carrier infrastructure.
17.2.5. Wavelength Zones only: CloudFormation template for the VPC Carrier Gateway
You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure.
Example 17.1. CloudFormation template for VPC Carrier Gateway
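As an illustration, a condensed sketch of such a template, assuming the parameter names VpcId and ClusterName, might look like the following. It creates the carrier gateway, a public carrier route table with a default route, and an S3 gateway endpoint:

AWSTemplateFormatVersion: 2010-09-09
Description: Carrier Gateway for a Wavelength-enabled VPC (condensed sketch).
Parameters:
  VpcId:
    Type: String
  ClusterName:
    Type: String
Resources:
  CarrierGateway:
    Type: "AWS::EC2::CarrierGateway"
    Properties:
      VpcId: !Ref VpcId
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VpcId
      Tags:
        - Key: Name
          Value: !Join ['-', [!Ref ClusterName, "public-carrier"]]
  PublicRoute:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      CarrierGatewayId: !Ref CarrierGateway
  S3Endpoint:
    Type: "AWS::EC2::VPCEndpoint"
    Properties:
      VpcId: !Ref VpcId
      ServiceName: !Join ['', ["com.amazonaws.", !Ref "AWS::Region", ".s3"]]
      RouteTableIds:
        - !Ref PublicRouteTable
Outputs:
  PublicRouteTableId:
    Description: The ID of the route table in the carrier infrastructure.
    Value: !Ref PublicRouteTable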
17.2.6. Creating subnets for AWS edge compute services
Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create a subnet in Local Zones or Wavelength Zones. Complete the following procedure for each Local Zone or Wavelength Zone that you want to deploy compute nodes to.
You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You opted in to the Local Zones or Wavelength Zones group.
Procedure
- Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC:
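A minimal sketch of the deployment command, assuming CloudFormation parameter names that mirror the variables described in the callouts that follow:

$ aws cloudformation create-stack \
  --stack-name <stack_name> \
  --template-body file://<template>.yaml \
  --parameters \
    ParameterKey=VpcId,ParameterValue="${VPC_ID}" \
    ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \
    ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \
    ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \
    ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \
    ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \
    ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}"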
- 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> for Local Zones and cluster-wl-<wavelength_zone_shortname> for Wavelength Zones. You need the name of this stack if you remove the cluster.
- 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved.
- 3 ${VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC.
- 4 ${CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names.
- 5 ${ZONE_NAME} is the name of the Local Zone or Wavelength Zone in which to create the subnets.
- 6 ${ROUTE_TABLE_PUB} is the public route table ID extracted from the CloudFormation template. For Local Zones, the public route table is extracted from the VPC CloudFormation stack. For Wavelength Zones, the value must be extracted from the output of the VPC's carrier gateway CloudFormation stack.
- 7 ${SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr.
- 8 ${ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack.
- 9 ${SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr.
Example output
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f
Verification
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters:
PublicSubnetId
- The IDs of the public subnet created by the CloudFormation stack.
PrivateSubnetId
- The IDs of the private subnet created by the CloudFormation stack.
Ensure that you provide these parameter values to the other CloudFormation templates that you run to create resources for your cluster.
17.2.7. CloudFormation template for the VPC subnet
You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones or Wavelength Zones infrastructure.
Example 17.2. CloudFormation template for VPC subnets
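As an illustration, a condensed sketch of such a template, assuming the parameter names used in the previous procedure, might look like the following. It creates one public and one private subnet in the requested zone and associates each with the supplied route table:

AWSTemplateFormatVersion: 2010-09-09
Description: Public and private subnets for a Local Zone or Wavelength Zone (condensed sketch).
Parameters:
  VpcId:
    Type: String
  ClusterName:
    Type: String
  ZoneName:
    Type: String
  PublicRouteTableId:
    Type: String
  PublicSubnetCidr:
    Type: String
  PrivateRouteTableId:
    Type: String
  PrivateSubnetCidr:
    Type: String
Resources:
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VpcId
      CidrBlock: !Ref PublicSubnetCidr
      AvailabilityZone: !Ref ZoneName
      Tags:
        - Key: Name
          Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]]
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTableId
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VpcId
      CidrBlock: !Ref PrivateSubnetCidr
      AvailabilityZone: !Ref ZoneName
      Tags:
        - Key: Name
          Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]]
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTableId
Outputs:
  PublicSubnetId:
    Value: !Ref PublicSubnet
  PrivateSubnetId:
    Value: !Ref PrivateSubnet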
17.2.8. Creating a machine set manifest for an AWS Local Zones or Wavelength Zones node
After you create subnets in AWS Local Zones or Wavelength Zones, you can create a machine set manifest.
The installation program sets the following labels for the edge machine pools at cluster installation time:
- machine.openshift.io/parent-zone-name: <value_of_ParentZoneName>
- machine.openshift.io/zone-group: <value_of_ZoneGroup>
- machine.openshift.io/zone-type: <value_of_ZoneType>
The following procedure details how you can create a machine set configuration that matches the edge compute pool configuration.
Prerequisites
- You have created subnets in AWS Local Zones or Wavelength Zones.
Procedure
Manually preserve edge machine pool labels when creating the machine set manifest by gathering the values from the AWS API. To complete this action, enter the following command in your command-line interface (CLI):
Example output for Local Zone
us-east-1-nyc-1a
Example output for Wavelength Zone
us-east-1-wl1
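One way to gather the zone values shown in the example outputs is to reuse the describe-availability-zones query from the opt-in procedure earlier in this chapter. The exact query fields are an assumption; a minimal sketch for Local Zones looks like this:

$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --query 'AvailabilityZones[].[ZoneName, GroupName, ZoneType, ParentZoneName]' \
    --output text \
    --filters Name=zone-type,Values=local-zone \
    --all-availability-zones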
17.2.8.1. Sample YAML for a compute machine set custom resource on AWS
This sample YAML defines a compute machine set that runs in the us-east-1-nyc-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/edge: "".
If you want to reference the sample YAML file in the context of Wavelength Zones, ensure that you replace the AWS Region and zone information with supported Wavelength Zone values.
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <edge> is the node label to add.
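As an illustration only, a condensed sketch of such an edge compute machine set follows. Placeholder values such as <ami_id>, <public_subnet_id>, and the instance type are assumptions, and the sketch does not reproduce the numbered callouts listed below.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-edge-us-east-1-nyc-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-us-east-1-nyc-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: edge
        machine.openshift.io/cluster-api-machine-type: edge
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-us-east-1-nyc-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/edge: ""
      taints:
        - key: node-role.kubernetes.io/edge
          effect: NoSchedule
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: <ami_id>
          instanceType: m5.2xlarge
          blockDevices:
            - ebs:
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile
          placement:
            availabilityZone: us-east-1-nyc-1a
            region: us-east-1
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg
          subnet:
            id: <public_subnet_id>
          publicIp: true
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id>
              value: owned
          userDataSecret:
            name: worker-user-data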
- 1 3 4 10 13 15
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 7
- Specify the infrastructure ID, edge role node label, and zone name.
- 5 6 8
- Specify the edge role node label.
- 9
- Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region.
$ oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone>
- 16 17
- Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.
Note: Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file.
- 11
- Specify the zone name, for example, us-east-1-nyc-1a.
- 12
- Specify the region, for example, us-east-1.
- 14
- The ID of the public subnet that you created in AWS Local Zones or Wavelength Zones. You created this public subnet ID when you finished the procedure for "Creating a subnet in an AWS zone".
- 18
- Specify a taint to prevent user workloads from being scheduled on edge nodes.
Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete the misscheduled DNS pods or add a toleration to them.
17.2.8.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
.Ensure that you set the
<clusterID>
and<role>
parameter values.Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The cluster infrastructure ID.
- 2
- A default node label.Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create
worker
andinfra
type machines. - 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow When the new compute machine set is available, the
DESIRED
andCURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
Optional: To check nodes that were created by the edge machine set, run the following command:
$ oc get nodes -l node-role.kubernetes.io/edge
Example output
NAME                           STATUS   ROLES         AGE    VERSION
ip-10-0-207-188.ec2.internal   Ready    edge,worker   172m   v1.25.2+d2e245f
17.3. Creating user workloads in AWS Local Zones or Wavelength Zones
After you create an Amazon Web Services (AWS) Local Zones or Wavelength Zones infrastructure and deploy your cluster, you can use edge compute nodes to create user workloads in Local Zones or Wavelength Zones subnets.
When you use the installation program to create a cluster, the installation program automatically applies a taint with the NoSchedule effect to each edge compute node. This means that the scheduler does not add a new pod, or deployment, to a node unless the pod specifies a matching toleration for the taint. You can modify the taint for better control over how workloads are created on the nodes in each Local Zones or Wavelength Zones subnet.
The installation program creates the compute machine set manifest files with the node-role.kubernetes.io/edge and node-role.kubernetes.io/worker labels applied to each edge compute node that is located in a Local Zones or Wavelength Zones subnet.
The examples in the procedure are for a Local Zones infrastructure. If you are working with a Wavelength Zones infrastructure, ensure that you adapt the examples to the values that are supported in that infrastructure.
Prerequisites
- You have access to the OpenShift CLI (oc).
- You deployed your cluster in a Virtual Private Cloud (VPC) with defined Local Zones or Wavelength Zones subnets.
- You ensured that the compute machine set for the edge compute nodes on Local Zones or Wavelength Zones subnets specifies the taints for node-role.kubernetes.io/edge.
Procedure
Create a deployment resource YAML file for an example application to be deployed in the edge compute node that operates in a Local Zones subnet. Ensure that you specify the correct tolerations that match the taints for the edge compute node. Condensed sketches of the deployment and service manifests follow the service step below.
Example of a configured deployment resource for an edge compute node that operates in a Local Zone subnet
- 1 storageClassName: For the Local Zone configuration, you must specify gp2-csi.
- 2 kind: Defines the deployment resource.
- 3 name: Specifies the name of your Local Zone application. For example, local-zone-demo-app-nyc-1.
- 4 namespace: Defines the namespace for the AWS Local Zone where you want to run the user workload. For example: local-zone-app-nyc-1a.
- 5 zone-group: Defines the group to where a zone belongs. For example, us-east-1-iah-1.
- 6 nodeSelector: Targets edge compute nodes that match the specified labels.
- 7 tolerations: Sets the values that match with the taints defined on the MachineSet manifest for the Local Zone node.
Create a service resource YAML file for the node. This resource exposes a pod from a targeted edge compute node to services that run inside your Local Zone network.
Example of a configured service resource for an edge compute node that operates in a Local Zone subnet
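Putting the callout descriptions above together, condensed sketches of the two manifests might look like the following. The names, namespace, zone group, and container image are hypothetical placeholders, and the sketches assume the target namespace already exists.

Sketch of the storage and deployment manifests:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-zone-demo-pvc
  namespace: local-zone-app-nyc-1a
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-csi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-zone-demo-app-nyc-1
  namespace: local-zone-app-nyc-1a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-zone-demo-app-nyc-1
  template:
    metadata:
      labels:
        app: local-zone-demo-app-nyc-1
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
        machine.openshift.io/zone-group: us-east-1-iah-1
      tolerations:
        - key: node-role.kubernetes.io/edge
          operator: Exists
          effect: NoSchedule
      containers:
        - name: local-zone-demo-app
          image: registry.example.com/demo/hello-openshift:latest
          ports:
            - containerPort: 8080

Sketch of the service manifest:

apiVersion: v1
kind: Service
metadata:
  name: local-zone-demo-app-nyc-1
  namespace: local-zone-app-nyc-1a
spec:
  type: ClusterIP
  selector:
    app: local-zone-demo-app-nyc-1
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080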
17.4. Next steps
- Optional: Use the AWS Load Balancer (ALB) Operator to expose a pod from a targeted edge compute node to services that run inside of a Local Zones or Wavelength Zones subnet from a public network. See Installing the AWS Load Balancer Operator.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.