Machine management
Adding and maintaining cluster machines
Chapter 1. Creating MachineSets
1.1. Creating a MachineSet in AWS
You can create a different MachineSet to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure MachineSets and related Machines so that you can move supporting workloads to the new Machines.
1.1.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.2 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.2 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
- A fundamental unit that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
- MachineSets
- Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute needs.
The following custom resources add more capabilities to your cluster:
- MachineAutoscaler
- This resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator.
- ClusterAutoscaler
- This resource is based on the upstream ClusterAutoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the MachineSet API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the ScalingPolicy so you can scale up nodes but not scale them down.
- MachineHealthCheck
- This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine.
Note: In OpenShift Container Platform 4.2, MachineHealthCheck is a Technology Preview feature.
In OpenShift Container Platform version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with version 4.1, this process is easier. Each MachineSet is scoped to a single zone, so the installation program distributes MachineSets across availability zones on your behalf. Because your compute is dynamic and zone-scoped, if a zone fails you always have another zone in which to rebalance your machines. The autoscaler provides best-effort balancing over the life of a cluster.
1.1.2. Sample YAML for a MachineSet Custom Resource on AWS
This sample YAML defines a MachineSet that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 3 5 11 12 13 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8
- Specify the infrastructure ID, node label, and zone.
- 6 7 9
- Specify the node label to add.
- 10
- Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes.
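The annotated sample that these callouts refer to is omitted here. As a minimal sketch only, assuming the machine.openshift.io/v1beta1 API group and placeholder values, an AWS MachineSet has the following general shape; copy an existing MachineSet from the openshift-machine-api namespace in your cluster for authoritative values:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructureID>-<role>-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID>
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""      # node label that the MachineSet applies
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: <valid_rhcos_ami>                 # RHCOS AMI for your AWS zone
          instanceType: m4.large                  # example instance type; adjust as needed
          placement:
            region: us-east-1
            availabilityZone: us-east-1a
Additional provider fields, such as the credentials Secret, subnet, and security groups, are present in a real MachineSet and are intentionally not shown in this sketch.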
1.1.3. Creating a MachineSet
In addition to the ones created by the installation program, you can create your own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure about which value to set for a specific field, you can check an existing MachineSet from your cluster and review the values of a specific MachineSet, as shown below.
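The listing and inspection commands themselves are omitted here. Assuming the standard openshift-machine-api namespace that the rest of this guide uses, they typically take the following form:
$ oc get machinesets -n openshift-machine-api
$ oc get machineset <machineset_name> -n openshift-machine-api -o yaml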
Create the new MachineSet:
$ oc create -f <file_name>.yaml
View the list of MachineSets:
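The listing command is omitted here; as in the manual scaling chapter, it is assumed to be:
$ oc get machinesets -n openshift-machine-api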
When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.
After the new MachineSet is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
View the new node and confirm that the new node has the label that you specified:
$ oc get node <node_name> --show-labels
Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
Any change to a MachineSet is not applied to existing machines owned by the MachineSet. For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes associated with the MachineSet.
Next steps
If you need MachineSets in other availability zones, repeat this process to create more MachineSets.
1.2. Creating a MachineSet in Azure
You can create a different MachineSet to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure MachineSets and related Machines so that you can move supporting workloads to the new Machines.
1.2.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.2 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.2 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
- A fundamental unit that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
- MachineSets
- Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute needs.
The following custom resources add more capabilities to your cluster:
- MachineAutoscaler
- This resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator.
- ClusterAutoscaler
- This resource is based on the upstream ClusterAutoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the MachineSet API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the ScalingPolicy so you can scale up nodes but not scale them down.
- MachineHealthCheck
- This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine.
Note: In OpenShift Container Platform 4.2, MachineHealthCheck is a Technology Preview feature.
In OpenShift Container Platform version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with version 4.1, this process is easier. Each MachineSet is scoped to a single zone, so the installation program distributes MachineSets across availability zones on your behalf. Because your compute is dynamic and zone-scoped, if a zone fails you always have another zone in which to rebalance your machines. The autoscaler provides best-effort balancing over the life of a cluster.
1.2.2. Sample YAML for a MachineSet Custom Resource on Azure
This sample YAML defines a MachineSet that runs in the 1 Microsoft Azure zone in the centralus region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 5 7 12 13 14 17
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 11 15 16
- Specify the node label to add.
- 4 6 10
- Specify the infrastructure ID, node label, and region.
- 18
- Specify the zone within your region to place Machines on. Be sure that your region supports the zone that you specify.
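The annotated Azure sample that these callouts refer to is also omitted. The MachineSet metadata and selector follow the same pattern as the AWS sketch earlier in this chapter; only the provider section differs. As a rough sketch, assuming the azureproviderconfig.openshift.io/v1beta1 API group and placeholder values, the provider portion has this shape:
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          kind: AzureMachineProviderSpec
          location: centralus        # region
          zone: "1"                  # zone within the region; the region must support it
          vmSize: Standard_D2s_v3    # example VM size; adjust as needed
Copy an existing MachineSet from your cluster for the full list of Azure provider fields.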
1.2.3. Creating a MachineSet
In addition to the ones created by the installation program, you can create your own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure about which value to set for a specific field, you can check an existing MachineSet from your cluster and review the values of a specific MachineSet.
Create the new MachineSet:
$ oc create -f <file_name>.yaml
View the list of MachineSets. When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.
After the new MachineSet is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
View the new node and confirm that the new node has the label that you specified:
$ oc get node <node_name> --show-labels
Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
Any change to a MachineSet is not applied to existing machines owned by the MachineSet. For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes associated with the MachineSet.
Next steps
If you need MachineSets in other availability zones, repeat this process to create more MachineSets.
1.3. Creating a MachineSet in GCP
You can create a different MachineSet to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure MachineSets and related Machines so that you can move supporting workloads to the new Machines.
1.3.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.2 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.2 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
- A fundamental unit that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
- MachineSets
- Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet to meet your compute needs.
The following custom resources add more capabilities to your cluster:
- MachineAutoscaler
- This resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified MachineSet, and the MachineAutoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator.
- ClusterAutoscaler
- This resource is based on the upstream ClusterAutoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the MachineSet API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the ScalingPolicy so you can scale up nodes but not scale them down.
- MachineHealthCheck
- This resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine.
Note: In OpenShift Container Platform 4.2, MachineHealthCheck is a Technology Preview feature.
In OpenShift Container Platform version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with version 4.1, this process is easier. Each MachineSet is scoped to a single zone, so the installation program distributes MachineSets across availability zones on your behalf. Because your compute is dynamic and zone-scoped, if a zone fails you always have another zone in which to rebalance your machines. The autoscaler provides best-effort balancing over the life of a cluster.
1.3.2. Sample YAML for a MachineSet Custom Resource on GCP
This sample YAML defines a MachineSet that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 2 3 4 5 8 10 11 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 12 16
- Specify the infrastructure ID and node label.
- 6 7 9
- Specify the node label to add.
- 13 15
- Specify the name of the GCP project that you use for your cluster.
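The annotated GCP sample that these callouts refer to is likewise omitted. The MachineSet metadata and selector follow the same pattern as the AWS sketch earlier in this chapter; as a rough sketch, assuming the gcpprovider.openshift.io/v1beta1 API group and placeholder values, the provider portion has this shape:
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4   # example machine type; adjust as needed
          projectID: <gcp_project_name>
          region: us-central1
          zone: us-central1-a
Copy an existing MachineSet from your cluster for the full list of GCP provider fields.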
1.3.3. Creating a MachineSet
In addition to the ones created by the installation program, you can create your own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure about which value to set for a specific field, you can check an existing MachineSet from your cluster and review the values of a specific MachineSet.
Create the new MachineSet:
$ oc create -f <file_name>.yaml
View the list of MachineSets. When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.
After the new MachineSet is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
View the new node and confirm that the new node has the label that you specified:
$ oc get node <node_name> --show-labels
Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
Any change to a MachineSet is not applied to existing machines owned by the MachineSet. For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes associated with the MachineSet.
Next steps
If you need MachineSets in other availability zones, repeat this process to create more MachineSets.
Chapter 2. Manually scaling a MachineSet
You can add or remove an instance of a machine in a MachineSet.
If you need to modify aspects of a MachineSet outside of scaling, see Modifying a MachineSet.
Prerequisites
- If you enabled the cluster-wide proxy and scale up workers not included in networking.machineCIDR from the installation configuration, you must add the workers to the Proxy object’s noProxy field to prevent connection issues.
This process is not applicable to clusters where you manually provisioned the machines yourself. You can use the advanced machine management and scaling capabilities only in clusters where the machine API is operational.
2.1. Scaling a MachineSet manually
If you must add or remove an instance of a machine in a MachineSet, you can manually scale the MachineSet.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the MachineSets that are in the cluster:
$ oc get machinesets -n openshift-machine-api
The MachineSets are listed in the form of <clusterid>-worker-<aws-region-az>.
Scale the MachineSet:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
You can scale the MachineSet up or down. It takes several minutes for the new machines to be available.
Important: By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 0 unless you first relocate the router pods.
Chapter 3. Modifying a MachineSet
You can make changes to a MachineSet, such as adding labels, changing the instance type, or changing block storage.
If you need to scale a MachineSet without making other changes, see Manually scaling a MachineSet.
3.1. Modifying a MachineSet
To make changes to a MachineSet, edit the MachineSet YAML. Then remove all machines associated with the MachineSet by deleting the machines or by scaling the MachineSet down to 0 replicas. Then scale the replicas back to the desired number. Changes you make to a MachineSet do not affect existing machines.
If you need to scale a MachineSet without making other changes, you do not need to delete the machines.
By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 0 unless you first relocate the router pods.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
Edit the MachineSet:
$ oc edit machineset <machineset> -n openshift-machine-api
Scale down the MachineSet to 0:
$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
Wait for the machines to be removed.
Scale up the MachineSet as needed:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
Wait for the machines to start. The new Machines contain the changes you made to the MachineSet.
Chapter 4. Deleting a machine
You can delete a specific machine.
4.1. Deleting a specific machine
You can delete a specific machine.
Prerequisites
- Install an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the Machines that are in the cluster and identify the one to delete:
$ oc get machine -n openshift-machine-api
The command output contains a list of Machines in the <clusterid>-worker-<cloud_region> format.
Delete the Machine:
$ oc delete machine <machine> -n openshift-machine-api
Important: By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured Pod disruption budget, the drain operation might not be able to succeed, which prevents the machine from being deleted. You can skip draining the node by adding the "machine.openshift.io/exclude-node-draining" annotation to a specific machine. If the machine that you delete belongs to a MachineSet, a new machine is immediately created to satisfy the specified number of replicas.
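As an illustration only, applying that annotation before deletion might look like the following sketch; the annotation key comes from the note above, and the empty value is an assumption:
$ oc annotate machine <machine> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""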
Chapter 5. Applying autoscaling to an OpenShift Container Platform cluster
Applying autoscaling to an OpenShift Container Platform cluster involves deploying a ClusterAutoscaler and then deploying MachineAutoscalers for each Machine type in your cluster.
You can configure the ClusterAutoscaler only in clusters where the machine API is operational.
5.1. About the ClusterAutoscaler
The ClusterAutoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated with a particular namespace.
The ClusterAutoscaler increases the size of the cluster when there are Pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The ClusterAutoscaler does not increase the cluster resources beyond the limits that you specify.
Ensure that the maxNodesTotal value in the ClusterAutoscaler definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.
The ClusterAutoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when it has low resource use and all of its important Pods can fit on other nodes.
If the following types of Pods are present on a node, the ClusterAutoscaler will not remove the node:
- Pods with restrictive PodDisruptionBudgets (PDBs).
- Kube-system Pods that do not run on the node by default.
- Kube-system Pods that do not have a PDB or have a PDB that is too restrictive.
- Pods that are not backed by a controller object such as a Deployment, ReplicaSet, or StatefulSet.
- Pods with local storage.
- Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
- Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, Pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation.
If you configure the ClusterAutoscaler, additional usage restrictions apply:
- Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system Pods.
- Specify requests for your Pods.
- If you have to prevent Pods from being deleted too quickly, configure appropriate PDBs.
- Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
- Do not run additional node group autoscalers, especially the ones offered by your cloud provider.
The Horizontal Pod Autoscaler (HPA) and the ClusterAutoscaler modify cluster resources in different ways. The HPA changes the deployment’s or ReplicaSet’s number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the ClusterAutoscaler adds resources so that the HPA-created Pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the ClusterAutoscaler deletes the unnecessary nodes.
The ClusterAutoscaler takes Pod priorities into account. The Pod Priority and Preemption feature enables scheduling Pods based on priorities if the cluster does not have enough resources, but the ClusterAutoscaler ensures that the cluster has resources to run all Pods. To honor the intention of both features, the ClusterAutoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" Pods, which do not cause the ClusterAutoscaler to increase resources but instead run only when spare resources are available.
Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the Pods, and nodes running these Pods might be deleted to free resources.
5.2. About the MachineAutoscaler
The MachineAutoscaler adjusts the number of Machines in the MachineSets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker MachineSet and any other MachineSets that you create. The MachineAutoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the MachineSet they target.
You must deploy a MachineAutoscaler for the ClusterAutoscaler to scale your machines. The ClusterAutoscaler uses the annotations on MachineSets that the MachineAutoscaler sets to determine the resources that it can scale. If you define a ClusterAutoscaler without also defining MachineAutoscalers, the ClusterAutoscaler will never scale your cluster.
5.3. Configuring the ClusterAutoscaler
First, deploy the ClusterAutoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster.
Because the ClusterAutoscaler is scoped to the entire cluster, you can make only one ClusterAutoscaler for the cluster.
5.3.1. ClusterAutoscaler resource definition
This ClusterAutoscaler resource definition shows the parameters and sample values for the ClusterAutoscaler.
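The annotated definition that the numbered callouts below refer to is omitted here. As an unofficial sketch of the overall shape, assuming the autoscaling.openshift.io/v1 API group and sample values, a ClusterAutoscaler resource might look like this:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  podPriorityThreshold: -10      # priority cutoff for triggering scale up
  resourceLimits:
    maxNodesTotal: 24            # total machines, including control plane
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
    gpus:
      - type: nvidia.com/gpu
        min: 0
        max: 16
      - type: amd.com/gpu
        min: 0
        max: 4
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
    delayAfterDelete: 5m
    delayAfterFailure: 30s
    unneededTime: 5m
Treat the values as examples only and adjust them to your cluster.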
- 1
- Specify the priority that a pod must exceed to cause the ClusterAutoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod.
- 2
- Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources.
- 3
- Specify the minimum number of cores to deploy.
- 4
- Specify the maximum number of cores to deploy.
- 5
- Specify the minimum amount of memory, in GiB, per node.
- 6
- Specify the maximum amount of memory, in GiB, per node.
- 7 10
- Optionally, specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types.
- 8 11
- Specify the minimum number of GPUs to deploy.
- 9 12
- Specify the maximum number of GPUs to deploy.
- 13
- In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns, us, ms, s, m, and h.
- 14
- Specify whether the ClusterAutoscaler can remove unnecessary nodes.
- 15
- Optionally, specify the period to wait before deleting a node after a node has recently been added. If you do not specify a value, the default value of 10m is used.
- 16
- Specify the period to wait before deleting a node after a node has recently been deleted. If you do not specify a value, the default value of 10s is used.
- 17
- Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used.
- 18
- Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used.
5.3.2. Deploying the ClusterAutoscaler
To deploy the ClusterAutoscaler, you create an instance of the ClusterAutoscaler resource.
Procedure
- Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition.
- Create the resource in the cluster:
$ oc create -f <filename>.yaml
where <filename> is the name of the resource file that you customized.
Next steps
- After you configure the ClusterAutoscaler, you must configure at least one MachineAutoscaler.
5.4. Configuring the MachineAutoscalers
After you deploy the ClusterAutoscaler, deploy MachineAutoscaler resources that reference the MachineSets that are used to scale the cluster.
You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource.
You must configure separate resources for each MachineSet. Remember that MachineSets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The MachineSet that you scale must have at least one machine in it.
5.4.1. MachineAutoscaler resource definition
This MachineAutoscaler resource definition shows the parameters and sample values for the MachineAutoscaler.
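The annotated definition that the numbered callouts below refer to is omitted here. As an unofficial sketch of the overall shape, assuming the autoscaling.openshift.io/v1beta1 API group and placeholder names, a MachineAutoscaler resource might look like this:
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a              # typically named after the MachineSet it scales
  namespace: openshift-machine-api
spec:
  minReplicas: 1                       # do not set to 0
  maxReplicas: 12
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a            # must match an existing MachineSet name
Treat the names and replica counts as examples only.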
- 1
- Specify the
MachineAutoscaler
name. To make it easier to identify which MachineSet this MachineAutoscaler scales, specify or include the name of the MachineSet to scale. The MachineSet name takes the following form:<clusterid>-<machineset>-<aws-region-az>
- 2
- Specify the minimum number of Machines of the specified type that must remain in the specified AWS zone after the ClusterAutoscaler initiates cluster scaling. Do not set this value to
0
. - 3
- Specify the maximum number of Machines of the specified type that the ClusterAutoscaler can deploy in the specified AWS zone after it initiates cluster scaling. Ensure that the
maxNodesTotal
value in the ClusterAutoscaler
definition is large enough to allow the MachineAutoscaler to deploy this number of machines. - 4
- In this section, provide values that describe the existing MachineSet to scale.
- 5
- The
kind
parameter value is alwaysMachineSet
. - 6
- The
name
value must match the name of an existing MachineSet, as shown in the metadata.name
parameter value.
5.4.2. Deploying the MachineAutoscaler
To deploy the MachineAutoscaler, you create an instance of the MachineAutoscaler resource.
Procedure
- Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition.
- Create the resource in the cluster:
$ oc create -f <filename>.yaml
where <filename> is the name of the resource file that you customized.
5.5. Additional resources
- For more information about pod priority, see Including pod priority in pod scheduling decisions in OpenShift Container Platform.
Chapter 6. Creating infrastructure MachineSets
You can create a MachineSet to host only infrastructure components. You apply specific Kubernetes labels to these Machines and then update the infrastructure components to run on only those Machines. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.
Unlike earlier versions of OpenShift Container Platform, you cannot move the infrastructure components to the master Machines. To move the components, you must create a new MachineSet.
6.1. OpenShift Container Platform infrastructure components
The following OpenShift Container Platform components are infrastructure components:
- Kubernetes and OpenShift Container Platform control plane services that run on masters
- The default router
- The container image registry
- The cluster metrics collection, or monitoring service
- Cluster aggregated logging
- Service brokers
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
6.2. Creating infrastructure MachineSets for production environments
In a production deployment, deploy at least three MachineSets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Because you need different MachineSets for each availability zone, create at least three MachineSets.
6.2.1. Creating MachineSets for different clouds
Use the sample MachineSet for your cloud.
6.2.1.1. Sample YAML for a MachineSet Custom Resource on AWS
This sample YAML defines a MachineSet that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 3 5 11 12 13 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8
- Specify the infrastructure ID, node label, and zone.
- 6 7 9
- Specify the node label to add.
- 10
- Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes.
6.2.1.2. Sample YAML for a MachineSet Custom Resource on Azure
This sample YAML defines a MachineSet that runs in the 1 Microsoft Azure zone in the centralus region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 5 7 12 13 14 17
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 11 15 16
- Specify the node label to add.
- 4 6 10
- Specify the infrastructure ID, node label, and region.
- 18
- Specify the zone within your region to place Machines on. Be sure that your region supports the zone that you specify.
6.2.1.3. Sample YAML for a MachineSet Custom Resource on GCP
This sample YAML defines a MachineSet that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
- 1 2 3 4 5 8 10 11 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and jq package installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 12 16
- Specify the infrastructure ID and node label.
- 6 7 9
- Specify the node label to add.
- 13 15
- Specify the name of the GCP project that you use for your cluster.
6.2.2. Creating a MachineSet
In addition to the ones created by the installation program, you can create your own MachineSets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure about which value to set for a specific field, you can check an existing MachineSet from your cluster and review the values of a specific MachineSet.
Create the new MachineSet:
$ oc create -f <file_name>.yaml
View the list of MachineSets. When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.
After the new MachineSet is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
View the new node and confirm that the new node has the label that you specified:
$ oc get node <node_name> --show-labels
Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
Any change to a MachineSet is not applied to existing machines owned by the MachineSet. For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes associated with the MachineSet.
Next steps
If you need MachineSets in other availability zones, repeat this process to create more MachineSets.
6.3. Moving resources to infrastructure MachineSets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure MachineSets that you created.
6.3.1. Moving the router
You can deploy the router Pod to a different MachineSet. By default, the Pod is deployed to a worker node.
Prerequisites
- Configure additional MachineSets in your OpenShift Container Platform cluster.
Procedure
View the IngressController Custom Resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator -o yaml
Add the nodeSelector stanza that references the infra label to the spec section, as shown:
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name>
NAME                           STATUS   ROLES          AGE   VERSION
ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.14.6+c4799753c
where <node_name> is the node name that you obtained from the pod list.
Because the role list includes infra, the pod is running on the correct node.
6.3.2. Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional MachineSets in your OpenShift Container Platform cluster.
Procedure
View the config/instance object:
$ oc get config/cluster -o yaml
The output resembles the following text:
Edit the config/instance object:
$ oc edit config/cluster
Add the following lines of text to the spec section of the object:
nodeSelector:
  node-role.kubernetes.io/infra: ""
Verify the registry Pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry Pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
6.3.3. Moving the monitoring solution
By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom ConfigMap.
Procedure
Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file. Running this ConfigMap forces the components of the monitoring stack to redeploy to infrastructure nodes.
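The ConfigMap definition itself is omitted here. A sketch of the kind of ConfigMap this step describes, assuming the cluster-monitoring-config name in the openshift-monitoring namespace and the standard monitoring component keys, is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
Verify the component keys against the monitoring documentation for your cluster before applying them.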
Apply the new ConfigMap:
$ oc create -f cluster-monitoring-configmap.yaml
Watch the monitoring Pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
Additional resources
- See the monitoring documentation for the general instructions on moving OpenShift Container Platform components.
6.3.4. Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
You should set your MachineSet to use at least 6 replicas.
Prerequisites
- Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the Cluster Logging Custom Resource in the openshift-logging project:
$ oc edit ClusterLogging instance
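The edited Custom Resource is not shown here. As a sketch of where the node selectors go, assuming the usual logStore, visualization, and curation sections of the ClusterLogging resource, the relevant portion looks roughly like this:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    elasticsearch:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
  curation:
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
Check an existing ClusterLogging resource in your cluster for the exact field names before editing.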
Verification steps
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
You want to move the Kibana Pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node.
Note that the node has a node-role.kubernetes.io/infra: '' label.
To move the Kibana Pod, edit the Cluster Logging CR to add a node selector.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed.
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
After a few moments, the original Kibana pod is removed.
Chapter 7. Adding RHEL compute machines to an OpenShift Container Platform cluster
In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute, or worker, machines to a user-provisioned infrastructure cluster. You can use RHEL as the operating system on only compute machines.
7.1. About adding RHEL compute nodes to a cluster
In OpenShift Container Platform 4.2, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster.
As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.
Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
You must add any RHEL compute machines to the cluster after you initialize the control plane.
7.2. System requirements for RHEL compute nodes
The Red Hat Enterprise Linux (RHEL) compute machine hosts, which are also known as worker machine hosts, in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements.
- You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
- Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Each system must meet the following hardware requirements:
- Physical or virtual system, or an instance running on a public or private IaaS.
Base OS: RHEL 7.6 with "Minimal" installation option.
Important: Only RHEL 7.6 is supported in OpenShift Container Platform 4.2. You must not upgrade your compute machines to RHEL 8.
- NetworkManager 1.0 or later.
- 1 vCPU.
- Minimum 8 GB RAM.
- Minimum 15 GB hard disk space for the file system containing /var/.
- Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
- Minimum 1 GB hard disk space for the file system containing the system’s temporary directory. The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
- Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
7.2.1. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
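As an illustration only, one common manual approach is to review pending CSRs and approve each one individually after verifying it:
$ oc get csr
$ oc adm certificate approve <csr_name>
How you verify that a serving certificate request came from the correct machine is up to you; these commands only cover the review and approval steps.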
7.3. Preparing the machine to run the playbook
Before you can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.2 cluster, you must prepare a machine to run the playbook from. This machine is not part of the cluster but must be able to access it.
Prerequisites
- Install the OpenShift Command-line Interface (CLI), commonly known as oc, on the machine that you run the playbook on.
- Log in as a user with cluster-admin permission.
Procedure
- Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine. One way to accomplish this is to use the same machine that you used to install the cluster.
- Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
ImportantIf you use SSH key-based authentication, you must manage the key with an SSH agent.
- If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it:
  - Register the machine with RHSM:
    # subscription-manager register --username=<user_name> --password=<password>
  - Pull the latest subscription data from RHSM:
    # subscription-manager refresh
  - List the available subscriptions:
    # subscription-manager list --available --matches '*OpenShift*'
  - In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
    # subscription-manager attach --pool=<pool_id>
- Enable the repositories required by OpenShift Container Platform 4.2:
  # subscription-manager repos \
      --enable="rhel-7-server-rpms" \
      --enable="rhel-7-server-extras-rpms" \
      --enable="rhel-7-server-ansible-2.8-rpms" \
      --enable="rhel-7-server-ose-4.2-rpms"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the required packages, including
Openshift-Ansible
:yum install openshift-ansible openshift-clients jq
# yum install openshift-ansible openshift-clients jq
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
openshift-ansible
package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. Theopenshift-clients
provides theoc
CLI, and thejq
package improves the display of JSON output on your command line.
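For example, you can pipe oc output through jq to extract specific fields from JSON. This one-liner, which assumes a working kubeconfig, prints only the node names:

$ oc get nodes -o json | jq -r '.items[].metadata.name'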
7.4. Preparing a RHEL compute node
Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
- On each host, register with RHSM:
  # subscription-manager register --username=<user_name> --password=<password>
- Pull the latest subscription data from RHSM:
  # subscription-manager refresh
- List the available subscriptions:
  # subscription-manager list --available --matches '*OpenShift*'
- In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
  # subscription-manager attach --pool=<pool_id>
- Disable all yum repositories:
  - Disable all the enabled RHSM repositories:
    # subscription-manager repos --disable="*"
  - List the remaining yum repositories and note their names under repo id, if any:
    # yum repolist
  - Use yum-config-manager to disable the remaining yum repositories:
    # yum-config-manager --disable <repo_id>
    Alternatively, disable all repositories:
    # yum-config-manager --disable \*
    Note that this might take a few minutes if you have a large number of available repositories.
- Enable only the repositories required by OpenShift Container Platform 4.2:
  # subscription-manager repos \
      --enable="rhel-7-server-rpms" \
      --enable="rhel-7-server-extras-rpms" \
      --enable="rhel-7-server-ose-4.2-rpms"
- Stop and disable firewalld on the host:
  # systemctl disable --now firewalld.service
  Note: You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
7.5. Adding a RHEL compute machine to your cluster
You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.2 cluster.
Prerequisites
- You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
- You prepared the RHEL hosts for installation.
Procedure
Perform the following steps on the machine that you prepared to run the playbook:
- Create an Ansible inventory file that is named /<path>/inventory/hosts and that defines your compute machine hosts and required variables; a minimal sketch of such a file follows this list. In the file:
  1. Specify the user name that runs the Ansible tasks on the remote compute machines.
  2. If you do not specify root for the ansible_user, you must set ansible_become to True and assign the user sudo permissions.
  3. Specify the path to the kubeconfig file for your cluster.
  4. List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the host name that the cluster uses to access the machine, so set the correct public or private name to access the machine.
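The following is a minimal sketch of such an inventory file, assuming the openshift-ansible variable names ansible_user, ansible_become, and openshift_kubeconfig_path; the host names and kubeconfig path are placeholders:

[all:vars]
ansible_user=root
#ansible_become=True

openshift_kubeconfig_path="~/.kube/config"

[new_workers]
mycluster-rhel7-0.example.com
mycluster-rhel7-1.example.com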
- Run the playbook:
  $ cd /usr/share/ansible/openshift-ansible
  $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml
  For <path>, specify the path to the Ansible inventory file that you created.
7.6. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
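One way to confirm this is to list the nodes with the oc CLI; each machine that has joined the cluster appears as a node:

$ oc get nodes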
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
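You can list the CSRs with the oc CLI, for example:

$ oc get csr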
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.
- To approve them individually, run the following command for each valid CSR:
  $ oc adm certificate approve <csr_name>
  <csr_name> is the name of a CSR from the list of current CSRs.
- To approve all pending CSRs, run the following command:
  $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
7.7. Required parameters for the Ansible hosts file
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
Parameter | Description | Values
---|---|---
ansible_user | The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. | A user name on the system. The default value is root.
ansible_become | If the value of ansible_user is not root, you must set ansible_become to True and assign the specified user sudo permissions. | True.
openshift_kubeconfig_path | Specifies a path to a local directory that contains the kubeconfig file for your cluster. | The path and name of the configuration file.
7.7.1. Removing RHCOS compute machines from a cluster
After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines.
Prerequisites
- You have added RHEL compute machines to your cluster.
Procedure
- View the list of machines and record the node names of the RHCOS compute machines:
  $ oc get nodes -o wide
- For each RHCOS compute machine, delete the node:
  - Mark the node as unschedulable by running the oc adm cordon command:
    $ oc adm cordon <node_name>
    Specify the node name of one of the RHCOS compute machines.
  - Drain all the pods from the node:
    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Specify the node name of the RHCOS compute machine that you isolated.
  - Delete the node:
    $ oc delete nodes <node_name>
    Specify the node name of the RHCOS compute machine that you drained.
- Review the list of compute machines to ensure that only the RHEL nodes remain:
  $ oc get nodes -o wide
- Remove the RHCOS machines from the load balancer for your cluster’s compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
Chapter 8. Adding more RHEL compute machines to an OpenShift Container Platform cluster
If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it.
8.1. About adding RHEL compute nodes to a cluster
In OpenShift Container Platform 4.2, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster.
As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.
Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
You must add any RHEL compute machines to the cluster after you initialize the control plane.
8.2. System requirements for RHEL compute nodes
The Red Hat Enterprise Linux (RHEL) compute machine hosts, which are also known as worker machine hosts, in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements.
- You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
- Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Each system must meet the following hardware requirements:
- Physical or virtual system, or an instance running on a public or private IaaS.
- Base OS: RHEL 7.6 with "Minimal" installation option.
  Important: Only RHEL 7.6 is supported in OpenShift Container Platform 4.2. You must not upgrade your compute machines to RHEL 8.
- NetworkManager 1.0 or later.
- 1 vCPU.
- Minimum 8 GB RAM.
- Minimum 15 GB hard disk space for the file system containing /var/.
- Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
- Minimum 1 GB hard disk space for the file system containing the system’s temporary directory. The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
- Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
8.2.1. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
8.3. Preparing a RHEL compute node
Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
- On each host, register with RHSM:
  # subscription-manager register --username=<user_name> --password=<password>
- Pull the latest subscription data from RHSM:
  # subscription-manager refresh
- List the available subscriptions:
  # subscription-manager list --available --matches '*OpenShift*'
- In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
  # subscription-manager attach --pool=<pool_id>
- Disable all yum repositories:
  - Disable all the enabled RHSM repositories:
    # subscription-manager repos --disable="*"
  - List the remaining yum repositories and note their names under repo id, if any:
    # yum repolist
  - Use yum-config-manager to disable the remaining yum repositories:
    # yum-config-manager --disable <repo_id>
    Alternatively, disable all repositories:
    # yum-config-manager --disable \*
    Note that this might take a few minutes if you have a large number of available repositories.
- Enable only the repositories required by OpenShift Container Platform 4.2:
  # subscription-manager repos \
      --enable="rhel-7-server-rpms" \
      --enable="rhel-7-server-extras-rpms" \
      --enable="rhel-7-server-ose-4.2-rpms"
- Stop and disable firewalld on the host:
  # systemctl disable --now firewalld.service
  Note: You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
8.4. Adding more RHEL compute machines to your cluster
You can add more compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.2 cluster.
Prerequisites
- Your OpenShift Container Platform cluster already contains RHEL compute nodes.
- The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook.
- The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
- The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook.
- You must prepare the RHEL hosts for installation.
- Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
- If you use SSH key-based authentication, you must manage the key with an SSH agent.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc, on the machine that you run the playbook on.
Procedure
- Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables.
- Rename the [new_workers] section of the file to [workers].
- Add a [new_workers] section to the file and define the fully-qualified domain names for each new host; the file resembles the sketch that follows this procedure. In this example, the mycluster-worker-0.example.com and mycluster-worker-1.example.com machines are in the cluster and you add the mycluster-worker-2.example.com and mycluster-worker-3.example.com machines.
- Run the scaleup playbook:
  $ cd /usr/share/ansible/openshift-ansible
  $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml
  For <path>, specify the path to the Ansible inventory file that you created.
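A minimal sketch of the updated inventory file that is referenced in the previous procedure, assuming the same variable names as the original inventory; all values other than the example host names are placeholders:

[all:vars]
ansible_user=root
#ansible_become=True

openshift_kubeconfig_path="~/.kube/config"

[workers]
mycluster-worker-0.example.com
mycluster-worker-1.example.com

[new_workers]
mycluster-worker-2.example.com
mycluster-worker-3.example.com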
8.5. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
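One way to confirm this is to list the nodes with the oc CLI; each machine that has joined the cluster appears as a node:

$ oc get nodes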
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
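You can list the CSRs with the oc CLI, for example:

$ oc get csr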
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.
- To approve them individually, run the following command for each valid CSR:
  $ oc adm certificate approve <csr_name>
  <csr_name> is the name of a CSR from the list of current CSRs.
- To approve all pending CSRs, run the following command:
  $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
8.6. Required parameters for the Ansible hosts file
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
Parameter | Description | Values
---|---|---
ansible_user | The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. | A user name on the system. The default value is root.
ansible_become | If the value of ansible_user is not root, you must set ansible_become to True and assign the specified user sudo permissions. | True.
openshift_kubeconfig_path | Specifies a path to a local directory that contains the kubeconfig file for your cluster. | The path and name of the configuration file.
Chapter 9. Deploying machine health checks
You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool.
Machine health checks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
This process is not applicable to clusters where you manually provisioned the machines yourself. You can use the advanced machine management and scaling capabilities only in clusters where the machine API is operational.
Prerequisites
- Enable a FeatureGate so you can access Technology Preview features, as in the sketch that follows.
  Note: Turning on Technology Preview features cannot be undone and prevents upgrades.
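A minimal sketch of a FeatureGate configuration that enables Technology Preview features, assuming the config.openshift.io/v1 FeatureGate API and the TechPreviewNoUpgrade feature set; apply it with oc apply -f <file_name>:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                      # the cluster-scoped FeatureGate resource is named "cluster"
spec:
  featureSet: TechPreviewNoUpgrade   # enables Technology Preview features; this cannot be undone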
9.1. About MachineHealthChecks
MachineHealthChecks automatically repair unhealthy Machines in a particular MachinePool.
To monitor machine health, you create a resource to define the configuration for a controller. You set a condition to check for, such as staying in the NotReady status for 15 minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor.
You cannot apply a MachineHealthCheck to a machine with the master role.
The controller that observes a MachineHealthCheck resource checks for the status that you defined. If a machine fails the health check, it is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit the disruptive impact of the machine deletion, the controller drains and deletes only one node at a time.
To stop the check, you remove the resource.
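For example, if you created the check from a healthcheck.yml file as described in the following sections, you can remove it with the oc CLI:

$ oc delete -f healthcheck.yml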
9.2. Sample MachineHealthCheck resource
The MachineHealthCheck resource resembles the following YAML file:
MachineHealthCheck
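A minimal sketch of such a resource, assuming the Technology Preview healthchecking.openshift.io/v1alpha1 API and placeholder values for the name and selector labels:

apiVersion: healthchecking.openshift.io/v1alpha1
kind: MachineHealthCheck
metadata:
  name: example                      # any name for the health check
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      # labels that identify the MachinePool to monitor
      machine.openshift.io/cluster-api-machine-role: <role>
      machine.openshift.io/cluster-api-machine-type: <role>
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone>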
9.3. Creating a MachineHealthCheck resource
You can create a MachineHealthCheck resource for all MachinePools in your cluster except the master pool.
Prerequisites
- Install the oc command line interface.
Procedure
- Create a healthcheck.yml file that contains the definition of your MachineHealthCheck.
- Apply the healthcheck.yml file to your cluster:
  $ oc apply -f healthcheck.yml
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.