Chapter 2. Using Self Node Remediation
You can use the Self Node Remediation Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur.
2.1. About the Self Node Remediation Operator
The Self Node Remediation Operator runs on the cluster nodes and reboots nodes that are identified as unhealthy. The Operator uses the MachineHealthCheck or NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the MachineHealthCheck or the NodeHealthCheck resource creates the SelfNodeRemediation custom resource (CR), which triggers the Self Node Remediation Operator.
The SelfNodeRemediation CR resembles the following YAML file:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediation
metadata:
  name: selfnoderemediation-sample
  namespace: openshift-workload-availability
spec:
  remediationStrategy: <remediation_strategy> 1
status:
  lastError: <last_error_message> 2
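For example, if you use the Node Health Check Operator, a NodeHealthCheck CR selects the nodes to monitor and points at a remediation template. The following sketch is illustrative only; the name, selector, and timing values are assumptions that you adjust for your cluster:

apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nodehealthcheck-sample                  # hypothetical name
spec:
  minHealthy: 51%                               # remediation pauses if fewer nodes than this are healthy
  selector:                                     # which nodes this check watches; worker nodes in this sketch
    matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: Exists
  unhealthyConditions:                          # when a node counts as unhealthy
    - type: Ready
      status: "False"
      duration: 300s
  remediationTemplate:                          # hands unhealthy nodes to Self Node Remediation
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    namespace: openshift-workload-availability
    name: self-node-remediation-automatic-strategy-template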
The Self Node Remediation Operator minimizes downtime for stateful applications and restores compute capacity if transient failures occur. You can use this Operator regardless of the management interface, such as IPMI or an API to provision a node, and regardless of the cluster installation type, such as installer-provisioned infrastructure or user-provisioned infrastructure.
2.1.1. About watchdog devices
Watchdog devices can be any of the following:
- Independently powered hardware devices
- Hardware devices that share power with the hosts they control
- Virtual devices implemented in software, or softdog

Hardware watchdog and softdog devices have electronic or software timers, respectively. These watchdog devices are used to ensure that the machine enters a safe state when an error condition is detected. The cluster is required to repeatedly reset the watchdog timer to prove that it is in a healthy state. This timer might elapse due to fault conditions, such as deadlocks, CPU starvation, and loss of network or disk access. If the timer expires, the watchdog device assumes that a fault has occurred and the device triggers a forced reset of the node.

Hardware watchdog devices are more reliable than softdog devices.
2.1.1.1. Understanding Self Node Remediation Operator behavior with watchdog devices
The Self Node Remediation Operator determines the remediation strategy based on the watchdog devices that are present.
If a hardware watchdog device is configured and available, the Operator uses it for remediation. If a hardware watchdog device is not configured, the Operator enables and uses a softdog device for remediation.

If neither watchdog device is supported, either by the system or by the configuration, the Operator remediates nodes by using a software reboot.
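To see whether a node exposes a hardware watchdog device, you can look for watchdog character devices on the node, for example from a debug pod. The node name is a placeholder:

$ oc debug node/<node_name> -- chroot /host sh -c 'ls -l /dev/watchdog*'

If no /dev/watchdog device exists and softdog cannot be loaded, the Operator falls back to the software reboot described above.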
2.2. Control plane fencing
In earlier releases, you could enable Self Node Remediation and Node Health Check on worker nodes. In the event of a node failure, you can now also follow remediation strategies on control plane nodes.
Self Node Remediation occurs in two primary scenarios.
API Server Connectivity
- In this scenario, the control plane node to be remediated is not isolated. It can be directly connected to the API Server, or it can be indirectly connected to the API Server through worker nodes or control plane nodes that are directly connected to the API Server.
- When there is API Server Connectivity, the control plane node is remediated only if the Node Health Check Operator has created a SelfNodeRemediation custom resource (CR) for the node.
No API Server Connectivity
- In this scenario, the control plane node to be remediated is isolated from the API Server. The node cannot connect directly or indirectly to the API Server.
When there is no API Server Connectivity, the control plane node is remediated as follows:

- Check the status of the control plane node with the majority of the peer worker nodes. If the majority of the peer worker nodes cannot be reached, the node is analyzed further.
- Self-diagnose the status of the control plane node:
  - If self diagnostics pass, no action is taken.
  - If self diagnostics fail, the node is fenced and remediated.
  - The self diagnostics currently supported are checking the kubelet service status, and checking endpoint availability by using an opt-in configuration (see the example that follows this list).
- If the node did not manage to communicate with the majority of its worker peers, check the connectivity of the control plane node with the other control plane nodes. If the node can communicate with any other control plane peer, no action is taken. Otherwise, the node is fenced and remediated.
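For example, you can check the kubelet service status that the self diagnostics rely on manually from a debug pod on the node. The node name is a placeholder:

$ oc debug node/<control_plane_node_name> -- chroot /host systemctl status kubelet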
2.3. Installing the Self Node Remediation Operator by using the web console
You can use the Red Hat OpenShift web console to install the Self Node Remediation Operator.
The Node Health Check Operator also installs the Self Node Remediation Operator as a default remediation provider.
Prerequisites
- Log in as a user with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, navigate to Operators → OperatorHub.
- Select the Self Node Remediation Operator from the list of available Operators, and then click Install.
- Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace.
- Click Install.
Verification
To confirm that the installation is successful:
- Navigate to the Operators → Installed Operators page.
- Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded.
If the Operator is not installed successfully:
- Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
- Navigate to the Workloads → Pods page and check the logs of the self-node-remediation-controller-manager pod and the self-node-remediation-ds pods in the openshift-workload-availability project for any reported issues.
2.4. Installing the Self Node Remediation Operator by using the CLI
You can use the OpenShift CLI (oc) to install the Self Node Remediation Operator.

You can install the Self Node Remediation Operator in your own namespace or in the openshift-workload-availability namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create a Namespace custom resource (CR) for the Self Node Remediation Operator:

  - Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-workload-availability

  - To create the Namespace CR, run the following command:

    $ oc create -f workload-availability-namespace.yaml
- Create an OperatorGroup CR:

  - Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: workload-availability-operator-group
      namespace: openshift-workload-availability

  - To create the OperatorGroup CR, run the following command:

    $ oc create -f workload-availability-operator-group.yaml
- Create a Subscription CR:

  - Define the Subscription CR and save the YAML file, for example, self-node-remediation-subscription.yaml:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: self-node-remediation-operator
      namespace: openshift-workload-availability 1
    spec:
      channel: stable
      installPlanApproval: Manual 2
      name: self-node-remediation-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      package: self-node-remediation

    - 1: Specify the namespace where you want to install the Self Node Remediation Operator. To install the Self Node Remediation Operator in the openshift-workload-availability namespace, specify openshift-workload-availability in the Subscription CR.
    - 2: Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.

  - To create the Subscription CR, run the following command:

    $ oc create -f self-node-remediation-subscription.yaml
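Because installPlanApproval is set to Manual in this example, the installation pauses until you approve the install plan. The following is a sketch of that approval, assuming the openshift-workload-availability namespace; the install plan name is a placeholder that you take from the output of the first command:

$ oc get installplan -n openshift-workload-availability

$ oc patch installplan <install_plan_name> -n openshift-workload-availability --type merge --patch '{"spec":{"approved":true}}'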
- Verify that the Self Node Remediation Operator created the SelfNodeRemediationTemplate CR:

  $ oc get selfnoderemediationtemplate -n openshift-workload-availability
Example output
self-node-remediation-automatic-strategy-template
Verification
Verify that the installation succeeded by inspecting the CSV resource:
$ oc get csv -n openshift-workload-availability
Example output
NAME                           DISPLAY                          VERSION   REPLACES                       PHASE
self-node-remediation.v0.8.0   Self Node Remediation Operator   v.0.8.0   self-node-remediation.v0.7.1   Succeeded
Verify that the Self Node Remediation Operator is up and running:
$ oc get deployment -n openshift-workload-availability
Example output
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
self-node-remediation-controller-manager   1/1     1            1           28h
Verify that the Self Node Remediation Operator created the SelfNodeRemediationConfig CR:

$ oc get selfnoderemediationconfig -n openshift-workload-availability
Example output
NAME                           AGE
self-node-remediation-config   28h
Verify that each self node remediation pod is scheduled and running on each worker node and control plane node:
$ oc get daemonset -n openshift-workload-availability
Example output
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
self-node-remediation-ds   6         6         6       6            6           <none>          28h
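To confirm that a self-node-remediation-ds pod is scheduled on every node, you can also list the pods in the namespace together with the nodes they run on:

$ oc get pods -n openshift-workload-availability -o wide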
2.5. Configuring the Self Node Remediation Operator
The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR and the SelfNodeRemediationTemplate Custom Resource Definition (CRD).
To avoid unexpected reboots of a specific node, the Node Maintenance Operator places the node in maintenance mode and automatically adds a node selector that prevents the Self Node Remediation daemon set from running on that node.
2.5.1. Understanding the Self Node Remediation Operator configuration
The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR with the name self-node-remediation-config. The CR is created in the namespace of the Self Node Remediation Operator.

A change in the SelfNodeRemediationConfig CR re-creates the Self Node Remediation daemon set.

The SelfNodeRemediationConfig CR resembles the following YAML file:
apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediationConfig
metadata:
  name: self-node-remediation-config
  namespace: openshift-workload-availability
spec:
  safeTimeToAssumeNodeRebootedSeconds: 180 1
  watchdogFilePath: /dev/watchdog 2
  isSoftwareRebootEnabled: true 3
  apiServerTimeout: 15s 4
  apiCheckInterval: 5s 5
  maxApiErrorThreshold: 3 6
  peerApiServerTimeout: 5s 7
  peerDialTimeout: 5s 8
  peerRequestTimeout: 5s 9
  peerUpdateInterval: 15m 10
  hostPort: 30001 11
  customDsTolerations: 12
  - effect: NoSchedule
    key: node-role.kubernetes.io.infra
    operator: Equal
    value: "value1"
    tolerationSeconds: 3600
- 1: Specify an optional time duration that the Operator waits before recovering affected workloads running on an unhealthy node. Starting replacement pods while they are still running on the failed node can lead to data corruption and a violation of run-once semantics. The Operator calculates a minimum duration using the values in the ApiServerTimeout, ApiCheckInterval, MaxApiErrorThreshold, PeerDialTimeout, and PeerRequestTimeout fields, as well as the watchdog timeout and the cluster size at the time of remediation. To check the minimum duration calculation, view the manager pod logs and find references to the calculated minimum time in seconds. If you specify a value that is lower than the minimum duration, the Operator uses the minimum duration. However, if you want to increase the duration to a value higher than this minimum value, you can set safeTimeToAssumeNodeRebootedSeconds to a value higher than the minimum duration.
- 2: Specify the file path of the watchdog device in the nodes. If you enter an incorrect path to the watchdog device, the Self Node Remediation Operator automatically detects the softdog device path. If a watchdog device is unavailable, the SelfNodeRemediationConfig CR uses a software reboot.
- 3: Specify whether to enable software reboot of the unhealthy nodes. By default, the value of isSoftwareRebootEnabled is set to true. To disable the software reboot, set the parameter value to false.
- 4: Specify the timeout duration to check connectivity with each API server. When this duration elapses, the Operator starts remediation. The timeout duration must be greater than or equal to 10 milliseconds.
- 5: Specify the frequency to check connectivity with each API server. The interval must be greater than or equal to 1 second.
- 6: Specify a threshold value. After reaching this threshold, the node starts contacting its peers. The threshold value must be greater than or equal to 1.
- 7: Specify the duration of the timeout for the peer to connect to the API server. The timeout duration must be greater than or equal to 10 milliseconds.
- 8: Specify the duration of the timeout for establishing a connection with the peer. The timeout duration must be greater than or equal to 10 milliseconds.
- 9: Specify the duration of the timeout to get a response from the peer. The timeout duration must be greater than or equal to 10 milliseconds.
- 10: Specify the frequency to update peer information, such as IP addresses. The interval must be greater than or equal to 10 seconds.
- 11: Specify an optional value to change the port that the Self Node Remediation agents use for internal communication. The value must be greater than 0. The default value is port 30001.
- 12: Specify custom tolerations for the Self Node Remediation agents that run on the DaemonSet, to support remediation for different types of nodes. You can configure the following fields:
  - effect: The effect indicates the taint effect to match. If this field is empty, all taint effects are matched. When specified, allowed values are NoSchedule, PreferNoSchedule, and NoExecute.
  - key: The key is the taint key that the toleration applies to. If this field is empty, all taint keys are matched. If the key is empty, the operator field must be Exists. This combination means to match all values and all keys.
  - operator: The operator represents a key's relationship to the value. Valid operators are Exists and Equal. The default is Equal. Exists is equivalent to a wildcard for a value, so that a pod can tolerate all taints of a particular category.
  - value: The taint value the toleration matches to. If the operator is Exists, the value should be empty; otherwise, it is a regular string.
  - tolerationSeconds: The period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (that is, do not evict). Zero and negative values are treated as 0 (that is, evict immediately) by the system.
  - Custom tolerations allow you to add a toleration to the Self Node Remediation agent pod. For more information, see Using tolerations to control OpenShift Logging pod placement.
You can edit the self-node-remediation-config CR that is created by the Self Node Remediation Operator. However, when you try to create a new CR for the Self Node Remediation Operator, the following message is displayed in the logs:
controllers.SelfNodeRemediationConfig ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config' or not in the namespace of the operator: 'openshift-workload-availability' {"selfnoderemediationconfig": "openshift-workload-availability/selfnoderemediationconfig-copy"}
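For example, a minimal sketch of changing an existing setting, such as disabling the software reboot, by patching the CR in place (assuming the default openshift-workload-availability namespace):

$ oc patch selfnoderemediationconfig self-node-remediation-config -n openshift-workload-availability --type merge --patch '{"spec":{"isSoftwareRebootEnabled":false}}'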
2.5.2. Understanding the Self Node Remediation Template configuration
The Self Node Remediation Operator also creates the SelfNodeRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes. The following remediation strategies are available:
Automatic
- This remediation strategy simplifies the remediation process by letting the Self Node Remediation Operator decide on the most suitable remediation strategy for the cluster. This strategy checks if the OutOfServiceTaint strategy is available on the cluster. If the OutOfServiceTaint strategy is available, the Operator selects the OutOfServiceTaint strategy. If the OutOfServiceTaint strategy is not available, the Operator selects the ResourceDeletion strategy. Automatic is the default remediation strategy.

ResourceDeletion
- This remediation strategy removes the pods on the node, rather than removing the node object. This strategy recovers workloads faster.

OutOfServiceTaint
- This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node, rather than removing the node object. It achieves this by placing the OutOfServiceTaint taint on the node. This strategy recovers workloads faster. This strategy has been supported as Technology Preview since OpenShift Container Platform version 4.13, and as generally available since OpenShift Container Platform version 4.15.
The Self Node Remediation Operator creates the SelfNodeRemediationTemplate CR for the strategy self-node-remediation-automatic-strategy-template, which the Automatic remediation strategy uses.

The SelfNodeRemediationTemplate CR resembles the following YAML file:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediationTemplate
metadata:
  creationTimestamp: "2022-03-02T08:02:40Z"
  name: self-node-remediation-<remediation_object>-deletion-template 1
  namespace: openshift-workload-availability
spec:
  template:
    spec:
      remediationStrategy: <remediation_strategy> 2
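If you need to trigger remediation for a node manually rather than through a health check controller, a SelfNodeRemediation CR might resemble the following sketch. The node name is a placeholder, and the CR name must match the unhealthy node or machine object:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediation
metadata:
  name: <unhealthy_node_name>
  namespace: openshift-workload-availability
spec:
  remediationStrategy: Automatic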
2.5.3. Troubleshooting the Self Node Remediation Operator
2.5.3.1. General troubleshooting
- Issue
- You want to troubleshoot issues with the Self Node Remediation Operator.
- Resolution
- Check the Operator logs.
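For example, assuming the default openshift-workload-availability namespace, you can read the controller logs with:

$ oc logs deployment/self-node-remediation-controller-manager -n openshift-workload-availability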
2.5.3.2. Checking the daemon set
- Issue
- The Self Node Remediation Operator is installed but the daemon set is not available.
- Resolution
- Check the Operator logs for errors or warnings.
2.5.3.3. Unsuccessful remediation
- Issue
- An unhealthy node was not remediated.
- Resolution
- Verify that the SelfNodeRemediation CR was created by running the following command:

  $ oc get snr -A

- If the MachineHealthCheck controller did not create the SelfNodeRemediation CR when the node turned unhealthy, check the logs of the MachineHealthCheck controller. Additionally, ensure that the MachineHealthCheck CR includes the required specification to use the remediation template.
- If the SelfNodeRemediation CR was created, ensure that its name matches the unhealthy node or the machine object.
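To inspect a specific CR, including the lastError field in its status, you can describe it. The CR name is a placeholder, and the example assumes the default openshift-workload-availability namespace:

$ oc describe snr <self_node_remediation_cr_name> -n openshift-workload-availability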
2.5.3.4. Daemon set and other Self Node Remediation Operator resources exist even after uninstalling the Operator
- Issue
- The Self Node Remediation Operator resources, such as the daemon set, configuration CR, and the remediation template CR, exist even after uninstalling the Operator.
- Resolution
To remove the Self Node Remediation Operator resources, delete the resources by running the following commands for each resource type:
$ oc delete ds <self-node-remediation-ds> -n <namespace>
$ oc delete snrc <self-node-remediation-config> -n <namespace>
$ oc delete snrt <self-node-remediation-template> -n <namespace>
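To confirm that the resources were removed, you can list them again by using the same short names; an empty result or a NotFound error indicates a successful cleanup:

$ oc get ds,snrc,snrt -n <namespace>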
2.5.4. Gathering data about the Self Node Remediation Operator
To collect debugging information about the Self Node Remediation Operator, use the must-gather tool. For information about the must-gather image for the Self Node Remediation Operator, see Gathering data about specific features.
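As a sketch, a must-gather run typically takes the following form; the image and destination directory are placeholders, and the exact image to use is listed in the document referenced above:

$ oc adm must-gather --image=<must_gather_image> --dest-dir=<destination_directory>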