
Chapter 2. Using Self Node Remediation

You can use the Self Node Remediation Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur.

2.1. About the Self Node Remediation Operator

The Self Node Remediation Operator runs on the cluster nodes and reboots nodes that are identified as unhealthy. The Operator uses the MachineHealthCheck or NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the MachineHealthCheck or the NodeHealthCheck resource creates the SelfNodeRemediation custom resource (CR), which triggers the Self Node Remediation Operator.

The SelfNodeRemediation CR resembles the following YAML file:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediation
metadata:
  name: selfnoderemediation-sample
  namespace: openshift-workload-availability
spec:
  remediationStrategy: <remediation_strategy> 1
status:
  lastError: <last_error_message> 2
1
Specifies the remediation strategy for the nodes.
2
Displays the last error that occurred during remediation. When remediation succeeds or if no errors occur, the field is left empty.

The Self Node Remediation Operator minimizes downtime for stateful applications and restores compute capacity if transient failures occur. You can use this Operator regardless of the management interface used to provision a node, such as IPMI or an API, and regardless of the cluster installation type, such as installer-provisioned infrastructure or user-provisioned infrastructure.

2.1.1. About watchdog devices

Watchdog devices can be any of the following:

  • Independently powered hardware devices
  • Hardware devices that share power with the hosts they control
  • Virtual devices implemented in software, or softdog

Hardware watchdog and softdog devices have electronic or software timers, respectively. These watchdog devices are used to ensure that the machine enters a safe state when an error condition is detected. The cluster is required to repeatedly reset the watchdog timer to prove that it is in a healthy state. This timer might elapse due to fault conditions, such as deadlocks, CPU starvation, and loss of network or disk access. If the timer expires, the watchdog device assumes that a fault has occurred and the device triggers a forced reset of the node.
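
The timer-reset contract described above can be sketched as pseudocode. This is an illustration of the general watchdog pattern only, not the Operator's implementation:

```text
loop every pet_interval:
    if node_appears_healthy():
        reset_watchdog_timer()   # "pet" the device, for example by writing to /dev/watchdog
    # under a fault condition such as a deadlock, CPU starvation, or
    # loss of disk or network access, the loop stalls, the timer
    # expires, and the watchdog device forces a reset of the node
```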

Hardware watchdog devices are more reliable than softdog devices.

2.1.1.1. Understanding Self Node Remediation Operator behavior with watchdog devices

The Self Node Remediation Operator determines the remediation strategy based on the watchdog devices that are present.

If a hardware watchdog device is configured and available, the Operator uses it for remediation. If a hardware watchdog device is not configured, the Operator enables and uses a softdog device for remediation.

If neither watchdog device is supported, either by the system or by the configuration, the Operator remediates nodes by using a software reboot.
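
This fallback order maps to two fields of the SelfNodeRemediationConfig CR. The following fragment is illustrative; the full CR and its default values are described later in this chapter:

```yaml
spec:
  watchdogFilePath: /dev/watchdog   # hardware watchdog if present; otherwise a softdog device is used
  isSoftwareRebootEnabled: true     # permits a software reboot when no watchdog device is available
```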

2.2. Control plane fencing

In earlier releases, you could enable Self Node Remediation and Node Health Check only on worker nodes. In the event of node failure, you can now also apply remediation strategies to control plane nodes.

Self Node Remediation occurs in two primary scenarios.

  • API Server Connectivity

    • In this scenario, the control plane node to be remediated is not isolated. It can be directly connected to the API Server, or it can be indirectly connected to the API Server through worker nodes or control plane nodes that are directly connected to the API Server.
    • When there is API Server Connectivity, the control plane node is remediated only if the Node Health Check Operator has created a SelfNodeRemediation custom resource (CR) for the node.
  • No API Server Connectivity

    • In this scenario, the control plane node to be remediated is isolated from the API Server. The node cannot connect directly or indirectly to the API Server.
    • When there is no API Server Connectivity, the control plane node is remediated as outlined in the following steps:

      • Check the status of the control plane node with the majority of the peer worker nodes. If the majority of the peer worker nodes cannot be reached, the node is analyzed further.

        • Self-diagnose the status of the control plane node:

          • If the self-diagnostics pass, no action is taken.
          • If the self-diagnostics fail, the node is fenced and remediated.
          • The currently supported self-diagnostics are checking the kubelet service status and checking endpoint availability by using an opt-in configuration.
      • If the node cannot communicate with most of its worker peers, check the connectivity of the control plane node with the other control plane nodes. If the node can communicate with any other control plane peer, no action is taken. Otherwise, the node is fenced and remediated.
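
The decision flow above can be summarized in pseudocode. This is one reading of the documented steps, simplified for illustration, and is not the Operator's actual code:

```text
# control plane node with no API server connectivity
if majority_of_worker_peers_reachable():
    take_no_action()                  # peers can confirm the node's health
else:
    run_self_diagnostics()            # kubelet service status, opt-in endpoint checks
    if self_diagnostics_failed():
        fence_and_remediate()
    elif any_control_plane_peer_reachable():
        take_no_action()
    else:
        fence_and_remediate()         # node is isolated and presumed unhealthy
```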

2.3. Installing the Self Node Remediation Operator by using the web console

You can use the Red Hat OpenShift web console to install the Self Node Remediation Operator.

Note

The Node Health Check Operator also installs the Self Node Remediation Operator as a default remediation provider.

Prerequisites

  • Log in as a user with cluster-admin privileges.

Procedure

  1. In the Red Hat OpenShift web console, navigate to Operators → OperatorHub.
  2. Select the Self Node Remediation Operator from the list of available Operators, and then click Install.
  3. Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace.
  4. Click Install.

Verification

To confirm that the installation is successful:

  1. Navigate to the Operators → Installed Operators page.
  2. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded.

If the Operator is not installed successfully:

  1. Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
  2. Navigate to the Workloads → Pods page and check the logs of the self-node-remediation-controller-manager pod and the self-node-remediation-ds pods in the openshift-workload-availability project for any reported issues.

2.4. Installing the Self Node Remediation Operator by using the CLI

You can use the OpenShift CLI (oc) to install the Self Node Remediation Operator.

You can install the Self Node Remediation Operator in your own namespace or in the openshift-workload-availability namespace.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a Namespace custom resource (CR) for the Self Node Remediation Operator:

    1. Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-workload-availability
    2. To create the Namespace CR, run the following command:

      $ oc create -f workload-availability-namespace.yaml
  2. Create an OperatorGroup CR:

    1. Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: workload-availability-operator-group
        namespace: openshift-workload-availability
    2. To create the OperatorGroup CR, run the following command:

      $ oc create -f workload-availability-operator-group.yaml
  3. Create a Subscription CR:

    1. Define the Subscription CR and save the YAML file, for example, self-node-remediation-subscription.yaml:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: self-node-remediation-operator
        namespace: openshift-workload-availability 1
      spec:
        channel: stable
        installPlanApproval: Manual 2
        name: self-node-remediation
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      1
      Specify the Namespace where you want to install the Self Node Remediation Operator. To install the Self Node Remediation Operator in the openshift-workload-availability namespace, specify openshift-workload-availability in the Subscription CR.
      2
      Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This strategy prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
    2. To create the Subscription CR, run the following command:

      $ oc create -f self-node-remediation-subscription.yaml

Verification

  1. Verify that the installation succeeded by inspecting the CSV resource:

    $ oc get csv -n openshift-workload-availability

    Example output

    NAME                           DISPLAY                          VERSION   REPLACES                       PHASE
    self-node-remediation.v0.9.0   Self Node Remediation Operator   0.9.0     self-node-remediation.v0.8.1   Succeeded

  2. Verify that the Self Node Remediation Operator is up and running:

    $ oc get deployment -n openshift-workload-availability

    Example output

    NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    self-node-remediation-controller-manager    1/1     1            1           28h

  3. Verify that the Self Node Remediation Operator created the SelfNodeRemediationConfig CR:

    $ oc get selfnoderemediationconfig -n openshift-workload-availability

    Example output

    NAME                           AGE
    self-node-remediation-config   28h

  4. Verify that each self node remediation pod is scheduled and running on each worker node and control plane node:

    $ oc get daemonset -n openshift-workload-availability

    Example output

    NAME                      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
    self-node-remediation-ds  6        6        6      6           6          <none>         28h

2.5. Configuring the Self Node Remediation Operator

The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR and the SelfNodeRemediationTemplate Custom Resource Definition (CRD).

Note

To avoid unexpected reboots of a specific node, the Node Maintenance Operator places the node in maintenance mode and automatically adds a node selector that prevents the SNR daemonset from running on the specific node.

2.5.1. Understanding the Self Node Remediation Operator configuration

The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR with the name self-node-remediation-config. The CR is created in the namespace of the Self Node Remediation Operator.

A change in the SelfNodeRemediationConfig CR re-creates the Self Node Remediation daemon set.

The SelfNodeRemediationConfig CR resembles the following YAML file:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediationConfig
metadata:
  name: self-node-remediation-config
  namespace: openshift-workload-availability
spec:
  safeTimeToAssumeNodeRebootedSeconds: 180 1
  watchdogFilePath: /dev/watchdog 2
  isSoftwareRebootEnabled: true 3
  apiServerTimeout: 15s 4
  apiCheckInterval: 5s 5
  maxApiErrorThreshold: 3 6
  peerApiServerTimeout: 5s 7
  peerDialTimeout: 5s 8
  peerRequestTimeout: 5s 9
  peerUpdateInterval: 15m 10
  hostPort: 30001 11
  customDsTolerations: 12
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        operator: Equal
        value: "value1"
        tolerationSeconds: 3600
1
Specify the time duration that the Operator waits before recovering affected workloads running on an unhealthy node. Starting replacement pods while they are still running on the failed node can lead to data corruption and a violation of run-once semantics. The time duration must be equal to or greater than the minimum value calculated by the Operator from the values in the ApiServerTimeout, ApiCheckInterval, maxApiErrorThreshold, peerDialTimeout, and peerRequestTimeout fields. To see the minimum value calculated by the Operator, look for the calculated minTimeToAssumeNodeRebooted is: [value] message in the logs. Specifying a value that is lower than the calculated minimum prevents the Operator from functioning.
2
Specify the file path of the watchdog device in the nodes. If you enter an incorrect path to the watchdog device, the Self Node Remediation Operator automatically detects the softdog device path.

If a watchdog device is unavailable, the SelfNodeRemediationConfig CR uses a software reboot.

3
Specify if you want to enable software reboot of the unhealthy nodes. By default, the value of isSoftwareRebootEnabled is set to true. To disable the software reboot, set the parameter value to false.
4
Specify the timeout duration to check connectivity with each API server. When this duration elapses, the Operator starts remediation. The timeout duration must be greater than or equal to 10 milliseconds.
5
Specify the frequency with which to check connectivity with each API server. The check interval must be greater than or equal to 1 second.
6
Specify a threshold value for API errors. After reaching this threshold, the node starts contacting its peers. The threshold value must be greater than or equal to 1.
7
Specify the duration of the timeout for the peer to connect to the API server. The timeout duration must be greater than or equal to 10 milliseconds.
8
Specify the duration of the timeout for establishing connection with the peer. The timeout duration must be greater than or equal to 10 milliseconds.
9
Specify the duration of the timeout to get a response from the peer. The timeout duration must be greater than or equal to 10 milliseconds.
10
Specify the frequency to update peer information, such as IP addresses. The update interval must be greater than or equal to 10 seconds.
11
Specify an optional value to change the port that Self Node Remediation agents use for internal communication. The value must be greater than 0. The default value is port 30001.
12
Specify custom tolerations for the Self Node Remediation agents that run as a DaemonSet, to support remediation for different types of nodes. You can configure the following fields:
  • effect: The effect indicates the taint effect to match. If this field is empty, all taint effects are matched. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
  • key: The key is the taint key that the toleration applies to. If this field is empty, all taint keys are matched. If the key is empty, the operator field must be Exists. This combination means to match all values and all keys.
  • operator: The operator represents a key’s relationship to the value. Valid operators are Exists and Equal. The default is Equal. Exists is equivalent to a wildcard for a value, so that a pod can tolerate all taints of a particular category.
  • value: The taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise it is just a regular string.
  • tolerationSeconds: The period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (that is, do not evict). Zero and negative values will be treated as 0 (that is evict immediately) by the system.
  • Custom tolerations allow you to add a toleration to the Self Node Remediation agent pods. For more information, see Using tolerations to control OpenShift Logging pod placement.
Note

You can edit the self-node-remediation-config CR that is created by the Self Node Remediation Operator. However, when you try to create a new CR for the Self Node Remediation Operator, the following message is displayed in the logs:

controllers.SelfNodeRemediationConfig
ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config'
or not in the namespace of the operator:
'openshift-workload-availability' {"selfnoderemediationconfig":
"openshift-workload-availability/selfnoderemediationconfig-copy"}

2.5.2. Understanding the Self Node Remediation Template configuration

The Self Node Remediation Operator also creates the SelfNodeRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes. The following remediation strategies are available:

Automatic
This remediation strategy simplifies the remediation process by letting the Self Node Remediation Operator decide on the most suitable remediation strategy for the cluster. This strategy checks if the OutOfServiceTaint strategy is available on the cluster. If the OutOfServiceTaint strategy is available, the Operator selects the OutOfServiceTaint strategy. If the OutOfServiceTaint strategy is not available, the Operator selects the ResourceDeletion strategy. Automatic is the default remediation strategy.
ResourceDeletion
This remediation strategy removes the pods from the node, rather than removing the node object. This strategy recovers workloads faster.
OutOfServiceTaint
This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node, rather than removing the node object. It achieves this by placing the out-of-service taint on the node. This strategy recovers workloads faster. This strategy has been supported as a Technology Preview feature since OpenShift Container Platform version 4.13, and is generally available since OpenShift Container Platform version 4.15.
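
For reference, the out-of-service taint that this strategy places on the node is the standard Kubernetes taint, which resembles the following fragment of a node specification:

```yaml
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown
    effect: NoExecute
```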

The Self Node Remediation Operator creates the SelfNodeRemediationTemplate CR for the strategy self-node-remediation-resource-deletion-template, which the ResourceDeletion remediation strategy uses.

The SelfNodeRemediationTemplate CR resembles the following YAML file:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediationTemplate
metadata:
  creationTimestamp: "2022-03-02T08:02:40Z"
  name: self-node-remediation-<remediation_object>-deletion-template 1
  namespace: openshift-workload-availability
spec:
  template:
    spec:
      remediationStrategy: <remediation_strategy>  2
1
Specifies the type of remediation template based on the remediation strategy. Replace <remediation_object> with either resource or node; for example, self-node-remediation-resource-deletion-template.
2
Specifies the remediation strategy. The default remediation strategy is Automatic.
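
A remediation template is consumed by the resource that triggers remediation. For example, a NodeHealthCheck CR can reference the template through its spec.remediationTemplate field. The following fragment is illustrative, and the CR name is a placeholder:

```yaml
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nodehealthcheck-sample   # illustrative name
spec:
  remediationTemplate:
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    name: self-node-remediation-resource-deletion-template
    namespace: openshift-workload-availability
```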

2.5.3. Troubleshooting the Self Node Remediation Operator

2.5.3.1. General troubleshooting

Issue
You want to troubleshoot issues with the Self Node Remediation Operator.
Resolution
Check the Operator logs.

2.5.3.2. Checking the daemon set

Issue
The Self Node Remediation Operator is installed but the daemon set is not available.
Resolution
Check the Operator logs for errors or warnings.

2.5.3.3. Unsuccessful remediation

Issue
An unhealthy node was not remediated.
Resolution

Verify that the SelfNodeRemediation CR was created by running the following command:

$ oc get snr -A

If the MachineHealthCheck controller did not create the SelfNodeRemediation CR when the node turned unhealthy, check the logs of the MachineHealthCheck controller. Additionally, ensure that the MachineHealthCheck CR includes the required specification to use the remediation template.

If the SelfNodeRemediation CR was created, ensure that its name matches the unhealthy node or the machine object.

2.5.3.4. Daemon set and other Self Node Remediation Operator resources exist even after uninstalling the Operator

Issue
The Self Node Remediation Operator resources, such as the daemon set, configuration CR, and the remediation template CR, exist even after uninstalling the Operator.
Resolution

To remove the Self Node Remediation Operator resources, delete the resources by running the following commands for each resource type:

$ oc delete ds <self-node-remediation-ds> -n <namespace>
$ oc delete snrc <self-node-remediation-config> -n <namespace>
$ oc delete snrt <self-node-remediation-template> -n <namespace>

2.5.4. Gathering data about the Self Node Remediation Operator

To collect debugging information about the Self Node Remediation Operator, use the must-gather tool. For information about the must-gather image for the Self Node Remediation Operator, see Gathering data about specific features.

2.5.5. Additional resources
