
Chapter 24. Handling Out of Resource Errors


24.1. Overview

This topic discusses best-effort attempts to prevent OpenShift Container Platform from experiencing out-of-memory (OOM) and out-of-disk-space conditions.

A node must maintain stability when available compute resources are low. This is especially important when dealing with incompressible resources such as memory or disk. If either resource is exhausted, the node becomes unstable.

Using configurable eviction policies, administrators can proactively monitor for and prevent situations where a node runs out of compute or memory resources.

This topic also explains how OpenShift Container Platform handles out-of-resource conditions and provides an example scenario and recommended practices.

Warning

If swap memory is enabled for a node, that node cannot detect that it is under memory pressure.

To take advantage of memory-based evictions, operators must disable swap.
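
For example, a minimal way to disable swap on a node host (verify this against your own host configuration before applying it):

# swapoff -a

To keep swap disabled across reboots, also comment out or remove any swap entries in /etc/fstab.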

24.2. Configuring Eviction Policies

An eviction policy allows a node to fail one or more pods when the node is running low on available resources. Failing a pod allows the node to reclaim needed resources.

An eviction policy is a combination of an eviction trigger signal with a specific eviction threshold value that is set in the node configuration file or through the command line. Evictions can be either hard, where a node takes immediate action on a pod that exceeds a threshold, or soft, where a node allows a grace period before taking action.

Note

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.
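
For example, assuming the default grouped node configuration maps in the openshift-node project (the map name for your node group might differ):

$ oc edit configmap node-config-compute -n openshift-node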

By using well-configured eviction policies, a node can proactively monitor for and prevent the total consumption of a compute resource.

Note

When the node fails a pod, the node ends all the containers in the pod, and the PodPhase transitions to Failed.

When detecting disk pressure, the node supports the nodefs and imagefs file system partitions.

The nodefs, or rootfs, is the file system that the node uses for local disk volumes, daemon logs, emptyDir, and other local storage. For example, rootfs is the file system that provides /. The rootfs contains openshift.local.volumes, by default /var/lib/origin/openshift.local.volumes.

The imagefs is the file system that the container runtime uses for storing images and individual container-writable layers. Eviction thresholds are at 85% full for imagefs. The location of the imagefs depends on the container runtime and, in the case of Docker, on the storage driver in use; a quick way to confirm the driver is shown after the following list.

  • For Docker:

    • If you use the devicemapper storage driver, the imagefs is the thin pool.

      You can limit the read and write layer for the container by setting the --storage-opt dm.basesize flag in the Docker daemon.

      $ sudo dockerd --storage-opt dm.basesize=50G
    • If you use the overlay2 storage driver, the imagefs is the file system that contains /var/lib/docker/overlay2.
  • For CRI-O, which uses the overlay driver, the imagefs is /var/lib/containers/storage by default.
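
To confirm which storage driver Docker is using, and therefore where the imagefs lives, you can inspect the daemon output, for example:

$ docker info | grep 'Storage Driver'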
Note

If you do not use local storage isolation (ephemeral storage) and you do not use XFS quota (volumeConfig), you cannot limit local disk usage by the pod.
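
If the node's local volume directory is on an XFS file system mounted with group quotas, XFS quota enforcement can be enabled through volumeConfig in the node configuration; the following is only a sketch, and the 512Mi value is illustrative:

volumeConfig:
  localQuota:
    perFSGroup: 512Mi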

24.2.1. Using the Node Configuration to Create a Policy

To configure an eviction policy, edit the appropriate node configuration map to specify the eviction thresholds under the eviction-hard or eviction-soft parameters.

The following samples show eviction thresholds:

Sample Node Configuration File for a Hard Eviction

kubeletArguments:
  eviction-hard: 1
  - memory.available<100Mi 2
  - nodefs.available<10%
  - nodefs.inodesFree<5%
  - imagefs.available<15%
  - imagefs.inodesFree<10%

1
The type of eviction: Use this parameter for a hard eviction.
2
Eviction thresholds based on a specific eviction trigger signal.
Note

You must provide percentage values for the inodesFree parameters. You can provide a percentage or a numerical value for the other parameters.

Sample Node Configuration File for a Soft Eviction

kubeletArguments:
  eviction-soft: 1
  - memory.available<100Mi 2
  - nodefs.available<10%
  - nodefs.inodesFree<5%
  - imagefs.available<15%
  - imagefs.inodesFree<10%
  eviction-soft-grace-period: 3
  - memory.available=1m30s
  - nodefs.available=1m30s
  - nodefs.inodesFree=1m30s
  - imagefs.available=1m30s
  - imagefs.inodesFree=1m30s

1
The type of eviction: Use this parameter for a soft eviction.
2
An eviction threshold based on a specific eviction trigger signal.
3
The grace period for the soft eviction. Leave the default values for optimal performance.

Restart the OpenShift Container Platform service for the changes to take effect:

# systemctl restart atomic-openshift-node

24.2.2. Understanding Eviction Signals

You can configure a node to trigger eviction decisions on any of the signals described in the table below. You add an eviction signal to an eviction threshold along with a threshold value.

To view the signals:

curl <certificate details> \
  https://<master>/api/v1/nodes/<node>/proxy/stats/summary
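
If you have cluster administrator access through the oc client, an equivalent way to retrieve the same node summary is the raw API helper (the node name below is a placeholder):

$ oc get --raw /api/v1/nodes/<node>/proxy/stats/summary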
Table 24.1. Supported Eviction Signals

Node Condition: MemoryPressure
Eviction Signal: memory.available
Value: memory.available = node.status.capacity[memory] - node.stats.memory.workingSet
Description: Available memory on the node has exceeded an eviction threshold.

Node Condition: DiskPressure
Eviction Signals: nodefs.available, nodefs.inodesFree, imagefs.available, imagefs.inodesFree
Values:
  nodefs.available = node.stats.fs.available
  nodefs.inodesFree = node.stats.fs.inodesFree
  imagefs.available = node.stats.runtime.imagefs.available
  imagefs.inodesFree = node.stats.runtime.imagefs.inodesFree
Description: Available disk space on either the node root file system or image file system has exceeded an eviction threshold.

Each of the signals in the preceding table supports either a literal or percentage-based value, except inodesFree. The inodesFree signal must be specified as a percentage. The percentage-based value is calculated relative to the total capacity associated with each signal.

A script derives the value for memory.available from your cgroup driver using the same set of steps that the kubelet performs. The script excludes inactive file memory (that is, the number of bytes of file-backed memory on the inactive LRU list) from its calculation because it assumes that inactive file memory is reclaimable under pressure.
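
The following is a minimal sketch of that calculation for a cgroup v1 host; the paths and arithmetic approximate the kubelet's logic rather than reproduce the exact script the node uses:

#!/bin/bash
# Approximate memory.available the way the kubelet does:
# available = capacity - working set, where working set = usage - inactive file memory.
memory_capacity_in_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
memory_total_inactive_file=$(grep total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))
echo "memory.available: $((memory_available_in_bytes / 1024 / 1024)) Mi"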

Note

Do not use tools like free -m, because free -m does not work in a container.

OpenShift Container Platform monitors these file systems every 10 seconds.

If you store volumes and logs in a dedicated file system, the node does not monitor that file system.

Note

The node supports the ability to trigger eviction decisions based on disk pressure. Before evicting pods because of disk pressure, the node also performs container and image garbage collection.

24.2.3. Understanding Eviction Thresholds

You can configure a node to specify eviction thresholds. Reaching a threshold triggers the node to reclaim resources. You can configure a threshold in the node configuration file.

If an eviction threshold is met, independent of its associated grace period, the node reports a condition to indicate that the node is under memory or disk pressure. Reporting the pressure prevents the scheduler from scheduling any additional pods on the node while attempts to reclaim resources are made.

The node continues to report node status updates at the frequency specified by the node-status-update-frequency argument. The default frequency is 10s (ten seconds).
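
If you need a different reporting interval, the argument can be set in the same kubeletArguments block as the eviction settings; for example (the value shown is simply the default):

kubeletArguments:
  node-status-update-frequency:
  - "10s"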

Eviction thresholds can be hard, for when the node takes immediate action when a threshold is met, or soft, for when you allow a grace period before reclaiming resources.

Note

Soft eviction usage is more common when you target a certain level of utilization but can tolerate temporary spikes. We recommend setting the soft eviction threshold lower than the hard eviction threshold, but the time period can be operator-specific. The system reservation should also cover the soft eviction threshold.

The soft eviction threshold is an advanced feature. You should configure a hard eviction threshold before attempting to use soft eviction thresholds.

Thresholds are configured in the following form:

<eviction_signal><operator><quantity>

For example, if an operator has a node with 10Gi of memory, and that operator wants to induce eviction if available memory falls below 1Gi, an eviction threshold for memory can be specified as either of the following:

memory.available<1Gi
memory.available<10%
Note

The node evaluates and monitors eviction thresholds every 10 seconds, and this value cannot be modified. This is the housekeeping interval.

24.2.3.1. Understanding Hard Eviction Thresholds

A hard eviction threshold has no grace period. When a hard eviction threshold is met, the node takes immediate action to reclaim the associated resource. For example, the node can end one or more pods immediately with no graceful termination.

To configure hard eviction thresholds, add eviction thresholds to the node configuration file under eviction-hard, as shown in Using the Node Configuration to Create a Policy.

Sample Node Configuration File with Hard Eviction Thresholds

kubeletArguments:
  eviction-hard:
  - memory.available<500Mi
  - nodefs.available<500Mi
  - nodefs.inodesFree<5%
  - imagefs.available<100Mi
  - imagefs.inodesFree<10%

This example is a general guideline and not recommended settings.

24.2.3.1.1. Default Hard Eviction Thresholds

OpenShift Container Platform uses the following default configuration for eviction-hard.

...
kubeletArguments:
  eviction-hard:
  - memory.available<100Mi
  - nodefs.available<10%
  - nodefs.inodesFree<5%
  - imagefs.available<15%
...

24.2.3.2. Understanding Soft Eviction Thresholds

A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. The node does not reclaim resources associated with the eviction signal until that grace period is exceeded. If no grace period is provided in the node configuration, the node produces an error on startup.

In addition, if a soft eviction threshold is met, an operator can specify a maximum-allowed pod termination grace period to use when evicting pods from the node. If eviction-max-pod-grace-period is specified, the node uses the lesser of pod.Spec.TerminationGracePeriodSeconds and the maximum-allowed grace period. If it is not specified, the node ends pods immediately with no graceful termination.

For soft eviction thresholds the following flags are supported:

  • eviction-soft: a set of eviction thresholds, such as memory.available<1.5Gi. If the threshold is met over a corresponding grace period, the threshold triggers a pod eviction.
  • eviction-soft-grace-period: a set of eviction grace periods, such as memory.available=1m30s. The grace period corresponds to how long a soft eviction threshold must hold before triggering a pod eviction.
  • eviction-max-pod-grace-period: the maximum-allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met.

To configure soft eviction thresholds, add eviction thresholds to the node configuration file under eviction-soft, as shown in Using the Node Configuration to Create a Policy.

Sample Node Configuration Files with Soft Eviction Thresholds

kubeletArguments:
  eviction-soft:
  - memory.available<500Mi
  - nodefs.available<500Mi
  - nodefs.inodesFree<5%
  - imagefs.available<100Mi
  - imagefs.inodesFree<10%
  eviction-soft-grace-period:
  - memory.available=1m30s
  - nodefs.available=1m30s
  - nodefs.inodesFree=1m30s
  - imagefs.available=1m30s
  - imagefs.inodesFree=1m30s

This example is a general guideline and not recommended settings.
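
The samples above do not set eviction-max-pod-grace-period. If you want to cap the termination grace period that the node uses when a soft eviction fires, a minimal sketch looks like the following (the 30-second value is only illustrative):

kubeletArguments:
  eviction-soft:
  - memory.available<500Mi
  eviction-soft-grace-period:
  - memory.available=1m30s
  eviction-max-pod-grace-period:
  - "30"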

24.3. Configuring the Amount of Resource for Scheduling

You can control how much of a node resource is made available for scheduling in order to allow the scheduler to fully allocate a node and to prevent evictions.

Set system-reserved equal to the amount of resource that you want to set aside for operating system daemons, such as sshd and NetworkManager; the remaining resources are made available to the scheduler for deploying pods. Evictions should occur only if pods use more than their requested amount of an allocatable resource.

A node reports two values:

  • Capacity: How much resource is on the machine.
  • Allocatable: How much resource is made available for scheduling.
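
You can view both values for a node with oc describe node; the Capacity and Allocatable sections of the output report them (the node name is a placeholder):

$ oc describe node <node_name>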

To configure the amount of allocatable resources, edit the appropriate node configuration map to add or modify the system-reserved parameter alongside your eviction-hard or eviction-soft thresholds.

kubeletArguments:
  eviction-hard: 1
    - "memory.available<500Mi"
  system-reserved:
    - "memory=1.5Gi"
1
This threshold can either be eviction-hard or eviction-soft.

To determine appropriate values for the system-reserved setting, determine a node’s resource usage using the node summary API. For more information, see Configuring Nodes for Allocated Resources.

Restart the OpenShift Container Platform service for the changes to take effect:

# systemctl restart atomic-openshift-node

24.4. Controlling Node Condition Oscillation

If a node oscillates above and below a soft eviction threshold, but does not exceed an associated grace period, the oscillation can cause problems for the scheduler.

To prevent the oscillation, set the eviction-pressure-transition-period parameter to control how long the node must wait before transitioning out of a pressure condition.

  1. Edit or add the eviction-pressure-transition-period parameter in the kubeletArguments section of the appropriate node configuration map, specifying the period as a duration, such as 5m.

    kubeletArguments:
      eviction-pressure-transition-period:
      - 5m

    The node toggles the condition back to false when the node has not met an eviction threshold for the specified pressure condition during the specified period.

    Note

    Use the default value, 5 minutes, before making any adjustments. The default value is intended to enable the system to stabilize and to prevent the scheduler from scheduling new pods to the node before it has settled.

  2. Restart the OpenShift Container Platform services for the changes to take effect:

    # systemctl restart atomic-openshift-node

24.5. Reclaiming Node-level Resources

If an eviction criterion is satisfied, the node initiates the process of reclaiming the pressured resource until the signal is below the defined threshold. During this time, the node does not support scheduling any new pods.

The node attempts to reclaim node-level resources before the node evicts end-user pods, based on whether the host system has a dedicated imagefs configured for the container runtime.

With Imagefs

If the host system has imagefs:

  • If the nodefs file system meets eviction thresholds, the node frees disk space in the following order:

    • Delete dead pods and containers.
  • If the imagefs file system meets eviction thresholds, the node frees disk space in the following order:

    • Delete all unused images.
Without Imagefs

If the host system does not have imagefs:

  • If the nodefs file system meets eviction thresholds, the node frees disk space in the following order:

    • Delete dead pods and containers.
    • Delete all unused images.

24.6. Understanding Pod Eviction

If an eviction threshold is met and the grace period is passed, the node initiates the process of evicting pods until the signal is below the defined threshold.

The node ranks pods for eviction by their quality of service. Among pods with the same quality of service, the node ranks the pods by the consumption of the compute resource relative to the pod’s scheduling request.

Each quality of service level has an out-of-memory score. The Linux out-of-memory tool (OOM killer) uses the score to determine which pods to end. For more information, see Understanding Quality of Service and Out of Memory Killer.

The following table lists each quality of service level and describes how pods within that level are ranked for eviction.

Table 24.2. Quality of Service Levels

Guaranteed: Pods that consume the highest amount of the resource relative to their request are failed first. If no pod exceeds its request, the strategy targets the largest consumer of the resource.

Burstable: Pods that consume the highest amount of the resource relative to their request for that resource are failed first. If no pod exceeds its request, the strategy targets the largest consumer of the resource.

BestEffort: Pods that consume the highest amount of the resource are failed first.

A guaranteed quality of service pod is never evicted because of another pod's resource consumption unless a system daemon, such as the node service or the container engine, consumes more resources than were reserved through the system-reserved allocation, or the node has only guaranteed quality of service pods remaining.

If the node has only guaranteed quality of service pods remaining, the node evicts a pod that least impacts node stability and limits the impact of the unexpected consumption to the other guaranteed quality of service pods.

Local disk is a best-effort quality of service resource. If necessary, the node evicts pods one at a time to reclaim disk space when it encounters disk pressure, ranking pods by quality of service. If the node is responding to a lack of free inodes, it evicts the pods with the lowest quality of service first. If the node is responding to a lack of available disk space, it ranks the pods within each quality of service level by local disk consumption and evicts the largest consumers first.

24.6.1. Understanding Quality of Service and Out of Memory Killer

If the node experiences a system out-of-memory (OOM) event before it is able to reclaim memory, the node depends on the OOM killer to respond.

The node sets an oom_score_adj value for each container based on the quality of service for the pod.

Table 24.3. Quality of Service Levels

Guaranteed: oom_score_adj = -998

Burstable: oom_score_adj = min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)

BestEffort: oom_score_adj = 1000

If the node is unable to reclaim memory before the node experiences a system OOM event, the OOM killer process calculates an OOM score:

% of node memory a container is using + oom_score_adj = oom_score

The node then ends the container with the highest score.

Containers with the lowest quality of service and that consume the largest amount of memory, relative to the scheduling request, are ended first.
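
For example, applying the formulas above to a Burstable container that requested 1Gi of memory on a node with 10Gi of capacity:

oom_score_adj = min(max(2, 1000 - (1000 * 1Gi) / 10Gi), 999) = 900

If that container is using 4% of the node's memory, its score is approximately 4 + 900 = 904. A BestEffort container's score starts at 1000, so it is still selected for ending before the Burstable container.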

Unlike pod eviction, if a pod's container is ended because of OOM, the node can restart the container according to its restart policy.

24.7. Understanding the Pod Scheduler and OOR Conditions

The scheduler considers node conditions when deciding whether to place additional pods on a node. For example, if a node has an eviction threshold like the following:

eviction-hard is "memory.available<500Mi"

and available memory falls below 500Mi, the node sets the MemoryPressure condition to true in Node.Status.Conditions.

Table 24.4. Node Conditions and Scheduler Behavior

MemoryPressure: If a node reports this condition, the scheduler does not place BestEffort pods on that node.

DiskPressure: If a node reports this condition, the scheduler does not place any additional pods on that node.

24.8. Example Scenario

An Operator:

  • Has a node with a memory capacity of 10Gi.
  • Wants to reserve 10% of memory capacity for system daemons such as kernel, node, and other daemons.
  • Wants to evict pods at 95% memory utilization to reduce thrashing and incidence of system OOM.

Implicit in this configuration is the understanding that system-reserved should include the amount of memory covered by the eviction threshold.

To reach that level of consumption, either some pod is using more than its request, or the system is using more than the 1Gi reserved for it.

If a node has 10 Gi of capacity and you want to reserve 10% of that capacity for the system daemons with the system-reserved setting, perform the following calculation:

capacity = 10 Gi
system-reserved = 10 Gi * .1 = 1 Gi

The amount of allocatable resources becomes:

allocatable = capacity - system-reserved = 9 Gi

This means that, by default, the scheduler can schedule pods that request up to 9 Gi of memory on that node.

If you want to enable eviction so that eviction is triggered when the node observes that available memory is below 10% of capacity for 30 seconds, or immediately when it falls below 5% of capacity, you need the scheduler to evaluate allocatable as 8Gi. Therefore, ensure your system reservation covers the greater of your eviction thresholds.

capacity = 10 Gi
eviction-threshold = 10 Gi * .1 = 1 Gi
system-reserved = (10Gi * .1) + eviction-threshold = 2 Gi
allocatable = capacity - system-reserved = 8 Gi

Add the following to the appropriate node configuration map:

kubeletArguments:
  system-reserved:
  - "memory=2Gi"
  eviction-hard:
  - "memory.available<.5Gi"
  eviction-soft:
  - "memory.available<1Gi"
  eviction-soft-grace-period:
  - "memory.available=30s"

This configuration ensures that the scheduler does not place pods on a node in a way that immediately induces memory pressure and triggers eviction, assuming those pods use less than their configured requests.
