Chapter 7. Working with clusters
7.1. Viewing system event information in an OpenShift Container Platform cluster
Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster.
7.1.1. Understanding events
Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way.
7.1.2. Viewing events using the CLI
You can get a list of events in a given project using the CLI.
Procedure
To view events in a project, use the following command:
$ oc get events [-n <project>] 1
- 1
- The name of the project.
For example:
$ oc get events -n openshift-config
Example output
LAST SEEN   TYPE      REASON                   OBJECT                     MESSAGE
97m         Normal    Scheduled                pod/dapi-env-test-pod      Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal
97m         Normal    Pulling                  pod/dapi-env-test-pod      pulling image "gcr.io/google_containers/busybox"
97m         Normal    Pulled                   pod/dapi-env-test-pod      Successfully pulled image "gcr.io/google_containers/busybox"
97m         Normal    Created                  pod/dapi-env-test-pod      Created container
9m5s        Warning   FailedCreatePodSandBox   pod/dapi-volume-test-pod   Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory
8m31s       Normal    Scheduled                pod/dapi-volume-test-pod   Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal
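If you only want a subset of events, you can filter the output with standard oc options. The following commands are a minimal sketch; <project> is a placeholder for your project name.

# Show only Warning events in the project:
$ oc get events -n <project> --field-selector type=Warning

# Stream new events as they occur:
$ oc get events -n <project> -w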
To view events in your project from the OpenShift Container Platform console:
- Launch the OpenShift Container Platform console.
- Click Home → Events and select your project.
- Move to the resource whose events you want to view. For example: Home → Projects → <project-name> → <resource-name>.
Many objects, such as pods and deployments, have their own Events tab, which shows events related to that object.
7.1.3. List of events
This section describes the events of OpenShift Container Platform.
Name | Description |
---|---|
| Failed pod configuration validation. |
Name | Description |
---|---|
| Back-off restarting a failed container. |
| Container created. |
| Pull/Create/Start failed. |
| Killing the container. |
| Container started. |
| Preempting other pods. |
| Container runtime did not stop the pod within specified grace period. |
Name | Description |
---|---|
| Container is unhealthy. |
Name | Description |
---|---|
| Back-off for container start or image pull. |
| The image’s NeverPull Policy is violated. |
| Failed to pull the image. |
| Failed to inspect the image. |
| Successfully pulled the image or the container image is already present on the machine. |
| Pulling the image. |
Name | Description |
---|---|
| Free disk space failed. |
| Invalid disk capacity. |
Name | Description |
---|---|
| Volume mount failed. |
| Host network not supported. |
| Host/port conflict. |
| Kubelet setup failed. |
| Undefined shaper. |
| Node is not ready. |
| Node is not schedulable. |
| Node is ready. |
| Node is schedulable. |
| Node selector mismatch. |
| Out of disk. |
| Node rebooted. |
| Starting kubelet. |
| Failed to attach volume. |
| Failed to detach volume. |
| Failed to expand/reduce volume. |
| Successfully expanded/reduced volume. |
| Failed to expand/reduce file system. |
| Successfully expanded/reduced file system. |
| Failed to unmount volume. |
| Failed to map a volume. |
| Failed to unmap the device. |
| Volume is already mounted. |
| Volume is successfully detached. |
| Volume is successfully mounted. |
| Volume is successfully unmounted. |
| Container garbage collection failed. |
| Image garbage collection failed. |
| Failed to enforce System Reserved Cgroup limit. |
| Enforced System Reserved Cgroup limit. |
| Unsupported mount option. |
| Pod sandbox changed. |
| Failed to create pod sandbox. |
| Failed pod sandbox status. |
Name | Description |
---|---|
| Pod sync failed. |
Name | Description |
---|---|
| There is an OOM (out of memory) situation on the cluster. |
Name | Description |
---|---|
| Failed to stop a pod. |
| Failed to create a pod container. |
| Failed to make pod data directories. |
| Network is not ready. |
| Error creating: |
| Created pod: |
| Error deleting: |
| Deleted pod: |
Name | Description |
---|---|
SelectorRequired | Selector is required. |
| Could not convert selector into a corresponding internal selector object. |
| HPA was unable to compute the replica count. |
| Unknown metric source type. |
| HPA was able to successfully calculate a replica count. |
| Failed to convert the given HPA. |
| HPA controller was unable to get the target’s current scale. |
| HPA controller was able to get the target’s current scale. |
| Failed to compute desired number of replicas based on listed metrics. |
| New size: |
| New size: |
| Failed to update status. |
Name | Description |
---|---|
| Starting OpenShift-SDN. |
| The pod’s network interface has been lost and the pod will be stopped. |
Name | Description |
---|---|
| The service-port |
Name | Description |
---|---|
| There are no persistent volumes available and no storage class is set. |
| Volume size or class is different from what is requested in claim. |
| Error creating recycler pod. |
| Occurs when volume is recycled. |
| Occurs when pod is recycled. |
| Occurs when volume is deleted. |
| Error when deleting the volume. |
| Occurs when volume for the claim is provisioned either manually or via external software. |
| Failed to provision volume. |
| Error cleaning provisioned volume. |
| Occurs when the volume is provisioned successfully. |
| Delay binding until pod scheduling. |
Name | Description |
---|---|
| Handler failed for pod start. |
| Handler failed for pre-stop. |
| Pre-stop hook unfinished. |
Name | Description |
---|---|
| Failed to cancel deployment. |
| Canceled deployment. |
| Created new replication controller. |
| No available Ingress IP to allocate to service. |
Name | Description |
---|---|
| Failed to schedule pod: |
| By |
| Successfully assigned |
Name | Description |
---|---|
| This daemon set is selecting all pods. A non-empty selector is required. |
| Failed to place pod on |
| Found failed daemon pod |
Name | Description |
---|---|
| Error creating load balancer. |
| Deleting load balancer. |
| Ensuring load balancer. |
| Ensured load balancer. |
| There are no available nodes for |
| Lists the new |
| Lists the new IP address. For example, |
| Lists external IP address. For example, |
| Lists the new UID. For example, |
| Lists the new |
| Lists the new |
| Updated load balancer with new hosts. |
| Error updating load balancer with new hosts. |
| Deleting load balancer. |
| Error deleting load balancer. |
| Deleted load balancer. |
7.2. Estimating the number of pods your OpenShift Container Platform nodes can hold
As a cluster administrator, you can use the cluster capacity tool to determine how many more pods can be scheduled with the current resources before they are exhausted, and to ensure that any future pods can still be scheduled. This capacity comes from the individual node hosts in a cluster and includes CPU, memory, disk space, and other resources.
7.2.1. Understanding the OpenShift Container Platform cluster capacity tool
The cluster capacity tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources, which yields a more accurate estimate.
The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster.
Also, pods might only be schedulable on particular sets of nodes, based on their node selector and affinity criteria. As a result, it can be difficult to estimate how many more pods a cluster can schedule.
You can run the cluster capacity analysis tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as a job inside a pod enables you to run it multiple times without intervention.
7.2.2. Running the cluster capacity tool on the command line
You can run the OpenShift Container Platform cluster capacity tool from the command line to estimate the number of pods that can be scheduled onto your cluster.
Prerequisites
- The OpenShift Cluster Capacity Tool is available as a container image from the Red Hat Ecosystem Catalog.
- Create a sample Pod spec file, which the tool uses for estimating resource usage. The Pod spec specifies its resource requirements as limits or requests. The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis.

An example of the Pod spec input is:

apiVersion: v1
kind: Pod
metadata:
  name: small-pod
  labels:
    app: guestbook
    tier: frontend
spec:
  containers:
  - name: php-redis
    image: gcr.io/google-samples/gb-frontend:v4
    imagePullPolicy: Always
    resources:
      limits:
        cpu: 150m
        memory: 100Mi
      requests:
        cpu: 150m
        memory: 100Mi
Procedure
To use the cluster capacity tool on the command line:
From the terminal, log in to the Red Hat Registry:
$ podman login registry.redhat.io
Pull the cluster capacity tool image:
$ podman pull registry.redhat.io/openshift4/ose-cluster-capacity
Run the cluster capacity tool:
$ podman run -v $HOME/.kube:/kube:Z -v $(pwd):/cc:Z ose-cluster-capacity \
/bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/pod-spec.yaml \
--verbose 1
- 1
- You can also add the --verbose option to output a detailed description of how many pods can be scheduled on each node in the cluster.
Example output
small-pod pod requirements:
	- CPU: 150m
	- Memory: 100Mi

The cluster can schedule 88 instance(s) of the pod small-pod.

Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

Pod distribution among nodes:
small-pod
	- 192.168.124.214: 45 instance(s)
	- 192.168.124.120: 43 instance(s)
In the above example, the number of estimated pods that can be scheduled onto the cluster is 88.
7.2.3. Running the cluster capacity tool as a job inside a pod
Running the cluster capacity tool as a job inside of a pod has the advantage of being able to be run multiple times without needing user intervention. Running the cluster capacity tool as a job involves using a ConfigMap object.
Prerequisites
Download and install the cluster capacity tool.
Procedure
To run the cluster capacity tool:
Create the cluster role:
$ cat << EOF| oc create -f -
Example output
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-capacity-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["replicasets", "statefulsets"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "watch", "list"]
EOF
Create the service account:
$ oc create sa cluster-capacity-sa
Add the role to the service account:
$ oc adm policy add-cluster-role-to-user cluster-capacity-role \
    system:serviceaccount:default:cluster-capacity-sa
Define and create the Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: small-pod
  labels:
    app: guestbook
    tier: frontend
spec:
  containers:
  - name: php-redis
    image: gcr.io/google-samples/gb-frontend:v4
    imagePullPolicy: Always
    resources:
      limits:
        cpu: 150m
        memory: 100Mi
      requests:
        cpu: 150m
        memory: 100Mi
The cluster capacity analysis is mounted in a volume using a ConfigMap object named cluster-capacity-configmap to mount input pod spec file pod.yaml into a volume test-volume at the path /test-pod.

If you haven't created a ConfigMap object, create one before creating the job:

$ oc create configmap cluster-capacity-configmap \
    --from-file=pod.yaml=pod.yaml
Create the job using the below example of a job specification file:
apiVersion: batch/v1
kind: Job
metadata:
  name: cluster-capacity-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: cluster-capacity-pod
    spec:
      containers:
      - name: cluster-capacity
        image: openshift/origin-cluster-capacity
        imagePullPolicy: "Always"
        volumeMounts:
        - mountPath: /test-pod
          name: test-volume
        env:
        - name: CC_INCLUSTER 1
          value: "true"
        command:
        - "/bin/sh"
        - "-ec"
        - |
          /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose
      restartPolicy: "Never"
      serviceAccountName: cluster-capacity-sa
      volumes:
      - name: test-volume
        configMap:
          name: cluster-capacity-configmap
- 1
- A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod.
The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml.
Run the cluster capacity image as a job in a pod:
$ oc create -f cluster-capacity-job.yaml
Check the job logs to find the number of pods that can be scheduled in the cluster:
$ oc logs jobs/cluster-capacity-job
Example output
small-pod pod requirements:
	- CPU: 150m
	- Memory: 100Mi

The cluster can schedule 52 instance(s) of the pod small-pod.

Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2).

Pod distribution among nodes:
small-pod
	- 192.168.124.214: 26 instance(s)
	- 192.168.124.120: 26 instance(s)
7.3. Restrict resource consumption with limit ranges
By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project:
- Pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers.
- Image streams: You can set limits on the number of images and tags in an ImageStream object.
- Images: You can limit the size of images that can be pushed to an internal registry.
- Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested.
If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace.
7.3.1. About limit ranges
A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC).

All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected.
The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources.
Sample limit range object for a container
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits"
spec:
  limits:
    - type: "Container"
      max:
        cpu: "2"
        memory: "1Gi"
      min:
        cpu: "100m"
        memory: "4Mi"
      default:
        cpu: "300m"
        memory: "200Mi"
      defaultRequest:
        cpu: "200m"
        memory: "100Mi"
      maxLimitRequestRatio:
        cpu: "10"
7.3.1.1. About component limits
The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary.
7.3.1.1.1. Container limits
A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created.
- The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object.
- The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object.

  If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range.

- The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object.

  If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request. This value must be greater than 1.

  For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5. This ratio must be less than or equal to the maxLimitRequestRatio. For a compliant example, see the sketch after the LimitRange definition below.
If the Pod spec does not specify a container resource request or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container.
Container LimitRange object definition
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: "Container"
      max:
        cpu: "2" 2
        memory: "1Gi" 3
      min:
        cpu: "100m" 4
        memory: "4Mi" 5
      default:
        cpu: "300m" 6
        memory: "200Mi" 7
      defaultRequest:
        cpu: "200m" 8
        memory: "100Mi" 9
      maxLimitRequestRatio:
        cpu: "10" 10
- 1
- The name of the LimitRange object.
- 2
- The maximum amount of CPU that a single container in a pod can request.
- 3
- The maximum amount of memory that a single container in a pod can request.
- 4
- The minimum amount of CPU that a single container in a pod can request.
- 5
- The minimum amount of memory that a single container in a pod can request.
- 6
- The default amount of CPU that a container can use if not specified in the Pod spec.
- 7
- The default amount of memory that a container can use if not specified in the Pod spec.
- 8
- The default amount of CPU that a container can request if not specified in the Pod spec.
- 9
- The default amount of memory that a container can request if not specified in the Pod spec.
- 10
- The maximum limit-to-request ratio for a container.
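For illustration, the following hypothetical pod (the name, image, and values are examples, not taken from the documentation above) would be accepted under this limit range: the requests and limits fall between the min and max values, and the CPU limit-to-request ratio is 500m/100m = 5, which is within the maxLimitRequestRatio of 10.

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: ratio-demo
spec:
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "100m"     # at the min CPU constraint of 100m
        memory: "100Mi" # above the min memory constraint of 4Mi
      limits:
        cpu: "500m"     # below the max CPU of 2; 500m/100m = 5, within the ratio of 10
        memory: "200Mi" # below the max memory of 1Gi
EOF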
7.3.1.1.2. Pod limits
A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created.
If the Pod spec does not specify a container resource request or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container.
Across all containers in a pod, the following must hold true:
- The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object.
- The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object.
- The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object.
Pod LimitRange object definition
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: "Pod"
      max:
        cpu: "2" 2
        memory: "1Gi" 3
      min:
        cpu: "200m" 4
        memory: "6Mi" 5
      maxLimitRequestRatio:
        cpu: "10" 6
- 1
- The name of the limit range object.
- 2
- The maximum amount of CPU that a pod can request across all containers.
- 3
- The maximum amount of memory that a pod can request across all containers.
- 4
- The minimum amount of CPU that a pod can request across all containers.
- 5
- The minimum amount of memory that a pod can request across all containers.
- 6
- The maximum limit-to-request ratio for a container.
7.3.1.1.3. Image limits
A LimitRange object allows you to specify the maximum size of an image that can be pushed to an internal registry.
When pushing images to an internal registry, the following must hold true:
- The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object.
Image LimitRange object definition
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: openshift.io/Image
      max:
        storage: 1Gi 2
To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas.
The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1, which lacks all the size information. In that case, no storage limit set on images can prevent it from being uploaded.
The issue is being addressed.
7.3.1.1.4. Image stream limits
A LimitRange object allows you to specify limits for image streams.
For each image stream, the following must hold true:
- The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object.
- The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object.
Imagestream LimitRange object definition
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: openshift.io/ImageStream
      max:
        openshift.io/image-tags: 20 2
        openshift.io/images: 30 3
The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag, an ImageStreamImage, and a DockerImage. Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction.
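For example, each of the following commands records one unique reference in the image stream and therefore counts toward the openshift.io/image-tags limit. The image stream, repository, and tag names are placeholders.

$ oc tag <repository>/<image>:<tag> <image_stream>:<tag>

$ oc import-image <image_stream>:<tag> --from=<repository>/<image>:<tag> --confirm

Both internal and external references created this way count toward the same limit.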
The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the internal registry. Internal and external references are not distinguished.
7.3.1.1.5. Persistent volume claim limits
A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC).
Across all persistent volume claims in a project, the following must hold true:
- The resource request in a persistent volume claim (PVC) must be greater than or equal to the min constraint for PVCs that is specified in the LimitRange object.
- The resource request in a persistent volume claim (PVC) must be less than or equal to the max constraint for PVCs that is specified in the LimitRange object.
PVC LimitRange object definition
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: "PersistentVolumeClaim"
      min:
        storage: "2Gi" 2
      max:
        storage: "50Gi" 3
7.3.2. Creating a Limit Range
To apply a limit range to a project:
Create a LimitRange object with your required specifications:

apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 1
spec:
  limits:
    - type: "Pod" 2
      max:
        cpu: "2"
        memory: "1Gi"
      min:
        cpu: "200m"
        memory: "6Mi"
    - type: "Container" 3
      max:
        cpu: "2"
        memory: "1Gi"
      min:
        cpu: "100m"
        memory: "4Mi"
      default: 4
        cpu: "300m"
        memory: "200Mi"
      defaultRequest: 5
        cpu: "200m"
        memory: "100Mi"
      maxLimitRequestRatio: 6
        cpu: "10"
    - type: openshift.io/Image 7
      max:
        storage: 1Gi
    - type: openshift.io/ImageStream 8
      max:
        openshift.io/image-tags: 20
        openshift.io/images: 30
    - type: "PersistentVolumeClaim" 9
      min:
        storage: "2Gi"
      max:
        storage: "50Gi"
- 1
- Specify a name for the LimitRange object.
- 2
- To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed.
- 3
- To set limits for a container, specify the minimum and maximum CPU and memory requests as needed.
- 4
- Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec.
- 5
- Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec.
- 6
- Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec.
- 7
- To set limits for an Image object, set the maximum size of an image that can be pushed to an internal registry.
- 8
- To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed.
- 9
- To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested.
Create the object:
$ oc create -f <limit_range_file> -n <project> 1
- 1
- Specify the name of the YAML file you created and the project where you want the limits to apply.
7.3.3. Viewing a limit
You can view any limits defined in a project by navigating in the web console to the project’s Quota page.
You can also use the CLI to view limit range details:
Get the list of LimitRange objects defined in the project. For example, for a project called demoproject:

$ oc get limits -n demoproject

NAME              CREATED AT
resource-limits   2020-07-15T17:14:23Z
Describe the LimitRange object you are interested in, for example the resource-limits limit range:

$ oc describe limits resource-limits -n demoproject

Name:                       resource-limits
Namespace:                  demoproject
Type                        Resource                 Min    Max    Default Request  Default Limit  Max Limit/Request Ratio
----                        --------                 ---    ---    ---------------  -------------  -----------------------
Pod                         cpu                      200m   2      -                -              -
Pod                         memory                   6Mi    1Gi    -                -              -
Container                   cpu                      100m   2      200m             300m           10
Container                   memory                   4Mi    1Gi    100Mi            200Mi          -
openshift.io/Image          storage                  -      1Gi    -                -              -
openshift.io/ImageStream    openshift.io/image       -      12     -                -              -
openshift.io/ImageStream    openshift.io/image-tags  -      10     -                -              -
PersistentVolumeClaim       storage                  -      50Gi   -                -              -
7.3.4. Deleting a Limit Range
To remove an active LimitRange object so that it no longer enforces the limits in a project:
Run the following command:
$ oc delete limits <limit_name>
7.4. Configuring cluster memory to meet container memory and risk requirements
As a cluster administrator, you can help your clusters operate efficiently through managing application memory by:
- Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements.
- Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters.
- Diagnosing and resolving memory-related error conditions associated with running in a container.
7.4.1. Understanding managing application memory
It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding.
For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod.
Note the following about memory requests and memory limits:
Memory request
- The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container.
- If a node’s memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric.
- The cluster administrator can assign quota or assign default values for the memory request value.
- The cluster administrator can override the memory request values that a developer specifies, in order to manage cluster overcommit.
Memory limit
- The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container.
- If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container.
- If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request.
- The cluster administrator can assign quota or assign default values for the memory limit value.
- The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
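For example, a minimal pod spec that sets both values might look like the following; the pod name, image, and amounts are illustrative only.

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: 256Mi   # considered by the scheduler and for eviction ordering
      limits:
        memory: 512Mi   # hard cap; exceeding it triggers the OOM killer
EOF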
7.4.1.1. Managing application memory strategy
The steps for sizing application memory on OpenShift Container Platform are as follows:
Determine expected container memory usage
Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts?
Determine risk appetite
Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage.
Set container memory request
Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase.
Set container memory limit, if required
Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly.
Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value.
If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin.
Ensure application is tuned
Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this.
7.4.2. Understanding OpenJDK settings for OpenShift Container Platform
The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container.
The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key:
- Overriding the JVM maximum heap size.
- Encouraging the JVM to release unused memory to the operating system, if appropriate.
- Ensuring all JVM processes within a container are appropriately configured.
Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options.
7.4.2.1. Understanding how to override the JVM maximum heap size
For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/-XX:MaxRAMFraction) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set.
There are at least two ways the above can be achieved:
- If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap.

  Note: The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead.

  This sets -XX:MaxRAM to the container memory limit, and the maximum heap size (-XX:MaxHeapSize / -Xmx) to 1/-XX:MaxRAMFraction (1/4 by default).

- Directly override one of -XX:MaxRAM, -XX:MaxHeapSize or -Xmx.

  This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated.
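As a sketch of the two approaches, either of the following container command lines could be used; the heap fraction, heap size, and application JAR name are illustrative assumptions, not recommended values.

# Let the JVM derive the heap from the container memory limit
# (on JDK 11 and later, -XX:+UseContainerSupport replaces UseCGroupMemoryLimitForHeap):
$ java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar

# Or hard-code the maximum heap size, leaving a safety margin below the container limit:
$ java -Xmx384m -jar app.jar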
7.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system
By default, the OpenJDK does not aggressively return unused memory to the operating system. This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two.
The OpenShift Container Platform Jenkins maven slave image uses the following JVM arguments to encourage the JVM to release unused memory to the operating system:
-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory (-XX:MaxHeapFreeRatio), spending up to 20% of CPU time in the garbage collector (-XX:GCTimeRatio). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms). Detailed additional information is available in Tuning Java's footprint in OpenShift (Part 1), Tuning Java's footprint in OpenShift (Part 2), and OpenJDK and Containers.
7.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured
In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin.
Many Java tools use different environment variables (JAVA_OPTS, GRADLE_OPTS, MAVEN_OPTS, and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM.
The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. To ensure that these options are used by default for all JVM workloads run in the slave image, the OpenShift Container Platform Jenkins maven slave image sets:
JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true"
The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead.
This does not guarantee that additional options are not required, but is intended to be a helpful starting point.
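For your own workloads, one way to pass such options to every JVM in a deployment is to set the variable in the container environment. The following is a sketch using oc set env; the deployment name and option values are placeholders.

$ oc set env deployment/<deployment_name> \
    JAVA_TOOL_OPTIONS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"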
7.4.3. Finding the memory request and limit from within a pod
An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API.
Procedure
Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas:

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: fedora:latest
    command:
    - sleep
    - "3600"
    env:
    - name: MEMORY_REQUEST 1
      valueFrom:
        resourceFieldRef:
          containerName: test
          resource: requests.memory
    - name: MEMORY_LIMIT 2
      valueFrom:
        resourceFieldRef:
          containerName: test
          resource: limits.memory
    resources:
      requests:
        memory: 384Mi
      limits:
        memory: 512Mi
Create the pod:
$ oc create -f <file-name>.yaml
Access the pod using a remote shell:
$ oc rsh test
Check that the requested values were applied:
$ env | grep MEMORY | sort
Example output
MEMORY_LIMIT=536870912
MEMORY_REQUEST=402653184
The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file.
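For example, you can read that file through a remote shell in the test pod created above; the value corresponds to the 512Mi limit.

$ oc rsh test cat /sys/fs/cgroup/memory/memory.limit_in_bytes

Example output

536870912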
7.4.4. Understanding OOM kill policy
OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion.
When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL, the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes.
For example, a container process exited with code 137, indicating it received a SIGKILL signal.
If the container does not exit immediately, an OOM kill is detectable as follows:
Access the pod using a remote shell:
# oc rsh test
Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control:

$ grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control
oom_kill 0
Run the following command to provoke an OOM kill:
$ sed -e '' </dev/zero
Example output
Killed
Run the following command to view the exit status of the sed command:

$ echo $?
Example output
137
The 137 code indicates that the container process received a SIGKILL signal.

Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented:

$ grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control
oom_kill 1
If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled. An OOM-killed pod might be restarted depending on the value of restartPolicy. If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one.

Use the following command to get the pod status:
$ oc get pod test
Example output
NAME      READY   STATUS      RESTARTS   AGE
test      0/1     OOMKilled   0          1m
If the pod has not restarted, run the following command to view the pod:
$ oc get pod test -o yaml
Example output
...
status:
  containerStatuses:
  - name: test
    ready: false
    restartCount: 0
    state:
      terminated:
        exitCode: 137
        reason: OOMKilled
  phase: Failed
If restarted, run the following command to view the pod:
$ oc get pod test -o yaml
Example output
...
status:
  containerStatuses:
  - name: test
    ready: true
    restartCount: 1
    lastState:
      terminated:
        exitCode: 137
        reason: OOMKilled
    state:
      running:
  phase: Running
7.4.5. Understanding pod eviction
OpenShift Container Platform may evict a pod from its node when the node’s memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal.
An evicted pod has phase Failed and reason Evicted. It will not be restarted, regardless of the value of restartPolicy. However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one.
$ oc get pod test
Example output
NAME      READY   STATUS    RESTARTS   AGE
test      0/1     Evicted   0          1m
$ oc get pod test -o yaml
Example output
...
status:
  message: 'Pod The node was low on resource: [MemoryPressure].'
  phase: Failed
  reason: Evicted
7.5. Configuring your cluster to place pods on overcommitted nodes
In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable.
Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node.
The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration.
OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes. You can configure cluster-level overcommit using the ClusterResourceOverride Operator to override the ratio between requests and limits set on developer containers. In conjunction with node overcommit and project memory and CPU limits and defaults, you can adjust the resource limit and request to achieve the desired level of overcommit.
In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node.
7.5.1. Resource requests and overcommitment
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.
The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.
Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted.
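To see the current level of overcommit on a node, you can inspect the cumulative requests and limits that the scheduler has placed there; the node name is a placeholder.

$ oc describe node <node_name> | grep -A 10 "Allocated resources"

The Allocated resources section lists, for each resource, the total requests and limits as a percentage of the node's allocatable capacity.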
7.5.2. Cluster-level overcommit using the Cluster Resource Override Operator
The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.
You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster 1
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 2
      cpuRequestToLimitPercent: 25 3
      limitCPUToMemoryPercent: 200 4
- 1
- The name must be cluster.
- 2
- Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50.
- 3
- Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25.
- 4
- Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200.
The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply.
When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project:
apiVersion: v1
kind: Namespace
metadata:
  ....
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
  ....
The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator.
7.5.2.1. Installing the Cluster Resource Override Operator using the web console
You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, navigate to Home → Projects.
- Click Create Project.
- Specify clusterresourceoverride-operator as the name of the project.
- Click Create.
- Navigate to Operators → OperatorHub.
- Choose ClusterResourceOverride Operator from the list of available Operators and click Install.
- On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode.
- Make sure clusterresourceoverride-operator is selected for Installed Namespace.
- Select an Update Channel and Approval Strategy.
- Click Install.
On the Installed Operators page, click ClusterResourceOverride.
- On the ClusterResourceOverride Operator details page, click Create Instance.
On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster 1
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 2
      cpuRequestToLimitPercent: 25 3
      limitCPUToMemoryPercent: 200 4
- 1
- The name must be cluster.
- 2
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 3
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 4
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
- Click Create.
Check the current state of the admission webhook by checking the status of the cluster custom resource:
- On the ClusterResourceOverride Operator page, click cluster.
On the ClusterResourceOverride Details page, click YAML. The mutatingWebhookConfigurationRef section appears when the webhook is called.

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
....
  mutatingWebhookConfigurationRef: 1
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
....
- 1
- Reference to the ClusterResourceOverride admission webhook.
7.5.2.2. Installing the Cluster Resource Override Operator using the CLI
You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the CLI:
Create a namespace for the Cluster Resource Override Operator:
Create a Namespace object YAML file (for example, cro-namespace.yaml) for the Cluster Resource Override Operator:

apiVersion: v1
kind: Namespace
metadata:
  name: clusterresourceoverride-operator
Create the namespace:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-namespace.yaml
Create an Operator group:
Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: clusterresourceoverride-operator
  namespace: clusterresourceoverride-operator
spec:
  targetNamespaces:
    - clusterresourceoverride-operator
Create the Operator Group:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-og.yaml
Create a subscription:
Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: clusterresourceoverride
  namespace: clusterresourceoverride-operator
spec:
  channel: "4.6"
  name: clusterresourceoverride
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create the subscription:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-sub.yaml
Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace:

Change to the clusterresourceoverride-operator namespace.

$ oc project clusterresourceoverride-operator
Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster 1
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 2
      cpuRequestToLimitPercent: 25 3
      limitCPUToMemoryPercent: 200 4
- 1
- The name must be cluster.
- 2
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 3
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 4
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Create the ClusterResourceOverride object:

$ oc create -f <file-name>.yaml
For example:
$ oc create -f cro-cr.yaml
Verify the current state of the admission webhook by checking the status of the cluster custom resource.
$ oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml
The mutatingWebhookConfigurationRef section appears when the webhook is called.

Example output
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
....
  mutatingWebhookConfigurationRef: 1
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
....
- 1
- Reference to the ClusterResourceOverride admission webhook.
7.5.2.3. Configuring cluster-level overcommit
The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To modify cluster-level overcommit:
Edit the ClusterResourceOverride CR:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 1
      cpuRequestToLimitPercent: 25 2
      limitCPUToMemoryPercent: 200 3
- 1
- Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50.
- 2
- Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
- 3
- Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:
apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1
  ...
- 1
- Add this label to each project.
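Instead of editing each Namespace object by hand, you can apply the label from the CLI; the project name is a placeholder.

$ oc label namespace <project_name> \
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled=true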
7.5.3. Node-level overcommit
You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects.
7.5.3.1. Understanding compute resources and containers
The node-enforced behavior for compute resources is specific to the resource type.
7.5.3.1.1. Understanding container CPU requests
A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container.
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled.
7.5.3.1.2. Understanding container memory requests
A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node’s resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount.
7.5.3.2. Understanding overcommitment and quality of service classes
A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity.
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class.
For each compute resource, a container is divided into one of three QoS classes with decreasing order of priority:
Priority | Class Name | Description |
---|---|---|
1 (highest) | Guaranteed | If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed. |
2 | Burstable | If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable. |
3 (lowest) | BestEffort | If requests and limits are not set for any of the resources, then the container is classified as BestEffort. |
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first:
- Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.
- Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist.
- BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory.
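As an illustration of how a class is derived from a container spec, consider the following sketch; the pod and image names are hypothetical. The container is classified as Guaranteed because its limits equal its requests for every resource; dropping the limits while keeping the requests would make it Burstable, and dropping both would make it BestEffort:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                             # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m          # limits equal requests for all resources -> Guaranteed
        memory: 256Mi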
7.5.3.2.1. Understanding how to reserve memory across quality of service tiers
You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes.
OpenShift Container Platform uses the qos-reserved parameter as follows:
- A value of qos-reserved=memory=100% prevents the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads.
- A value of qos-reserved=memory=50% allows the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class.
- A value of qos-reserved=memory=0% allows the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables the feature.
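One way to set this parameter on a pool of nodes is through a KubeletConfig custom resource, assuming your kubelet version exposes the qosReserved field (it is gated by the alpha QOSReserved feature in upstream Kubernetes, so treat this as a sketch rather than a supported recipe). The CR name and the custom-kubelet label are hypothetical and must match a label on your machine config pool:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: qos-reserve-memory           # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: qos-reserved   # hypothetical label on the target machine config pool
  kubeletConfig:
    qosReserved:
      memory: "50%"                  # reserve half of the memory requested by higher QoS classes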
7.5.3.3. Understanding swap memory and QoS
You can disable swap by default on your nodes to preserve quality of service (QoS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, which means pods do not receive the memory they asked for in their scheduling requests. As a result, additional pods are placed on the node, further increasing memory pressure and ultimately increasing your risk of experiencing a system out of memory (OOM) event.
If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure.
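To confirm that swap is disabled on a node, you could check from a debug shell on that node; for example, swapon prints nothing when no swap devices are active:

sh-4.2# swapon --show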
7.5.3.4. Understanding nodes overcommitment
In an overcommitted environment, it is important to properly configure your node to provide best system behavior.
When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1, overriding the default operating system setting.
OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ sysctl -a | grep commit
Example output
vm.overcommit_memory = 1
$ sysctl -a | grep panic
Example output
vm.panic_on_oom = 0
The above flags should already be set on nodes, and no further action is required.
You can also perform the following configurations for each node:
- Disable or enforce CPU limits using CPU CFS quotas
- Reserve resources for system processes
- Reserve memory across quality of service tiers
7.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas
Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel.
If you disable CPU limit enforcement, it is important to understand the impact on your node:
- If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
- If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel.
- If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-08T14:52:39Z
  generation: 1
  labels:
    custom-kubelet: small-pods 1
- 1
- If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for disabling CPU limits

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: disable-cpu-units 1
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods 2
  kubeletConfig:
    cpuCfsQuota: false 3

- 1
- Assign a name to the CR.
- 2
- Specify the label from the machine config pool.
- 3
- Set the cpuCfsQuota parameter to false to disable enforcement of CPU limits.
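After saving the configuration to a file, apply it to the cluster. For example, assuming the file is named disable-cpu-units.yaml:

$ oc create -f disable-cpu-units.yaml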
7.5.3.6. Reserving resources for system processes
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory.
Procedure
To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes.
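As a sketch of what such a reservation might look like through a KubeletConfig custom resource, the following example reserves CPU and memory for system daemons on nodes in a labeled machine config pool. The CR name, label, and values are illustrative only; the supported procedure is described in Allocating Resources for Nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: reserve-system-resources     # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods     # must match a label on the target machine config pool
  kubeletConfig:
    systemReserved:                  # resources set aside for system daemons
      cpu: 500m
      memory: 1Gi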
7.5.3.7. Disabling overcommitment for a node
When enabled, overcommitment can be disabled on each node.
Procedure
To disable overcommitment in a node run the following command on that node:
$ sysctl -w vm.overcommit_memory=0
7.5.4. Project-level limits
To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
For information on project-level resource limits, see Additional resources.
Alternatively, you can disable overcommitment for specific projects.
7.5.4.1. Disabling overcommitment for a project
When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure
To disable overcommitment in a project:
- Edit the project object file
Add the following annotation:
quota.openshift.io/cluster-resource-override-enabled: "false"
Create the project object:
$ oc create -f <file-name>.yaml
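For an existing project, you could instead set the annotation directly on the namespace from the command line; for example, assuming a project named example-project:

$ oc annotate namespace example-project quota.openshift.io/cluster-resource-override-enabled="false" --overwrite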
7.5.5. Additional resources
- For information on setting per-project resource limits, see Setting deployment resources.
- For more information about explicitly reserving resources for non-pod processes, see Allocating resources for nodes.
7.6. Enabling OpenShift Container Platform features using FeatureGates
As an administrator, you can use feature gates to enable features that are not part of the default set of features.
7.6.1. Understanding feature gates
You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default.
You can activate the following feature set by using the FeatureGate CR:
- IPv6DualStackNoUpgrade. This feature gate enables the dual-stack networking mode in your cluster. Dual-stack networking supports the use of IPv4 and IPv6 simultaneously. Enabling this feature set is not supported, cannot be undone, and prevents updates. This feature set is not recommended on production clusters.
7.6.2. Enabling feature sets using the web console
You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Procedure
To enable feature sets:
- In the OpenShift Container Platform web console, switch to the Administration → Custom Resource Definitions page.
- On the Custom Resource Definitions page, click FeatureGate.
- On the Custom Resource Definition Details page, click the Instances tab.
- Click the cluster feature gate, then click the YAML tab.
Edit the cluster instance to add specific feature sets:
Sample FeatureGate custom resource

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster 1
....
spec:
  featureSet: IPv6DualStackNoUpgrade 2

- 1
- The name of the FeatureGate CR must be cluster.
- 2
- Specify the feature set that you want to enable. IPv6DualStackNoUpgrade enables the dual-stack networking mode.
After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
Note: Enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates. This feature set is not recommended on production clusters.
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.
- From the Administrator perspective in the web console, navigate to Compute → Nodes.
- Select a node.
- In the Node details page, click Terminal.
In the terminal window, change your root directory to the host:
sh-4.2# chroot /host
View the kubelet.conf file:

sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
...
The features that are listed as true are enabled on your cluster.
Note: The features listed vary depending upon the OpenShift Container Platform version.
7.6.3. Enabling feature sets using the CLI
You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To enable feature sets:
Edit the FeatureGate CR named cluster:

$ oc edit featuregate cluster
Sample FeatureGate custom resource
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster 1
spec:
  featureSet: IPv6DualStackNoUpgrade 2

- 1
- The name of the FeatureGate CR must be cluster.
- 2
- Specify the feature set that you want to enable. IPv6DualStackNoUpgrade enables the dual-stack networking mode.
After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
Note: Enabling the IPv6DualStackNoUpgrade feature set cannot be undone and prevents updates. This feature set is not recommended on production clusters.
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.
Start a debug session for a node:
$ oc debug node/<node_name>
Change your root directory to the host:
sh-4.2# chroot /host
View the kubelet.conf file:

sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
...
The features that are listed as true are enabled on your cluster.
Note: The features listed vary depending upon the OpenShift Container Platform version.