Chapter 2. Working with pods
2.1. Using pods
A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed.
2.1.1. Understanding pods
Pods are the rough equivalent of a machine instance (physical or virtual) for a container. Each pod is allocated its own internal IP address and therefore owns its entire port space, and containers within a pod can share local storage and networking.
Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers.
Red Hat OpenShift Service on AWS treats pods as largely immutable; changes cannot be made to a pod definition while it is running. Red Hat OpenShift Service on AWS implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users.
Bare pods that are not managed by a replication controller are not rescheduled upon node disruption.
2.1.2. Example pod configurations
Red Hat OpenShift Service on AWS leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed.
The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here:
`Pod` object definition (YAML)
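A minimal sketch of such a definition, consistent with the callouts below; the pod name, label values, image, command, and volume names are illustrative assumptions rather than required values:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: example # illustrative name
  labels:
    environment: production
    app: abc # <1>
spec:
  restartPolicy: Always # <2>
  securityContext: # <3>
    runAsNonRoot: true
  containers: # <4>
    - name: abc
      command:
        - sleep
        - "1000000"
      volumeMounts: # <5>
        - name: cache-volume
          mountPath: /cache # <6>
      image: registry.access.redhat.com/ubi9/ubi:latest # <7>
  volumes: # <8>
    - name: cache-volume
      emptyDir:
        sizeLimit: 500Mi
```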
- 1: Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the `metadata` hash.
- 2: The pod restart policy with possible values `Always`, `OnFailure`, and `Never`. The default value is `Always`.
- 3: Red Hat OpenShift Service on AWS defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed.
- 4: `containers` specifies an array of one or more container definitions.
- 5: The container specifies where external storage volumes are mounted within the container.
- 6: Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, `/`, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host `/dev/pts` files. It is safe to mount the host by using `/host`.
- 7: Each container in the pod is instantiated from its own container image.
- 8: The pod defines storage volumes that are available to its container(s) to use.
If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state?.
This pod definition does not include attributes that are filled by Red Hat OpenShift Service on AWS automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods.
2.1.3. Understanding resource requests and limits
You can specify CPU and memory requests and limits for pods by using a pod spec, as shown in "Example pod configurations", or the specification for the controlling object of the pod.
CPU and memory requests specify the minimum amount of a resource that a pod needs to run, helping Red Hat OpenShift Service on AWS to schedule pods on nodes with sufficient resources.
CPU and memory limits define the maximum amount of a resource that a pod can consume, preventing the pod from consuming excessive resources and potentially impacting other pods on the same node.
CPU and memory requests and limits are processed by using the following principles:
CPU limits are enforced by using CPU throttling. When a container approaches its CPU limit, the kernel restricts access to the CPU specified as the container’s limit. As such, a CPU limit is a hard limit that the kernel enforces. Red Hat OpenShift Service on AWS can allow a container to exceed its CPU limit for extended periods of time. However, container runtimes do not terminate pods or containers for excessive CPU usage.
CPU limits and requests are measured in CPU units. One CPU unit is equivalent to 1 physical CPU core or 1 virtual core, depending on whether the node is a physical host or a virtual machine running inside a physical machine. Fractional requests are allowed. For example, when you define a container with a CPU request of `0.5`, you are requesting half as much CPU time as if you asked for `1.0` CPU. For CPU units, `0.1` is equivalent to `100m`, which can be read as one hundred millicpu or one hundred millicores. A CPU resource is always an absolute amount of resource, and is never a relative amount.

Note: By default, the smallest amount of CPU that can be allocated to a pod is 10 mCPU. You can request resource limits lower than 10 mCPU in a pod spec. However, the pod would still be allocated 10 mCPU.
Memory limits are enforced by the kernel by using out-of-memory (OOM) kills. When a container uses more than its memory limit, the kernel can terminate that container. However, terminations happen only when the kernel detects memory pressure. As such, a container that over-allocates memory might not be immediately killed, which means that memory limits are enforced reactively. A container can use more memory than its memory limit; if it does, the container can get killed.
You can express memory as a plain integer or as a fixed-point number by using one of these quantity suffixes: `E`, `P`, `T`, `G`, `M`, or `k`. You can also use the power-of-two equivalents: `Ei`, `Pi`, `Ti`, `Gi`, `Mi`, or `Ki`.
If the node where a pod is running has enough of a resource available, it is possible for a container to use more CPU or memory resources than it requested. However, the container cannot exceed the corresponding limit. For example, if you set a container memory request of `256 MiB`, and that container is in a pod scheduled to a node with `8 GiB` of memory and no other pods, the container can try to use more memory than the requested `256 MiB`.
This behavior does not apply to CPU and memory limits. These limits are applied by the kubelet and the container runtime, and are enforced by the kernel. On Linux nodes, the kernel enforces limits by using cgroups.
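To make the relationship concrete, the following is a sketch of a container spec that sets both requests and limits; the pod name, container name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo # illustrative name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
    resources:
      requests:
        cpu: 250m     # scheduler places the pod only on a node with this much CPU free
        memory: 64Mi  # minimum memory the pod needs to be scheduled
      limits:
        cpu: 500m     # the kernel throttles the container above half a core
        memory: 128Mi # the container risks an OOM kill above this amount
```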
2.2. Viewing pods
As an administrator, you can view cluster pods, check their health, and evaluate the overall health of the cluster. You can also view a list of pods associated with a specific project or view usage statistics about pods. Regularly viewing pods can help you detect problems early, track resource usage, and ensure cluster stability.
2.2.1. Viewing pods in a project
You can view a list of the pods in a project, including the status, the number of restarts, and the age of each pod.
Procedure
Change to the project by entering the following command:

$ oc project <project_name>

Obtain a list of pods by entering the following command:

$ oc get pods

Example output

NAME                       READY   STATUS    RESTARTS   AGE
console-698d866b78-bnshf   1/1     Running   2          165m
console-698d866b78-m87pm   1/1     Running   2          165m

Optional: Add the `-o wide` flag to view the pod IP address and the node where the pod is located. For example:

$ oc get pods -o wide

Example output

NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE                           NOMINATED NODE
console-698d866b78-bnshf   1/1     Running   2          166m   10.128.0.24   ip-10-0-152-71.ec2.internal    <none>
console-698d866b78-m87pm   1/1     Running   2          166m   10.129.0.23   ip-10-0-173-237.ec2.internal   <none>
2.2.2. Viewing pod usage statistics
You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption.
Prerequisites

- You must have `cluster-reader` permission to view the usage statistics.
- Metrics must be installed to view the usage statistics.
Procedure
View the usage statistics by entering the following command:

$ oc adm top pods -n <namespace>

Example output

NAME                         CPU(cores)   MEMORY(bytes)
console-7f58c69899-q8c8k     0m           22Mi
console-7f58c69899-xhbgg     0m           25Mi
downloads-594fcccf94-bcxk8   3m           18Mi
downloads-594fcccf94-kv4p6   2m           15Mi

Optional: Add the `--selector=''` flag to view usage statistics for pods with matching labels. You must choose the label query to filter on, which supports `=`, `==`, and `!=`. For example:

$ oc adm top pod --selector='<label_query>'
2.2.3. Viewing resource logs
You can view logs for resources by using the OpenShift CLI (`oc`) or the web console. Logs display from the end, or tail, by default. Viewing logs for resources can help you troubleshoot issues and monitor resource behavior.
2.2.3.1. Viewing resource logs by using the web console
Use the following procedure to view resource logs by using the Red Hat OpenShift Service on AWS web console.
Procedure
- In the Red Hat OpenShift Service on AWS console, navigate to Workloads → Pods, or navigate to the pod through the resource you want to investigate.

  Note: Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource.
- Select a project from the drop-down menu.
- Click the name of the pod you want to investigate.
- Click Logs.
2.2.3.2. Viewing resource logs by using the CLI
Use the following procedure to view resource logs by using the command-line interface (CLI).
Prerequisites

- Access to the OpenShift CLI (`oc`).
Procedure
View the log for a specific pod by entering the following command:

$ oc logs -f <pod_name> -c <container_name>

where:

- `-f`: Optional: Specifies that the output follows what is being written into the logs.
- `<pod_name>`: Specifies the name of the pod.
- `<container_name>`: Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.

For example:

$ oc logs -f ruby-57f7f4855b-znl92 -c ruby

View the log for a specific resource by entering the following command:

$ oc logs <object_type>/<resource_name>

For example:

$ oc logs deployment/ruby
2.3. Configuring a Red Hat OpenShift Service on AWS cluster for pods
As an administrator, you can create and maintain an efficient cluster for pods.
By keeping your cluster efficient, you can provide a better environment for your developers by configuring what a pod does when it exits, ensuring that the required number of pods is always running, determining when to restart pods designed to run only once, limiting the bandwidth available to pods, and keeping pods running during disruptions.
2.3.1. Configuring how pods behave after restart
A pod restart policy determines how Red Hat OpenShift Service on AWS responds when Containers in that pod exit. The policy applies to all Containers in that pod.
The possible values are:
- `Always` - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is `Always`.
- `OnFailure` - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes.
- `Never` - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit.
After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure:
| Condition | Controller Type | Restart Policy |
|---|---|---|
| Pods that are expected to terminate (such as batch computations) | Job | `OnFailure` or `Never` |
| Pods that are expected to not terminate (such as web servers) | Replication controller | `Always` |
| Pods that must run one-per-machine | Daemon set | Any |
If a Container on a pod fails and the restart policy is set to `OnFailure`, the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of `Never`.
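As a sketch of these semantics, a pod for a run-once workload might set the policy as follows; the pod name, image, and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-pod # illustrative name
spec:
  restartPolicy: OnFailure # restart the Container only if it exits with a non-zero status
  containers:
  - name: worker
    image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
    command: ["/bin/sh", "-c", "echo processing && exit 0"] # illustrative command
```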
If an entire pod fails, Red Hat OpenShift Service on AWS starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by previous runs.
Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents Red Hat OpenShift Service on AWS from restarting.
If the underlying cloud provider endpoints are not reliable, do not install a cluster by using cloud provider integration. Install the cluster as if it were in a no-cloud environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster.
For details on how Red Hat OpenShift Service on AWS uses restart policy with failed Containers, see the Example States in the Kubernetes documentation.
2.3.2. Limiting the bandwidth available to pods
You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods.
Procedure
To limit the bandwidth on a pod:
Write an object definition file, and specify the data traffic speed by using the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s:

Limited `Pod` object definition
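A minimal sketch of such a definition; the pod name, container name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iperf-slow # illustrative name
  annotations:
    kubernetes.io/ingress-bandwidth: 10M # shape traffic to the pod at 10 Mbit/s
    kubernetes.io/egress-bandwidth: 10M  # police traffic from the pod at 10 Mbit/s
spec:
  containers:
  - name: iperf # illustrative container name
    image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
```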
Create the pod by using the object definition:
$ oc create -f <file_or_dir_path>
2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance.
`PodDisruptionBudget` is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures).

A `PodDisruptionBudget` object's configuration consists of the following key parts:
- A label selector, which is a label query over a set of pods.
- An availability level, which specifies the minimum number of pods that must be available simultaneously, either:
  - `minAvailable` is the number of pods that must always be available, even during a disruption.
  - `maxUnavailable` is the number of pods that can be unavailable during a disruption.

Note: `Available` refers to the number of pods that have the condition `Ready=True`. `Ready=True` refers to a pod that is able to serve requests and should be added to the load balancing pools of all matching services.
A `maxUnavailable` of `0%` or `0`, or a `minAvailable` of `100%` or equal to the number of replicas, is permitted but can block nodes from being drained.
The default setting for `maxUnavailable` is `1` for all the machine config pools in Red Hat OpenShift Service on AWS. It is recommended to not change this value and to update one control plane node at a time. Do not change this value to `3` for the control plane pool.
You can check for pod disruption budgets across all projects with the following:

$ oc get poddisruptionbudget --all-namespaces

The following example contains some values that are specific to Red Hat OpenShift Service on AWS.
Example output
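An illustrative sketch of the output; the namespaces, names, and values shown here are assumptions and vary by cluster:

```
NAMESPACE           NAME                MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
openshift-etcd      etcd-quorum-guard   2               N/A               0                     4d22h
openshift-ingress   router-default      N/A             50%               1                     4d22h
```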
The `PodDisruptionBudget` is considered healthy when there are at least `minAvailable` pods running in the system. Every pod above that limit can be evicted.
Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements.
2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets
You can use a `PodDisruptionBudget` object to specify the minimum number or percentage of replicas that must be up at a time.
Procedure
To configure a pod disruption budget:
Create a YAML file with an object definition similar to the following:
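A minimal sketch using `minAvailable`; the object name and pod label are illustrative assumptions:

```yaml
apiVersion: policy/v1 # <1>
kind: PodDisruptionBudget
metadata:
  name: my-pdb # illustrative name
spec:
  minAvailable: 2 # <2>
  selector: # <3>
    matchLabels:
      name: my-pod # illustrative label
```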
- 1: `PodDisruptionBudget` is part of the `policy/v1` API group.
- 2: The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, `20%`.
- 3: A label query over a set of resources. The result of `matchLabels` and `matchExpressions` are logically conjoined. Leave this parameter blank, for example `selector: {}`, to select all pods in the project.
Or:
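An equivalent sketch using `maxUnavailable`; again, the name and label are illustrative assumptions:

```yaml
apiVersion: policy/v1 # <1>
kind: PodDisruptionBudget
metadata:
  name: my-pdb # illustrative name
spec:
  maxUnavailable: 25% # <2>
  selector: # <3>
    matchLabels:
      name: my-pod # illustrative label
```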
- 1: `PodDisruptionBudget` is part of the `policy/v1` API group.
- 2: The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, `20%`.
- 3: A label query over a set of resources. The result of `matchLabels` and `matchExpressions` are logically conjoined. Leave this parameter blank, for example `selector: {}`, to select all pods in the project.
Run the following command to add the object to the project:

$ oc create -f </path/to/file> -n <project_name>
2.5. Creating and using config maps
The following sections define config maps and how to create and use them.
2.5.1. Understanding config maps
Many applications require configuration by using some combination of configuration files, command-line arguments, and environment variables. In Red Hat OpenShift Service on AWS, these configuration artifacts are decoupled from image content to keep containerized applications portable.
The `ConfigMap` object provides mechanisms to inject containers with configuration data while keeping containers agnostic of Red Hat OpenShift Service on AWS. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs.

The `ConfigMap` object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example:

`ConfigMap` Object Definition
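A sketch of such an object; the name, namespace, and data values are illustrative assumptions:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-config # illustrative name
  namespace: my-namespace # illustrative namespace
data: # configuration data as UTF-8 strings
  example.property.1: hello
  example.property.file: |-
    property.1=value-1
    property.2=value-2
binaryData: # non-UTF-8 data, entered as Base64
  bar: L3Jvb3QvMTAw # illustrative Base64-encoded value
```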
You can use the `binaryData` field when you create a config map from a binary file, such as an image.
Configuration data can be consumed in pods in a variety of ways. A config map can be used to:
- Populate environment variable values in containers
- Set command-line arguments in a container
- Populate configuration files in a volume
Users and system components can store configuration data in a config map.
A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information.
2.5.1.1. Config map restrictions
A config map must be created before its contents can be consumed in pods.
Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis.
`ConfigMap` objects reside in a project. They can only be referenced by pods in the same project.

The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the Red Hat OpenShift Service on AWS node's `--manifest-url` flag, its `--config` flag, or its REST API because these are not common ways to create pods.
2.5.2. Creating a config map in the Red Hat OpenShift Service on AWS web console
You can create a config map in the Red Hat OpenShift Service on AWS web console.
Procedure
To create a config map as a cluster administrator:
- In the Administrator perspective, select Workloads → Config Maps.
- At the top right side of the page, select Create Config Map.
- Enter the contents of your config map.
- Select Create.
To create a config map as a developer:
- In the Developer perspective, select Config Maps.
- At the top right side of the page, select Create Config Map.
- Enter the contents of your config map.
- Select Create.
2.5.3. Creating a config map by using the CLI
You can use the following command to create a config map from directories, specific files, or literal values.
Procedure
Create a config map:

$ oc create configmap <configmap_name> [options]
2.5.3.1. Creating a config map from a directory
You can create a config map from a directory by using the `--from-file` flag. This method allows you to use multiple files within a directory to create a config map.

Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file.

For example, the following command creates a config map with the contents of the `example-files` directory:
$ oc create configmap game-config --from-file=example-files/

View the keys in the config map:

$ oc describe configmaps game-config
Example output
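An illustrative sketch of the output; the namespace is assumed and the byte counts depend on the actual file contents:

```
Name:           game-config
Namespace:      default
Labels:         <none>
Annotations:    <none>

Data
====
game.properties: 158 bytes
ui.properties:   83 bytes
```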
You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of `oc describe` only shows the names of the keys and their sizes.
Prerequisite

You must have a directory with files that contain the data you want to populate a config map with.

The following procedure uses these example files: `game.properties` and `ui.properties`:

$ cat example-files/game.properties

Example output
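An illustrative sketch of the file contents, assumed for this example:

```
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
```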
$ cat example-files/ui.properties

Example output

color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
Procedure

Create a config map holding the content of each file in this directory by entering the following command:

$ oc create configmap game-config \
    --from-file=example-files/
Verification

Enter the `oc get` command for the object with the `-o` option to see the values of the keys:

$ oc get configmaps game-config -o yaml

Example output
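An illustrative sketch of the output, assuming the file contents shown above; metadata values vary:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
  namespace: default # illustrative namespace
data:
  game.properties: |
    enemies=aliens
    lives=3
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
```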
2.5.3.2. Creating a config map from a file
You can create a config map from a file by using the `--from-file` flag. You can pass the `--from-file` option multiple times to the CLI.

You can also specify the key to set in a config map for content imported from a file by passing a `key=value` expression to the `--from-file` option. For example:
$ oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties
If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. Red Hat OpenShift Service on AWS detects binary files and transparently encodes the file as `MIME`. On the server, the `MIME` payload is decoded and stored without corrupting the data.
Prerequisite

You must have a directory with files that contain the data you want to populate a config map with.

The following procedure uses these example files: `game.properties` and `ui.properties`:

$ cat example-files/game.properties

Example output
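As in the previous section, an illustrative sketch of the assumed file contents:

```
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
```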
$ cat example-files/ui.properties

Example output

color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
Procedure

Create a config map by specifying a specific file:

$ oc create configmap game-config-2 \
    --from-file=example-files/game.properties \
    --from-file=example-files/ui.properties

Create a config map by specifying a key-value pair:

$ oc create configmap game-config-3 \
    --from-file=game-special-key=example-files/game.properties
Verification

Enter the `oc get` command for the object with the `-o` option to see the values of the keys from the file:

$ oc get configmaps game-config-2 -o yaml

Example output
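As with `game-config`, an illustrative sketch of the output; the `game.properties` content is assumed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config-2
  namespace: default # illustrative namespace
data:
  game.properties: |
    enemies=aliens
    lives=3
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
```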
Enter the `oc get` command for the object with the `-o` option to see the values of the keys from the key-value pair:

$ oc get configmaps game-config-3 -o yaml

Example output
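An illustrative sketch of the output; the file content under the key is assumed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config-3
  namespace: default # illustrative namespace
data:
  game-special-key: | # <1>
    enemies=aliens
    lives=3
```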
- 1: This is the key that you set in the preceding step.
2.5.3.3. Creating a config map from literal values
You can supply literal values for a config map.
The `--from-literal` option takes a `key=value` syntax, which allows literal values to be supplied directly on the command line.
Procedure
Create a config map by specifying a literal value:

$ oc create configmap special-config \
    --from-literal=special.how=very \
    --from-literal=special.type=charm
Verification
Enter the `oc get` command for the object with the `-o` option to see the values of the keys:

$ oc get configmaps special-config -o yaml

Example output
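A sketch of the expected output; the `data` values follow from the command, while the metadata values are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default # illustrative namespace
data:
  special.how: very
  special.type: charm
```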
2.5.4. Use cases: Consuming config maps in pods
The following sections describe some use cases when consuming `ConfigMap` objects in pods.
2.5.4.1. Populating environment variables in containers by using config maps
You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names.
As an example, consider the following config map:
`ConfigMap` with two environment variables
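A sketch of this config map, consistent with the pod output shown later in this section; the name and namespace are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config # assumed name, referenced by the pod spec below
  namespace: default # illustrative namespace
data:
  special.how: very
  special.type: charm
```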
`ConfigMap` with one environment variable
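A sketch of the second config map, again with assumed name and namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config # assumed name, referenced by the pod spec below
  namespace: default # illustrative namespace
data:
  log_level: INFO
```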
Procedure
You can consume the keys of this `ConfigMap` in a pod by using `configMapKeyRef` sections.

Sample `Pod` specification configured to inject specific environment variables
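A sketch of such a pod specification, consistent with the callouts below; the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod # illustrative name
spec:
  containers:
    - name: test-container
      image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
      command: [ "/bin/sh", "-c", "env" ] # print all environment variables
      env: # <1>
        - name: SPECIAL_LEVEL_KEY # <2>
          valueFrom:
            configMapKeyRef:
              name: special-config # <3>
              key: special.how # <4>
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config # <5>
              key: special.type # <6>
              optional: true # <7>
      envFrom: # <8>
        - configMapRef:
            name: env-config # <9>
  restartPolicy: Never
```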
- 1: Stanza to pull the specified environment variables from a `ConfigMap`.
- 2: Name of a pod environment variable that you are injecting a key's value into.
- 3, 5: Name of the `ConfigMap` to pull specific environment variables from.
- 4, 6: Environment variable to pull from the `ConfigMap`.
- 7: Makes the environment variable optional. As optional, the pod will be started even if the specified `ConfigMap` and keys do not exist.
- 8: Stanza to pull all environment variables from a `ConfigMap`.
- 9: Name of the `ConfigMap` to pull all environment variables from.
When this pod is run, the pod logs will include the following output:
SPECIAL_LEVEL_KEY=very
log_level=INFO
`SPECIAL_TYPE_KEY=charm` is not listed in the example output because `optional: true` is set.
2.5.4.2. Setting command-line arguments for container commands with config maps
You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax `$(VAR_NAME)`.
As an example, consider the following config map:
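A sketch of the config map, consistent with the echo output shown later in this section; the name and namespace are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config # assumed name, referenced by the pod spec below
  namespace: default # illustrative namespace
data:
  special.how: very
  special.type: charm
```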
Procedure
To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command by using the `$(VAR_NAME)` syntax.

Sample pod specification configured to inject specific environment variables
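A sketch of such a pod specification; the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod # illustrative name
spec:
  containers:
    - name: test-container
      image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] # <1>
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type
  restartPolicy: Never
```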
- 1: Inject the values into a command in a container by using the keys you want to use as environment variables.
When this pod is run, the output from the echo command run in the test-container container is as follows:

very charm
2.5.4.3. Injecting content into a volume by using config maps
You can inject content into a volume by using config maps.
Example `ConfigMap` custom resource (CR)
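A sketch of the config map, consistent with the cat output shown later in this section; the name and namespace are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config # assumed name, referenced by the pod specs below
  namespace: default # illustrative namespace
data:
  special.how: very # the key whose value is projected into the volume
```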
Procedure
You have a couple of different options for injecting content into a volume by using config maps.
The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key:
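A sketch of such a pod specification; the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod # illustrative name
spec:
  containers:
    - name: test-container
      image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
      command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ] # <1>
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config # each key becomes a file in the mounted volume
  restartPolicy: Never
```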
- 1: File containing key.
When this pod is run, the output of the cat command will be:

very

You can also control the paths within the volume where config map keys are projected:
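A sketch of the variant with a custom projection path; the pod name and image remain illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod # illustrative name
spec:
  containers:
    - name: test-container
      image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
      command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ] # <1>
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          - key: special.how
            path: path/to/special-key # project the key at a custom relative path
  restartPolicy: Never
```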
- 1: Path to config map key.
When this pod is run, the output of the cat command will be:

very
2.6. Including pod priority in pod scheduling decisions
You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node.
To use priority and preemption, reference a priority class in the pod specification to apply that weight for scheduling.
2.6.1. Understanding pod priority
When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher-priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other lower-priority pods.
2.6.1.1. Pod priority classes
You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority.
A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, Red Hat OpenShift Service on AWS has two reserved priority classes for critical system pods to have guaranteed scheduling.
$ oc get priorityclasses
Example output
NAME VALUE GLOBAL-DEFAULT AGE
system-node-critical 2000001000 false 72m
system-cluster-critical 2000000000 false 72m
openshift-user-critical 1000000000 false 3d13h
cluster-logging 1000000 false 29s
- system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are `ovnkube-node`, and so forth. A number of critical components include the `system-node-critical` priority class by default, for example:
  - master-api
  - master-controller
  - master-etcd
  - ovn-kubernetes
  - sync
- system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the `system-node-critical` priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the `system-cluster-critical` priority class by default, for example:
  - fluentd
  - metrics-server
  - descheduler
- openshift-user-critical - You can use the `priorityClassName` field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the `openshift-monitoring` and `openshift-user-workload-monitoring` namespaces use the `openshift-user-critical` `priorityClassName`. Monitoring workloads use `system-critical` as their first `priorityClass`, but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating.
- cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps.
2.6.1.2. Pod priority names
After you have one or more priority classes, you can create pods that specify a priority class name in a `Pod` spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected.
2.6.2. Understanding pod preemption
When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod.
When the scheduler preempts one or more pods on a node, the `nominatedNodeName` field of the higher-priority `Pod` spec is set to the name of the node, along with the `nodeName` field. The scheduler uses the `nominatedNodeName` field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the clusters.
After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while the scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the `nominatedNodeName` field and `nodeName` field of the `Pod` spec might be different.

Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher priority than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the `nominatedNodeName` of the pending pod, making the pod eligible for another node.
Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods.
The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node.
2.6.2.1. Non-preempting priority classes
Pods with the preemption policy set to `Never` are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them.
Non-preempting pods can still be preempted by other, high-priority pods.
2.6.2.2. Pod preemption and other scheduler settings
If you enable pod priority and preemption, consider your other scheduler settings:
- Pod priority and pod disruption budget
- A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, Red Hat OpenShift Service on AWS respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements.
- Pod priority and pod affinity
- Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label.
If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and pending pod might not be scheduled.
To prevent this situation, carefully configure pod affinity with equal-priority pods.
2.6.2.3. Graceful termination of preempted pods
When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node.
To minimize this gap, configure a small graceful termination period for lower-priority pods.
2.6.3. Configuring priority and preemption
You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the `priorityClassName` in your pod specs.
You cannot add a priority class directly to an existing scheduled pod.
Procedure
To configure your cluster to use priority and preemption:
Define a pod spec to include the name of a priority class by creating a YAML file similar to the following:
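A sketch of such a pod spec; the pod name, label, and image are illustrative, and it assumes a priority class named `high-priority` already exists in the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx # illustrative name
  labels:
    env: test # illustrative label
spec:
  containers:
    - name: nginx
      image: nginx # illustrative image
      imagePullPolicy: IfNotPresent
  priorityClassName: high-priority # <1>
```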
- 1: Specify the priority class to use with this pod.
Create the pod:

$ oc create -f <file-name>.yaml

You can add the priority name directly to the pod configuration or to a pod template.
2.7. Placing pods on specific nodes using node selectors
A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods.
For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels.
If you are using node affinity and node selectors in the same pod configuration, see the important considerations below.
2.7.1. Using node selectors to control pod placement
You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, Red Hat OpenShift Service on AWS schedules the pods on nodes that contain matching labels.
You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a `ReplicaSet` object, `DaemonSet` object, `StatefulSet` object, `Deployment` object, or `DeploymentConfig` object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod.
You cannot add a node selector directly to an existing scheduled pod.
Prerequisites
To add a node selector to existing pods, determine the controlling object for that pod. For example, the `router-default-66d5cf9464-7pwkc` pod is controlled by the `router-default-66d5cf9464` replica set:

$ oc describe pod router-default-66d5cf9464-7pwkc
Example output
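An abbreviated, illustrative sketch of the relevant part of the output:

```
Name:               router-default-66d5cf9464-7pwkc
Namespace:          openshift-ingress
...
Controlled By:      ReplicaSet/router-default-66d5cf9464
```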
The web console lists the controlling object under `ownerReferences` in the pod YAML:
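A sketch of that section; the `uid` value is an illustrative assumption:

```yaml
ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: router-default-66d5cf9464
    uid: d81dd094-da26-11e9-a48a-128e7edf0312 # illustrative UID
    controller: true
    blockOwnerDeletion: true
```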
Procedure
Add the matching node selector to a pod:
To add a node selector to existing and future pods, add a node selector to the controlling object for the pods:
Example `ReplicaSet` object with labels
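A sketch of such an object; the object name, labels, and image are illustrative assumptions:

```yaml
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: hello-node-6fbccf8d9 # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node # illustrative label
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      nodeSelector: # <1>
        type: user-node # illustrative node labels that target nodes must carry
        region: east
      containers:
        - name: hello-node
          image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
```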
- 1: Add the node selector.
To add a node selector to a specific, new pod, add the selector to the `Pod` object directly:

Example `Pod` object with a node selector
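A sketch of such a pod; the name, node labels, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-node-pod # illustrative name
spec:
  nodeSelector: # illustrative node labels
    type: user-node
    region: east
  containers:
    - name: hello-node
      image: registry.access.redhat.com/ubi9/ubi:latest # illustrative image
```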
Note: You cannot add a node selector directly to an existing scheduled pod.