Chapter 7. Working with containers
7.1. Understanding Containers
The basic units of OpenShift Container Platform applications are called containers. Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads.
The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Container Platform and Kubernetes add the ability to orchestrate containers across multi-host installations.
About containers and RHEL kernel memory
Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache and a node cache for all NUMA nodes. These caches all consume kernel memory.
The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed.
To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache, where nproc is the number of processing units reported by the nproc command. The lower limit of memory requests for containers would be this value plus the container memory requirements:

$(nproc) X 1/2 MiB
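As a quick sketch, the formula above can be evaluated on a node with a short shell snippet (the variable names here are illustrative, not part of the product):

```shell
# Estimate kernel memory held by kmem_cache entries, using the formula above:
# number of processing units (nproc) x 1/2 MiB.
cpus=$(nproc)
kmem_mib=$((cpus / 2))
echo "CPUs: ${cpus}"
echo "Estimated kmem_cache usage: ~${kmem_mib} MiB"
echo "Request at least ${kmem_mib} MiB plus the application's own memory requirements."
```

On a 16-CPU node, for example, this estimates roughly 8 MiB of kernel memory held by these caches, which should be added on top of the application's memory request.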
7.2. Using Init Containers to perform tasks before a pod is deployed
OpenShift Container Platform provides init containers, which are specialized containers that run before application containers and can contain utilities or setup scripts not present in an app image.
7.2.1. Understanding Init Containers
You can use an Init Container resource to perform tasks before the rest of a pod is deployed.
A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code.
An Init Container can:
- Contain and run utilities that are not desirable to include in the app Container image for security reasons.
- Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup.
- Use Linux namespaces so that they have different filesystem views from app containers, such as access to secrets that application containers are not able to access.
Each Init Container must complete successfully before the next one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until a set of preconditions is met.
For example, the following are some ways you can use Init Containers:
- Wait for a service to be created, with a shell command like:

  for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1

- Register this pod with a remote server from the downward API, with a command like:

  $ curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'

- Wait for some time before starting the app Container, with a command like:

  sleep 60

- Clone a git repository into a volume.
- Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app Container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja.
See the Kubernetes documentation for more information.
7.2.2. Creating Init Containers
The following example outlines a simple pod that has two Init Containers. The first waits for myservice and the second waits for mydb. After both Init Containers complete, the pod runs the application container.
Procedure
Create the pod for the Init Container:
Create a YAML file similar to the following:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: registry.access.redhat.com/ubi8/ubi:latest
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: registry.access.redhat.com/ubi8/ubi:latest
    command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: registry.access.redhat.com/ubi8/ubi:latest
    command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;']
# ...

Create the pod:

$ oc create -f myapp.yaml

View the status of the pod:

$ oc get pods

Example output

NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          5s

The pod status, Init:0/2, indicates it is waiting for the two services.
Create the myservice service.

Create a YAML file similar to the following:

kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

Create the service:

$ oc create -f myservice.yaml

View the status of the pod:

$ oc get pods

Example output

NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:1/2   0          5s

The pod status, Init:1/2, indicates it is waiting for one service, in this case the mydb service.
Create the mydb service:

Create a YAML file similar to the following:

kind: Service
apiVersion: v1
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377

Create the service:

$ oc create -f mydb.yaml

View the status of the pod:

$ oc get pods

Example output

NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          2m

The pod status indicates that it is no longer waiting for the services and is running.
7.3. Using volumes to persist container data

Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is a directory, accessible to the containers in a pod, where data is stored for the life of the pod.
7.3.1. Understanding volumes
Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared.
To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Container Platform invokes the fsck utility prior to invoking the mount utility. This occurs when adding a volume or updating an existing volume.
The simplest volume type is emptyDir, which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods.
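For illustration, a minimal pod spec using an emptyDir volume might look like the following (the pod, container, and volume names here are placeholders, not from this document):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi:latest
    command: ['sh', '-c', 'date > /cache/started && sleep 3600']
    volumeMounts:
    - name: cache-volume
      mountPath: /cache    # the emptyDir is visible to the container here
  volumes:
  - name: cache-volume
    emptyDir: {}           # temporary directory, created when the pod starts
```

The emptyDir volume is created when the pod is assigned to a node and is deleted, along with its contents, when the pod is removed from that node.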
7.3.2. Working with volumes using the OpenShift Container Platform CLI
You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template, such as replication controllers or deployment configs. You can also list volumes and volume mounts in pods or any object that has a pod template.

The oc set volume command uses the following general syntax:
$ oc set volume <object_selection> <operation> <mandatory_parameters> <options>
- Object selection
- Specify one of the following for the object_selection parameter in the oc set volume command:

| Syntax | Description | Example |
|---|---|---|
| <object_type> <name> | Selects <name> of type <object_type>. | deploymentConfig registry |
| <object_type>/<name> | Selects <name> of type <object_type>. | deploymentConfig/registry |
| <object_type> --selector=<object_label_selector> | Selects resources of type <object_type> that match the given label selector. | deploymentConfig --selector="name=registry" |
| <object_type> --all | Selects all resources of type <object_type>. | deploymentConfig --all |
| -f or --filename=<file_name> | File name, directory, or URL to file to use to edit the resource. | -f registry-deployment-config.json |
- Operation
- Specify --add or --remove for the operation parameter in the oc set volume command.
- Mandatory parameters
- Any mandatory parameters are specific to the selected operation and are discussed in later sections.
- Options
- Any options are specific to the selected operation and are discussed in later sections.
7.3.3. Listing volumes and volume mounts in a pod
You can list volumes and volume mounts in pods or pod templates:
Procedure
To list volumes:
$ oc set volume <object_type>/<name> [options]
List volume supported options:
| Option | Description | Default |
|---|---|---|
| --name | Name of the volume. | |
| -c, --containers | Select containers by name. It can also take wildcard '*' that matches any character. | '*' |
For example:
To list all volumes for pod p1:

$ oc set volume pod/p1

To list volume v1 defined on all deployment configs:

$ oc set volume dc --all --name=v1
7.3.4. Adding volumes to a pod
You can add volumes and volume mounts to a pod.
Procedure
To add a volume, a volume mount, or both to pod templates:
$ oc set volume <object_type>/<name> --add [options]
| Option | Description | Default |
|---|---|---|
| --name | Name of the volume. | Automatically generated, if not specified. |
| -t, --type | Name of the volume source. Supported values: emptyDir, hostPath, secret, configmap, persistentVolumeClaim, or projected. | emptyDir |
| -c, --containers | Select containers by name. It can also take wildcard '*' that matches any character. | '*' |
| -m, --mount-path | Mount path inside the selected containers. Do not mount to the container root, /, or any path that is the same in the host and the container. | |
| --path | Host path. Mandatory parameter for --type=hostPath. Do not mount to the container root, /, or any path that is the same in the host and the container. | |
| --secret-name | Name of the secret. Mandatory parameter for --type=secret. | |
| --configmap-name | Name of the configmap. Mandatory parameter for --type=configmap. | |
| --claim-name | Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim. | |
| --source | Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type. | |
| -o, --output | Display the modified objects instead of updating them on the server. Supported values: json, yaml. | |
| --output-version | Output the modified objects with the given version. | api-version |
For example:
To add a new volume source emptyDir to the registry DeploymentConfig object:

$ oc set volume dc/registry --add

Tip

You can alternatively apply the following YAML to add the volume:

Example 7.1. Sample deployment config with an added volume

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: registry
  namespace: registry
spec:
  replicas: 3
  selector:
    app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      volumes: # 1
      - name: volume-pppsw
        emptyDir: {}
      containers:
      - name: httpd
        image: >-
          image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
        ports:
        - containerPort: 8080
          protocol: TCP

1. Add the volume source emptyDir.
To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data:
$ oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data

Tip

You can alternatively apply the following YAML to add the volume:

Example 7.2. Sample replication controller with added volume and secret

kind: ReplicationController
apiVersion: v1
metadata:
  name: example-1
  namespace: example
spec:
  replicas: 0
  selector:
    app: httpd
    deployment: example-1
    deploymentconfig: example
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: httpd
        deployment: example-1
        deploymentconfig: example
    spec:
      volumes: # 1
      - name: v1
        secret:
          secretName: secret1
          defaultMode: 420
      containers:
      - name: httpd
        image: >-
          image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
        volumeMounts: # 2
        - name: v1
          mountPath: /data

To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data, and update the DeploymentConfig object on the server:

$ oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \
    --claim-name=pvc1 --mount-path=/data --containers=c1

Tip

You can alternatively apply the following YAML to add the volume:
Example 7.3. Sample deployment config with persistent volume added

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example
  namespace: example
spec:
  replicas: 3
  selector:
    app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      volumes:
      - name: volume-pppsw
        emptyDir: {}
      - name: v1 # 1
        persistentVolumeClaim:
          claimName: pvc1
      containers:
      - name: httpd
        image: >-
          image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts: # 2
        - name: v1
          mountPath: /data

To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers:

$ oc set volume rc --all --add --name=v1 \
    --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}'
7.3.5. Updating volumes and volume mounts in a pod
You can modify the volumes and volume mounts in a pod.
Procedure
Update existing volumes using the --overwrite option:

$ oc set volume <object_type>/<name> --add --overwrite [options]
For example:
To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1:
$ oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1

Tip

You can alternatively apply the following YAML to replace the volume:

Example 7.4. Sample replication controller with persistent volume claim named pvc1

kind: ReplicationController
apiVersion: v1
metadata:
  name: example-1
  namespace: example
spec:
  replicas: 0
  selector:
    app: httpd
    deployment: example-1
    deploymentconfig: example
  template:
    metadata:
      labels:
        app: httpd
        deployment: example-1
        deploymentconfig: example
    spec:
      volumes:
      - name: v1 # 1
        persistentVolumeClaim:
          claimName: pvc1
      containers:
      - name: httpd
        image: >-
          image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: v1
          mountPath: /data

1. Set the persistent volume claim to pvc1.
To change the DeploymentConfig object d1 mount point to /opt for volume v1:

$ oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt

Tip

You can alternatively apply the following YAML to change the mount point:

Example 7.5. Sample deployment config with mount point set to /opt

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example
  namespace: example
spec:
  replicas: 3
  selector:
    app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      volumes:
      - name: volume-pppsw
        emptyDir: {}
      - name: v2
        persistentVolumeClaim:
          claimName: pvc1
      - name: v1
        persistentVolumeClaim:
          claimName: pvc1
      containers:
      - name: httpd
        image: >-
          image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts: # 1
        - name: v1
          mountPath: /opt

1. Set the mount point to /opt.
7.3.6. Removing volumes and volume mounts from a pod
You can remove a volume or volume mount from a pod.
Procedure
To remove a volume from pod templates:
$ oc set volume <object_type>/<name> --remove [options]
| Option | Description | Default |
|---|---|---|
| --name | Name of the volume. | |
| -c, --containers | Select containers by name. It can also take wildcard '*' that matches any character. | '*' |
| --confirm | Indicate that you want to remove multiple volumes at once. | |
| -o, --output | Display the modified objects instead of updating them on the server. Supported values: json, yaml. | |
| --output-version | Output the modified objects with the given version. | api-version |
For example:
To remove a volume v1 from the DeploymentConfig object d1:

$ oc set volume dc/d1 --remove --name=v1

To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1:

$ oc set volume dc/d1 --remove --name=v1 --containers=c1

To remove all volumes for replication controller r1:
$ oc set volume rc/r1 --remove --confirm
7.3.7. Configuring volumes for multiple uses in a pod
You can share one volume for multiple uses in a single pod by using the volumeMounts.subPath property to specify a subPath value inside a volume instead of the volume's root.

Note: You cannot add a subPath parameter to an existing scheduled pod.
Procedure
To view the list of files in the volume, run the oc rsh command:

$ oc rsh <pod>

Example output

sh-4.2$ ls /path/to/volume/subpath/mount
example_file1 example_file2 example_file3

Specify the subPath:

Example Pod spec with subPath parameter

apiVersion: v1
kind: Pod
metadata:
  name: my-site
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql # 1
  - name: php
    image: php
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html # 2
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-site-data

1. Databases are stored in the mysql folder.
2. HTML content is stored in the html folder.
7.4. Mapping volumes using projected volumes
A projected volume maps several existing volume sources into the same directory.
The following types of volume sources can be projected:
- Secrets
- Config Maps
- Downward API
All sources are required to be in the same namespace as the pod.
7.4.1. Understanding projected volumes
Projected volumes can map any combination of these volume sources into a single directory, allowing the user to:
- Automatically populate a single volume with the keys from multiple secrets, config maps, and downward API information, to synthesize a single directory with various sources of information.
- Populate a single volume with the keys from multiple secrets, config maps, and downward API information, explicitly specifying paths for each item, to gain full control over the contents of that volume.
When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume.

Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for projected volumes running in the pod.
The following general scenarios show how you can use projected volumes.
- Config map, secrets, Downward API.
-
Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on whether the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs.

- Config map + secrets.
- Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file.
- ConfigMap + Downward API.
-
Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking.

- Secrets + Downward API.
-
Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport.
7.4.1.1. Example Pod specs
The following are examples of Pod specs for creating projected volumes.
Pod with a secret, a Downward API, and a config map
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts: # 1
    - name: all-in-one
      mountPath: "/projected-volume" # 2
      readOnly: true # 3
  volumes: # 4
  - name: all-in-one # 5
    projected:
      defaultMode: 0400 # 6
      sources:
      - secret: # 7
          name: mysecret
          items:
          - key: username
            path: my-group/my-username # 8
      - downwardAPI: # 9
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: container-test
              resource: limits.cpu
      - configMap: # 10
          name: myconfigmap
          items:
          - key: config
            path: my-group/my-config
            mode: 0777 # 11
1. Add a volumeMounts section for each container that needs the secret.
2. Specify a path to an unused directory where the secret will appear.
3. Set readOnly to true.
4. Add a volumes block to list each projected volume source.
5. Specify any name for the volume.
6. Set the execute permission on the files.
7. Add a secret. Enter the name of the secret object. Each secret you want to use must be listed.
8. Specify the path to the secrets file under the mountPath. Here, the secrets file is in /projected-volume/my-group/my-username.
9. Add a Downward API source.
10. Add a ConfigMap source.
11. Set the mode for the specific projection.
If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed.
Pod with multiple secrets with a non-default permission mode set
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      defaultMode: 0755
      sources:
      - secret:
          name: mysecret
          items:
          - key: username
            path: my-group/my-username
      - secret:
          name: mysecret2
          items:
          - key: password
            path: my-group/my-password
            mode: 511
The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection.
7.4.1.2. Pathing Considerations
Consider the following situations related to the volume file paths.

- Collisions Between Keys when Configured Paths are Identical
- If you configure any keys with the same path, the pod spec will not be accepted as valid. In the following example, the specified path for mysecret and myconfigmap are the same:

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: username
            path: my-group/data
      - configMap:
          name: myconfigmap
          items:
          - key: config
            path: my-group/data
- Collisions Between Keys without Configured Paths
- The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well).
- Collisions when One Path is Explicit and the Other is Automatically Projected
- In the event that there is a collision due to a user-specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it, as before.
7.4.2. Configuring a Projected Volume for a Pod
When creating projected volumes, consider the volume file path situations described in Understanding projected volumes.
The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory.
The user name and password values can be any valid string that is base64 encoded.
The following example shows the user name admin in base64:
$ echo -n "admin" | base64
Example output
YWRtaW4=
The following example shows the password 1f2d1e2e67df in base64:
$ echo -n "1f2d1e2e67df" | base64
Example output
MWYyZDFlMmU2N2Rm
Procedure
To use a projected volume to mount an existing secret volume source:
Create the secret:
Create a YAML file similar to the following, replacing the password and user information as appropriate:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  pass: MWYyZDFlMmU2N2Rm
  user: YWRtaW4=

Use the following command to create the secret:

$ oc create -f <secrets-filename>

For example:

$ oc create -f secret.yaml

Example output

secret "mysecret" created

You can check that the secret was created using the following commands:

$ oc get secret <secret-name>

For example:

$ oc get secret mysecret

Example output

NAME       TYPE     DATA   AGE
mysecret   Opaque   2      17h

$ oc get secret <secret-name> -o yaml

For example:

$ oc get secret mysecret -o yaml

apiVersion: v1
data:
  pass: MWYyZDFlMmU2N2Rm
  user: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: 2017-05-30T20:21:38Z
  name: mysecret
  namespace: default
  resourceVersion: "2107"
  selfLink: /api/v1/namespaces/default/secrets/mysecret
  uid: 959e0424-4575-11e7-9f97-fa163e4bd54c
type: Opaque
Create a pod with a projected volume.
Create a YAML file similar to the following, including a volumes section:

apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
  - name: test-projected-volume
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret # 1

1. The name of the secret you created.
Create the pod from the configuration file:
$ oc create -f <your_yaml_file>.yaml

For example:

$ oc create -f secret-pod.yaml

Example output

pod "test-projected-volume" created
Verify that the pod container is running, and then watch for changes to the pod:
$ oc get pod <name>

For example:

$ oc get pod test-projected-volume

The output should appear similar to the following:

Example output

NAME                    READY   STATUS    RESTARTS   AGE
test-projected-volume   1/1     Running   0          14s

In another terminal, use the oc exec command to open a shell to the running container:

$ oc exec -it <pod> <command>

For example:

$ oc exec -it test-projected-volume -- /bin/sh

In your shell, verify that the projected-volumes directory contains your projected sources:

/ # ls

Example output

bin               home              root              tmp
dev               proc              run               usr
etc               projected-volume  sys               var
7.5. Allowing containers to consume API objects
The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Container Platform. Such information includes the pod’s name, namespace, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin.
7.5.1. Expose pod information to Containers using the Downward API
The Downward API contains such information as the pod’s name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin.
Fields within the pod are selected using the FieldRef API type. FieldRef has two fields:

| Field | Description |
|---|---|
| fieldPath | The path of the field to select, relative to the pod. |
| apiVersion | The API version to interpret the fieldPath selector within. |
Currently, the valid selectors in the v1 API include:

| Selector | Description |
|---|---|
| metadata.name | The pod's name. This is supported in both environment variables and volumes. |
| metadata.namespace | The pod's namespace. This is supported in both environment variables and volumes. |
| metadata.labels | The pod's labels. This is only supported in volumes and not in environment variables. |
| metadata.annotations | The pod's annotations. This is only supported in volumes and not in environment variables. |
| status.podIP | The pod's IP. This is only supported in environment variables and not volumes. |
The apiVersion field, if not specified, defaults to the API version of the enclosing pod template.
7.5.2. Understanding how to consume container values using the downward API
Your containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume:
- Pod name
- Pod project/namespace
- Pod annotations
- Pod labels
Annotations and labels are available using only a volume plugin.
7.5.2.1. Consuming container values using environment variables
When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field.
Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are:
- Pod name
- Pod project/namespace
Procedure
Create a new pod spec that contains the environment variables you want the container to consume:
Create a pod.yaml file similar to the following:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
  - name: env-test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never
# ...

Create the pod from the pod.yaml file:

$ oc create -f pod.yaml
Verification
Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values:

$ oc logs -p dapi-env-test-pod
7.5.2.2. Consuming container values using a volume plugin
Your containers can consume API values using a volume plugin.
Containers can consume:
- Pod name
- Pod project/namespace
- Pod annotations
- Pod labels
Procedure
To use the volume plugin:
Create a new pod spec that contains the environment variables you want the container to consume:
Create a volume-pod.yaml file similar to the following:

kind: Pod
apiVersion: v1
metadata:
  labels:
    zone: us-east-coast
    cluster: downward-api-test-cluster1
    rack: rack-123
  name: dapi-volume-test-pod
  annotations:
    annotation1: "345"
    annotation2: "456"
spec:
  containers:
  - name: volume-test-container
    image: gcr.io/google_containers/busybox
    command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"]
    volumeMounts:
    - name: podinfo
      mountPath: /tmp/etc
      readOnly: false
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          fieldPath: metadata.name
        path: pod_name
      - fieldRef:
          fieldPath: metadata.namespace
        path: pod_namespace
      - fieldRef:
          fieldPath: metadata.labels
        path: pod_labels
      - fieldRef:
          fieldPath: metadata.annotations
        path: pod_annotations
  restartPolicy: Never
# ...

Create the pod from the volume-pod.yaml file:

$ oc create -f volume-pod.yaml
Verification
Check the container’s logs and verify the presence of the configured fields:
$ oc logs -p dapi-volume-test-pod

Example output

cluster=downward-api-test-cluster1
rack=rack-123
zone=us-east-coast
annotation1=345
annotation2=456
kubernetes.io/config.source=api
7.5.3. Understanding how to consume container resources using the Downward API
When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments.
You can do this using environment variables or a volume plugin.
7.5.3.1. Consuming container resources using environment variables
When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables.
When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.containers field.
If the resource limits are not included in the container configuration, the downward API defaults to the node’s CPU and memory allocatable values.
Procedure
Create a new pod spec that contains the resources you want to inject:
Create a pod.yaml file similar to the following:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox:1.24
    command: [ "/bin/sh", "-c", "env" ]
    resources:
      requests:
        memory: "32Mi"
        cpu: "125m"
      limits:
        memory: "64Mi"
        cpu: "250m"
    env:
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MY_MEM_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MY_MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
# ...

Create the pod from the pod.yaml file:

$ oc create -f pod.yaml
7.5.3.2. Consuming container resources using a volume plugin
When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin.
When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field.
If the resource limits are not included in the container configuration, the Downward API defaults to the node’s CPU and memory allocatable values.
Procedure
Create a new pod spec that contains the resources you want to inject:

1. Create a volume-pod.yaml file similar to the following:

       apiVersion: v1
       kind: Pod
       metadata:
         name: dapi-env-test-pod
       spec:
         containers:
           - name: client-container
             image: gcr.io/google_containers/busybox:1.24
             command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"]
             resources:
               requests:
                 memory: "32Mi"
                 cpu: "125m"
               limits:
                 memory: "64Mi"
                 cpu: "250m"
             volumeMounts:
               - name: podinfo
                 mountPath: /etc
                 readOnly: false
         volumes:
           - name: podinfo
             downwardAPI:
               items:
                 - path: "cpu_limit"
                   resourceFieldRef:
                     containerName: client-container
                     resource: limits.cpu
                 - path: "cpu_request"
                   resourceFieldRef:
                     containerName: client-container
                     resource: requests.cpu
                 - path: "mem_limit"
                   resourceFieldRef:
                     containerName: client-container
                     resource: limits.memory
                 - path: "mem_request"
                   resourceFieldRef:
                     containerName: client-container
                     resource: requests.memory
       # ...

2. Create the pod from the volume-pod.yaml file:

       $ oc create -f volume-pod.yaml
7.5.4. Consuming secrets using the Downward API
When creating pods, you can use the Downward API to inject secrets so image and application authors can create an image for specific environments.
Procedure
Create a secret to inject:

1. Create a secret.yaml file similar to the following:

       apiVersion: v1
       kind: Secret
       metadata:
         name: mysecret
       data:
         password: <password>
         username: <username>
       type: kubernetes.io/basic-auth

2. Create the secret object from the secret.yaml file:

       $ oc create -f secret.yaml
Create a pod that references the username field from the above Secret object:

1. Create a pod.yaml file similar to the following:

       apiVersion: v1
       kind: Pod
       metadata:
         name: dapi-env-test-pod
       spec:
         containers:
           - name: env-test-container
             image: gcr.io/google_containers/busybox
             command: [ "/bin/sh", "-c", "env" ]
             env:
               - name: MY_SECRET_USERNAME
                 valueFrom:
                   secretKeyRef:
                     name: mysecret
                     key: username
         restartPolicy: Never
       # ...

2. Create the pod from the pod.yaml file:

       $ oc create -f pod.yaml
Verification
Check the container's logs for the MY_SECRET_USERNAME value:

    $ oc logs -p dapi-env-test-pod
7.5.5. Consuming configuration maps using the Downward API
When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments.
Procedure
Create a config map with the values to inject:

1. Create a configmap.yaml file similar to the following:

       apiVersion: v1
       kind: ConfigMap
       metadata:
         name: myconfigmap
       data:
         mykey: myvalue

2. Create the config map from the configmap.yaml file:

       $ oc create -f configmap.yaml
Create a pod that references the above config map:

1. Create a pod.yaml file similar to the following:

       apiVersion: v1
       kind: Pod
       metadata:
         name: dapi-env-test-pod
       spec:
         containers:
           - name: env-test-container
             image: gcr.io/google_containers/busybox
             command: [ "/bin/sh", "-c", "env" ]
             env:
               - name: MY_CONFIGMAP_VALUE
                 valueFrom:
                   configMapKeyRef:
                     name: myconfigmap
                     key: mykey
         restartPolicy: Always
       # ...

2. Create the pod from the pod.yaml file:

       $ oc create -f pod.yaml
Verification
Check the container's logs for the MY_CONFIGMAP_VALUE value:

    $ oc logs -p dapi-env-test-pod
7.5.6. Referencing environment variables
When creating pods, you can reference the value of a previously defined environment variable by using the $() syntax.
Procedure
Create a pod that references an existing environment variable:

1. Create a pod.yaml file similar to the following:

       apiVersion: v1
       kind: Pod
       metadata:
         name: dapi-env-test-pod
       spec:
         containers:
           - name: env-test-container
             image: gcr.io/google_containers/busybox
             command: [ "/bin/sh", "-c", "env" ]
             env:
               - name: MY_EXISTING_ENV
                 value: my_value
               - name: MY_ENV_VAR_REF_ENV
                 value: $(MY_EXISTING_ENV)
         restartPolicy: Never
       # ...

2. Create the pod from the pod.yaml file:

       $ oc create -f pod.yaml
Verification
Check the container's logs for the MY_ENV_VAR_REF_ENV value:

    $ oc logs -p dapi-env-test-pod
7.5.7. Escaping environment variable references
When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value.
Procedure
Create a pod that references an existing environment variable:

1. Create a pod.yaml file similar to the following:

       apiVersion: v1
       kind: Pod
       metadata:
         name: dapi-env-test-pod
       spec:
         containers:
           - name: env-test-container
             image: gcr.io/google_containers/busybox
             command: [ "/bin/sh", "-c", "env" ]
             env:
               - name: MY_NEW_ENV
                 value: $$(SOME_OTHER_ENV)
         restartPolicy: Never
       # ...

2. Create the pod from the pod.yaml file:

       $ oc create -f pod.yaml
Verification
Check the container's logs for the MY_NEW_ENV value:

    $ oc logs -p dapi-env-test-pod
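The substitution and escaping rules from the two sections above can be sketched with a small resolver. This is a hypothetical illustration of the documented behavior, not Kubernetes' actual implementation:

```python
import re

# Minimal sketch of $(VAR) references and $$(VAR) escaping as described
# above; illustrative only, not the Kubernetes substitution code.
_REF = re.compile(r"(\$?)\$\((\w+)\)")

def resolve_env(value: str, defined: dict) -> str:
    def repl(match):
        if match.group(1):                       # "$$(VAR)" escapes to "$(VAR)"
            return "$({})".format(match.group(2))
        # Plain "$(VAR)": substitute if previously defined, else leave as-is.
        return defined.get(match.group(2), match.group(0))
    return _REF.sub(repl, value)

print(resolve_env("$(MY_EXISTING_ENV)", {"MY_EXISTING_ENV": "my_value"}))  # my_value
print(resolve_env("$$(SOME_OTHER_ENV)", {}))  # $(SOME_OTHER_ENV)
```

Note how the escaped reference never performs a lookup: the resulting value is the literal single-dollar string, matching the MY_NEW_ENV example above.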
7.6. Copying files to or from an OpenShift Container Platform container
You can use the CLI to copy local files to or from a remote directory in a container using the rsync command.
7.6.1. Understanding how to copy files
The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files.

    $ oc rsync <source> <destination> [-c <container>]
7.6.1.1. Requirements

- Specifying the Copy Source

  The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported.

  When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir>. If the directory name ends in a path separator (/), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination.

- Specifying the Copy Destination

  The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you.

- Deleting Files at the Destination

  The --delete flag may be used to delete any files in the remote directory that are not in the local directory.

- Continuous Syncing on File Change

  Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls.

  When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync. Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync, such as --delete.
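The trailing-slash rule above follows standard rsync semantics. The following local Python sketch (using shutil on temporary directories, not oc rsync or a pod) illustrates the two cases; the helper name `copy_like_rsync` is hypothetical:

```python
import pathlib
import shutil
import tempfile

def copy_like_rsync(source: str, dest: str) -> None:
    """Mimic the documented trailing-slash rule: 'src/' copies only the
    contents of src into dest, while 'src' copies the directory itself."""
    src = pathlib.Path(source.rstrip("/"))
    dst = pathlib.Path(dest)
    if source.endswith("/"):
        shutil.copytree(src, dst, dirs_exist_ok=True)
    else:
        shutil.copytree(src, dst / src.name, dirs_exist_ok=True)

# Demonstration with temporary directories standing in for local and pod paths.
base = pathlib.Path(tempfile.mkdtemp())
(base / "src").mkdir()
(base / "src" / "status.txt").write_text("ok")

copy_like_rsync(str(base / "src") + "/", str(base / "a"))   # contents only
copy_like_rsync(str(base / "src"), str(base / "b"))         # directory itself

print((base / "a" / "status.txt").exists())          # True
print((base / "b" / "src" / "status.txt").exists())  # True
```

With the trailing slash, dest/a contains status.txt directly; without it, dest/b contains the src directory itself.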
7.6.2. Copying files to and from containers
Support for copying local files to or from a container is built into the CLI.
Prerequisites
When working with oc rsync, note the following:

- rsync must be installed. The oc rsync command uses the local rsync tool, if present on the client machine and the remote container.
- If rsync is not found locally or in the remote container, a tar archive is created locally and sent to the container, where the tar utility is used to extract the files. If tar is not available in the remote container, the copy will fail.
- The tar copy method does not provide the same functionality as oc rsync. For example, oc rsync creates the destination directory if it does not exist and only sends files that are different between the source and the destination.

Note: In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command.
Procedure
1. To copy a local directory to a pod directory:

       $ oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>

   For example:

       $ oc rsync /home/user/source devpod1234:/src -c user-container

2. To copy a pod directory to a local directory:

       $ oc rsync devpod1234:/src /home/user/source

   For example:

       $ oc rsync devpod1234:/src/status.txt /home/user/
7.6.3. Using advanced Rsync features
The oc rsync command exposes fewer command line options than standard rsync. If you want to use a standard rsync command line option that is not available in oc rsync, such as the --exclude-from=FILE option, it might be possible to use standard rsync's --rsh (-e) option or the RSYNC_RSH environment variable as a workaround, as follows:

    $ rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>
or:
Export the RSYNC_RSH variable:

    $ export RSYNC_RSH='oc rsh'
Then, run the rsync command:
$ rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>
Both of the above examples configure standard rsync to use oc rsh as its remote shell program so that it can connect to the remote pod, and are an alternative to running oc rsync.
7.7. Executing remote commands in an OpenShift Container Platform container
You can use the CLI to execute remote commands in an OpenShift Container Platform container.
7.7.1. Executing remote commands in containers
Support for remote container command execution is built into the CLI.
Procedure
To run a command in a container:
$ oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]
For example:
$ oc exec mypod -- date
Example output
Thu Apr 9 02:21:53 UTC 2015
For security purposes, the oc exec command does not work when accessing privileged containers except when the command is run as a cluster-admin user.
7.7.2. Protocol for initiating a remote command from a client
Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server:
/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>
In the above URL:
- <node_name> is the FQDN of the node.
- <namespace> is the project of the target pod.
- <pod> is the name of the target pod.
- <container> is the name of the target container.
- <command> is the desired command to be executed.
For example:
/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date
Additionally, the client can add parameters to the request to indicate if:
- the client should send input to the remote container’s command (stdin).
- the client’s terminal is a TTY.
- the remote container’s command should send output from stdout to the client.
- the remote container’s command should send output from stderr to the client.
After sending an exec request, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2.
The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin, stdout, or stderr.
The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request.
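Conceptually, the multiplexed connection carries data tagged with a streamType header, and the receiver separates it back into the three standard streams. A toy demultiplexer (an illustration of the concept only, not the actual wire format):

```python
# Toy demultiplexer for (streamType, payload) frames, illustrating how one
# upgraded connection can carry stdin, stdout, and stderr separately.
def demux(frames):
    streams = {"stdin": b"", "stdout": b"", "stderr": b""}
    for stream_type, payload in frames:
        streams[stream_type] += payload
    return streams

frames = [("stdout", b"Thu Apr 9 02:21:53 UTC 2015\n"),
          ("stderr", b"")]
out = demux(frames)
print(out["stdout"].decode(), end="")
```

Because each frame carries its streamType, output and errors never mix even though they share one connection.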
7.8. Using port forwarding to access applications in a container
OpenShift Container Platform supports port forwarding to pods.
7.8.1. Understanding port forwarding
You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod.
Support for port forwarding is built into the CLI:
$ oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]
The CLI listens on each local port specified by the user, forwarding using the protocol described below.
Ports may be specified using the following formats:

| Port specification | Description |
|---|---|
| 5000 | The client listens on port 5000 locally and forwards to 5000 in the pod. |
| 6000:5000 | The client listens on port 6000 locally and forwards to 5000 in the pod. |
| :5000 | The client selects a free local port and forwards to 5000 in the pod. |
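The formats above reduce to a single "[local:]remote" pattern, where an empty or 0 local port means "pick a free port". A small parsing sketch (a hypothetical helper, not part of the oc client):

```python
def parse_port_spec(spec: str):
    """Parse an oc port-forward style '[local:]remote' spec.
    Returns (local_port, remote_port); local_port 0 means 'pick a free port'."""
    if ":" in spec:
        local_part, remote_part = spec.split(":", 1)
        local = int(local_part) if local_part else 0
        return local, int(remote_part)
    port = int(spec)          # bare port: same number on both ends
    return port, port

print(parse_port_spec("5000"))       # (5000, 5000)
print(parse_port_spec("6000:5000"))  # (6000, 5000)
print(parse_port_spec(":5000"))      # (0, 5000)
```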
OpenShift Container Platform handles port-forward requests from clients. Upon receiving a request, OpenShift Container Platform upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Container Platform receives a new stream, it copies data between the stream and the pod’s port.
Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Container Platform implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port.
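The "copies data between the stream and the pod's port" step is essentially a bidirectional byte relay, which is what socat provides. The following local Python sketch relays between two plain TCP sockets to illustrate the idea; it uses loopback sockets standing in for the client stream and the pod's port, and is not the actual port-forward path:

```python
import socket
import threading

def serve_echo() -> int:
    """One-shot echo server standing in for the pod's target port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def run():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(4096))
        conn.close()
    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]

def serve_relay(target_port: int) -> int:
    """One-shot relay: accept a local connection and copy bytes both ways
    to target_port, socat-style."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def pump(src, dst):
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass   # peer closed; stop copying
    def run():
        client, _ = srv.accept()
        upstream = socket.create_connection(("127.0.0.1", target_port))
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
        pump(client, upstream)
    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]

pod_port = serve_echo()
local_port = serve_relay(pod_port)
with socket.create_connection(("127.0.0.1", local_port)) as c:
    c.sendall(b"ping")
    print(c.recv(4096))
```

The real implementation performs the same copy loop, but on a stream of the upgraded API-server connection rather than a local TCP socket.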
7.8.2. Using port forwarding
You can use the CLI to port-forward one or more local ports to a pod.
Procedure
Use the following command to listen on the specified port in a pod:
$ oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]
For example:

- Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod:

      $ oc port-forward <pod> 5000 6000

  Example output

      Forwarding from 127.0.0.1:5000 -> 5000
      Forwarding from [::1]:5000 -> 5000
      Forwarding from 127.0.0.1:6000 -> 6000
      Forwarding from [::1]:6000 -> 6000

- Use the following command to listen on port 8888 locally and forward to 5000 in the pod:

      $ oc port-forward <pod> 8888:5000

  Example output

      Forwarding from 127.0.0.1:8888 -> 5000
      Forwarding from [::1]:8888 -> 5000

- Use the following command to listen on a free port locally and forward to 5000 in the pod:

      $ oc port-forward <pod> :5000

  Example output

      Forwarding from 127.0.0.1:42390 -> 5000
      Forwarding from [::1]:42390 -> 5000

  Or:

      $ oc port-forward <pod> 0:5000
7.8.3. Protocol for initiating port forwarding from a client
Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server:
/proxy/nodes/<node_name>/portForward/<namespace>/<pod>
In the above URL:
- <node_name> is the FQDN of the node.
- <namespace> is the namespace of the target pod.
- <pod> is the name of the target pod.
For example:
/proxy/nodes/node123.openshift.com/portForward/myns/mypod
After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hypertext Transfer Protocol Version 2 (HTTP/2).
The client creates a stream with the port header containing the target port in the pod.
The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request.
7.9. Using sysctls in containers
Sysctl settings are exposed through Kubernetes, allowing users to modify certain kernel parameters at runtime. Only sysctls that are namespaced can be set independently on pods. If a sysctl is not namespaced, called node-level, you must use another method of setting the sysctl, such as by using the Node Tuning Operator.
Network sysctls are a special category of sysctl. Network sysctls include:

- System-wide sysctls, for example net.ipv4.ip_local_port_range, that are valid for all networking. You can set these independently for each pod on a node.
- Interface-specific sysctls, for example net.ipv4.conf.IFNAME.accept_local, that only apply to a specific additional network interface for a given pod. You can set these independently for each additional network configuration. You set these by using a configuration in the tuning-cni after the network interfaces are created.

Moreover, only those sysctls considered safe are allowed by default; you can manually enable other unsafe sysctls on the node to make them available to the user.
7.9.1. About sysctls
In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available from the /proc/sys/ virtual process file system. The parameters cover various subsystems, such as:

- kernel (common prefix: kernel.)
- networking (common prefix: net.)
- virtual memory (common prefix: vm.)
- MDADM (common prefix: dev.)

More subsystems are described in Kernel documentation. To get a list of all parameters, run:

    $ sudo sysctl -a
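Each parameter name maps directly to a path under /proc/sys/ by replacing the dots with slashes, which is where sysctl reads and writes the values. A small helper to illustrate the mapping (illustrative only; note that interface names containing dots, such as VLAN devices, break this naive conversion):

```python
def sysctl_to_path(name: str) -> str:
    """Map a sysctl name to its /proc/sys path (dots become slashes)."""
    return "/proc/sys/" + name.replace(".", "/")

def path_to_sysctl(path: str) -> str:
    """Inverse mapping, from a /proc/sys path back to the sysctl name."""
    return path.removeprefix("/proc/sys/").replace("/", ".")

print(sysctl_to_path("net.ipv4.ip_local_port_range"))
# /proc/sys/net/ipv4/ip_local_port_range
```

So `sysctl net.ipv4.ip_local_port_range` and `cat /proc/sys/net/ipv4/ip_local_port_range` read the same kernel value.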
7.9.2. Namespaced and node-level sysctls
A number of sysctls are namespaced in the Linux kernels. This means that you can set them independently for each pod on a node. Being namespaced is a requirement for sysctls to be accessible in a pod context within Kubernetes.
The following sysctls are known to be namespaced:

- kernel.shm*
- kernel.msg*
- kernel.sem
- fs.mqueue.*

Additionally, most of the sysctls in the net.* group are known to be namespaced.
Sysctls that are not namespaced are called node-level and must be set manually by the cluster administrator, either by means of the underlying Linux distribution of the nodes, such as by modifying the /etc/sysctl.conf file, or by using a daemon set with privileged containers.
Consider marking nodes with special sysctls as tainted. Only schedule pods onto them that need those sysctl settings. Use the taints and toleration feature to mark the nodes.
7.9.3. Safe and unsafe sysctls
Sysctls are grouped into safe and unsafe sysctls.
For system-wide sysctls to be considered safe, they must be namespaced. A namespaced sysctl ensures there is isolation between namespaces and therefore pods. Setting a sysctl for one pod must not do any of the following:

- Influence any other pod on the node
- Harm the node's health
- Gain CPU or memory resources outside of the resource limits of a pod
Being namespaced alone is not sufficient for the sysctl to be considered safe.
Any sysctl that is not added to the allowed list on OpenShift Container Platform is considered unsafe for OpenShift Container Platform.
Unsafe sysctls are not allowed by default. For system-wide sysctls the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch.
You cannot manually enable interface-specific unsafe sysctls.
OpenShift Container Platform adds the following system-wide and interface-specific safe sysctls to an allowed safe list:

| sysctl | Description |
|---|---|
| kernel.shm_rmid_forced | When set to 1, all System V shared memory segments are marked for destruction as soon as the number of attached processes falls to zero. |
| net.ipv4.ip_local_port_range | Defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first port number, and the second number is the last local port number. If possible, it is better if these numbers have different parity (one even and one odd value). They must be greater than or equal to |
| net.ipv4.tcp_syncookies | When net.ipv4.tcp_syncookies is set, the kernel handles TCP SYN packets normally until the half-open connection queue is full, at which time the SYN cookie functionality kicks in, allowing the system to keep accepting valid connections even under a denial-of-service attack. |
| net.ipv4.ping_group_range | This restricts ICMP_PROTO datagram sockets to users in the group range. |
| net.ipv4.ip_unprivileged_port_start | This defines the first unprivileged port in the network namespace. To disable all privileged ports, set this to 0. |
| sysctl | Description |
|---|---|
| net.ipv4.conf.IFNAME.accept_redirects | Accept IPv4 ICMP redirect messages. |
| net.ipv4.conf.IFNAME.accept_source_route | Accept IPv4 packets with strict source route (SRR) option. |
| net.ipv4.conf.IFNAME.arp_accept | Define behavior for gratuitous ARP frames with an IPv4 address that is not already present in the ARP table. |
| net.ipv4.conf.IFNAME.arp_notify | Define mode for notification of IPv4 address and device changes. |
| net.ipv4.conf.IFNAME.disable_policy | Disable IPSEC policy (SPD) for this IPv4 interface. |
| net.ipv4.conf.IFNAME.secure_redirects | Accept ICMP redirect messages only to gateways listed in the interface's current gateway list. |
| net.ipv4.conf.IFNAME.send_redirects | Send redirects is enabled only if the node acts as a router. That is, a host should not send an ICMP redirect message. It is used by routers to notify the host about a better routing path that is available for a particular destination. |
| net.ipv6.conf.IFNAME.accept_ra | Accept IPv6 Router advertisements; autoconfigure using them. It also determines whether or not to transmit router solicitations. Router solicitations are transmitted only if the functional setting is to accept router advertisements. |
| net.ipv6.conf.IFNAME.accept_redirects | Accept IPv6 ICMP redirect messages. |
| net.ipv6.conf.IFNAME.accept_source_route | Accept IPv6 packets with SRR option. |
| net.ipv6.conf.IFNAME.arp_accept | Define behavior for gratuitous ARP frames with an IPv6 address that is not already present in the ARP table. |
| net.ipv6.conf.IFNAME.arp_notify | Define mode for notification of IPv6 address and device changes. |
| net.ipv6.neigh.IFNAME.base_reachable_time_ms | This parameter controls the hardware address to IP mapping lifetime in the neighbour table for IPv6. |
| net.ipv6.neigh.IFNAME.retrans_time_ms | Set the retransmit timer for neighbor discovery messages. |
When setting these values using the tuning CNI plugin, use the IFNAME token literally. The IFNAME token is replaced with the actual name of the interface at runtime.
7.9.4. Starting a pod with safe sysctls
You can set sysctls on pods using the pod's securityContext. The securityContext applies to all containers in the same pod.
Safe sysctls are allowed by default.
This example uses the pod securityContext to set the following safe sysctls:

- kernel.shm_rmid_forced
- net.ipv4.ip_local_port_range
- net.ipv4.tcp_syncookies
- net.ipv4.ping_group_range
To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects.
Use this procedure to start a pod with the configured sysctl settings.
In most cases you modify an existing pod definition and add the securityContext spec.
Procedure
1. Create a YAML file sysctl_pod.yaml that defines an example pod and add the securityContext spec, as shown in the following example:

       apiVersion: v1
       kind: Pod
       metadata:
         name: sysctl-example
         namespace: default
       spec:
         containers:
           - name: podexample
             image: centos
             command: ["bin/bash", "-c", "sleep INF"]
             securityContext:
               runAsUser: 20001                 # 1
               runAsGroup: 30002                # 2
               allowPrivilegeEscalation: false  # 3
               capabilities:                    # 4
                 drop: ["ALL"]
         securityContext:
           runAsNonRoot: true                   # 5
           seccompProfile:                      # 6
             type: RuntimeDefault
           sysctls:
             - name: kernel.shm_rmid_forced
               value: "1"
             - name: net.ipv4.ip_local_port_range
               value: "32770 60666"
             - name: net.ipv4.tcp_syncookies
               value: "0"
             - name: net.ipv4.ping_group_range
               value: "0 200000000"

   1. runAsUser controls which user ID the container is run with.
   2. runAsGroup controls which primary group ID the container is run with.
   3. allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process.
   4. capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod.
   5. runAsNonRoot: true requires that the container will run with a user with any UID other than 0.
   6. RuntimeDefault enables the default seccomp profile for a pod or container workload.
2. Create the pod by running the following command:

       $ oc apply -f sysctl_pod.yaml

3. Verify that the pod is created by running the following command:

       $ oc get pod

   Example output

       NAME             READY   STATUS    RESTARTS   AGE
       sysctl-example   1/1     Running   0          14s

4. Log in to the pod by running the following command:

       $ oc rsh sysctl-example

5. Verify the values of the configured sysctl flags. For example, find the value kernel.shm_rmid_forced by running the following command:

       sh-4.4# sysctl kernel.shm_rmid_forced

   Expected output

       kernel.shm_rmid_forced = 1
7.9.5. Starting a pod with unsafe sysctls
A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes.
The following example uses the pod securityContext to set a safe sysctl, kernel.shm_rmid_forced, and two unsafe sysctls, net.core.somaxconn and kernel.msgmax.
To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects.
The following example illustrates what happens when you add safe and unsafe sysctls to a pod specification:
Procedure
1. Create a YAML file sysctl-example-unsafe.yaml that defines an example pod and add the securityContext specification, as shown in the following example:

       apiVersion: v1
       kind: Pod
       metadata:
         name: sysctl-example-unsafe
       spec:
         containers:
           - name: podexample
             image: centos
             command: ["bin/bash", "-c", "sleep INF"]
             securityContext:
               runAsUser: 2000
               runAsGroup: 3000
               allowPrivilegeEscalation: false
               capabilities:
                 drop: ["ALL"]
         securityContext:
           runAsNonRoot: true
           seccompProfile:
             type: RuntimeDefault
           sysctls:
             - name: kernel.shm_rmid_forced
               value: "0"
             - name: net.core.somaxconn
               value: "1024"
             - name: kernel.msgmax
               value: "65536"

2. Create the pod using the following command:

       $ oc apply -f sysctl-example-unsafe.yaml

3. Verify that the pod is scheduled but does not deploy because unsafe sysctls are not allowed for the node, using the following command:

       $ oc get pod

   Example output

       NAME                    READY   STATUS            RESTARTS   AGE
       sysctl-example-unsafe   0/1     SysctlForbidden   0          14s
7.9.6. Enabling unsafe sysctls
A cluster administrator can allow certain unsafe sysctls for very special situations such as high performance or real-time application tuning.
If you want to use unsafe sysctls, a cluster administrator must enable them individually for a specific type of node. The sysctls must be namespaced.
You can further control which sysctls are set in pods by specifying lists of sysctls or sysctl patterns in the allowedUnsafeSysctls field of the kubelet configuration. The allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning.
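The allowedUnsafeSysctls entries may be exact names or patterns such as kernel.msg*. A sketch of this kind of matching, using Python's fnmatch as a stand-in (this is illustrative only, not the kubelet's own matching code):

```python
from fnmatch import fnmatchcase

# Patterns taken from the KubeletConfig example in this section.
ALLOWED_UNSAFE = ["kernel.msg*", "net.core.somaxconn"]

def is_allowed_unsafe(sysctl: str) -> bool:
    """Check a requested sysctl name against allowedUnsafeSysctls patterns."""
    return any(fnmatchcase(sysctl, pattern) for pattern in ALLOWED_UNSAFE)

print(is_allowed_unsafe("kernel.msgmax"))       # True  (matches kernel.msg*)
print(is_allowed_unsafe("net.core.somaxconn"))  # True  (exact match)
print(is_allowed_unsafe("kernel.sem"))          # False (not listed)
```

A pod requesting a sysctl that fails this check stays in the SysctlForbidden state shown in the previous section.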
Due to their nature of being unsafe, the use of unsafe sysctls is at-your-own-risk and can lead to severe problems, such as improper behavior of containers, resource shortage, or breaking a node.
Procedure
1. List existing MachineConfig objects for your OpenShift Container Platform cluster to decide how to label your machine config by running the following command:

       $ oc get machineconfigpool

   Example output

       NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
       master   rendered-master-bfb92f0cd1684e54d8e234ab7423cc96   True      False      False      3              3                   3                     0                      42m
       worker   rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce   True      False      False      3              3                   3                     0                      42m

2. Add a label to the machine config pool where the containers with the unsafe sysctls will run by running the following command:

       $ oc label machineconfigpool worker custom-kubelet=sysctl

3. Create a YAML file set-sysctl-worker.yaml that defines a KubeletConfig custom resource (CR):

       apiVersion: machineconfiguration.openshift.io/v1
       kind: KubeletConfig
       metadata:
         name: custom-kubelet
       spec:
         machineConfigPoolSelector:
           matchLabels:
             custom-kubelet: sysctl   # 1
         kubeletConfig:
           allowedUnsafeSysctls:      # 2
             - "kernel.msg*"
             - "net.core.somaxconn"

   1. The label added to the machine config pool.
   2. The list of unsafe sysctls you want to allow.

4. Create the object by running the following command:

       $ oc apply -f set-sysctl-worker.yaml

5. Wait for the Machine Config Operator to generate the new rendered configuration and apply it to the machines by running the following command:

       $ oc get machineconfigpool worker -w

   After some minutes the UPDATING status changes from True to False:

       NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
       worker   rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800   False     True       False      3              2                   2                     0                      71m
       worker   rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800   False     True       False      3              2                   3                     0                      72m
       worker   rendered-worker-0188658afe1f3a183ec8c4f14186f4d5   True      False      False      3              3                   3                     0                      72m

6. Create a YAML file sysctl-example-safe-unsafe.yaml that defines an example pod and add the securityContext spec, as shown in the following example:

       apiVersion: v1
       kind: Pod
       metadata:
         name: sysctl-example-safe-unsafe
       spec:
         containers:
           - name: podexample
             image: centos
             command: ["bin/bash", "-c", "sleep INF"]
             securityContext:
               runAsUser: 2000
               runAsGroup: 3000
               allowPrivilegeEscalation: false
               capabilities:
                 drop: ["ALL"]
         securityContext:
           runAsNonRoot: true
           seccompProfile:
             type: RuntimeDefault
           sysctls:
             - name: kernel.shm_rmid_forced
               value: "0"
             - name: net.core.somaxconn
               value: "1024"
             - name: kernel.msgmax
               value: "65536"

7. Create the pod by running the following command:

       $ oc apply -f sysctl-example-safe-unsafe.yaml

   Expected output

       Warning: would violate PodSecurity "restricted:latest": forbidden sysctls (net.core.somaxconn, kernel.msgmax)
       pod/sysctl-example-safe-unsafe created

8. Verify that the pod is created by running the following command:

       $ oc get pod

   Example output

       NAME                         READY   STATUS    RESTARTS   AGE
       sysctl-example-safe-unsafe   1/1     Running   0          19s

9. Log in to the pod by running the following command:

       $ oc rsh sysctl-example-safe-unsafe

10. Verify the values of the configured sysctl flags. For example, find the value net.core.somaxconn by running the following command:

        sh-4.4# sysctl net.core.somaxconn

    Expected output

        net.core.somaxconn = 1024
The unsafe sysctl is now allowed, and the value is set as defined in the securityContext spec of the updated pod specification.