Post-installation configuration
Day 2 operations for OpenShift Container Platform
Abstract
This document provides instructions and guidance for the configuration and customization tasks that a cluster administrator can perform after installing OpenShift Container Platform.
Chapter 1. Post-installation configuration overview
After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components:
- Machine
- Cluster
- Node
- Network
- Storage
- Users
- Alerts and notifications
1.1. Configuration tasks to perform after installation
Cluster administrators can perform the following post-installation configuration tasks:
Configure operating system features: The Machine Config Operator (MCO) manages MachineConfig objects. By using the MCO, you can perform the following tasks on an OpenShift Container Platform cluster:
- Configure nodes by using MachineConfig objects
- Configure MCO-related custom resources
Configure cluster features: As a cluster administrator, you can modify the configuration resources of the major features of an OpenShift Container Platform cluster. These features include:
- Image registry
- Networking configuration
- Image build behavior
- Identity provider
- The etcd configuration
- Machine set creation to handle the workloads
- Cloud provider credential management
Configure cluster components to be private: By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. If you want your cluster to be accessible only from within an internal network, configure the following components to be private:
- DNS
- Ingress Controller
- API server
Perform node operations: By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. As a cluster administrator, you can perform the following operations with the machines in your OpenShift Container Platform cluster:
- Add and remove compute machines
- Add and remove taints and tolerations on nodes (see the example command after this list)
- Configure the maximum number of pods per node
- Enable Device Manager
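For example, node taints are managed with the oc adm taint command. This is a brief sketch rather than part of the official task list; the node name node1 and the key/value pair are hypothetical placeholders:

$ oc adm taint nodes node1 key1=value1:NoSchedule    # add a taint so only pods with a matching toleration schedule onto node1
$ oc adm taint nodes node1 key1=value1:NoSchedule-   # the trailing hyphen removes the same taint

Pods that must run on a tainted node declare a matching toleration in their pod specification.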
Configure networking: After installing OpenShift Container Platform, you can configure the following:
- Ingress cluster traffic
- Node port service range
- Network policy (see the example policy after this list)
- Cluster-wide proxy
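Network policy is configured with standard Kubernetes NetworkPolicy objects. The following minimal sketch denies all ingress traffic to the pods in the namespace where it is created; the policy name is a hypothetical placeholder:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []

Because the empty podSelector matches every pod in the namespace, traffic is allowed again only by adding further policies that select specific pods and sources.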
Configure storage: By default, containers operate using ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods:
- Dynamic provisioning: You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access.
- Static provisioning: You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options.
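As a brief sketch of static provisioning, the following PersistentVolume makes an existing NFS export available to the cluster; the server name and export path are hypothetical placeholders that you must replace with your own storage details:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/data
  persistentVolumeReclaimPolicy: Retain

A workload then claims the volume with a PersistentVolumeClaim that requests a matching size and access mode.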
Configure users: OAuth access tokens allow users to authenticate themselves to the API. As a cluster administrator, you can configure OAuth to perform the following tasks:
- Specify an identity provider (see the example OAuth configuration after this list)
- Use role-based access control to define and supply permissions to users
- Install an Operator from OperatorHub
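For example, an identity provider is specified by updating the cluster OAuth resource. The following minimal sketch assumes an htpasswd identity provider whose credentials file is stored in a secret named htpass-secret in the openshift-config namespace; the provider name is a hypothetical placeholder:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret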
Manage alerts and notifications: By default, firing alerts are displayed in the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems, as in the sketch that follows.
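As a hedged sketch of such a configuration, alert routing for the platform Alertmanager is defined in the alertmanager.yaml key of the alertmanager-main secret in the openshift-monitoring namespace. The receiver name and webhook URL below are hypothetical placeholders, and the exact keys available depend on your Alertmanager version:

global:
  resolve_timeout: 5m
route:
  receiver: default
  routes:
  - match:
      severity: critical
    receiver: team-webhook
receivers:
- name: default
- name: team-webhook
  webhook_configs:
  - url: https://example.com/alert-hook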
Chapter 2. Configuring a private cluster
After you install an OpenShift Container Platform version 4.8 cluster, you can set some of its core components to be private.
2.1. About private clusters
By default, OpenShift Container Platform is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
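For example, on AWS a load balancer service is kept internal with an annotation on the Service object. This is a minimal sketch; the service name, selector, and ports are hypothetical, and other cloud providers use different provider-specific annotations:

apiVersion: v1
kind: Service
metadata:
  name: internal-example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080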
DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zones are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one.
API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you retain both load balancers in a private cluster.
2.2. Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:

$ oc get dnses.config.openshift.io/cluster -o yaml

Example output

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}

Note that the spec section contains both a private and a public zone.

Patch the DNS custom resource to remove the public zone:

$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'

Example output

dns.config.openshift.io/cluster patched

Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created.

Important
DNS records for the existing Ingress objects are not modified when you remove the public zone.

Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:

$ oc get dnses.config.openshift.io/cluster -o yaml

Example output

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
2.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF

Example output

ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced

The public DNS entry is removed, and the private zone entry is updated.
2.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for AWS or Azure, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. (See the AWS CLI sketch after this list.)
- For Azure, delete the api-internal rule for the load balancer.
- Delete the api.$clustername.$yourdomain DNS entry in the public zone.
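As an alternative to the web console, the AWS step can also be done with the AWS CLI. This is a hedged sketch, not part of the official procedure; the ARN is a placeholder that you must look up, and you should confirm that the load balancer you delete is the external (internet-facing) one:

$ aws elbv2 describe-load-balancers --query 'LoadBalancers[?Scheme==`internet-facing`].[LoadBalancerName,LoadBalancerArn]'
$ aws elbv2 delete-load-balancer --load-balancer-arn <external_lb_arn>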
Remove the external load balancers:
Important
You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.
From your terminal, list the cluster machines:
$ oc get machine -n openshift-machine-api

Example output

NAME                            STATE     TYPE        REGION      ZONE         AGE
lk4pj-master-0                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-master-1                  running   m4.xlarge   us-east-1   us-east-1b   17m
lk4pj-master-2                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge   us-east-1   us-east-1b   15m

You modify the control plane machines, which contain master in the name, in the following step.

Remove the external load balancer from each control plane machine.

Edit a control plane Machine object to remove the reference to the external load balancer:

$ oc edit machines -n openshift-machine-api <master_name> 1

- 1
- Specify the name of the control plane, or master, Machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification:
...
spec:
  providerSpec:
    value:
    ...
      loadBalancers:
      - name: lk4pj-ext 1
        type: network 2
      - name: lk4pj-int
        type: network

- 1 2
- These lines reference the external load balancer; delete them.
- Repeat this process for each of the machines that contains master in the name.
Chapter 3. Post-installation machine configuration tasks
There are times when you need to make changes to the operating systems running on OpenShift Container Platform nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way.
Aside from a few specialized features, most changes to operating systems on OpenShift Container Platform nodes can be made by creating what are referred to as MachineConfig objects, which are managed by the Machine Config Operator.
Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OpenShift Container Platform nodes.
3.1. Understanding the Machine Config Operator
3.1.1. Machine Config Operator
Purpose
The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet.
There are four components:
- machine-config-server: Provides Ignition configuration to new machines joining the cluster.
- machine-config-controller: Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually.
- machine-config-daemon: Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration.
- machine-config: Provides a complete source of machine configuration at installation, first start up, and updates for a machine.
Project
See the openshift-machine-config-operator GitHub site for details.
3.1.2. Machine config overview
The Machine Config Operator (MCO) manages updates to systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features. It also offers a MachineConfig custom resource definition (CRD) that can write configuration files onto the host.
- A machine config can make a specific change to a file or service on the operating system of each system representing a pool of OpenShift Container Platform nodes.
MCO applies changes to operating systems in pools of machines. All OpenShift Container Platform clusters start with worker and control plane node (also known as the master node) pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types.
Important
A node can have multiple labels applied that indicate its type, such as master or worker; however, it can be a member of only a single machine config pool.
- Some machine configuration must be in place before OpenShift Container Platform is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OpenShift Container Platform installer process, instead of running as a post-installation machine config. In other cases, you might need to do a bare metal installation where you pass kernel arguments at OpenShift Container Platform installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning.
- MCO manages items that are set in machine configs. Manual changes that you make to your systems will not be overwritten by the MCO, unless the MCO is explicitly told to manage a conflicting file. In other words, the MCO makes only the specific updates you request; it does not claim control over the whole node.
- Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost.
- The MCO is only supported for writing to files in the /etc and /var directories, although there are symbolic links to some directories that can be writable by being symbolically linked to one of those areas. The /opt and /usr/local directories are examples.
- Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details.
- Although Ignition config settings can be delivered directly at OpenShift Container Platform installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them.
- When a file managed by MCO changes outside of MCO, the Machine Config Daemon (MCD) sets the node as degraded. It will not overwrite the offending file, however, and should continue to operate in a degraded state.
- A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OpenShift Container Platform cluster. The machine-api-operator provisions a new machine and the MCO configures it.
MCO uses Ignition as the configuration format. OpenShift Container Platform 4.6 moved from Ignition config specification version 2 to version 3.
3.1.2.1. What can you change with machine configs?
The kinds of components that MCO can change include:
- config: Create Ignition config objects (see the Ignition configuration specification) to do things like modify files, systemd services, and other features on OpenShift Container Platform machines (see the minimal example after this list), including:
- Configuration files: Create or overwrite files in the /var or /etc directory.
- systemd units: Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings.
- users and groups: Change SSH keys in the passwd section post-installation.
Changing SSH keys via machine configs is only supported for the core user.
- kernelArguments: Add arguments to the kernel command line when OpenShift Container Platform nodes boot.
- kernelType: Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms.
- fips: Enable FIPS mode. FIPS must be set at installation time; it cannot be enabled by a post-installation procedure.
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- extensions: Extend RHCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules.
- Custom resources (for ContainerRuntime and Kubelet): Outside of machine configs, the MCO manages two special custom resources for modifying CRI-O container runtime settings (ContainerRuntime CR) and the Kubelet service (Kubelet CR).
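As a minimal sketch of the config section, the following MachineConfig object writes a single file to worker nodes through Ignition. The file path and its URL-encoded contents are hypothetical placeholders:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-file
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example.conf
        mode: 0644
        overwrite: true
        contents:
          source: data:,example%3Dtrue%0A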
The MCO is not the only Operator that can change operating system components on OpenShift Container Platform nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles.
Tasks for the MCO configuration that can be done post-installation are included in the following procedures. See descriptions of RHCOS bare metal installation for system configuration tasks that must be done during or before OpenShift Container Platform installation.
3.1.2.2. Project
See the openshift-machine-config-operator GitHub site for details.
3.1.3. Checking machine config pool status
To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands.
Procedure
To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command:
$ oc get machineconfigpool

Example output

NAME     CONFIG                    UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-06c9c4…   True      False      False      3              3                   3                     0                      4h42m
worker   rendered-worker-f4b64…    False     True       False      3              2                   2                     0                      4h42m

where:
- UPDATED
- The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the STATUS field of the oc get mcp output. The False status indicates that a node in the MCP is updating.
- UPDATING
- The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated.
- DEGRADED
- A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready.
- MACHINECOUNT
- Indicates the total number of machines in that MCP.
- READYMACHINECOUNT
- Indicates the total number of machines in that MCP that are ready for scheduling.
- UPDATEDMACHINECOUNT
- Indicates the total number of machines in that MCP that have the current machine config.
- DEGRADEDMACHINECOUNT
- Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable.
In the previous output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2. There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False.

While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to.

Note
If a node is being cordoned, that node is not included in the READYMACHINECOUNT, but is included in the MACHINECOUNT. Also, the MCP status is set to UPDATING. Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total:

Example output

NAME     CONFIG                    UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-06c9c4…   True      False      False      3              3                   3                     0                      4h42m
worker   rendered-worker-c1b41a…   False     True       False      3              2                   3                     0                      4h42m

To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command:

$ oc describe mcp worker

Example output
...
Degraded Machine Count:     0
Machine Count:              3
Observed Generation:        2
Ready Machine Count:        3
Unavailable Machine Count:  0
Updated Machine Count:      3
Events:                     <none>

Note
If a node is being cordoned, the node is not included in the Ready Machine Count. It is included in the Unavailable Machine Count:

Example output

...
Degraded Machine Count:     0
Machine Count:              3
Observed Generation:        2
Ready Machine Count:        2
Unavailable Machine Count:  1
Updated Machine Count:      3

To see each existing MachineConfig object, run the following command:

$ oc get machineconfigs

Example output
NAME                          GENERATEDBYCONTROLLER          IGNITIONVERSION   AGE
00-master                     2c9371fbb673b97a6fe8b1c52...   3.2.0             5h18m
00-worker                     2c9371fbb673b97a6fe8b1c52...   3.2.0             5h18m
01-master-container-runtime   2c9371fbb673b97a6fe8b1c52...   3.2.0             5h18m
01-master-kubelet             2c9371fbb673b97a6fe8b1c52…     3.2.0             5h18m
...
rendered-master-dde...        2c9371fbb673b97a6fe8b1c52...   3.2.0             5h18m
rendered-worker-fde...        2c9371fbb673b97a6fe8b1c52...   3.2.0             5h18m

Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted.

To view the contents of a particular machine config (in this case, 01-master-kubelet), run the following command:

$ oc describe machineconfigs 01-master-kubelet

The output from the command shows that this MachineConfig object contains both configuration files (cloud.conf and kubelet.conf) and a systemd service (Kubernetes Kubelet):

Example output

Name:         01-master-kubelet
...
Spec:
  Config:
    Ignition:
      Version:  3.2.0
    Storage:
      Files:
        Contents:
          Source:   data:,
        Mode:       420
        Overwrite:  true
        Path:       /etc/kubernetes/cloud.conf
        Contents:
          Source:   data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous...
        Mode:       420
        Overwrite:  true
        Path:       /etc/kubernetes/kubelet.conf
    Systemd:
      Units:
        Contents: [Unit]
Description=Kubernetes Kubelet
Wants=rpc-statd.service network-online.target crio.service
After=network-online.target crio.service
ExecStart=/usr/bin/hyperkube \
    kubelet \
      --config=/etc/kubernetes/kubelet.conf \
...
If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command:

$ oc delete -f ./myconfig.yaml
If that was the only problem, the nodes in the affected pool should return to a non-degraded state. This actually causes the rendered configuration to roll back to its previously rendered state.
If you add your own machine configs to your cluster, you can use the commands shown in the previous example to check their status and the related status of the pool to which they are applied.
3.2. Using MachineConfig objects to configure nodes
You can use the tasks in this section to create MachineConfig objects that define configurations to apply to nodes in your cluster.
OpenShift Container Platform supports Ignition specification version 3.2. All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2.
Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes.
3.2.1. Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.
Procedure
Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file.

Note
See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.8.0
metadata:
  name: 99-worker-chrony 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 3
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst 4
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony

- 1 2
- On control plane nodes, substitute master for worker in both of these locations.
- 3
- Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml.
- 4
- Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org.
Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml, containing the configuration to be delivered to the nodes:

$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml

Apply the configurations in one of two ways:
- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:

$ oc apply -f ./99-worker-chrony.yaml
3.2.2. Adding kernel arguments to nodes
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set.
Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
- enforcing=0: Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
- nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
See Kernel.org kernel parameters for a list and descriptions of kernel arguments.
In the following procedure, you create a MachineConfig object that identifies:
- A set of machines to which you want to add the kernel argument. In this case, machines with a worker role.
- Kernel arguments that are appended to the end of the existing kernel arguments.
- A label that indicates where in the list of machine configs the change is applied.
Prerequisites
- Have administrative privilege to a working OpenShift Container Platform cluster.
Procedure
List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config:

$ oc get MachineConfig

Example output

NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m

Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker 1
  name: 05-worker-kernelarg-selinuxpermissive 2
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - enforcing=0 3

Create the new machine config:
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml

Check the machine configs to see that the new one was added:

$ oc get MachineConfig

Example output

NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
05-worker-kernelarg-selinuxpermissive                                                         3.2.0             105s
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m

Check the nodes:

$ oc get nodes

Example output

NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.21.0
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.21.0
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.21.0
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.21.0
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.21.0
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.21.0

You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):

$ oc debug node/ip-10-0-141-105.ec2.internal

Example output

Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0
sh-4.2# exit

You should see the enforcing=0 argument added to the other kernel arguments.
3.2.3. Enabling multipathing with kernel arguments on RHCOS
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Post-installation support is available by activating multipathing via the machine config.
Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform 4.8 or higher. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing with kernel arguments on RHCOS" in the Installing on bare metal documentation.
On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and LinuxONE.
Prerequisites
- You have a running OpenShift Container Platform cluster that uses version 4.7 or later.
- You are logged in to the cluster as a user with administrative privileges.
Procedure
To enable multipathing post-installation on control plane nodes:
Create a machine config file, such as 99-master-kargs-mpath.yaml, that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
To enable multipathing post-installation on worker nodes:
Create a machine config file, such as 99-worker-kargs-mpath.yaml, that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
Create the new machine config by using either the master or worker YAML file you previously created:
$ oc create -f ./99-worker-kargs-mpath.yaml

Check the machine configs to see that the new one was added:

$ oc get MachineConfig

Example output

NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-kargs-mpath                              52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             105s
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m

Check the nodes:

$ oc get nodes

Example output

NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.20.0
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.20.0
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.20.0
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.20.0
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.20.0
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.20.0

You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):

$ oc debug node/ip-10-0-141-105.ec2.internal

Example output

Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ...
sh-4.2# exit

You should see the added kernel arguments.
3.2.4. Adding a real-time kernel to nodes
Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics.
If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.8, you can make this switch by using a MachineConfig object that sets the kernelType parameter to realtime.
- Currently, real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use.
- The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8.
- Real-time support in OpenShift Container Platform is limited to specific subscriptions.
- The following procedure is also supported for use with Google Cloud Platform.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.4 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes:

$ cat << EOF > 99-worker-realtime.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-realtime
spec:
  kernelType: realtime
EOF

Add the machine config to the cluster. Type the following to add the machine config to the cluster:
$ oc create -f 99-worker-realtime.yaml

Check the real-time kernel: After each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured:

$ oc get nodes

Example output

NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.21.0
ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.21.0
ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.21.0

$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal

Example output

Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
sh-4.4# uname -a
Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
and text “PREEMPT RT” indicates that this is a real-time kernel.rtTo go back to the regular kernel, delete the
object:MachineConfig$ oc delete -f 99-worker-realtime.yaml
3.2.5. Configuring journald settings
If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config.
This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information about how to use that file.
Prerequisites
- Have a running OpenShift Container Platform cluster.
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a Butane config file, 40-worker-custom-journald.bu, that includes an /etc/systemd/journald.conf file with the required settings.

Note
See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.8.0
metadata:
  name: 40-worker-custom-journald
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/systemd/journald.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        # Disable rate limiting
        RateLimitInterval=1s
        RateLimitBurst=10000
        Storage=volatile
        Compress=no
        MaxRetentionSec=30s

Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml, containing the configuration to be delivered to the worker nodes:

$ butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml

Apply the machine config to the pool:

$ oc apply -f 40-worker-custom-journald.yaml

Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied:

$ oc get machineconfigpool

NAME     CONFIG               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-35   True      False      False      3              3                   3                     0                      34m
worker   rendered-worker-d8   False     True       False      3              1                   1                     0                      34m

To check that the change was applied, you can log in to a worker node:

$ oc get node | grep worker
ip-10-0-0-1.us-east-2.compute.internal   Ready    worker   39m   v0.0.0-master+$Format:%h$
$ oc debug node/ip-10-0-0-1.us-east-2.compute.internal
Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ...
...
sh-4.2# chroot /host
sh-4.4# cat /etc/systemd/journald.conf
# Disable rate limiting
RateLimitInterval=1s
RateLimitBurst=10000
Storage=volatile
Compress=no
MaxRetentionSec=30s
sh-4.4# exit
3.2.6. Adding extensions to RHCOS
RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature that you can use to add a minimal set of features to RHCOS nodes.
Currently, the following extension is available:
- usbguard: Adding the usbguard extension protects RHCOS systems from attacks by intrusive USB devices. See USBGuard for details.
The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.6 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension:

$ cat << EOF > 80-extensions.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 80-worker-extensions
spec:
  config:
    ignition:
      version: 3.2.0
  extensions:
    - usbguard
EOF

Add the machine config to the cluster. Type the following to add the machine config to the cluster:

$ oc create -f 80-extensions.yaml

This sets all worker nodes to have rpm packages for usbguard installed.

Check that the extensions were applied:

$ oc get machineconfig 80-worker-extensions

Example output

NAME                   GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
80-worker-extensions                           3.2.0             57s

Check that the new machine config is now applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied:

$ oc get machineconfigpool

Example output

NAME     CONFIG               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-35   True      False      False      3              3                   3                     0                      34m
worker   rendered-worker-d8   False     True       False      3              1                   1                     0                      34m

Check the extensions. To check that the extension was applied, run:

$ oc get node | grep worker

Example output

NAME                                       STATUS   ROLES    AGE    VERSION
ip-10-0-169-2.us-east-2.compute.internal   Ready    worker   102m   v1.18.3

$ oc debug node/ip-10-0-169-2.us-east-2.compute.internal

Example output

...
To use host binaries, run `chroot /host`
sh-4.4# chroot /host
sh-4.4# rpm -q usbguard
usbguard-0.7.4-4.el8.x86_64.rpm
3.2.7. Loading custom firmware blobs in the machine config manifest
Because the default location for firmware blobs in /usr/lib is read-only on RHCOS, you must load a custom firmware blob from a writable location and update the kernel search path to point to it. This procedure uses a machine config manifest to place the blob under /var/lib/firmware and to set the search path.
Procedure
Create a Butane config file, 98-worker-firmware-blob.bu, that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware.

Note
See "Creating machine configs with Butane" for information about Butane.
Butane config file for custom firmware blob
variant: openshift
version: 4.9.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-worker-firmware-blob
storage:
  files:
  - path: /var/lib/firmware/<package_name> 1
    contents:
      local: <package_name> 2
    mode: 0644 3
openshift:
  kernel_arguments:
    - 'firmware_class.path=/var/lib/firmware' 4

- 1
- Sets the path on the node where the firmware package is copied to.
- 2
- Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step.
- 3
- Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions.
- 4
- The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path.
Run Butane to generate a MachineConfig object file, 98-worker-firmware-blob.yaml, that uses a copy of the firmware blob on your local workstation. The firmware blob contains the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located:

$ butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>

Apply the configurations to the nodes in one of two ways:
- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:

$ oc apply -f 98-worker-firmware-blob.yaml

A MachineConfig object YAML file is created for you to finish configuring your machines.
- Save the Butane config in case you need to update the MachineConfig object in the future.
3.3. Configuring MCO-related custom resources
Besides managing MachineConfig objects, the MCO manages two custom resources (CRs): KubeletConfig and ContainerRuntimeConfig.
3.3.1. Creating a KubeletConfig CRD to edit kubelet parameters
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This allows you to use a KubeletConfig custom resource to edit the kubelet parameters.
Because the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable.
Consider the following guidance:
- Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools.
- Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes.
- As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet. With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the next kubelet machine config is appended with -3.

If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config.

Note
If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs.
Example KubeletConfig CR
$ oc get kubeletconfig
NAME AGE
set-max-pods 15m
Example showing a KubeletConfig machine config
$ oc get mc | grep kubelet
...
99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m
...
The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes.
Prerequisites
Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps:

View the machine config pool:

$ oc describe machineconfigpool <name>

For example:

$ oc describe machineconfigpool worker

Example output

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-08T14:52:39Z
  generation: 1
  labels:
    custom-kubelet: set-max-pods 1

- 1
- If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=set-max-pods
Procedure
View the available machine configuration objects that you can select:
$ oc get machineconfig

By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.

Check the current value for the maximum pods per node:

$ oc describe node <node_name>

For example:

$ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94

Look for value: pods: <value> in the Allocatable stanza:

Example output

Allocatable:
  attachable-volumes-aws-ebs:  25
  cpu:                         3500m
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      15341844Ki
  pods:                        250

Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods 1
  kubeletConfig:
    maxPods: 500 2

Note
The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: <pod_count>
    kubeAPIBurst: <burst_rate>
    kubeAPIQPS: <QPS>

Update the machine config pool for workers with the label:

$ oc label machineconfigpool worker custom-kubelet=set-max-pods

Create the KubeletConfig object:

$ oc create -f change-maxPods-cr.yaml

Verify that the KubeletConfig object is created:

$ oc get kubeletconfig

Example output

NAME           AGE
set-max-pods   15m

Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Verify that the changes are applied to the node:
Check on a worker node that the maxPods value changed:

$ oc describe node <node_name>

Locate the Allocatable stanza:

...
Allocatable:
  attachable-volumes-gce-pd:  127
  cpu:                        3500m
  ephemeral-storage:          123201474766
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     14225400Ki
  pods:                       500 1
...

- 1
- In this example, the pods parameter should report the value you set in the KubeletConfig object.

Verify the change in the KubeletConfig object:

$ oc get kubeletconfigs set-max-pods -o yaml

This should show status: "True" and type: Success:

spec:
  kubeletConfig:
    maxPods: 500
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
status:
  conditions:
  - lastTransitionTime: "2021-06-30T17:04:07Z"
    message: Success
    status: "True"
    type: Success
3.3.2. Creating a ContainerRuntimeConfig CR to edit CRI-O parameters
You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). Using a ContainerRuntimeConfig custom resource (CR), you set the configuration values and add a label to match the MCP. The MCO then rebuilds the crio.conf and storage.conf configuration files on the associated nodes with the updated values.

To revert the changes implemented by using a ContainerRuntimeConfig CR, you must delete the CR. Removing the label from the machine config pool does not revert the changes.

You can modify the following settings by using a ContainerRuntimeConfig CR:
- PIDs limit: The pidsLimit parameter sets the CRI-O pids_limit parameter, which is the maximum number of processes allowed in a container. The default is 1024 (pids_limit = 1024).
- Log level: The logLevel parameter sets the CRI-O log_level parameter, which is the level of verbosity for log messages. The default is info (log_level = info). Other options include fatal, panic, error, warn, debug, and trace.
- Overlay size: The overlaySize parameter sets the CRI-O Overlay storage driver size parameter, which is the maximum size of a container image.
- Maximum log size: The logSizeMax parameter sets the CRI-O log_size_max parameter, which is the maximum size allowed for the container log file. The default is unlimited (log_size_max = -1). If set to a positive number, it must be at least 8192 to not be smaller than the ConMon read buffer. ConMon is a program that monitors communications between a container manager (such as Podman or CRI-O) and the OCI runtime (such as runc or crun) for a single container.
You should have one ContainerRuntimeConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one ContainerRuntimeConfig CR for all of the pools.

You should edit an existing ContainerRuntimeConfig CR to modify existing settings or add new settings, instead of creating a new CR for each change. It is recommended that you create a new CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes.

You can create multiple ContainerRuntimeConfig CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig CR, the MCO creates a machine config appended with containerruntime. With each subsequent CR, the controller creates a new containerruntime machine config with a numeric suffix. For example, if you have a containerruntime machine config with a -2 suffix, the next containerruntime machine config is appended with -3.

If you want to delete the machine configs, you should delete them in reverse order to avoid exceeding the limit. For example, you should delete the containerruntime-3 machine config before deleting the containerruntime-2 machine config.

Note
If you have a machine config with a containerruntime-9 suffix, and you create another ContainerRuntimeConfig CR, a new machine config is not created, even if there are fewer than 10 containerruntime machine configs.
Example showing multiple ContainerRuntimeConfig CRs
$ oc get ctrcfg
Example output
NAME AGE
ctr-pid 24m
ctr-overlay 15m
ctr-level 5m45s
Example showing multiple containerruntime machine configs
$ oc get mc | grep container
Example output
...
01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m
...
01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m
...
99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m
99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m
99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s
...
The following example raises the pids_limit to 2048, sets the log_level to debug, sets the overlay size to 8 GB, and sets the log_size_max to unlimited:
Example ContainerRuntimeConfig CR
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: overlay-size
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: '' 1
  containerRuntimeConfig:
    pidsLimit: 2048 2
    logLevel: debug 3
    overlaySize: 8G 4
    logSizeMax: "-1" 5
- 1
- Specifies the machine config pool label.
- 2
- Optional: Specifies the maximum number of processes allowed in a container.
- 3
- Optional: Specifies the level of verbosity for log messages.
- 4
- Optional: Specifies the maximum size of a container image.
- 5
- Optional: Specifies the maximum size allowed for the container log file. If set to a positive number, it must be at least 8192.
Procedure
To change CRI-O settings by using the ContainerRuntimeConfig CR:
Create a YAML file for the ContainerRuntimeConfig CR:

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: overlay-size
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: '' 1
  containerRuntimeConfig: 2
    pidsLimit: 2048
    logLevel: debug
    overlaySize: 8G
    logSizeMax: "-1"

Create the ContainerRuntimeConfig CR:

$ oc create -f <file_name>.yaml

Verify that the CR is created:

$ oc get ContainerRuntimeConfig

Example output

NAME           AGE
overlay-size   3m19s

Check that a new containerruntime machine config is created:

$ oc get machineconfigs | grep containerrun

Example output

99-worker-generated-containerruntime   2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2   3.2.0   31s

Monitor the machine config pool until all are shown as ready:

$ oc get mcp worker

Example output

NAME     CONFIG                UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-169   False     True       False      3              1                   1                     0                      9h

Verify that the settings were applied in CRI-O:
Open an oc debug session to a node in the machine config pool and run chroot /host:

$ oc debug node/<node_name>
sh-4.4# chroot /host

Verify the changes in the crio.conf file:

sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'

Example output

pids_limit = 2048
log_size_max = -1
log_level = "debug"

Verify the changes in the storage.conf file:

sh-4.4# head -n 7 /etc/containers/storage.conf

Example output

[storage]
  driver = "overlay"
  runroot = "/var/run/containers/storage"
  graphroot = "/var/lib/containers/storage"
[storage.options]
  additionalimagestores = []
  size = "8G"
3.3.3. Setting the default maximum container root partition size for Overlay with CRI-O
The root partition of each container shows all of the available disk space of the underlying host. Follow this guidance to set a maximum partition size for the root disk of all containers.
To configure the maximum Overlay size, as well as other CRI-O options like the log level and PID limit, you can create the following `ContainerRuntimeConfig` custom resource (CR):
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
name: overlay-size
spec:
machineConfigPoolSelector:
matchLabels:
custom-crio: overlay-size
containerRuntimeConfig:
pidsLimit: 2048
logLevel: debug
overlaySize: 8G
Procedure
Create the configuration object:

$ oc apply -f overlaysize.yml

To apply the new CRI-O configuration to your worker nodes, edit the worker machine config pool:

$ oc edit machineconfigpool worker

Add the `custom-crio` label based on the name you set in the `matchLabels` of the `ContainerRuntimeConfig` CR:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-07-09T15:46:34Z"
  generation: 3
  labels:
    custom-crio: overlay-size
    machineconfiguration.openshift.io/mco-built-in: ""

Save the changes, then view the machine configs:

$ oc get machineconfigs

New `99-worker-generated-containerruntime` and `rendered-worker-xyz` objects are created:

Example output

99-worker-generated-containerruntime   4173030d89fbf4a7a0976d1665491a4d9a6e54f1   3.2.0   7m42s
rendered-worker-xyz                    4173030d89fbf4a7a0976d1665491a4d9a6e54f1   3.2.0   7m36s

After those objects are created, monitor the machine config pool for the changes to be applied:

$ oc get mcp worker

The worker nodes show `UPDATING` as `True`, as well as the number of machines, the number updated, and other details:

Example output

NAME     CONFIG                UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-xyz   False     True       False      3              2                   2                     0                      20h

When complete, the worker nodes transition back to `UPDATING` as `False`, and the `UPDATEDMACHINECOUNT` number matches the `MACHINECOUNT`:

Example output

NAME     CONFIG                UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-xyz   True      False      False      3              3                   3                     0                      20h

Looking at a worker machine, you see that the new 8 GB max size configuration is applied to all of the workers:

Example output

head -n 7 /etc/containers/storage.conf
[storage]
  driver = "overlay"
  runroot = "/var/run/containers/storage"
  graphroot = "/var/lib/containers/storage"

[storage.options]
  additionalimagestores = []
  size = "8G"

Looking inside a container, you see that the root partition is now 8 GB:

Example output

~ $ df -h
Filesystem   Size   Used   Available   Use%   Mounted on
overlay      8.0G   8.0K   8.0G        0%     /
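If you want to repeat this in-container check on demand, you can start a throwaway pod and inspect its root filesystem. A minimal sketch, where the pod name is a hypothetical placeholder and the image is the Red Hat UBI minimal base image:

$ oc run overlay-test --image=registry.access.redhat.com/ubi8/ubi-minimal --restart=Never -- sleep 3600
$ oc exec overlay-test -- df -h /
$ oc delete pod overlay-test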
Chapter 4. Post-installation cluster tasks
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements.
4.1. Available cluster customizations
You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available.
If you install your cluster on IBM Z, not all features and functions are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the `oc explain` command, for example:

$ oc explain builds --api-version=config.openshift.io/v1
4.1.1. Cluster configuration resources
All cluster configuration resources are globally scoped (not namespaced) and named `cluster`.
| Resource name | Description |
|---|---|
| `apiserver.config.openshift.io` | Provides API server configuration such as certificates and certificate authorities. |
| `authentication.config.openshift.io` | Controls the identity provider and authentication configuration for the cluster. |
| `build.config.openshift.io` | Controls default and enforced configuration for all builds on the cluster. |
| `console.config.openshift.io` | Configures the behavior of the web console interface, including the logout behavior. |
| `featuregate.config.openshift.io` | Enables FeatureGates so that you can use Tech Preview features. |
| `image.config.openshift.io` | Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
| `ingress.config.openshift.io` | Configuration details related to routing such as the default domain for routes. |
| `oauth.config.openshift.io` | Configures identity providers and other behavior related to internal OAuth server flows. |
| `project.config.openshift.io` | Configures how projects are created including the project template. |
| `proxy.config.openshift.io` | Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
| `scheduler.config.openshift.io` | Configures scheduler behavior such as policies and default node selectors. |
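Each of these resources can be inspected or edited with standard `oc` commands. A minimal sketch:

$ oc get scheduler cluster -o yaml
$ oc edit proxy cluster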
4.1.2. Operator configuration resources
These configuration resources are cluster-scoped instances, named `cluster`, which control the behavior of a specific component as owned by a particular Operator.
| Resource name | Description |
|---|---|
| `console.operator.openshift.io` | Controls console appearance such as branding customizations. |
| `config.imageregistry.operator.openshift.io` | Configures internal image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
| `config.samples.operator.openshift.io` | Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
4.1.3. Additional configuration resources
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
| Resource name | Instance name | Namespace | Description |
|---|---|---|---|
| `alertmanager.monitoring.coreos.com` | `main` | `openshift-monitoring` | Controls the Alertmanager deployment parameters. |
| `ingresscontroller.operator.openshift.io` | `default` | `openshift-ingress-operator` | Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
4.1.4. Informational Resources
You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.
| Resource name | Instance name | Description |
|---|---|---|
| `clusterversion.config.openshift.io` | `version` | In OpenShift Container Platform 4.8, you must not customize the `ClusterVersion` resource for production clusters. Instead, follow the process to update a cluster. |
| `dns.config.openshift.io` | `cluster` | You cannot modify the DNS settings for your cluster. You can view the DNS Operator status. |
| `infrastructure.config.openshift.io` | `cluster` | Configuration details allowing the cluster to interact with its cloud provider. |
| `network.config.openshift.io` | `cluster` | You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
4.2. Updating the global cluster pull secret
You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.
This procedure is required when users use a separate registry to store images from the one that was used during installation.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
Procedure
Optional: To append a new pull secret to the existing pull secret, complete the following steps:
Enter the following command to download the pull secret:

$ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > <pull_secret_location>

where `<pull_secret_location>` is the path to the pull secret file.

Enter the following command to add the new pull secret:

$ oc registry login --registry="<registry>" \
  --auth-basic="<username>:<password>" \
  --to=<pull_secret_location>

where `<registry>` is the new registry (you can include multiple repositories within the same registry, for example `--registry="<registry/my-namespace/my-repository>"`), `<username>:<password>` are the credentials of the new registry, and `<pull_secret_location>` is the path to the pull secret file.

Alternatively, you can perform a manual update to the pull secret file, as shown in the sketch after this procedure.

Enter the following command to update the global pull secret for your cluster:

$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>

where `<pull_secret_location>` is the path to the new pull secret file.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster.
Note: As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
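The `.dockerconfigjson` payload that you download and re-upload in this procedure is standard Docker registry authentication JSON. A minimal sketch of a manually edited pull secret file, where `registry.example.com` and the credentials are placeholders:

{
  "auths": {
    "registry.example.com": {
      "auth": "<base64_encoded_username:password>",
      "email": "admin@example.com"
    }
  }
}

The `auth` value is the base64 encoding of the `<username>:<password>` pair for that registry.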
4.3. Adjust worker nodes
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new machine sets, scaling them up, and then scaling the original machine set down before removing it.
4.3.1. Understanding the difference between machine sets and the machine config pool
`MachineSet` objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider.

The `MachineConfigPool` object allows `MachineConfigController` components to define and provide the status of machines in the context of upgrades.

The `MachineConfigPool` object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool.

The `NodeSelector` object can be replaced with a reference to the `MachineSet` object.
4.3.2. Scaling a machine set manually
To add or remove an instance of a machine in a machine set, you can manually scale the machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
Prerequisites
- Install an OpenShift Container Platform cluster and the `oc` command line.
- Log in to `oc` as a user with `cluster-admin` permission.
Procedure
View the machine sets that are in the cluster:

$ oc get machinesets -n openshift-machine-api

The machine sets are listed in the form of `<clusterid>-worker-<aws-region-az>`.

View the machines that are in the cluster:

$ oc get machine -n openshift-machine-api

Set the annotation on the machine that you want to delete:

$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true"

Cordon and drain the node that you want to delete:

$ oc adm cordon <node_name>
$ oc adm drain <node_name>

Scale the machine set:

$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

Or:

$ oc edit machineset <machineset> -n openshift-machine-api

Tip: You can alternatively apply the following YAML to scale the machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2

You can scale the machine set up or down. It takes several minutes for the new machines to be available.
Verification
Verify the deletion of the intended machine:
$ oc get machines
4.3.3. The machine set deletion policy
`Random`, `Newest`, and `Oldest` are the three supported deletion options. The default is `Random`, meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set:

spec:
  deletePolicy: <delete_policy>
  replicas: <desired_replica_count>
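For example, to switch a machine set to the `Oldest` policy without opening an editor, you could patch it. A minimal sketch:

$ oc patch machineset <machineset> -n openshift-machine-api \
  --type merge -p '{"spec":{"deletePolicy":"Oldest"}}'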
Specific machines can also be prioritized for deletion by adding the annotation `machine.openshift.io/cluster-api-delete-machine=true` to the machine of interest, regardless of the deletion policy.
By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to `0` unless you first relocate the router pods.
Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption.
4.3.4. Creating default cluster-wide node selectors
You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes.
With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels.
You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
You can add additional key/value pairs to a pod, but you cannot add a different value for a default key.
Procedure
To add a default cluster-wide node selector:
Edit the Scheduler Operator CR to add the default cluster-wide node selectors:
$ oc edit scheduler cluster

Example Scheduler Operator CR with a node selector

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
...
spec:
  defaultNodeSelector: type=user-node,region=east  # <1>
  mastersSchedulable: false
  policy:
    name: ""

1. Add a node selector with the appropriate `<key>:<value>` pairs.
After making this change, wait for the pods in the `openshift-kube-apiserver` project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy.

Add labels to a node by using a machine set or editing the node directly:
Use a machine set to add labels to nodes managed by the machine set when a node is created:
Run the following command to add labels to a `MachineSet` object:

$ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api

Add a `"<key>":"<value>"` pair for each label.
For example:
$ oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api

Tip: You can alternatively apply the following YAML to add labels to a machine set:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          region: "east"
          type: "user-node"

Verify that the labels are added to the `MachineSet` object by using the `oc edit` command:

For example:
$ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api

Example `MachineSet` object

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
...
spec:
...
  template:
    metadata:
...
    spec:
      metadata:
        labels:
          region: east
          type: user-node
...

Redeploy the nodes associated with that machine set by scaling down to `0` and scaling up the nodes:

For example:
$ oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api

$ oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api

When the nodes are ready and available, verify that the label is added to the nodes by using the `oc get` command:

$ oc get nodes -l <key>=<value>

For example:

$ oc get nodes -l type=user-node

Example output

NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.18.3+002a51f
Add labels directly to a node:
Edit the `Node` object for the node:

$ oc label nodes <name> <key>=<value>

For example, to label a node:

$ oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east

Tip: You can alternatively apply the following YAML to add labels to a node:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    type: "user-node"
    region: "east"

Verify that the labels are added to the node using the `oc get` command:

$ oc get nodes -l <key>=<value>,<key>=<value>

For example:

$ oc get nodes -l type=user-node,region=east

Example output

NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.18.3+002a51f
4.4. Creating infrastructure machine sets for production environments
You can create a machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.
To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool.
For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.
Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label.
4.4.1. Creating a machine set
In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (`oc`).
- Log in to `oc` as a user with `cluster-admin` permission.
Procedure
Create a new YAML file that contains the machine set custom resource (CR) sample and is named `<file_name>.yaml`.

Ensure that you set the `<clusterID>` and `<role>` parameter values.

If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster:
$ oc get machinesets -n openshift-machine-api

Example output

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
$ oc get machineset <machineset_name> -n \
     openshift-machine-api -o yaml

Example output

...
template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: agl030519-vplxk  # <1>
        machine.openshift.io/cluster-api-machine-role: worker  # <2>
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a

1. The cluster ID.
2. A default node label.
Create the new `MachineSet` CR:

$ oc create -f <file_name>.yaml

View the list of machine sets:

$ oc get machineset -n openshift-machine-api

Example output

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m

When the new machine set is available, the `DESIRED` and `CURRENT` values match. If the machine set is not available, wait a few minutes and run the command again.
4.4.2. Creating an infrastructure node
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes (also known as the master nodes) are managed by the machine API.
Requirements of the cluster dictate that infrastructure, also called `infra` nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called `app`, nodes through labeling.
Procedure
Add a label to the worker node that you want to act as application node:
$ oc label node <node-name> node-role.kubernetes.io/app=""

Add a label to the worker nodes that you want to act as infrastructure nodes:

$ oc label node <node-name> node-role.kubernetes.io/infra=""

Check to see if applicable nodes now have the `infra` role and `app` roles:

$ oc get nodes

Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector.

Important: If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied.

However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as `node-role.kubernetes.io/infra=""`, when a pod's label is set to a different node role, such as `node-role.kubernetes.io/master=""`, can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.

You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts.
Edit the `Scheduler` object:

$ oc edit scheduler cluster

Add the `defaultNodeSelector` field with the appropriate node selector:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
...
spec:
  defaultNodeSelector: topology.kubernetes.io/region=us-east-1  # <1>
...

1. This example node selector deploys pods on nodes in the `us-east-1` region by default.
- Save the file to apply the changes.
You can now move infrastructure resources to the newly labeled `infra` nodes.
4.4.3. Creating a machine config pool for infrastructure machines
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
Procedure
Add a label to the node you want to assign as the infra node with a specific label:
$ oc label node <node_name> <label>

$ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=

Create a machine config pool that contains both the worker role and your custom role as machine config selector:
$ cat infra.mcp.yaml

Example output

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}  # <1>
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""  # <2>

1. Add the worker role and your custom role.
2. Add the label you added to the node as a `nodeSelector`.

Note: Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
After you have the YAML file, you can create the machine config pool:
$ oc create -f infra.mcp.yaml

Check the machine configs to ensure that the infrastructure configuration rendered successfully:
$ oc get machineconfig

Example output

NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
00-master                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
00-worker                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
01-master-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
01-master-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
01-worker-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
01-worker-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
99-master-ssh                                                                                          3.2.0             31d
99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
99-worker-ssh                                                                                          3.2.0             31d
rendered-infra-4e48906dca84ee702959c71a53ee80e7             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             23m
rendered-master-072d4b2da7f88162636902b074e9e28e            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
rendered-master-3e88ec72aed3886dec061df60d16d1af            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
rendered-master-419bee7de96134963a15fdf9dd473b25            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
rendered-master-53f5c91c7661708adce18739cc0f40fb            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
rendered-master-dc7f874ec77fc4b969674204332da037            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
rendered-worker-2640531be11ba43c61d72e82dc634ce6            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
rendered-worker-4e48906dca84ee702959c71a53ee80e7            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
rendered-worker-4f110718fe88e5f349987854a1147755            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
rendered-worker-afc758e194d6188677eb837842d3b379            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
prefix.rendered-infra-*Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as
. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.infraNoteAfter you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.
Create a machine config:
$ cat infra.mc.yaml

Example output

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 51-infra
  labels:
    machineconfiguration.openshift.io/role: infra  # <1>
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/infratest
        mode: 0644
        contents:
          source: data:,infra

1. Add the label you added to the node as a `nodeSelector`.
Apply the machine config to the infra-labeled nodes:
$ oc create -f infra.mc.yaml
Confirm that your new machine config pool is available:
$ oc get mcp

Example output

NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m

In this example, a worker node was changed to an infra node.
4.5. Assigning machine set resources to infrastructure nodes
After creating an infrastructure machine set, the `worker` and `infra` roles are applied to new infra nodes. Nodes with the `infra` role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the `worker` role is also applied.
However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.
4.5.1. Binding infrastructure node workloads using taints and tolerations
If you have an infra node that has the `infra` and `worker` roles assigned, you must add a toleration to the pods you want to control and a taint to the node.

It is recommended that you preserve the dual `infra,worker` label and use taints and tolerations to manage where user workloads are scheduled. If you remove the `worker` label from the node, you must add a custom pool to manage it. A node with a label other than `master` or `worker` is not recognized by the MCO without a custom pool. Maintaining the `worker` label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The `infra` label communicates to the cluster that it does not count toward the total number of subscriptions.
Prerequisites
- Configure additional `MachineSet` objects in your OpenShift Container Platform cluster.
Procedure
Add a taint to the infra node to prevent scheduling user workloads on it:
Determine if the node has the taint:
$ oc describe nodes <node_name>

Sample output

oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Name:               ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Roles:              worker
...
Taints:             node-role.kubernetes.io/infra:NoSchedule
...

This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
If you have not configured a taint to prevent scheduling user workloads on it:
$ oc adm taint nodes <node_name> <key>:<effect>

For example:

$ oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule

Tip: You can alternatively apply the following YAML to add the taint:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    ...
spec:
  taints:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
...

This example places a taint on `node1` that has key `node-role.kubernetes.io/infra` and taint effect `NoSchedule`. Nodes with the `NoSchedule` effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.

Note: If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the `Pod` object specification (see the combined example after this procedure):

tolerations:
  - effect: NoSchedule  # <1>
    key: node-role.kubernetes.io/infra  # <2>
    operator: Exists  # <3>

1. Specify the effect that you added to the node.
2. Specify the key that you added to the node.
3. Specify the `Exists` operator to require a taint with the key `node-role.kubernetes.io/infra` to be present on the node.

This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration can be scheduled onto the infra node.

Note: Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.
- Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
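Putting the pieces together, a pod that is meant to land on the tainted infra nodes needs both the toleration above and, if you rely on labels rather than a custom scheduler, a matching node selector. A minimal sketch, where the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: infra-workload   # hypothetical name
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    operator: Exists
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "3600"]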
4.6. Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
4.6.1. Moving the router
You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the `IngressController` custom resource for the router Operator:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: 2019-04-18T12:35:39Z
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 1
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "11341"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-04-18T12:36:15Z
    status: "True"
    type: Available
  domain: apps.<cluster>.example.com
  endpointPublishingStrategy:
    type: LoadBalancerService
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
resource and change theingresscontrollerto use thenodeSelectorlabel:infra$ oc edit ingresscontroller default -n openshift-ingress-operatorspec: nodePlacement: nodeSelector:1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved- 1
- Add a
nodeSelectorparameter with the appropriate value to the component you want to move. You can use anodeSelectorin the format shown or use<key>: <value>pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Confirm that the router pod is running on the `infra` node.

View the list of router pods and note the node name of the running pod:

$ oc get pod -n openshift-ingress -o wide

Example output

NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

In this example, the running pod is on the `ip-10-0-217-226.ec2.internal` node.
$ oc get node <node_name>1 - 1
- Specify the
<node_name>that you obtained from the pod list.
Example output
NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.21.0Because the role list includes
, the pod is running on the correct node.infra
4.6.2. Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the `config/instance` object:

$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

Example output

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: 2019-02-05T13:52:05Z
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 1
  name: cluster
  resourceVersion: "56174"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
  httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
  logging: 2
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      region: us-east-1
status:
...
object:config/instance$ oc edit configs.imageregistry.operator.openshift.io/clusterspec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector:1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved- 1
- Add a
nodeSelectorparameter with the appropriate value to the component you want to move. You can use anodeSelectorin the format shown or use<key>: <value>pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration.
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry

Confirm the node has the label you specified:

$ oc describe node <node_name>

Review the command output and confirm that `node-role.kubernetes.io/infra` is in the `LABELS` list.
4.6.3. Moving the monitoring solution
The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
Procedure
Edit the `cluster-monitoring-config` config map and change the `nodeSelector` to use the `infra` label:

$ oc edit configmap cluster-monitoring-config -n openshift-monitoring

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector:  # <1>
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute

1. Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'

If a component has not moved to the `infra` node, delete the pod with this component:

$ oc delete pod -n openshift-monitoring <pod>

The component from the deleted pod is re-created on the `infra` node.
4.6.4. Moving OpenShift Logging resources
You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
Prerequisites
- OpenShift Logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:

$ oc edit ClusterLogging instance

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:  # <1>
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:  # <2>
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
...

1. 2. Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node.
Verification
To verify that a component has moved, you can use the `oc get pod -o wide` command.
For example:
You want to move the Kibana pod from the `ip-10-0-147-79.us-east-2.compute.internal` node:

$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

Example output

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>

You want to move the Kibana pod to the `ip-10-0-139-48.us-east-2.compute.internal` node, a dedicated infrastructure node:

$ oc get nodes

Example output

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.21.0
ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.21.0
ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.21.0
ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.21.0
ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.21.0
ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.21.0
ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.21.0

Note that the node has a `node-role.kubernetes.io/infra: ''` label:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

Example output

kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
...

To move the Kibana pod, edit the `ClusterLogging` CR to add a node selector:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
...
  visualization:
    kibana:
      nodeSelector:  # <1>
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana

1. Add a node selector to match the label in the node specification.

After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

$ oc get pods

Example output

NAME                                            READY   STATUS        RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
fluentd-42dzz                                   1/1     Running       0          28m
fluentd-d74rq                                   1/1     Running       0          28m
fluentd-m5vr9                                   1/1     Running       0          28m
fluentd-nkxl7                                   1/1     Running       0          28m
fluentd-pdvqb                                   1/1     Running       0          28m
fluentd-tflh6                                   1/1     Running       0          28m
kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s

The new pod is on the `ip-10-0-139-48.us-east-2.compute.internal` node:

$ oc get pod kibana-7d85dcffc8-bfpfp -o wide

Example output

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>

After a few moments, the original Kibana pod is removed.

$ oc get pods

Example output

NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
fluentd-42dzz                                   1/1     Running   0          29m
fluentd-d74rq                                   1/1     Running   0          29m
fluentd-m5vr9                                   1/1     Running   0          29m
fluentd-nkxl7                                   1/1     Running   0          29m
fluentd-pdvqb                                   1/1     Running   0          29m
fluentd-tflh6                                   1/1     Running   0          29m
kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
4.7. About the cluster autoscaler
The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace.
The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources.
Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.
Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply:
- The sum of CPU and memory requests of all pods running on the node is less than 50% of the allocated resources on the node.
- The cluster autoscaler can move all pods running on the node to the other nodes.
- The cluster autoscaler does not have a scale-down-disabled annotation.
If the following types of pods are present on a node, the cluster autoscaler will not remove the node:
- Pods with restrictive pod disruption budgets (PDBs).
- Kube-system pods that do not run on the node by default.
- Kube-system pods that do not have a PDB or have a PDB that is too restrictive.
- Pods that are not backed by a controller object such as a deployment, replica set, or stateful set.
- Pods with local storage.
- Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
- Unless they also have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"` annotation, pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"` annotation.
For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62.
If you configure the cluster autoscaler, additional usage restrictions apply:
- Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods.
- Specify requests for your pods.
- If you have to prevent pods from being deleted too quickly, configure appropriate PDBs.
- Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
- Do not run additional node group autoscalers, especially the ones offered by your cloud provider.
The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment’s or replica set’s number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes.
The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available.
Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.
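As an illustration of the priority cutoff, the sample `ClusterAutoscaler` resource in the next section uses `podPriorityThreshold: -10`. A pod assigned a priority below that threshold runs only on spare capacity and never triggers a scale-up. A minimal sketch of such a `PriorityClass`, with a hypothetical name:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-low   # hypothetical name
value: -20  # below a podPriorityThreshold of -10, so matching pods do not trigger scale-up
globalDefault: false
description: "Best-effort pods that run only when spare resources are available."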
4.7.1. ClusterAutoscaler resource definition
This `ClusterAutoscaler` resource definition shows the parameters and sample values for the cluster autoscaler.
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
name: "default"
spec:
podPriorityThreshold: -10
resourceLimits:
maxNodesTotal: 24
cores:
min: 8
max: 128
memory:
min: 4
max: 256
gpus:
- type: nvidia.com/gpu
min: 0
max: 16
- type: amd.com/gpu
min: 0
max: 4
scaleDown:
enabled: true
delayAfterAdd: 10m
delayAfterDelete: 5m
delayAfterFailure: 30s
unneededTime: 5m
1. Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The `podPriorityThreshold` value is compared to the value of the `PriorityClass` that you assign to each pod.
2. Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your `MachineAutoscaler` resources.
3. Specify the minimum number of cores to deploy in the cluster.
4. Specify the maximum number of cores to deploy in the cluster.
5. Specify the minimum amount of memory, in GiB, in the cluster.
6. Specify the maximum amount of memory, in GiB, in the cluster.
7. Optionally, specify the type of GPU node to deploy. Only `nvidia.com/gpu` and `amd.com/gpu` are valid types.
8. Specify the minimum number of GPUs to deploy in the cluster.
9. Specify the maximum number of GPUs to deploy in the cluster.
10. In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including `ns`, `us`, `ms`, `s`, `m`, and `h`.
11. Specify whether the cluster autoscaler can remove unnecessary nodes.
12. Optionally, specify the period to wait before deleting a node after a node has recently been added. If you do not specify a value, the default value of `10m` is used.
13. Specify the period to wait before deleting a node after a node has recently been deleted. If you do not specify a value, the default value of `10s` is used.
14. Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of `3m` is used.
15. Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of `10m` is used.
When performing a scaling operation, the cluster autoscaler remains within the ranges set in the `ClusterAutoscaler` resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges.
The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes.
4.7.2. Deploying the cluster autoscaler
To deploy the cluster autoscaler, you create an instance of the `ClusterAutoscaler` resource.
Procedure
- Create a YAML file for the `ClusterAutoscaler` resource that contains the customized resource definition.
- Create the resource in the cluster:

$ oc create -f <filename>.yaml

where `<filename>` is the name of the resource file that you customized.
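To confirm that the resource was created, you can query it back. A minimal check, assuming the sample name `default` from the resource definition above:

$ oc get clusterautoscaler default -o yaml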
4.8. About the machine autoscaler
The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default `worker` machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in `MachineAutoscaler` resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target.
You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster.
4.8.1. MachineAutoscaler resource definition
This `MachineAutoscaler` resource definition shows the parameters and sample values for the machine autoscaler.
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
name: "worker-us-east-1a"
namespace: "openshift-machine-api"
spec:
minReplicas: 1
maxReplicas: 12
scaleTargetRef:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
name: worker-us-east-1a
1. Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: `<clusterid>-<machineset>-<region>`.
2. Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to `0`. For other providers, do not set this value to `0`.

   You can save on costs by setting this value to `0` for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use.

   Important: Do not set the `spec.minReplicas` value to `0` for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure.

3. Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` resource definition is large enough to allow the machine autoscaler to deploy this number of machines.
4. In this section, provide values that describe the existing machine set to scale.
5. The `kind` parameter value is always `MachineSet`.
6. The `name` value must match the name of an existing machine set, as shown in the `metadata.name` parameter value.
4.8.2. Deploying the machine autoscaler
To deploy the machine autoscaler, you create an instance of the `MachineAutoscaler` resource.
Procedure
- Create a YAML file for the `MachineAutoscaler` resource that contains the customized resource definition.
- Create the resource in the cluster:

$ oc create -f <filename>.yaml

where `<filename>` is the name of the resource file that you customized.
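To confirm that the resource was created and see which machine set it targets, you can list the machine autoscalers. A minimal check:

$ oc get machineautoscaler -n openshift-machine-api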
4.9. Enabling Technology Preview features using FeatureGates
You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the `FeatureGate` custom resource (CR).
4.9.1. Understanding feature gates
You can use the `FeatureGate` custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default.

You can activate the following feature set by using the `FeatureGate` CR:
- `TechPreviewNoUpgrade`. This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these tech preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters.

Warning: Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.

The following Technology Preview features are enabled by this feature set:
- Azure Disk CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage.
- VMware vSphere CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes.
CSI automatic migration. Enables the automatic migration of supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Available as a Technology Preview for:
- Amazon Web Services (AWS) Elastic Block Storage (EBS)
- OpenStack Cinder
4.9.2. Enabling feature sets using the web console
You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the `FeatureGate` custom resource (CR).
Procedure
To enable feature sets:
- In the OpenShift Container Platform web console, switch to the Administration → Custom Resource Definitions page.
- On the Custom Resource Definitions page, click FeatureGate.
- On the Custom Resource Definition Details page, click the Instances tab.
- Click the cluster feature gate, then click the YAML tab.
Edit the cluster instance to add specific feature sets:
Warning: Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.

Sample Feature Gate custom resource

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster  # <1>
....
spec:
  featureSet: TechPreviewNoUpgrade  # <2>

1. The name of the `FeatureGate` CR must be `cluster`.
2. Add the feature set that you want to enable: `TechPreviewNoUpgrade` enables specific Technology Preview features.

After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node.
- From the Administrator perspective in the web console, navigate to Compute → Nodes.
- Select a node.
- In the Node details page, click Terminal.
In the terminal window, change your root directory to /host:
sh-4.2# chroot /host
View the kubelet.conf file:
sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
...
The features that are listed as true are enabled on your cluster.
Note
The features listed vary depending upon the OpenShift Container Platform version.
4.9.3. Enabling feature sets using the CLI
You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To enable feature sets:
Edit the FeatureGate CR named cluster:
$ oc edit featuregate cluster
Warning
Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.
Sample FeatureGate custom resource
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
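As a non-interactive alternative to oc edit, the same change can be applied with a single patch command; this is a sketch of an equivalent invocation, with the same caveat that enabling TechPreviewNoUpgrade cannot be undone:
$ oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'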
Verification
You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node.
- From the Administrator perspective in the web console, navigate to Compute → Nodes.
- Select a node.
- In the Node details page, click Terminal.
In the terminal window, change your root directory to /host:
sh-4.2# chroot /host
View the kubelet.conf file:
sh-4.2# cat /etc/kubernetes/kubelet.conf
Sample output
...
featureGates:
  InsightsOperatorPullingSCA: true,
  LegacyNodeRoleBehavior: false
...
The features that are listed as true are enabled on your cluster.
Note
The features listed vary depending upon the OpenShift Container Platform version.
4.10. etcd tasks
Back up etcd, enable or disable etcd encryption, or defragment etcd data.
4.10.1. About etcd encryption
By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
- Secrets
- Config maps
- Routes
- OAuth access tokens
- OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.
Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted.
If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a previous state of etcd from the etcd snapshot.
4.10.2. Enabling etcd encryption
You can enable etcd encryption to encrypt sensitive resources in your cluster.
Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted.
After you enable etcd encryption, several changes can occur:
- The etcd encryption might affect the memory consumption of a few resources.
- You might notice a transient effect on backup performance because the leader must serve the backup.
- Disk I/O can increase on the node that receives the backup state.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the encryption field type to aescbc:
spec:
  encryption:
    type: aescbc 1
- 1
- The aescbc type means that AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption.
Save the file to apply the changes.
The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.
Verify that etcd encryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: routes.route.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: secrets, configmaps
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows EncryptionCompleted upon successful encryption:
EncryptionCompleted
All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io
If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.
4.10.3. Disabling etcd encryption
You can disable encryption of etcd data in your cluster.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Modify the APIServer object:
$ oc edit apiserver
Set the encryption field type to identity:
spec:
  encryption:
    type: identity 1
- 1
- The identity type is the default value and means that no encryption is performed.
Save the file to apply the changes.
The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.
Verify that etcd decryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:
$ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted:
$ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
The output shows DecryptionCompleted upon successful decryption:
DecryptionCompleted
Encryption mode set to identity and everything is decrypted
If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.
4.10.4. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host (also known as the master host). Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.
Tip
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
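If the proxy is enabled, exporting the proxy variables inside the node debug shell used in the following procedure might look like the following sketch; the proxy URL and exclusion list are placeholders that must match your cluster's Proxy configuration:
sh-4.2# export HTTP_PROXY=http://<proxy_host>:<proxy_port>
sh-4.2# export HTTPS_PROXY=http://<proxy_host>:<proxy_port>
sh-4.2# export NO_PROXY=.cluster.local,.svc,localhost,127.0.0.1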
Procedure
Start a debug session for a control plane node:
$ oc debug node/<node_name>
Change your root directory to /host:
sh-4.2# chroot /host
- If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
Run the cluster-backup.sh script and pass in the location to save the backup to.
Tip
The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
Example script output
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3
ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1
etcdctl version: 3.4.14
API version: 3.4
{"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"}
{"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459}
{"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db
{"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336}
snapshot db and kube resources are successfully saved to /home/core/assets/backup
In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:
- snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.
Note
If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
4.10.5. Defragmenting etcd data
For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
Monitor these key metrics:
- etcd_server_quota_backend_bytes, which is the current quota limit
- etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction
- etcd_debugging_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Because etcd writes data to disk, its performance strongly depends on disk performance. Consider defragmenting etcd every month, twice a month, or as needed for your cluster. You can also monitor the etcd_db_total_size_in_bytes metric to determine whether defragmentation is necessary.
You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression:
(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024
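As a rough equivalent from inside an etcd pod, you can read the same two size fields from the etcdctl JSON status; this sketch assumes the jq utility is available in the container:
sh-4.4# etcdctl endpoint status --cluster -w json \
    | jq '.[] | {endpoint: .Endpoint, dbSizeMB: (.Status.dbSize / 1048576 | floor), inUseMB: (.Status.dbSizeInUse / 1048576 | floor)}'
The difference between dbSizeMB and inUseMB approximates the space that defragmentation would reclaim for each member.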
Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
Follow this procedure to defragment etcd data on each etcd member.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd
Example output
etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
Example output
Defaulting container name to etcdctl.
Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379  | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
| https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
| https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
Unset the ETCDCTL_ENDPOINTS environment variable:
sh-4.4# unset ETCDCTL_ENDPOINTS
Defragment the etcd member:
sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
Example output
Finished defragmenting etcd member[https://localhost:2379]
If a timeout error occurs, increase the value for --command-timeout until the command succeeds.
Verify that the database size was reduced:
sh-4.4# etcdctl endpoint status -w table --cluster
Example output
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379  | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
| https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        | 1
| https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.
Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.
Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.
If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.
Check if there are any NOSPACE alarms:
sh-4.4# etcdctl alarm list
Example output
memberID:12345678912345678912 alarm:NOSPACE
Clear the alarms:
sh-4.4# etcdctl alarm disarm
4.10.6. Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts (also known as the master hosts).
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- A healthy control plane host to use as the recovery host.
- SSH access to control plane hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
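For illustration, a backup directory that satisfies this prerequisite might list as follows; the timestamps are only examples:
$ ls /home/core/backup
snapshot_2021-06-25_190035.db
static_kuberesources_2021-06-25_190035.tar.gz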
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
ImportantIf you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
Note
It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped.
$ sudo crictl ps | grep etcd | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the Kubernetes API server pods are stopped.
$ sudo crictl ps | grep kube-apiserver | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the etcd data directory to a different location:
$ sudo mv /var/lib/etcd/ /tmp
- Repeat this step on each of the other control plane hosts that is not the recovery host.
- Access the recovery control plane host.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
Tip
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
Example script output
...stopping kube-scheduler-pod.yaml
...stopping kube-controller-manager-pod.yaml
...stopping etcd-pod.yaml
...stopping kube-apiserver-pod.yaml
Waiting for container etcd to stop
.complete
Waiting for container etcdctl to stop
.............................complete
Waiting for container etcd-metrics to stop
complete
Waiting for container kube-controller-manager to stop
complete
Waiting for container kube-apiserver to stop
..........................................................................................complete
Waiting for container kube-scheduler to stop
complete
Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
starting restore-etcd static pod
starting kube-apiserver-pod.yaml
static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
starting kube-controller-manager-pod.yaml
static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
starting kube-scheduler-pod.yaml
static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
Note
The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.
Check the nodes to ensure they are in the Ready state.
Run the following command:
$ oc get nodes -w
Sample output
NAME                STATUS   ROLES          AGE     VERSION
host-172-25-75-28   Ready    master         3d20h   v1.23.3+e419edf
host-172-25-75-38   Ready    infra,worker   3d20h   v1.23.3+e419edf
host-172-25-75-40   Ready    master         3d20h   v1.23.3+e419edf
host-172-25-75-65   Ready    master         3d20h   v1.23.3+e419edf
host-172-25-75-74   Ready    infra,worker   3d20h   v1.23.3+e419edf
host-172-25-75-79   Ready    worker         3d20h   v1.23.3+e419edf
host-172-25-75-86   Ready    worker         3d20h   v1.23.3+e419edf
host-172-25-75-98   Ready    infra,worker   3d20h   v1.23.3+e419edf
If any nodes are in the
state, log in to the nodes and remove all of the PEM files from theNotReadydirectory on each node. You can SSH into the nodes or use the terminal window in the web console./var/lib/kubelet/pki$ ssh -i <ssh-key-path> core@<master-hostname>Sample
pkidirectorysh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem
Restart the kubelet service on all control plane hosts.
From the recovery host, run the following command:
$ sudo systemctl restart kubelet.service
- Repeat this step on all other control plane hosts.
Approve the pending CSRs:
Get the list of current CSRs:
$ oc get csr
Example output
NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet service CSR:
$ oc adm certificate approve <csr_name>
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
$ sudo crictl ps | grep etcd | grep -v operator
Example output
3ad41b7908e32   36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009   About a minute ago   Running   etcd   0   7c05f8af362f0
From the recovery host, verify that the etcd pod is running.
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Note
If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.
Unable to connect to the server: EOF
Example output
NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s
If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
- Repeat this step for each lost control plane host that is not the recovery host.
Delete and recreate other non-recovery, control plane machines, one by one. After these machines are recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane node using the same method that was used to originally create it.
Warning
Do not delete and recreate the machine for the recovery host.
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
clustername-8qw5l-master-0                  Running   m4.xlarge   us-east-1   us-east-1a   3h37m   ip-10-0-131-183.ec2.internal   aws:///us-east-1a/i-0ec2782f8287dfb7e   stopped 1
clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal   aws:///us-east-1c/i-02626f1dba9ed5bba   running
clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
- 1
- This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \ 1
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
- 1
- Specify the name of the control plane machine for the lost control plane host.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.
Remove the entire status section:
status:
  addresses:
  - address: 10.0.131.183
    type: InternalIP
  - address: ip-10-0-131-183.ec2.internal
    type: InternalDNS
  - address: ip-10-0-131-183.ec2.internal
    type: Hostname
  lastUpdated: "2020-04-20T17:44:29Z"
  nodeRef:
    kind: Node
    name: ip-10-0-131-183.ec2.internal
    uid: acca4411-af0d-4387-b73e-52b2484295ad
  phase: Running
  providerStatus:
    apiVersion: awsproviderconfig.openshift.io/v1beta1
    conditions:
    - lastProbeTime: "2020-04-20T16:53:50Z"
      lastTransitionTime: "2020-04-20T16:53:50Z"
      message: machine successfully created
      reason: MachineCreationSucceeded
      status: "True"
      type: MachineCreation
    instanceId: i-0fdb85790d76d0c3f
    instanceState: stopped
    kind: AWSMachineProviderStatus
Change the metadata.name field to a new name.
It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3:
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  ...
  name: clustername-8qw5l-master-3
  ...
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the metadata.annotations and metadata.generation fields:
annotations:
  machine.openshift.io/instance-state: running
...
generation: 2
Remove the metadata.resourceVersion and metadata.uid fields:
resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
Delete the machine of the lost control plane host:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1
- 1
- Specify the name of the control plane machine for the lost control plane host.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal   aws:///us-east-1c/i-02626f1dba9ed5bba   running
clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
file:new-master-machine.yaml$ oc apply -f new-master-machine.yamlVerify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wideExample output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running- 1
- The new machine,
clustername-8qw5l-master-3is being created and is ready after the phase changes fromProvisioningtoRunning.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
- Repeat these steps for each lost control plane host that is not the recovery host.
In a separate terminal window, log in to the cluster as a user with the cluster-admin role by using the following command:
$ oc login -u <cluster_admin> 1
- 1
- For <cluster_admin>, specify a user name with the cluster-admin role.
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a cluster-admin user, run the following commands.
Force a new rollout for the Kubernetes API server:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes controller manager:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes scheduler:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Example output
etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h
To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components.
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
4.10.7. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, some state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
- MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
- Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
- Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
- A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must:
- Manually remove the PVs with invalid devices.
- Remove symlinks from respective nodes.
- Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
4.11. Pod disruption budgets
Understand and configure pod disruption budgets.
4.11.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget is part of the Kubernetes API, which can be managed with oc commands like other object types. A PodDisruptionBudget object allows the specification of safety constraints on pods during operations, such as draining a node for maintenance.
A PodDisruptionBudget object consists of the following details:
- A label selector, which is a label query over a set of pods.
An availability level, which specifies the minimum number of pods that must be available simultaneously, either:
- minAvailable is the number of pods that must always be available, even during a disruption.
- maxUnavailable is the number of pods that can be unavailable during a disruption.
A maxUnavailable of 0% or 0, or a minAvailable of 100% or equal to the number of replicas, is permitted but can block nodes from being drained.
You can check for pod disruption budgets across all projects with the following:
$ oc get poddisruptionbudget --all-namespaces
Example output
NAMESPACE NAME MIN-AVAILABLE SELECTOR
another-project another-pdb 4 bar=foo
test-project my-pdb 2 foo=bar
The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system.
Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements.
4.11.2. Specifying the number of pods that must be up with pod disruption budgets
You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time.
Procedure
To configure a pod disruption budget:
Create a YAML file with an object definition similar to the following:
apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2 2
  selector: 3
    matchLabels:
      foo: bar
- 1
- PodDisruptionBudget is part of the policy/v1 API group.
- 2
- The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
- 3
- A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {}, to select all pods in the project.
Or:
apiVersion: policy/v1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  maxUnavailable: 25% 2
  selector: 3
    matchLabels:
      foo: bar
- 1
- PodDisruptionBudget is part of the policy/v1 API group.
- 2
- The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
- 3
- A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {}, to select all pods in the project.
Run the following command to add the object to the project:
$ oc create -f </path/to/file> -n <project_name>
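After the object is added, you can check which budgets exist and what they currently allow. Illustrative output for the my-pdb example above, in the same format as the earlier listing:
$ oc get poddisruptionbudget my-pdb -n <project_name>
NAME     MIN-AVAILABLE   SELECTOR
my-pdb   2               foo=bar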
4.12. Rotating or removing cloud provider credentials
After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.
To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
4.12.1. Rotating cloud provider credentials manually
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
- For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
- For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported.
- You have changed the credentials that are used to interface with your cloud provider.
- The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform         Secret name
AWS              aws-creds
Azure            azure-credentials
GCP              gcp-credentials
RHOSP            openstack-credentials
RHV              ovirt-credentials
VMware vSphere   vsphere-creds
- Click the Options menu in the same row as the secret and select Edit Secret.
- Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
- Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
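If you prefer the CLI to the web console for this update step, the root secret can also be patched directly. The following is a sketch for AWS only; NEW_KEY_ID and NEW_SECRET are placeholder environment variables holding the new credentials, and the secret values must be base64-encoded:
$ oc patch secret aws-creds -n kube-system --type merge \
    -p '{"data": {"aws_access_key_id": "'"$(echo -n "$NEW_KEY_ID" | base64)"'", "aws_secret_access_key": "'"$(echo -n "$NEW_SECRET" | base64)"'"}}'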
If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.
Note
If the vSphere CSI Driver Operator is enabled, this step is not required.
To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command:
$ oc patch kubecontrollermanager cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
    --type=merge
While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true. To view the status, run the following command:
$ oc get co kube-controller-manager
If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.
Log in to the OpenShift Container Platform CLI as a user with the role.
cluster-admin Get the names and namespaces of all referenced component secrets:
$ oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'where
is the corresponding value for your cloud provider:<provider_spec>-
AWS:
AWSProviderSpec -
GCP:
GCPProviderSpec
Partial example output for AWS
{ "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" }-
AWS:
Delete each of the referenced component secrets:
$ oc delete secret <secret_name> -n <secret_namespace>
Example deletion of an AWS secret
$ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers
You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.
Verification
To verify that the credentials have changed:
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
- Verify that the contents of the Value field or fields have changed.
4.12.2. Removing cloud provider credentials
After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster.
Prior to a non-z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
Prerequisites
- Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
In the table on the Secrets page, find the root secret for your cloud provider.
Platform   Secret name
AWS        aws-creds
GCP        gcp-credentials
- Click the Options menu in the same row as the secret and select Delete Secret.
4.13. Configuring image streams for a disconnected cluster
After installing OpenShift Container Platform in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream.
4.13.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
- While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
- Mirror the samples you want to the mirrored registry using the new config map as your guide.
- Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
- Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
- Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.
4.13.2. Using Cluster Samples Operator image streams with alternate or mirrored registries
Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io.
Note
The jenkins, jenkins-agent-maven, and jenkins-agent-nodejs image streams come from the install payload and are managed by the Samples Operator, so no further mirroring procedures are needed for those image streams.
Setting the samplesRegistry field in the Samples Operator configuration file to registry.redhat.io is redundant because it is already directed to registry.redhat.io for everything but Jenkins images and image streams.
Note
The cli, installer, must-gather, and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.
Important
The Cluster Samples Operator must be set to Managed in the disconnected environment to change the image streams it manages.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Create a pull secret for your mirror registry.
Procedure
Access the images of a specific image stream to mirror, for example:
$ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
Mirror images from registry.redhat.io associated with any image streams you need in the restricted network environment into one of the defined mirrors, for example:
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
Create the cluster's image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration:
$ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
Note
This is required because the image stream import process does not use the mirror or search mechanism at this time.
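After this edit, the Samples Operator configuration might look like the following sketch; the registry hostname is a placeholder for your mirror:
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  samplesRegistry: mirror.example.com:5000   # hostname (and optional port) of your mirror registry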
Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object.
Note
The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them.
Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
4.13.3. Preparing your cluster to gather support data
Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository.
Procedure
If you have not added your mirror registry’s trusted CA to your cluster’s image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:
Create the cluster’s image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Import the default must-gather image from your installation payload:
$ oc import-image is/must-gather -n openshift
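You can then confirm that the image stream exists and resolves against your mirror; a minimal check:
$ oc get is must-gather -n openshift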
When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example:
$ oc adm must-gather --image=$(oc adm release info --image-for must-gather)
4.14. Configuring periodic importing of Cluster Sample Operator image stream tags
You can ensure that you always have access to the latest versions of the Cluster Sample Operator images by periodically importing the image stream tags when new versions become available.
Procedure
Fetch all the imagestreams in the openshift namespace by running the following command:
$ oc get imagestreams -n openshift
Fetch the tags for every imagestream in the openshift namespace by running the following command:
$ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -n openshift
For example:
$ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -n openshift
Example output
1.11    registry.access.redhat.com/ubi8/openjdk-17:1.11
1.12    registry.access.redhat.com/ubi8/openjdk-17:1.12
Schedule periodic importing of images for each tag present in the image stream by running the following command:
$ oc tag <repository/image> <image-stream-name:tag> --scheduled -n openshift
For example:
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -n openshift
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -n openshift
This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default.
Verify the scheduling status of the periodic import by running the following command:
$ oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -n openshift
For example:
$ oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -n openshift
Example output
Tag: 1.11       Scheduled: true
Tag: 1.12       Scheduled: true
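To see when a scheduled import last ran for a given image stream, you can also inspect its status with a describe call, for example:
$ oc describe is ubi8-openjdk-17 -n openshift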
Chapter 5. Post-installation node tasks
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks.
5.1. Adding RHEL compute machines to an OpenShift Container Platform cluster
Understand and work with RHEL compute nodes.
5.1.1. About adding RHEL compute nodes to a cluster
In OpenShift Container Platform 4.8, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster.
As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.
Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
You must add any RHEL compute machines to the cluster after you initialize the control plane.
5.1.2. System requirements for RHEL compute nodes
The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements:
- You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
- Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Each system must meet the following hardware requirements:
- Physical or virtual system, or an instance running on a public or private IaaS.
Base OS: RHEL 7.9 with "Minimal" installation option.
Important: Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is deprecated. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
In addition, you must not upgrade your compute machines to RHEL 8 because support is not available in this release.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
- If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Enabling FIPS Mode in the RHEL 7 documentation.
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- NetworkManager 1.0 or later.
- 1 vCPU.
- Minimum 8 GB RAM.
- Minimum 15 GB hard disk space for the file system containing /var/.
- Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
- Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library.
- Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
- Each system must be able to access the cluster’s API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster’s API service endpoints.
5.1.2.1. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
5.1.3. Preparing the machine to run the playbook
Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.8 cluster, you must prepare a RHEL 7 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it.
Prerequisites
- Install the OpenShift CLI (oc) on the machine that you run the playbook on.
- Log in as a user with cluster-admin permission.
Procedure
- Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine. One way to accomplish this is to use the same machine that you used to install the cluster.
- Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
Important: If you use SSH key-based authentication, you must manage the key with an SSH agent.
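For example, one minimal way to start an agent and load a key for the current shell session (the key path here is an assumption; substitute your own):
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa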
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it:
Register the machine with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Enable the repositories required by OpenShift Container Platform 4.8:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ansible-2.9-rpms" \
    --enable="rhel-7-server-ose-4.8-rpms"
Install the required packages, including openshift-ansible:
# yum install openshift-ansible openshift-clients jq
The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line.
5.1.4. Preparing a RHEL compute node
Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
Disable all the enabled RHSM repositories:
# subscription-manager repos --disable="*"
List the remaining yum repositories and note their names under repo id, if any:
# yum repolist
Use yum-config-manager to disable the remaining yum repositories:
# yum-config-manager --disable <repo_id>
Alternatively, disable all repositories:
# yum-config-manager --disable \*
Note that this might take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 4.8:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-optional-rpms" \
    --enable="rhel-7-server-ose-4.8-rpms"
Stop and disable firewalld on the host:
# systemctl disable --now firewalld.service
Note: You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
5.1.5. Adding a RHEL compute machine to your cluster
You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.8 cluster.
Prerequisites
- You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
- You prepared the RHEL hosts for installation.
Procedure
Perform the following steps on the machine that you prepared to run the playbook:
Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables:
[all:vars]
ansible_user=root 1
#ansible_become=True 2
openshift_kubeconfig_path="~/.kube/config" 3

[new_workers] 4
mycluster-rhel7-0.example.com
mycluster-rhel7-1.example.com
- 1
- Specify the user name that runs the Ansible tasks on the remote compute machines.
- 2
- If you do not specify root for the ansible_user, you must set ansible_become to True and assign the user sudo permissions.
- Specify the path and file name of the kubeconfig file for your cluster.
- List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine.
Navigate to the Ansible playbook directory:
$ cd /usr/share/ansible/openshift-ansible
Run the playbook:
$ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1
- 1
- For <path>, specify the path to the Ansible inventory file that you created.
5.1.6. Required parameters for the Ansible hosts file
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
| Parameter | Description | Values |
|---|---|---|
| ansible_user | The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. | A user name on the system. The default value is root. |
| ansible_become | If the value of ansible_user is not root, you must set ansible_become to True, and the user that you specify as the ansible_user must be configured for passwordless sudo access. | True. If the value is not True, do not specify and define this parameter. |
| openshift_kubeconfig_path | Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. | The path and name of the configuration file. |
5.1.7. Optional: Removing RHCOS compute machines from a cluster
After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources.
Prerequisites
- You have added RHEL compute machines to your cluster.
Procedure
View the list of machines and record the node names of the RHCOS compute machines:
$ oc get nodes -o wide
For each RHCOS compute machine, delete the node:
Mark the node as unschedulable by running the oc adm cordon command:
$ oc adm cordon <node_name> 1
- 1
- Specify the node name of one of the RHCOS compute machines.
Drain all the pods from the node:
$ oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1
- 1
- Specify the node name of the RHCOS compute machine that you isolated.
Delete the node:
$ oc delete nodes <node_name> 1
- 1
- Specify the node name of the RHCOS compute machine that you drained.
Review the list of compute machines to ensure that only the RHEL nodes remain:
$ oc get nodes -o wide
- Remove the RHCOS machines from the load balancer for your cluster’s compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
5.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster
You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal.
Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines.
5.2.1. Prerequisites
- You installed a cluster on bare metal.
- You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure.
5.2.2. Creating more RHCOS machines using an ISO image
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
Procedure
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
- Burn the ISO image to a disk and boot it directly.
- Use ISO redirection with a LOM interface.
- After the instance boots, press the TAB or E key to edit the kernel command line.
- Add the parameters to the kernel command line:
coreos.inst.install_dev=sda coreos.inst.ignition_url=http://example.com/worker.ign
- Press Enter to complete the installation. After RHCOS installs, the system reboots. After the system reboots, it applies the Ignition config file that you specified.
- Continue to create more compute machines for your cluster.
5.2.3. Creating more RHCOS machines by PXE or iPXE booting
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Prerequisites
- Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
- Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
- You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
- If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation.
Procedure
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2
- 1
- Specify the location of the live kernel file that you uploaded to your HTTP server.
- 2
- Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0.
For iPXE:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2
- 1
- Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS.
- 2
- Specify the location of the initramfs file that you uploaded to your HTTP server.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0.
- Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
5.2.4. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.21.0
master-1   Ready    master   63m   v1.21.0
master-2   Ready    master   64m   v1.21.0
The output lists all of the machines that you created.
Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal    Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal    Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
$ oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.21.0
master-1   Ready    master   73m   v1.21.0
master-2   Ready    master   74m   v1.21.0
worker-0   Ready    worker   11m   v1.21.0
worker-1   Ready    worker   11m   v1.21.0
Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
5.3. Deploying machine health checks
Understand and deploy machine health checks.
This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.
5.3.1. About machine health checks
Machine health checks automatically repair unhealthy machines in a particular machine pool.
To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for a specified duration or displaying a particular condition in the node-problem-detector, and a label for the set of machines to monitor.
You cannot apply a machine health check to a machine with the master role.
The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event.
To limit the disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention.
Consider the timeouts carefully, accounting for workloads and requirements.
- Long timeouts can result in long periods of downtime for the workload on the unhealthy machine.
- Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process.
To stop the check, remove the resource.
For example, you should stop the check during the upgrade process because the nodes in the cluster might become temporarily unavailable, and the MachineHealthCheck resource might consider such nodes unhealthy. To stop the check, remove the MachineHealthCheck resource before you update the cluster. However, you do not need to remove a default MachineHealthCheck resource that ships with the cluster, such as machine-api-termination-handler.
5.3.1.1. Limitations when deploying machine health checks
There are limitations to consider before deploying a machine health check:
- Only machines owned by a machine set are remediated by a machine health check.
- Control plane machines are not currently supported and are not remediated if they are unhealthy.
- If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately.
- If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout, the machine is remediated.
- A machine is remediated immediately if the Machine resource phase is Failed.
5.3.2. Sample MachineHealthCheck resource
The MachineHealthCheck resource resembles the following YAML file:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example 1
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: <role> 2
      machine.openshift.io/cluster-api-machine-type: <role> 3
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4
  unhealthyConditions:
  - type: "Ready"
    timeout: "300s" 5
    status: "False"
  - type: "Ready"
    timeout: "300s" 6
    status: "Unknown"
  maxUnhealthy: "40%" 7
  nodeStartupTimeout: "10m" 8
- 1
- Specify the name of the machine health check to deploy.
- 2 3
- Specify a label for the machine pool that you want to check.
- 4
- Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a.
- Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine.
- 7
- Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy, remediation is not performed.
- Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy.
The matchLabels are examples only; you must map your machine groups based on your specific needs.
5.3.2.1. Short-circuiting machine health check remediation
Short circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource.
If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit.
Important: If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster.
The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple availability zones across multiple machine sets so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster.
The maxUnhealthy field can be set as either an integer or a percentage. There are different remediation implementations depending on the maxUnhealthy value.
5.3.2.1.1. Setting maxUnhealthy by using an absolute value
If maxUnhealthy is set to 2:
- Remediation will be performed if 2 or fewer nodes are unhealthy
- Remediation will not be performed if 3 or more nodes are unhealthy
These values are independent of how many machines are being checked by the machine health check.
5.3.2.1.2. Setting maxUnhealthy by using percentages
If maxUnhealthy is set to 40% and there are 25 machines being checked:
- Remediation will be performed if 10 or fewer nodes are unhealthy
- Remediation will not be performed if 11 or more nodes are unhealthy
If maxUnhealthy is set to 40% and there are 6 machines being checked:
- Remediation will be performed if 2 or fewer nodes are unhealthy
- Remediation will not be performed if 3 or more nodes are unhealthy
The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number.
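As an illustration only, the two forms look like the following fragment of a MachineHealthCheck spec; the values are examples, not recommendations:
spec:
  maxUnhealthy: "40%"   # percentage of the machines in the targeted pool
  # maxUnhealthy: 2     # or an absolute number of machines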
5.3.3. Creating a MachineHealthCheck resource
You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines.
Prerequisites
- Install the oc command line interface.
Procedure
- Create a healthcheck.yml file that contains the definition of your machine health check.
- Apply the healthcheck.yml file to your cluster:
$ oc apply -f healthcheck.yml
5.3.4. Scaling a machine set manually
To add or remove an instance of a machine in a machine set, you can manually scale the machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the machine sets that are in the cluster:
$ oc get machinesets -n openshift-machine-api
The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.
View the machines that are in the cluster:
$ oc get machine -n openshift-machine-api
Set the annotation on the machine that you want to delete:
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true"
Cordon and drain the node that you want to delete:
$ oc adm cordon <node_name>
$ oc adm drain <node_name>
Scale the machine set:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
Tip: You can alternatively apply the following YAML to scale the machine set:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2
You can scale the machine set up or down. It takes several minutes for the new machines to be available.
Verification
Verify the deletion of the intended machine:
$ oc get machines
5.3.5. Understanding the difference between machine sets and the machine config pool
MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider.
The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades.
The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool.
The NodeSelector object can be replaced with a reference to the MachineSet object.
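To see the two object types side by side in a running cluster, you can list them both; these are standard commands, shown here only for illustration:
$ oc get machinesets -n openshift-machine-api
$ oc get machineconfigpools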
5.4. Recommended node host practices
The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:
- Increased CPU utilization.
- Slow pod scheduling.
- Potential out-of-memory scenarios, depending on the amount of memory in the node.
- Exhausting the pool of IP addresses.
- Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running.
Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are a large number of I/O-intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload.
podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.
kubeletConfig:
  podsPerCore: 10
Setting podsPerCore to 0 disables this limit. The default is 0. podsPerCore cannot exceed maxPods.
maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node.
kubeletConfig:
  maxPods: 250
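When both parameters are set, the lower of the resulting limits applies: with 4 cores, podsPerCore: 10 caps the node at 40 pods even though maxPods allows 250. A minimal sketch of a KubeletConfig CR that sets both follows; the name and pool label are assumptions for illustration:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-pod-limits
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-pod-limits
  kubeletConfig:
    podsPerCore: 10   # 10 pods x 4 cores = 40 pods on a 4-core node
    maxPods: 250      # upper bound; the lower of the two values wins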
5.4.1. Creating a KubeletConfig CRD to edit kubelet parameters
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters.
Note: As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable.
Consider the following guidance:
- Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools.
- Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes.
- As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet. With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the next kubelet machine config is appended with -3.
If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config.
Note: If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs.
Example KubeletConfig CR
$ oc get kubeletconfig
NAME AGE
set-max-pods 15m
Example showing a KubeletConfig machine config
$ oc get mc | grep kubelet
...
99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m
...
The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes.
Prerequisites
Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-08T14:52:39Z
  generation: 1
  labels:
    custom-kubelet: set-max-pods 1
- 1
- If a label has been added, it appears under labels.
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=set-max-pods
Procedure
View the available machine configuration objects that you can select:
$ oc get machineconfig
By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.
Check the current value for the maximum pods per node:
$ oc describe node <node_name>
For example:
$ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94
Look for value: pods: <value> in the Allocatable stanza:
Example output
Allocatable:
  attachable-volumes-aws-ebs:  25
  cpu:                         3500m
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      15341844Ki
  pods:                        250
Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods 1
  kubeletConfig:
    maxPods: 500 2
Note: The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: <pod_count>
    kubeAPIBurst: <burst_rate>
    kubeAPIQPS: <QPS>
$ oc label machineconfigpool worker custom-kubelet=large-podsCreate the
object:KubeletConfig$ oc create -f change-maxPods-cr.yamlVerify that the
object is created:KubeletConfig$ oc get kubeletconfigExample output
NAME AGE set-max-pods 15mDepending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
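One way to watch the rollout from the command line is to follow the machine config pool status (a sketch; the -w flag streams updates until you interrupt it):
$ oc get machineconfigpool worker -w
The UPDATED column reports True once all nodes in the pool have applied the new configuration.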
Verify that the changes are applied to the node:
Check on a worker node that the maxPods value changed:
$ oc describe node <node_name>
Locate the Allocatable stanza:
...
Allocatable:
  attachable-volumes-gce-pd:  127
  cpu:                        3500m
  ephemeral-storage:          123201474766
  hugepages-1Gi:              0
  hugepages-2Mi:              0
  memory:                     14225400Ki
  pods:                       500 1
...
- 1
- In this example, the pods parameter should report the value you set in the KubeletConfig object.
Verify the change in the KubeletConfig object:
$ oc get kubeletconfigs set-max-pods -o yaml
This should show a status: "True" and type: Success:
spec:
  kubeletConfig:
    maxPods: 500
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
status:
  conditions:
  - lastTransitionTime: "2021-06-30T17:04:07Z"
    message: Success
    status: "True"
    type: Success
5.4.3. Control plane node sizing
The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts:
- 12 image streams
- 3 build configurations
- 6 builds
- 1 deployment with 2 pod replicas mounting two secrets each
- 2 deployments with 1 pod replica mounting two secrets
- 3 services pointing to the previous deployments
- 3 routes pointing to the previous deployments
- 10 secrets, 2 of which are mounted by the previous deployments
- 10 config maps, 2 of which are mounted by the previous deployments
| Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) |
|---|---|---|---|
| 25 | 500 | 4 | 16 |
| 100 | 1000 | 8 | 32 |
| 250 | 4000 | 16 | 96 |
On a large and dense cluster with three control plane (master) nodes, the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, or underlying infrastructure, in addition to intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources.
The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase.
Operator Lifecycle Manager (OLM) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing.
| Number of namespaces | OLM memory at idle state (GB) | OLM memory with 5 user operators installed (GB) |
|---|---|---|
| 500 | 0.823 | 1.7 |
| 1000 | 1.2 | 2.5 |
| 1500 | 1.7 | 3.2 |
| 2000 | 2 | 4.4 |
| 3000 | 2.7 | 5.6 |
| 4000 | 3.8 | 7.6 |
| 5000 | 4.2 | 9.02 |
| 6000 | 5.8 | 11.3 |
| 7000 | 6.6 | 12.9 |
| 8000 | 6.9 | 14.8 |
| 9000 | 8 | 17.7 |
| 10,000 | 9.9 | 21.6 |
If you used an installer-provisioned infrastructure installation method, you cannot modify the control plane node size in a running OpenShift Container Platform 4.8 cluster. Instead, you must estimate your total node count and use the suggested control plane node size during installation.
The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin.
In OpenShift Container Platform 4.8, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration.
5.4.4. Setting up CPU Manager
Procedure
Optional: Label a node:
# oc label node perf-node.example.com cpumanager=true
Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled:
# oc edit machineconfigpool worker
Add a label to the worker machine config pool:
metadata:
  creationTimestamp: 2020-xx-xxx
  generation: 3
  labels:
    custom-kubelet: cpumanager-enabled
Create a KubeletConfig, cpumanager-kubeletconfig.yaml, custom resource (CR). Refer to the label created in the previous step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static 1
    cpuManagerReconcilePeriod: 5s 2
- 1
- Specify a policy:
  - none. This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically.
  - static. This policy allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node.
- 2
- Optional. Specify the CPU Manager reconcile frequency. The default is 5s.
Create the dynamic kubelet config:
# oc create -f cpumanager-kubeletconfig.yaml
This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed.
Check for the merged kubelet config:
# oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7
Example output
"ownerReferences": [
    {
        "apiVersion": "machineconfiguration.openshift.io/v1",
        "kind": "KubeletConfig",
        "name": "cpumanager-enabled",
        "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878"
    }
]
Check the worker for the updated kubelet.conf:
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager
Example output
cpuManagerPolicy: static
cpuManagerReconcilePeriod: 5s
Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod:
# cat cpumanager-pod.yaml
Example output
apiVersion: v1
kind: Pod
metadata:
  generateName: cpumanager-
spec:
  containers:
  - name: cpumanager
    image: gcr.io/google_containers/pause-amd64:3.0
    resources:
      requests:
        cpu: 1
        memory: "1G"
      limits:
        cpu: 1
        memory: "1G"
  nodeSelector:
    cpumanager: "true"
Create the pod:
# oc create -f cpumanager-pod.yaml
Verify that the pod is scheduled to the node that you labeled:
# oc describe pod cpumanager
Example output
Name:               cpumanager-6cqz7
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               perf-node.example.com/xxx.xx.xx.xxx
...
Limits:
  cpu:     1
  memory:  1G
Requests:
  cpu:     1
  memory:  1G
...
QoS Class:       Guaranteed
Node-Selectors:  cpumanager=true
Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process:
# ├─init.scope
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
└─kubepods.slice
  ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice
  │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope
  │ └─32706 /pause
Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice. Pods of other QoS tiers end up in child cgroups of kubepods:
# cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
# for i in `ls cpuset.cpus tasks` ; do echo -n "$i "; cat $i ; done
Example output
cpuset.cpus 1
tasks 32706
Check the allowed CPU list for the task:
# grep ^Cpus_allowed_list /proc/32706/status
Example output
Cpus_allowed_list: 1
Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod:
# cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus
0
# oc describe node perf-node.example.com
Example output
...
Capacity:
  attachable-volumes-aws-ebs:  39
  cpu:                         2
  ephemeral-storage:           124768236Ki
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      8162900Ki
  pods:                        250
Allocatable:
  attachable-volumes-aws-ebs:  39
  cpu:                         1500m
  ephemeral-storage:           124768236Ki
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      7548500Ki
  pods:                        250
-------   ----               ------------  ----------  ---------------  -------------  ---
default   cpumanager-6cqz7   1 (66%)       1 (66%)     1G (12%)         1G (12%)       29m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       1440m (96%)  1 (66%)
This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled:
NAME               READY   STATUS    RESTARTS   AGE
cpumanager-6cqz7   1/1     Running   0          33m
cpumanager-7qc2t   0/1     Pending   0          11s
5.5. Huge pages
Understand and configure huge pages.
5.5.1. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.
5.5.2. How huge pages are consumed by apps
Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size.
Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size>, where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi. Unlike CPU or memory, huge pages do not support overcommitment.
apiVersion: v1
kind: Pod
metadata:
generateName: hugepages-volume-
spec:
containers:
- securityContext:
privileged: true
image: rhel7:latest
command:
- sleep
- inf
name: example
volumeMounts:
- mountPath: /dev/hugepages
name: hugepage
resources:
limits:
hugepages-2Mi: 100Mi
memory: "1Gi"
cpu: "1"
volumes:
- name: hugepage
emptyDir:
medium: HugePages
- 1
- Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly.
Allocating huge pages of a specific size
Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size>. The <size> value must be specified in bytes with an optional scale suffix [kKmMgG]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter.
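For example, a kernel command line that pre-allocates both common x86_64 sizes might look like the following sketch; the page counts are illustrative only:
default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=512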
Huge page requirements
- Huge page requests must equal the limits. This is the default if limits are specified, but requests are not.
- Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration.
- EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request.
- Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group.
5.5.3. Configuring huge pages
Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes.
5.5.3.1. At boot time
Procedure
To minimize node reboots, the order of the steps below needs to be followed:
Label all nodes that need the same huge pages setting by a label.
$ oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=
Create a file with the following content and name it hugepages-tuned-boottime.yaml:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: hugepages
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Boot time configuration for hugepages
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50
    name: openshift-node-hugepages
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-hp"
    priority: 30
    profile: openshift-node-hugepages
Create the Tuned hugepages object:
$ oc create -f hugepages-tuned-boottime.yaml
Create a file with the following content and name it hugepages-mcp.yaml:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp
  labels:
    worker-hp: ""
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-hp: ""
Create the machine config pool:
$ oc create -f hugepages-mcp.yaml
Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated.
$ oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}"
100Mi
This functionality is currently only supported on Red Hat Enterprise Linux CoreOS (RHCOS) 8.x worker nodes. On Red Hat Enterprise Linux (RHEL) 7.x worker nodes, the TuneD [bootloader] plugin is currently not supported.
5.6. Understanding device plugins
The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them.
OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors.
A device plugin is a gRPC service running on the nodes (external to the kubelet) that is responsible for managing specific hardware resources. Any device plugin must support the following remote procedure calls (RPCs):
service DevicePlugin {
// GetDevicePluginOptions returns options to be communicated with Device
// Manager
rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
// ListAndWatch returns a stream of List of Devices
// Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
// Allocate is called during container creation so that the Device
// Plug-in can run device specific operations and instruct Kubelet
// of the steps to make the Device available in the container
rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
// PreStartContainer is called, if indicated by Device Plug-in during
// registration phase, before each container start. Device plug-in
// can run device specific operations such as resetting the device
// before making devices available to the container
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
}
Example device plugins
For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go.
5.6.1. Methods for deploying a device plugin
- Daemon sets are the recommended approach for device plugin deployments.
- Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager.
- Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context.
- More specific details regarding deployment steps can be found with each device plugin implementation.
5.6.2. Understanding the Device Manager
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins.
You can advertise specialized hardware without requiring any upstream code changes.
OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors.
Device Manager advertises devices as Extended Resources. User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource.
Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests.
Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager keeps watching the stream for new updates from the plugin. On the plugin side, the plugin keeps the stream open, and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection.
While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks in its database to verify if a corresponding plugin exists or not. If the plugin exists and there are allocatable devices in the local cache, the Allocate RPC is invoked at that particular device plugin.
Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation.
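As an illustration only, a pod requests a device the same way it requests any other extended resource; the resource name example.com/device and the image below are hypothetical placeholders for whatever a vendor's plugin advertises:
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    resources:
      limits:
        example.com/device: 1   # hypothetical resource advertised by a device plugin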
5.6.3. Enabling Device Manager
Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes.
Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins.
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command. Perform one of the following steps:
View the machine config:
# oc describe machineconfig <name>
For example:
# oc describe machineconfig 00-worker
Example output
Name:      00-worker
Namespace:
Labels:    machineconfiguration.openshift.io/role=worker 1
- 1
- Label required for the Device Manager.
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a Device Manager CR
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: devicemgr
spec:
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io: devicemgr
  kubeletConfig:
    feature-gates:
      - DevicePlugins=true
Create the Device Manager:
$ oc create -f devicemgr.yaml
Example output
kubeletconfig.machineconfiguration.openshift.io/devicemgr created
- Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled.
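One way to check for the socket, assuming you can reach the node with oc debug (a sketch, not part of the official procedure):
$ oc debug node/<node_name>
sh-4.4# chroot /host ls /var/lib/kubelet/device-plugins/kubelet.sock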
5.7. Taints and tolerations
Understand and work with taints and tolerations.
5.7.1. Understanding taints and tolerations
A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration.
You apply taints to a node through the Node specification (NodeSpec) and apply tolerations to a pod through the Pod specification (PodSpec).
Example taint in a node specification
spec:
taints:
- effect: NoExecute
key: key1
value: value1
....
Example toleration in a Pod spec
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
tolerationSeconds: 3600
....
Taints and tolerations consist of a key, value, and effect.
| Parameter | Description |
|---|---|
| key | The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. |
| value | The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. |
| effect | The effect is one of the following: NoSchedule: new pods that do not match the taint are not scheduled onto that node; existing pods on the node remain. PreferNoSchedule: new pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to; existing pods on the node remain. NoExecute: new pods that do not match the taint cannot be scheduled onto that node; existing pods on the node that do not have a matching toleration are removed. |
| operator | Equal: the key, value, and effect parameters must match. This is the default. Exists: the key and effect parameters must match. You must leave a blank value parameter, which matches any. |
If you add a NoSchedule taint to a control plane node (also known as the master node), the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.
For example:
apiVersion: v1
kind: Node
metadata:
  annotations:
    machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0
    machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c
...
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
A toleration matches a taint:
If the operator parameter is set to Equal:

- the key parameters are the same;
- the value parameters are the same;
- the effect parameters are the same.

If the operator parameter is set to Exists:

- the key parameters are the same;
- the effect parameters are the same.
The following taints are built into OpenShift Container Platform:
- node.kubernetes.io/not-ready: The node is not ready. This corresponds to the node condition Ready=False.
- node.kubernetes.io/unreachable: The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown.
- node.kubernetes.io/memory-pressure: The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True.
- node.kubernetes.io/disk-pressure: The node has disk pressure issues. This corresponds to the node condition DiskPressure=True.
- node.kubernetes.io/network-unavailable: The node network is unavailable.
- node.kubernetes.io/unschedulable: The node is unschedulable.
- node.cloudprovider.kubernetes.io/uninitialized: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
- node.kubernetes.io/pid-pressure: The node has pid pressure. This corresponds to the node condition PIDPressure=True.

Important: OpenShift Container Platform does not set a default pid.available evictionHard threshold.
5.7.1.1. Understanding how to use toleration seconds to delay pod evictions
You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and has the tolerationSeconds parameter set remains bound to the node for that time period before being evicted.
For example:
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
tolerationSeconds: 3600
Here, if a taint with the NoExecute effect is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted.
5.7.1.2. Understanding how to use multiple taints
You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows:
- Process the taints for which the pod has a matching toleration.
The remaining unmatched taints have the indicated effects on the pod:
- If there is at least one unmatched taint with effect NoSchedule, OpenShift Container Platform cannot schedule a pod onto that node.
- If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule, OpenShift Container Platform tries to not schedule the pod onto the node.
- If there is at least one unmatched taint with effect NoExecute, OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node.
  - Pods that do not tolerate the taint are evicted immediately.
  - Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever.
  - Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time.
For example:
Add the following taints to the node:
$ oc adm taint nodes node1 key1=value1:NoSchedule

$ oc adm taint nodes node1 key1=value1:NoExecute

$ oc adm taint nodes node1 key2=value2:NoSchedule

The pod has the following tolerations:

spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod.
5.7.1.3. Understanding pod scheduling and node conditions (taint node by condition)
The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration.
The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations.
To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons:
- node.kubernetes.io/memory-pressure
- node.kubernetes.io/disk-pressure
- node.kubernetes.io/unschedulable (1.10 or later)
- node.kubernetes.io/network-unavailable (host network only)
You can also add arbitrary tolerations to daemon sets.
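For example, a daemon set can carry an extra toleration in its pod template, as in the following sketch; the taint key and object names are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon
spec:
  selector:
    matchLabels:
      app: example-daemon
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      tolerations:
      - key: "dedicated"           # illustrative taint key
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: example
        image: nginx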
The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node.
5.7.1.4. Understanding evicting pods by condition (taint-based evictions)
The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable. When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node and starts evicting and rescheduling the pods on different nodes.

Taint-Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter.

The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed.

If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions.
OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes.
By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced.
For more information, see Rate limits on eviction in the Kubernetes documentation.
OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless the Pod configuration specifies either toleration:
spec:
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions is detected.
You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to the node longer in the event of a network partition, allowing for the partition to recover and avoiding pod eviction.
Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds:

- node.kubernetes.io/unreachable
- node.kubernetes.io/not-ready
As a result, daemon set pods are never evicted because of these node conditions.
5.7.1.5. Tolerating all taints
You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and value parameters.
Pod spec for tolerating all taints
spec:
tolerations:
- operator: "Exists"
5.7.2. Adding taints and tolerations
You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on it. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node, to avoid pods being removed from the node before you can add the toleration.
Procedure
Add a toleration to a pod by editing the Pod spec to include a tolerations stanza:

Sample pod configuration file with an Equal operator

spec:
  tolerations:
  - key: "key1"
    value: "value1"
    operator: "Equal"
    effect: "NoExecute"
    tolerationSeconds: 3600

The toleration parameters are described in the Taint and toleration components table. The tolerationSeconds parameter specifies how long a pod stays bound to a node before being evicted.

For example:
Sample pod configuration file with an Exists operator
spec:
  tolerations:
  - key: "key1"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 3600

The Exists operator does not take a value.

Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>

For example:

$ oc adm taint nodes node1 key1=value1:NoExecute

This command places a taint on node1 that has key key1, value value1, and effect NoExecute.

Note: If you add a NoSchedule taint to a control plane node (also known as the master node), the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example:

apiVersion: v1
kind: Node
metadata:
  annotations:
    machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0
    machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c
...
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...

The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1.
5.7.3. Adding taints and tolerations using a machine set
You can add taints to nodes using a machine set. All nodes associated with the MachineSet object are updated with the taint.
Procedure
Add a toleration to a pod by editing the Pod spec to include a tolerations stanza:

Sample pod configuration file with Equal operator

spec:
  tolerations:
  - key: "key1"
    value: "value1"
    operator: "Equal"
    effect: "NoExecute"
    tolerationSeconds: 3600

For example:

Sample pod configuration file with Exists operator

spec:
  tolerations:
  - key: "key1"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 3600

Add the taint to the MachineSet object:

Edit the MachineSet YAML for the nodes you want to taint, or create a new MachineSet object:

$ oc edit machineset <machineset>

Add the taint to the spec.template.spec section:

Example taint in a machine set specification

spec:
  ....
  template:
    ....
    spec:
      taints:
      - effect: NoExecute
        key: key1
        value: value1
      ....

This example places a taint that has the key key1, value value1, and taint effect NoExecute on the nodes.

Scale down the machine set to 0:
$ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api

Tip: You can alternatively apply the following YAML to scale the machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 0

Wait for the machines to be removed.
Scale up the machine set as needed:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

Or:

$ oc edit machineset <machineset> -n openshift-machine-api

Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object.
5.7.4. Binding a user to a node using taints and tolerations
If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster.
If you want to ensure that the pods are scheduled only to those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label.
Procedure
To configure a node so that users can use only that node:
Add a corresponding taint to those nodes:
For example:
$ oc adm taint nodes node1 dedicated=groupName:NoSchedule

Tip: You can alternatively apply the following YAML to add the taint:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    ...
spec:
  taints:
  - key: dedicated
    value: groupName
    effect: NoSchedule

- Add a toleration to the pods by writing a custom admission controller.
5.7.5. Controlling nodes with special hardware using taints and tolerations
In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes.
You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware.
Procedure
To ensure nodes with specialized hardware are reserved for specific pods:
Add a toleration to pods that need the special hardware.
For example:
spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600Taint the nodes that have the specialized hardware using one of the following commands:
$ oc adm taint nodes <node-name> disktype=ssd:NoSchedule

Or:

$ oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule

Tip: You can alternatively apply the following YAML to add the taint:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    ...
spec:
  taints:
  - key: disktype
    value: ssd
    effect: PreferNoSchedule
5.7.6. Removing taints and tolerations
You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure
To remove taints and tolerations:
To remove a taint from a node:
$ oc adm taint nodes <node-name> <key>-

For example:

$ oc adm taint nodes ip-10-0-132-248.ec2.internal key1-

Example output

node/ip-10-0-132-248.ec2.internal untainted

To remove a toleration from a pod, edit the Pod spec to remove the toleration:

spec:
  tolerations:
  - key: "key2"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 3600
5.8. Topology Manager
Understand and work with Topology Manager.
5.8.1. Topology Manager policies
Topology Manager aligns Pod resources of all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources.

To align CPU resources with other requested resources in a Pod spec, the CPU Manager must be enabled with the static CPU Manager policy.

Topology Manager supports four allocation policies, which you assign in the cpumanager-enabled custom resource (CR):
none policy
This is the default policy and does not perform any topology alignment.

best-effort policy
For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node anyway.

restricted policy
For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure.

single-numa-node policy
For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure.
5.8.2. Setting up Topology Manager
To use Topology Manager, you must configure an allocation policy in the cpumanager-enabled custom resource (CR).

Prerequisites

- Configure the CPU Manager policy to be static. See Using CPU Manager in the Scalability and Performance section.
Procedure
To activate Topology Manager:

Configure the Topology Manager allocation policy in the cpumanager-enabled custom resource (CR):

$ oc edit KubeletConfig cpumanager-enabled

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node

The cpuManagerPolicy parameter must be static, and the topologyManagerPolicy parameter specifies the selected Topology Manager allocation policy, in this example single-numa-node.
5.8.3. Pod interactions with Topology Manager policies
The example Pod specs below illustrate pod interactions with Topology Manager.

The following pod runs in the BestEffort QoS class because no resource requests or limits are specified.
spec:
containers:
- name: nginx
image: nginx
The next pod runs in the Burstable QoS class because requests are less than limits.
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
If the selected policy is anything other than none, Topology Manager would not consider either of these Pod specifications.
The last example pod below runs in the Guaranteed QoS class because requests are equal to limits.
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
memory: "200Mi"
cpu: "2"
example.com/device: "1"
requests:
memory: "200Mi"
cpu: "2"
example.com/device: "1"
Topology Manager would consider this pod. The Topology Manager consults the CPU Manager static policy, which returns the topology of available CPUs. Topology Manager also consults Device Manager to discover the topology of available devices for example.com/device.
Topology Manager will use this information to store the best Topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
5.9. Resource requests and overcommitment
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.
The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.
Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted.
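For example, the following container resources stanza (a minimal sketch) is scheduled against its 1Gi memory request but can consume up to its 2Gi limit, giving 200% overcommit:

spec:
  containers:
  - name: app              # illustrative name
    image: nginx
    resources:
      requests:
        memory: "1Gi"      # used for scheduling decisions
      limits:
        memory: "2Gi"      # hard cap enforced on the node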
5.10. Cluster-level overcommit using the Cluster Resource Override Operator
The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.
You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in this example:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
name: cluster
spec:
podResourceOverride:
spec:
memoryRequestToLimitPercent: 50
cpuRequestToLimitPercent: 25
limitCPUToMemoryPercent: 200
The name must be cluster. The three podResourceOverride fields are optional. If a container memory limit has been specified or defaulted, memoryRequestToLimitPercent overrides the memory request to this percentage of the limit, between 1-100; the default is 50. If a container CPU limit has been specified or defaulted, cpuRequestToLimitPercent overrides the CPU request to this percentage of the limit, between 1-100; the default is 25. If a container memory limit has been specified or defaulted, limitCPUToMemoryPercent overrides the CPU limit to a percentage of the memory limit; scaling 1Gi of RAM at 100 percent is equal to 1 CPU core, and this is processed prior to overriding the CPU request, if configured; the default is 200.
The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project, or configure limits in Pod specs, for the overrides to apply.
When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project:
apiVersion: v1
kind: Namespace
metadata:
....
labels:
clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
....
The Operator watches for the ClusterResourceOverride CR and ensures the ClusterResourceOverride admission webhook is installed into the same namespace as the Operator.
5.10.1. Installing the Cluster Resource Override Operator using the web console
You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the OpenShift Container Platform web console:
In the OpenShift Container Platform web console, navigate to Home → Projects
- Click Create Project.
- Specify clusterresourceoverride-operator as the name of the project.
- Click Create.
Navigate to Operators → OperatorHub.
- Choose ClusterResourceOverride Operator from the list of available Operators and click Install.
- On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode.
- Make sure clusterresourceoverride-operator is selected for Installed Namespace.
- Select an Update Channel and Approval Strategy.
- Click Install.
On the Installed Operators page, click ClusterResourceOverride.
- On the ClusterResourceOverride Operator details page, click Create Instance.
On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200

The name must be cluster. The memoryRequestToLimitPercent, cpuRequestToLimitPercent, and limitCPUToMemoryPercent fields are optional and behave as described earlier in this section.
- Click Create.
Check the current state of the admission webhook by checking the status of the cluster custom resource:
- On the ClusterResourceOverride Operator page, click cluster.
On the ClusterResourceOverride Details page, click YAML. The mutatingWebhookConfigurationRef section appears when the webhook is called:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
  ....
  mutatingWebhookConfigurationRef:
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
  ....

The mutatingWebhookConfigurationRef field references the ClusterResourceOverride admission webhook.
5.10.2. Installing the Cluster Resource Override Operator using the CLI
You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To install the Cluster Resource Override Operator using the CLI:
Create a namespace for the Cluster Resource Override Operator:
Create a Namespace object YAML file (for example, cro-namespace.yaml) for the Cluster Resource Override Operator:

apiVersion: v1
kind: Namespace
metadata:
  name: clusterresourceoverride-operator

Create the namespace:
$ oc create -f <file-name>.yaml

For example:
$ oc create -f cro-namespace.yaml
Create an Operator group:
Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: clusterresourceoverride-operator
  namespace: clusterresourceoverride-operator
spec:
  targetNamespaces:
    - clusterresourceoverride-operator

Create the Operator group:
$ oc create -f <file-name>.yaml

For example:
$ oc create -f cro-og.yaml
Create a subscription:
Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: clusterresourceoverride
  namespace: clusterresourceoverride-operator
spec:
  channel: "4.8"
  name: clusterresourceoverride
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Create the subscription:
$ oc create -f <file-name>.yaml

For example:
$ oc create -f cro-sub.yaml
Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace:

Change to the clusterresourceoverride-operator namespace:

$ oc project clusterresourceoverride-operator

Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200

The name must be cluster. The memoryRequestToLimitPercent, cpuRequestToLimitPercent, and limitCPUToMemoryPercent fields are optional and behave as described earlier in this chapter.
Create the ClusterResourceOverride object:

$ oc create -f <file-name>.yaml

For example:
$ oc create -f cro-cr.yaml
Verify the current state of the admission webhook by checking the status of the cluster custom resource.
$ oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml

The mutatingWebhookConfigurationRef section appears when the webhook is called.

Example output
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}}
  creationTimestamp: "2019-12-18T22:35:02Z"
  generation: 1
  name: cluster
  resourceVersion: "127622"
  selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster
  uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
status:
  ....
  mutatingWebhookConfigurationRef:
    apiVersion: admissionregistration.k8s.io/v1beta1
    kind: MutatingWebhookConfiguration
    name: clusterresourceoverrides.admission.autoscaling.openshift.io
    resourceVersion: "127621"
    uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3
  ....

The mutatingWebhookConfigurationRef field references the ClusterResourceOverride admission webhook.
5.10.3. Configuring cluster-level overcommit
The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit.
Prerequisites
- The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply.
Procedure
To modify cluster-level overcommit:
Edit the ClusterResourceOverride CR:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200

The memoryRequestToLimitPercent, cpuRequestToLimitPercent, and limitCPUToMemoryPercent fields are optional and behave as described earlier in this chapter.
Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:

apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
  ...

Add this label to each project.
5.11. Node-level overcommit
You can use various methods to control overcommit on specific nodes, such as quality of service (QoS) guarantees, CPU limits, or reserving resources. You can also disable overcommit for specific nodes and specific projects.
5.11.1. Understanding compute resources and containers
The node-enforced behavior for compute resources is specific to the resource type.
5.11.1.1. Understanding container CPU requests
A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container.
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled.
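The following pod sketch illustrates that 2:1 distribution; the pod and container names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-shares-demo
spec:
  containers:
  - name: app-a
    image: nginx
    resources:
      requests:
        cpu: "500m"        # receives excess CPU in a 2:1 ratio relative to app-b
  - name: app-b
    image: nginx
    resources:
      requests:
        cpu: "250m"
      limits:
        cpu: "1"           # CFS quota throttles this container above one core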
5.11.1.2. Understanding container memory requests
A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node’s resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount.
5.11.2. Understanding overcommitment and quality of service classes
A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity.
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class.
A pod is designated as one of three QoS classes with decreasing order of priority:
| Priority | Class Name | Description |
|---|---|---|
| 1 (highest) | Guaranteed | If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed. |
| 2 | Burstable | If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable. |
| 3 (lowest) | BestEffort | If requests and limits are not set for any of the resources, then the pod is classified as BestEffort. |
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first:
- Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.
- Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist.
- BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory.
5.11.2.1. Understanding how to reserve memory across quality of service tiers
You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a specific QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes. A configuration sketch follows this list.

OpenShift Container Platform uses the qos-reserved parameter as follows:

- A value of qos-reserved=memory=100% prevents the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads.
- A value of qos-reserved=memory=50% allows the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class.
- A value of qos-reserved=memory=0% allows the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature.
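The following is a minimal sketch of how such a reservation might be expressed through a KubeletConfig custom resource. It assumes the upstream kubelet qosReserved setting, which is an alpha feature behind the QOSReserved feature gate; the object name and pool label are illustrative, and this is not a documented OpenShift Container Platform procedure:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: qos-reserved        # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    featureGates:
      QOSReserved: true     # assumption: alpha feature gate enabled through the kubelet config
    qosReserved:
      memory: "50%"         # reserve half of the memory requested by higher QoS classes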
5.11.3. Understanding swap memory and QOS
You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, resulting in pods not receiving the memory they requested in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event.
If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure.
5.11.4. Understanding nodes overcommitment
In an overcommitted environment, it is important to properly configure your node to provide best system behavior.
When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1, overriding the default operating system setting.

OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ sysctl -a |grep commit
Example output
vm.overcommit_memory = 1
$ sysctl -a |grep panic
Example output
vm.panic_on_oom = 0
The above flags should already be set on nodes, and no further action is required.
You can also perform the following configurations for each node:
- Disable or enforce CPU limits using CPU CFS quotas
- Reserve resources for system processes
- Reserve memory across quality of service tiers
5.11.5. Disabling or enforcing CPU limits using CPU CFS quotas
Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel.
If you disable CPU limit enforcement, it is important to understand the impact on your node:
- If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
- If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel.
- If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:

$ oc edit machineconfigpool <name>

For example:

$ oc edit machineconfigpool worker

Example output

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2022-11-16T15:34:25Z"
  generation: 4
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: ""
  name: worker

The label appears under Labels.

Tip: If the label is not present, add a key/value pair such as:

$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for disabling CPU limits

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: disable-cpu-units
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    cpuCfsQuota:
      - "false"

The metadata.name value names the CR, the matchLabels entry specifies the label from the machine config pool, and setting cpuCfsQuota to "false" disables CPU limit enforcement.

Run the following command to create the CR:
$ oc create -f <file_name>.yaml
5.11.6. Reserving resources for system processes
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory.
Procedure
To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes.
5.11.7. Disabling overcommitment for a node
When enabled, overcommitment can be disabled on each node.
Procedure
To disable overcommitment in a node, run the following command on that node:
$ sysctl -w vm.overcommit_memory=0
5.12. Project-level limits
To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
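For example, a LimitRange object such as the following sketch (the name and values are illustrative) sets per-container defaults and maximums for a project:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits    # illustrative name
spec:
  limits:
  - type: Container
    default:               # limit applied when a container specifies none
      cpu: "500m"
      memory: "512Mi"
    max:                   # upper bound that a container limit cannot exceed
      cpu: "2"
      memory: "1Gi"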
For information on project-level resource limits, see Additional resources.
Alternatively, you can disable overcommitment for specific projects.
5.12.1. Disabling overcommitment for a project
When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment.
Procedure
To disable overcommitment in a project:
- Edit the project object file
Add the following annotation:

quota.openshift.io/cluster-resource-override-enabled: "false"

Create the project object:
$ oc create -f <file-name>.yaml
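For example, the edited object might look like the following sketch, assuming a Namespace object; the project name is illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: <project_name>
  annotations:
    quota.openshift.io/cluster-resource-override-enabled: "false"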
5.13. Freeing node resources using garbage collection
Understand and use garbage collection.
5.13.1. Understanding how terminated containers are removed through garbage collection
Container garbage collection can be performed using eviction thresholds.
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers, and their logs will no longer be accessible using oc logs.
- eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period.
- eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action.
The following table lists the eviction thresholds:
| Node condition | Eviction signal | Description |
|---|---|---|
| MemoryPressure | memory.available | The available memory on the node. |
| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, imagefs.inodesFree | The available disk space or inodes on the node root file system, nodefs, or image file system, imagefs. |

For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly.
If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false. As a consequence, the scheduler could make poor scheduling decisions.

To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition.
5.13.2. Understanding how images are removed through garbage collection
Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node.
The policy for image garbage collection is based on two conditions:
- The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85.
- The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80.
For image garbage collection, you can modify any of the following variables using a custom resource.
| Setting | Description |
|---|---|
| imageMinimumGCAge | The minimum age for an unused image before the image is removed by garbage collection. The default is 2m. |
| imageGCHighThresholdPercent | The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85. |
| imageGCLowThresholdPercent | The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80. |
Two lists of images are retrieved in each garbage collector run:
- A list of images currently running in at least one pod.
- A list of images available on a host.
As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images keep the time stamps from previous runs. All images are then sorted by the time stamp.
Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
5.13.3. Configuring garbage collection for containers and images
As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool.

OpenShift Container Platform supports only one kubeletConfig object for each machine config pool.
You can configure any combination of the following:
- Soft eviction for containers
- Hard eviction for containers
- Eviction for images
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:

$ oc edit machineconfigpool <name>

For example:

$ oc edit machineconfigpool worker

Example output

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2022-11-16T15:34:25Z"
  generation: 4
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: ""
  name: worker

The label appears under Labels.

Tip: If the label is not present, add a key/value pair such as:

$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Important: If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction.

Sample configuration for a container garbage collection CR:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubeconfig
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    evictionSoft:
      memory.available: "500Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "10%"
    evictionSoftGracePeriod:
      memory.available: "1m30s"
      nodefs.available: "1m30s"
      nodefs.inodesFree: "1m30s"
      imagefs.available: "1m30s"
      imagefs.inodesFree: "1m30s"
    evictionHard:
      memory.available: "200Mi"
      nodefs.available: "5%"
      nodefs.inodesFree: "4%"
      imagefs.available: "10%"
      imagefs.inodesFree: "5%"
    evictionPressureTransitionPeriod: 0s
    imageMinimumGCAge: 5m
    imageGCHighThresholdPercent: 80
    imageGCLowThresholdPercent: 75

The metadata.name value names the object, and the machineConfigPoolSelector.matchLabels entry specifies the label from the machine config pool. The evictionSoft and evictionHard sections set eviction thresholds based on specific eviction trigger signals; for evictionHard you must specify all of these parameters, because if you do not, only the specified parameters are applied and the garbage collection will not function properly. The evictionSoftGracePeriod section sets grace periods for the soft eviction and does not apply to hard eviction. The evictionPressureTransitionPeriod parameter is the duration to wait before transitioning out of an eviction pressure condition. The imageMinimumGCAge parameter is the minimum age for an unused image before the image is removed by garbage collection, imageGCHighThresholdPercent is the percent of disk usage (expressed as an integer) that triggers image garbage collection, and imageGCLowThresholdPercent is the percent of disk usage (expressed as an integer) to which image garbage collection attempts to free.
Run the following command to create the CR:
$ oc create -f <file_name>.yaml

For example:

$ oc create -f gc-container.yaml

Example output
kubeletconfig.machineconfiguration.openshift.io/gc-container created
Verification
Verify that garbage collection is active by entering the following command. The machine config pool that you specified in the custom resource appears with UPDATING as True until the change is fully implemented:

$ oc get machineconfigpool

Example output
NAME     CONFIG                                   UPDATED   UPDATING
master   rendered-master-546383f80705bd5aeaba93   True      False
worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True
5.14. Using the Node Tuning Operator
Understand and use the Node Tuning Operator.
The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs.
The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal.
The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.
5.14.1. Accessing an example Node Tuning Operator specification
Use this process to access an example Node Tuning Operator specification.
Procedure
Run:
$ oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator
The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities.
While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, the functionality is enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator.
5.14.2. Custom tuning specification
The custom resource (CR) for the Operator has two major sections. The first section, profile:, is a list of TuneD profiles and their names. The second, recommend:, defines the profile selection logic.
Multiple custom tuning specifications can co-exist as multiple CRs in the Operator’s namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated.
Management state
The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:
- Managed: the Operator will update its operands as configuration resources are updated
- Unmanaged: the Operator will ignore changes to the configuration resources
- Removed: the Operator will remove its operands and resources the Operator provisioned
Profile data
The profile: section lists TuneD profiles and their names.

profile:
- name: tuned_profile_1
data: |
# TuneD profile specification
[main]
summary=Description of tuned_profile_1 profile
[sysctl]
net.ipv4.ip_forward=1
# ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD
# ...
- name: tuned_profile_n
data: |
# TuneD profile specification
[main]
summary=Description of tuned_profile_n profile
# tuned_profile_n profile settings
Recommended profiles
The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria.
recommend:
<recommend-item-1>
# ...
<recommend-item-n>
The individual items of the list:
- machineConfigLabels:
<mcLabels>
match:
<match>
priority: <priority>
profile: <tuned_profile_name>
operand:
debug: <bool>
The machineConfigLabels field is optional and holds a dictionary of key/value MachineConfig labels; the keys must be unique. The match field is an optional list; if it is omitted, a profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. The priority field sets the profile ordering priority; lower numbers mean higher priority (0 is the highest priority). The profile field names a TuneD profile to apply on a match, for example tuned_profile_1. The operand section is an optional operand configuration; its debug field turns debugging on or off for the TuneD daemon, where the options are true for on or false for off, and the default is false.
The <match> section is recursively defined as follows:
- label: <label_name>
value: <label_value>
type: <label_type>
<match>
If <match> is not omitted, all nested <match> sections must also evaluate to true. Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true. Therefore, the list acts as a logical OR operator.
If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name>. This involves finding all machine config pools with a machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools.
The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true, the machineConfigLabels item is not considered.
When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool.
Example: node or pod label based matching
- match:
  - label: tuned.openshift.io/elasticsearch
    match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
    type: pod
  priority: 10
  profile: openshift-control-plane-es
- match:
  - label: node-role.kubernetes.io/master
  - label: node-role.kubernetes.io/infra
  priority: 20
  profile: openshift-control-plane
- priority: 30
  profile: openshift-node
The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority (10) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false. If there is such a pod with the label, in order for the <match> section to evaluate to true, the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra.
If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane) is considered. This profile is applied if the containerized TuneD pod runs on a node with the node-role.kubernetes.io/master or node-role.kubernetes.io/infra label.

Finally, the profile openshift-node has the lowest priority of 30. It lacks the <match> section and, therefore, always matches. It acts as a profile catch-all to set the openshift-node profile if no other profile with higher priority matches on a given node.
Example: machine config pool based matching
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: openshift-node-custom
namespace: openshift-cluster-node-tuning-operator
spec:
profile:
- data: |
[main]
summary=Custom OpenShift node profile with an additional kernel parameter
include=openshift-node
[bootloader]
cmdline_openshift_node_custom=+skew_tick=1
name: openshift-node-custom
recommend:
- machineConfigLabels:
machineconfiguration.openshift.io/role: "worker-custom"
priority: 20
profile: openshift-node-custom
To minimize node reboots, label the target nodes with a label the machine config pool’s node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself.
5.14.3. Default profiles set on a cluster
The following are the default profiles set on a cluster.
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: "openshift"
    data: |
      [main]
      summary=Optimize systems running OpenShift (parent profile)
      include=${f:virt_check:virtual-guest:throughput-performance}
      [selinux]
      avc_cache_threshold=8192
      [net]
      nf_conntrack_hashsize=131072
      [sysctl]
      net.ipv4.ip_forward=1
      kernel.pid_max=>4194304
      net.netfilter.nf_conntrack_max=1048576
      net.ipv4.conf.all.arp_announce=2
      net.ipv4.neigh.default.gc_thresh1=8192
      net.ipv4.neigh.default.gc_thresh2=32768
      net.ipv4.neigh.default.gc_thresh3=65536
      net.ipv6.neigh.default.gc_thresh1=8192
      net.ipv6.neigh.default.gc_thresh2=32768
      net.ipv6.neigh.default.gc_thresh3=65536
      vm.max_map_count=262144
      [sysfs]
      /sys/module/nvme_core/parameters/io_timeout=4294967295
      /sys/module/nvme_core/parameters/max_retries=10
  - name: "openshift-control-plane"
    data: |
      [main]
      summary=Optimize systems running OpenShift control plane
      include=openshift
      [sysctl]
      # ktune sysctl settings, maximizing i/o throughput
      #
      # Minimal preemption granularity for CPU-bound tasks:
      # (default: 1 msec#  (1 + ilog(ncpus)), units: nanoseconds)
      kernel.sched_min_granularity_ns=10000000
      # The total time the scheduler will consider a migrated process
      # "cache hot" and thus less likely to be re-migrated
      # (system default is 500000, i.e. 0.5 ms)
      kernel.sched_migration_cost_ns=5000000
      # SCHED_OTHER wake-up granularity.
      #
      # Preemption granularity when tasks wake up. Lower the value to
      # improve wake-up latency and throughput for latency critical tasks.
      kernel.sched_wakeup_granularity_ns=4000000
  - name: "openshift-node"
    data: |
      [main]
      summary=Optimize systems running OpenShift nodes
      include=openshift
      [sysctl]
      net.ipv4.tcp_fastopen=3
      fs.inotify.max_user_watches=65536
      fs.inotify.max_user_instances=8192
  recommend:
  - profile: "openshift-control-plane"
    priority: 30
    match:
    - label: "node-role.kubernetes.io/master"
    - label: "node-role.kubernetes.io/infra"
  - profile: "openshift-node"
    priority: 40
5.14.4. Supported TuneD daemon plugins
Excluding the [main] section, the following TuneD plugins are supported when used in custom profiles defined in the profile: section of a Tuned CR:
- audio
- cpu
- disk
- eeepc_she
- modules
- mounts
- net
- scheduler
- scsi_host
- selinux
- sysctl
- sysfs
- usb
- video
- vm
There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported:
- bootloader
- script
- systemd
See Available TuneD Plugins and Getting Started with TuneD for more information.
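For example, a custom profile that stays within the supported plugin set might add a single [sysctl] setting on top of openshift-node; a minimal sketch, in which the profile name, label, priority, and the vm.dirty_ratio value are all illustrative:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-sysctl-example
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Example profile using only the supported sysctl plugin
      include=openshift-node
      [sysctl]
      vm.dirty_ratio=20
    name: openshift-node-sysctl-example
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 25
    profile: openshift-node-sysctl-example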
5.15. Configuring the maximum number of pods per node
Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.
The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. If you use both options, the lower of the two limits the number of pods on a node.
For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40.
Prerequisites
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:
$ oc edit machineconfigpool <name>
For example:
$ oc edit machineconfigpool worker
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2022-11-16T15:34:25Z"
  generation: 4
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" 1
  name: worker
- 1
- The label appears under Labels.
Tip
If the label is not present, add a key/value pair such as:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Procedure
Create a custom resource (CR) for your configuration change.
Sample configuration for a max-pods CR
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    podsPerCore: 10
    maxPods: 250
Note
Setting podsPerCore to 0 disables this limit.
In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250. This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor.
Run the following command to create the CR:
$ oc create -f <file_name>.yaml
Verification
List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller:
$ oc get machineconfigpools
Example output
NAME     CONFIG                        UPDATED   UPDATING   DEGRADED
master   master-9cc2c72f205e103bb534   False     False      False
worker   worker-8cecd1236b33ee3f8a5e   False     True       False
Once the change is complete, the UPDATED column reports True.
$ oc get machineconfigpools
Example output
NAME     CONFIG                        UPDATED   UPDATING   DEGRADED
master   master-9cc2c72f205e103bb534   False     True       False
worker   worker-8cecd1236b33ee3f8a5e   True      False      False
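To double-check the resulting limit on a specific node after the pools finish updating, you can read the node's reported pod capacity; a quick sketch, where <node_name> is a placeholder:
$ oc get node <node_name> -o jsonpath='{.status.capacity.pods}{"\n"}'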
Chapter 6. Post-installation network configuration
After installing OpenShift Container Platform, you can further expand and customize your network to your requirements.
6.1. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
- clusterNetwork - IP address pools from which pod IP addresses are allocated.
- serviceNetwork - IP address pool for services.
- defaultNetwork.type - Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
After cluster installation, you cannot modify the fields listed in the previous section.
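You can inspect the resulting CNO configuration on a running cluster; for example:
$ oc get networks.operator.openshift.io cluster -o yaml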
6.2. Enabling the cluster-wide proxy
The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated, but it has a nil spec. For example:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status:
A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object.
Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Prerequisites
- Cluster administrator permissions
- OpenShift Container Platform CLI tool (oc) installed
Procedure
Create a config map that contains any additional CA certificates required for proxying HTTPS connections.
Note
You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates:
apiVersion: v1
data:
  ca-bundle.crt: |
    <MY_PEM_ENCODED_CERTS>
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
Create the config map from this file:
$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
  readinessEndpoints:
  - http://www.google.com 4
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http. This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses.
- 3
- A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
- 4
- One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
- 5
- A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- Save the file to apply the changes.
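To confirm the rendered proxy settings after the change is applied, you can read the object back; for example:
$ oc get proxy/cluster -o yaml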
6.3. Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}
Note that the spec section contains both a private and a public zone.
custom resource to remove the public zone:DNS$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patchedBecause the Ingress Controller consults the
definition when it createsDNSobjects, when you create or modifyIngressobjects, only private records are created.IngressImportantDNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructure_id>-int
      kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
6.4. Configuring ingress cluster traffic
OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster:
- If you have HTTP/HTTPS, use an Ingress Controller.
- If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller.
- Otherwise, use a load balancer, an external IP, or a node port.
| Method | Purpose |
|---|---|
| Use an Ingress Controller | Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. |
| Automatically assign an external IP by using a load balancer service | Allows traffic to non-standard ports through an IP address assigned from a pool. |
| Manually assign an external IP to a service | Allows traffic to non-standard ports through a specific IP address. |
| Configure a NodePort | Expose a service on all nodes in the cluster. |
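As a quick sketch of the first method, an existing service can be exposed through the default Ingress Controller by creating a route; <service_name> here is a placeholder:
$ oc expose service <service_name>
$ oc get route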
6.5. Configuring the node port service range
As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports.
The default port range is 30000-32767.
6.5.1. Prerequisites
- Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900, the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration.
6.5.1.1. Expanding the node port range
You can expand the node port range for the cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range.
$ oc patch network.config.openshift.io cluster --type=merge -p \
  '{ "spec": { "serviceNodePortRange": "30000-<port>" } }'
Tip
You can alternatively apply the following YAML to update the node port range:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  serviceNodePortRange: "30000-<port>"
Example output
network.config.openshift.io/cluster patchedTo confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.
$ oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]'Example output
"service-node-port-range":["30000-33000"]
6.6. Configuring network policy
As a cluster administrator or project administrator, you can configure network policies for a project.
6.6.1. About network policy
In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects.
When using the OpenShift SDN cluster network provider, the following limitations apply regarding network policies:
- Egress network policy as specified by the egress field is not supported.
- IPBlock is supported by network policy, but without support for except clauses. If you create a policy with an IPBlock section that includes an except clause, the SDN pods log warnings and the entire IPBlock section of that policy is ignored.
Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules.
By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.
The following example NetworkPolicy objects demonstrate supporting different scenarios:
Deny all traffic:
To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []
Only allow connections from the OpenShift Container Platform Ingress Controller:
To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
Only accept connections from pods within a project:
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
Only allow HTTP and HTTPS traffic based on pod labels:
To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
Accept connections by using both namespace and pod selectors:
To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-pod-and-namespace-both
spec:
  podSelector:
    matchLabels:
      name: test-pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: project_name
      podSelector:
        matchLabels:
          name: test-pods
NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.
For example, for the NetworkPolicy objects defined in the previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend to accept any connection allowed by each policy: that is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
6.6.2. Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107 1
spec:
  podSelector: 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: 3
        matchLabels:
          app: app
    ports: 4
    - protocol: TCP
      port: 27017
- 1
- The name of the NetworkPolicy object.
- 2
- A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
- 3
- A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
- 4
- A list of one or more destination ports on which to accept traffic.
6.6.3. Creating a network policy
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
Create a policy rule:
Create a <policy_name>.yaml file:
$ touch <policy_name>.yaml
where:
<policy_name>
- Specifies the network policy file name.
Define a network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector:
  ingress: []
Allow ingress from all pods in the same namespace
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
To create the network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name>
- Specifies the network policy file name.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
networkpolicy.networking.k8s.io/deny-by-default created
6.6.4. Configuring multitenant isolation by using network policy
You can configure your project to isolate it from pods and services in other project namespaces.
Prerequisites
- Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
Procedure
Create the following NetworkPolicy objects:
A policy named allow-from-openshift-ingress:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress
EOF
Note
policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label.
A policy named allow-from-openshift-monitoring:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress
EOF
A policy named allow-same-namespace:
$ cat << EOF| oc create -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
EOF
Optional: To confirm that the network policies exist in your current project, enter the following command:
$ oc describe networkpolicy
Example output
Name:         allow-from-openshift-ingress
Namespace:    example1
Created on:   2020-06-09 00:28:17 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      NamespaceSelector: network.openshift.io/policy-group: ingress
  Not affecting egress traffic
  Policy Types: Ingress

Name:         allow-from-openshift-monitoring
Namespace:    example1
Created on:   2020-06-09 00:29:57 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      NamespaceSelector: network.openshift.io/policy-group: monitoring
  Not affecting egress traffic
  Policy Types: Ingress
6.6.5. Creating default network policies for a new project
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project.
6.6.6. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Procedure
- Log in as a user with cluster-admin privileges.
Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
- Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
Edit the project configuration resource using the web console or CLI.
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Global Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  ...
spec:
  projectRequestTemplate:
    name: <template_name>
- After you save your changes, create a new project to verify that your changes were successfully applied.
6.6.6.1. Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project.
Prerequisites
- Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (oc).
- You must log in to the cluster with a user with cluster-admin privileges.
- You must have created a custom default project template for new projects.
Procedure
Edit the default template for a new project by running the following command:
$ oc edit template <project_template> -n openshift-config
Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.
In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.
In the following example, the objects parameter collection includes several NetworkPolicy objects.
objects:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector: {}
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    podSelector: {}
    policyTypes:
    - Ingress
...
Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:
Create a new project:
$ oc new-project <project> 1
- 1
- Replace <project> with the name for the project you are creating.
Confirm that the network policy objects in the new project template exist in the new project:
$ oc get networkpolicy
Example output
NAME                           POD-SELECTOR   AGE
allow-from-openshift-ingress   <none>         7s
allow-from-same-namespace      <none>         7s
6.7. Supported configurations
The following configurations are supported for the current release of Red Hat OpenShift Service Mesh.
6.7.1. Supported platforms
The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Red Hat OpenShift Service Mesh is supported on the following platform versions:
- Red Hat OpenShift Container Platform version 4.9 or later.
- Red Hat OpenShift Dedicated version 4.
- Azure Red Hat OpenShift (ARO) version 4.
- Red Hat OpenShift Service on AWS (ROSA).
6.7.2. Unsupported configurations
Explicitly unsupported cases include:
- OpenShift Online is not supported for Red Hat OpenShift Service Mesh.
- Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running.
6.7.3. Supported network configurations
Red Hat OpenShift Service Mesh supports the following network configurations.
- OpenShift-SDN
- OVN-Kubernetes is supported on OpenShift Container Platform 4.7.32+, OpenShift Container Platform 4.8.12+, and OpenShift Container Platform 4.9+.
- Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information.
6.7.4. Supported configurations for Service Mesh
This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power Systems.
- IBM Z is only supported on OpenShift Container Platform 4.6 and later.
- IBM Power Systems is only supported on OpenShift Container Platform 4.6 and later.
- Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster.
- Configurations that do not integrate external services such as virtual machines.
- Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented.
6.7.5. Supported configurations for Kiali
- The Kiali console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.
6.7.6. Supported configurations for Distributed Tracing
- Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated.
6.7.7. Supported WebAssembly module
- 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules.
6.7.8. Operator overview
Red Hat OpenShift Service Mesh requires the following four Operators:
- OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project.
- Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project.
- Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project.
- Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project.
Next steps
- Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment.
6.8. Optimizing routing
The OpenShift Container Platform HAProxy router scales to optimize performance.
6.8.1. Baseline Ingress Controller (router) performance
The OpenShift Container Platform Ingress Controller, or router, is the Ingress point for all external traffic destined for OpenShift Container Platform services.
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
- HTTP keep-alive/close mode
- Route type
- TLS session resumption client support
- Number of concurrent connections per target route
- Number of target routes
- Back end server page size
- Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
| Encryption | LoadBalancerService | HostNetwork |
|---|---|---|
| none | 21515 | 29622 |
| edge | 16743 | 22913 |
| passthrough | 36786 | 53295 |
| re-encrypt | 21583 | 25198 |
In HTTP close (no keep-alive) scenarios:
| Encryption | LoadBalancerService | HostNetwork |
|---|---|---|
| none | 5719 | 8273 |
| edge | 2729 | 4069 |
| passthrough | 4121 | 5344 |
| re-encrypt | 2320 | 2941 |
The default Ingress Controller configuration with ROUTER_THREADS=4 was used, and two different endpoint publishing strategies (LoadBalancerService/HostNetwork) were tested.
When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
| Number of applications | Application type |
|---|---|
| 5-10 | static file/web server or caching proxy |
| 100-1000 | applications generating dynamic content |
In general, HAProxy can support routes for 5 to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content.
Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier.
6.8.2. Ingress Controller (router) performance optimizations
OpenShift Container Platform no longer supports modifying Ingress Controller deployments by setting environment variables such as ROUTER_THREADS, ROUTER_DEFAULT_TUNNEL_TIMEOUT, ROUTER_DEFAULT_CLIENT_TIMEOUT, ROUTER_DEFAULT_SERVER_TIMEOUT, and RELOAD_INTERVAL.
You can modify the Ingress Controller deployment, but if the Ingress Operator is enabled, the configuration is overwritten.
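Supported tuning is instead expressed through the IngressController API. As a hedged sketch (the tuningOptions.threadCount field is assumed here, and its availability depends on your OpenShift Container Platform version), the HAProxy thread count could be adjusted with a patch such as:
$ oc -n openshift-ingress-operator patch ingresscontroller/default \
  --type=merge -p '{"spec":{"tuningOptions":{"threadCount":4}}}'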
6.9. Post-installation RHOSP network configuration
You can configure some aspects of an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation.
6.9.1. Configuring application access with floating IP addresses
After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic.
You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set.
Prerequisites
- An OpenShift Container Platform cluster must be installed.
- Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation.
Procedure
After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port:
Show the port:
$ openstack port show <cluster_name>-<cluster_ID>-ingress-port
Attach the port to the IP address:
$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
Add a wildcard A record for *apps. to your DNS file:
*.apps.<cluster_name>.<base_domain>  IN  A  <apps_FIP>
If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts:
<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain>
<apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain>
<apps_FIP> oauth-openshift.apps.<cluster name>.<base domain>
<apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain>
<apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain>
<apps_FIP> <app name>.apps.<cluster name>.<base domain>
6.9.2. Kuryr ports pools
A Kuryr ports pool maintains a number of ports on standby for pod creation.
Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.
The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.
Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.
Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:
- The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false.
- The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
- The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.
- The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.
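For reference, a minimal sketch of such a manifest; the field layout mirrors the CNO example in the next section, and the values are illustrative rather than recommendations:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: true   # prepopulate pools on creation
      poolMinPorts: 5                      # keep at least 5 free ports
      poolBatchPorts: 3                    # create up to 3 ports at once
      poolMaxPorts: 10                     # delete free ports above 10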
6.9.3. Adjusting Kuryr ports pool settings in active deployments on RHOSP
You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster.
Procedure
From a command line, open the Cluster Network Operator (CNO) CR for editing:
$ oc edit networks.operator.openshift.io cluster
Edit the settings to meet your requirements. The following file is provided as an example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false 1
      poolMinPorts: 1 2
      poolBatchPorts: 3 3
      poolMaxPorts: 5 4
- 1
- Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
- 2
- Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
- 3
- poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
- 4
- If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
- Save your changes and quit the text editor to commit your changes.
Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed.
6.9.4. Enabling RHOSP Octavia for load balancer services
You can create load balancer service types and ingress controllers that have load balancers as back ends by using Octavia on Red Hat OpenStack Platform (RHOSP).
Services and controllers that rely on Octavia have the following limitations:
- Only TCP traffic is supported.
- Active Octavia load balancers and the floating IP addresses that are attached to them are not deleted during a cluster delete operation. You must delete these items prior to performing the operation.
- The manage-security-groups property in the cloud provider configuration only applies to RHOSP tenants that have administrative privileges.
- The loadBalancerSourceRanges property for load balancer services is not supported.
- The loadBalancerIP property for load balancer services is not supported.
Prerequisites
- You have an active cluster.
- You installed the OpenShift CLI (oc).
Procedure
From a command line, open the cloud provider configuration for editing:
$ oc edit configmap -n openshift-config cloud-provider-config
Edit the configuration for your driver type:
If you are using the Amphora driver, add the following section to your cloud provider configuration:
[LoadBalancer]
use-octavia = true
lb-provider = amphora
If you are using the OVN driver, add the following section to your cloud provider configuration:
[LoadBalancer]
use-octavia = true
lb-provider = ovn
lb-method = SOURCE_IP_PORT
Note
If you are using the OVN driver for Octavia, you must also modify the TCP ingress security group rules for the primary and worker security groups to allow IPv4 traffic to ports 30000 through 32767 from 0.0.0.0/0.
If you have multiple external networks, set the value of the floating-network-id parameter in your cloud provider configuration to the UUID of the external network in which floating IP addresses are created. For example:
[LoadBalancer]
use-octavia = true
lb-provider = amphora
floating-network-id = <network_UUID>
- Save the changes to your configuration.
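With Octavia enabled, a workload can then request a load balancer through a standard Service of type LoadBalancer; a minimal sketch, in which the service name and app label are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: example-lb          # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: example            # illustrative app label
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP           # only TCP traffic is supported with Octavia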
Chapter 7. Post-installation storage configuration
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration.
7.1. Dynamic provisioning
7.1.1. About dynamic provisioning
The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources.
The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs.
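From the user's perspective, dynamic provisioning is triggered by a persistent volume claim that names a storage class; a minimal sketch, where the claim name and <storage_class_name> are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                      # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage_class_name>   # must match an existing storage class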
7.1.2. Available dynamic provisioning plugins
OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
| Storage type | Provisioner plugin name | Notes |
|---|---|---|
| Red Hat OpenStack Platform (RHOSP) Cinder | kubernetes.io/cinder | |
| RHOSP Manila Container Storage Interface (CSI) | manila.csi.openstack.org | Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. |
| AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. |
| Azure Disk | kubernetes.io/azure-disk | |
| Azure File | kubernetes.io/azure-file | The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. |
| GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. |
| VMware vSphere | kubernetes.io/vsphere-volume | |
Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
7.2. Defining a storage class
StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users.
The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types.
7.2.1. Basic StorageClass object definition
The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition.
Sample StorageClass definition
kind: StorageClass 1
apiVersion: storage.k8s.io/v1 2
metadata:
  name: gp2 3
  annotations: 4
    storageclass.kubernetes.io/is-default-class: 'true'
    ...
provisioner: kubernetes.io/aws-ebs 5
parameters: 6
  type: gp2
...
- 1
- (required) The API object type.
- 2
- (required) The current apiVersion.
- 3
- (required) The name of the storage class.
- 4
- (optional) Annotations for the storage class.
- 5
- (required) The type of provisioner associated with this storage class.
- 6
- (optional) The parameters required for the specific provisioner; this will change from plugin to plugin.
7.2.2. Storage class annotations
To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata:
storageclass.kubernetes.io/is-default-class: "true"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
...
This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. Your cluster can have more than one storage class, but only one of them can be the default storage class.
The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release.
To set a storage class description, add the following annotation to your storage class metadata:
kubernetes.io/description: My Storage Class Description
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubernetes.io/description: My Storage Class Description
...
7.2.3. RHOSP Cinder object definition
cinder-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast 1
  availability: nova 2
  fsType: ext4 3
- 1
- Volume type created in Cinder. Default is empty.
- 2
- Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node.
- 3
- File system that is created on dynamically provisioned volumes. This value is copied to the
fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.
7.2.4. AWS Elastic Block Store (EBS) object definition
aws-ebs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 1
  iopsPerGB: "10" 2
  encrypted: "true" 3
  kmsKeyId: keyvalue 4
  fsType: ext4 5
- 1
- (required) Select from io1, gp2, sc1, or st1. The default is gp2. See the AWS documentation for valid Amazon Resource Name (ARN) values.
- (optional) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details.
- 3
- (optional) Denotes whether to encrypt the EBS volume. Valid values are
true or false.
- (optional) The full ARN of the key to use when encrypting the volume. If none is supplied, but
encyptedis set totrue, then AWS generates a key. See the AWS documentation for a valid ARN value. - 5
- (optional) File system that is created on dynamically provisioned volumes. This value is copied to the
fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.
7.2.5. Azure Disk object definition
azure-advanced-disk-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk
volumeBindingMode: WaitForFirstConsumer 1
allowVolumeExpansion: true
parameters:
  kind: Managed 2
  storageaccounttype: Premium_LRS 3
reclaimPolicy: Delete
- 1
- Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone.
- 2
- Possible values are Shared (default), Managed, and Dedicated.
Important
Red Hat only supports the use of kind: Managed in the storage class.
With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.
- 3
- Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks.
  - If kind is set to Shared, Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster.
  - If kind is set to Managed, Azure creates new managed disks.
  - If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work:
    - The specified storage account must be in the same region.
    - Azure Cloud Provider must have write access to the storage account.
  - If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster.
7.2.6. Azure File object definition
The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure
Define a ClusterRole object that allows access to create and view secrets:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # name: system:azure-cloud-provider
  name: <persistent-volume-binder-role> 1
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get','create']
- 1
- The name of the cluster role to view and create secrets.
Add the cluster role to the service account:
$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder
Create the Azure File StorageClass object:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <azure-file> 1
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus 2
  skuName: Standard_LRS 3
  storageAccount: <storage-account> 4
reclaimPolicy: Delete
volumeBindingMode: Immediate
- 1
- Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
- 2
- Location of the Azure storage account, such as eastus. Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location.
- 3
- SKU tier of the Azure storage account, such as Standard_LRS. Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU.
- 4
- Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location.
7.2.6.1. Considerations when using Azure File
The following file system features are not supported by the default Azure File storage class:
- Symlinks
- Hard links
- Extended attributes
- Sparse files
- Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory.
The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azure-file
mountOptions:
- uid=1500
- gid=1500
- mfsymlinks
provisioner: kubernetes.io/azure-file
parameters:
location: eastus
skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
7.2.7. GCE PersistentDisk (gcePD) object definition
gce-pd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard 1
  replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
- 1
- Select either
pd-standard or pd-ssd. The default is pd-standard.
7.2.8. VMware vSphere object definition
vsphere-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume 1
parameters:
  diskformat: thin 2
- 1
- For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation.
- 2
- diskformat: thin, zeroedthick, and eagerzeroedthick are all valid disk formats. See vSphere docs for additional details regarding the disk format types. The default value is thin.
7.3. Changing the default storage class
Use the following process to change the default storage class. For example, if you have two defined storage classes, gp2 and standard, and you want to change the default storage class from gp2 to standard:
List the storage class:
$ oc get storageclass
Example output
NAME                 TYPE
gp2 (default)        kubernetes.io/aws-ebs 1
standard             kubernetes.io/aws-ebs
- 1
- (default) denotes the default storage class.
Change the value of the storageclass.kubernetes.io/is-default-class annotation to false for the default storage class:
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Make another storage class the default by setting the storageclass.kubernetes.io/is-default-class annotation to true:
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Verify the changes:
$ oc get storageclass
Example output
NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/aws-ebs
7.4. Optimizing storage
Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner.
7.5. Available persistent storage options
Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment.
| Storage type | Description | Examples |
|---|---|---|
| Block | Presented to the operating system (OS) as a block device. Suitable for applications that need full control of storage and operate at a low level on files, bypassing the file system. Also referred to as a Storage Area Network (SAN). Non-shareable, which means that only one client at a time can mount an endpoint of this type. | AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. |
| File | Presented to the OS as a file system export to be mounted. Also referred to as Network Attached Storage (NAS). Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. | RHEL NFS, NetApp NFS [1], and Vendor NFS |
| Object | Accessible through a REST API endpoint. Configurable for use in the OpenShift image registry. Applications must build their drivers into the application and/or container. | AWS S3 |
- NetApp NFS supports dynamic PV provisioning when using the Trident plugin.
Currently, CNS is not supported in OpenShift Container Platform 4.8.
7.6. Recommended configurable storage technology
The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
| Storage type | ROX 1 | RWX 2 | Registry | Scaled registry | Metrics 3 | Logging | Apps |
|---|---|---|---|---|---|---|---|
| Block | Yes 4 | No | Configurable | Not configurable | Recommended | Recommended | Recommended |
| File | Yes 4 | Yes | Configurable | Configurable | Configurable 5 | Configurable 6 | Recommended |
| Object | Yes | Yes | Recommended | Recommended | Not configurable | Not configurable | Not configurable 7 |
1 ReadOnlyMany
2 ReadWriteMany
3 Prometheus is the underlying technology used for metrics.
4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.
5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics.
6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required.
7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API.
A scaled registry is an OpenShift Container Platform registry where two or more pod replicas are running.
7.6.1. Specific application storage recommendations
Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
7.6.1.1. Registry
In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment:
- The storage technology does not have to support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage followed by block storage.
- File storage is not recommended for OpenShift Container Platform registry cluster deployment with production workloads.
7.6.1.2. Scaled registry
In a scaled/HA OpenShift Container Platform registry cluster deployment:
- The storage technology must support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage.
- Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
- Object storage should be S3 or Swift compliant.
- For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
- Block storage is not configurable.
7.6.1.3. Metrics
In an OpenShift Container Platform hosted metrics cluster deployment:
- The preferred storage technology is block storage.
- Object storage is not configurable.
It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
7.6.1.4. Logging
In an OpenShift Container Platform hosted logging cluster deployment:
- The preferred storage technology is block storage.
- Object storage is not configurable.
7.6.1.5. Applications
Application use cases vary from application to application, as described in the following examples:
- Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster.
- Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.
7.6.2. Other specific application storage recommendations
It is not recommended to use RAID configurations on write-intensive workloads, such as etcd. If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads.
- Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
- Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
- The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.
7.7. Deploy Red Hat OpenShift Container Storage
Red Hat OpenShift Container Storage is a provider-agnostic persistent storage solution for OpenShift Container Platform that supports file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Container Storage is completely integrated with OpenShift Container Platform for deployment, management, and monitoring.
| If you are looking for Red Hat OpenShift Container Storage information about… | See the following Red Hat OpenShift Container Storage documentation: |
|---|---|
| What’s new, known issues, notable bug fixes, and Technology Previews | |
| Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations | |
| Instructions on preparing to deploy when your environment is not directly connected to the internet | Preparing to deploy OpenShift Container Storage 4.5 in a disconnected environment |
| Instructions on deploying OpenShift Container Storage to use an external Red Hat Ceph Storage cluster | |
| Instructions on deploying OpenShift Container Storage to local storage on bare metal infrastructure | Deploying OpenShift Container Storage 4.5 using bare metal infrastructure |
| Instructions on deploying OpenShift Container Storage on Red Hat OpenShift Container Platform VMware vSphere clusters | |
| Instructions on deploying OpenShift Container Storage using Amazon Web Services for local or cloud storage | Deploying OpenShift Container Storage 4.5 using Amazon Web Services |
| Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Google Cloud clusters | Deploying and managing OpenShift Container Storage 4.5 using Google Cloud |
| Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Azure clusters | Deploying and managing OpenShift Container Storage 4.5 using Microsoft Azure |
| Managing a Red Hat OpenShift Container Storage 4.5 cluster | |
| Monitoring a Red Hat OpenShift Container Storage 4.5 cluster | |
| Resolve issues encountered during operations | |
| Migrating your OpenShift Container Platform cluster from version 3 to version 4 |
Chapter 8. Preparing for users
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users.
8.1. Understanding identity provider configuration
The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.
As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.
8.1.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.
8.1.2. Supported identity providers
You can configure the following types of identity providers:
| Identity provider | Description |
|---|---|
| htpasswd | Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd. |
| Keystone | Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. |
| LDAP | Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. |
| Basic authentication | Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism. |
| Request header | Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value. |
| GitHub or GitHub Enterprise | Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. |
| GitLab | Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. |
| Google | Configure a google identity provider using Google's OpenID Connect integration. |
| OpenID Connect | Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow. |
After you define an identity provider, you can use RBAC to define and apply permissions.
8.1.3. Identity provider parameters
The following parameters are common to all identity providers:
| Parameter | Description |
|---|---|
| name | The provider name is prefixed to provider user names to form an identity name. |
| mappingMethod | Defines how new identities are mapped to users when they log in. Enter one of the following values: claim, the default value, provisions a user with the identity's preferred user name and fails if a user with that user name is already mapped to another identity; lookup looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities, so users must be provisioned manually or by an external process; generate provisions a user with the identity's preferred user name and, if the preferred user name is already mapped to an existing identity, generates a unique user name; add provisions a user with the identity's preferred user name and, if a user with that user name already exists, maps the identity to the existing user, adding to any existing identity mappings for the user. |

When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.
8.1.4. Sample identity provider CR
The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.
Sample identity provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
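The CR references an htpass-secret secret that must already exist in the openshift-config namespace. The following is a minimal sketch of creating that secret from a flat file generated with the htpasswd utility and then applying the CR; the users.htpasswd and oauth-cr.yaml file names are illustrative:

$ htpasswd -c -B -b users.htpasswd user1 MyPassword

$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config

$ oc apply -f oauth-cr.yaml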
8.2. Using RBAC to define and apply permissions
Understand and apply role-based access control.
8.2.1. RBAC overview
Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.
Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and to all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.
Authorization is managed using:
| Authorization object | Description |
|---|---|
| Rules | Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. |
| Roles | Collections of rules. You can associate, or bind, users and groups to multiple roles. |
| Bindings | Associations between users and/or groups with a role. |
There are two levels of RBAC roles and bindings that control authorization:
| RBAC level | Description |
|---|---|
| Cluster RBAC | Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. |
| Local RBAC | Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. |
A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used. For example:
- Cluster-wide "allow" rules are checked.
- Locally-bound "allow" rules are checked.
- Deny by default.
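You can observe this evaluation from the CLI with oc auth can-i, which returns yes only if a cluster-wide or locally bound rule allows the action; the project name here is illustrative:

$ oc auth can-i create pods -n joe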
8.2.1.1. Default cluster roles
OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.
It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly.
| Default cluster role | Description |
|---|---|
| admin | A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. |
| basic-user | A user that can get basic information about projects and users. |
| cluster-admin | A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. |
| cluster-status | A user that can get basic cluster status information. |
| cluster-reader | A user that can get or view most of the objects but cannot modify them. |
| edit | A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. |
| self-provisioner | A user that can create their own projects. |
| view | A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. |
Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin role to a user in a project grants super administrator privileges for only that project. That user has the permissions of the admin cluster role, plus a few additional permissions, such as the ability to edit rate limits, for that project.
The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.
8.2.1.2. Evaluating authorization
OpenShift Container Platform evaluates authorization by using:
- Identity
- The user name and list of groups that the user belongs to.
- Action
The action you perform. In most cases, this consists of:
- Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.
- Verb: The action itself: get, list, watch, create, update, delete, or deletecollection.
- Resource name: The API endpoint that you access.
- Bindings
- The full list of bindings, the associations between users or groups with a role.
OpenShift Container Platform evaluates authorization by using the following steps:
- The identity and the project-scoped action are used to find all bindings that apply to the user or their groups.
- Bindings are used to locate all the roles that apply.
- Roles are used to find all the rules that apply.
- The action is checked against each rule to find a match.
- If no matching rule is found, the action is then denied by default.
Remember that users and groups can be associated with, or bound to, multiple roles at the same time.
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with.
The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin.
Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level.
8.2.1.2.1. Cluster role aggregation
The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
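For example, if you extend the API with a custom resource, you can label a new cluster role so that its rules are aggregated into the default edit role. The example.com API group and widgets resource below are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-widgets-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]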
8.2.2. Projects and namespaces
A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.
Namespaces provide a unique scope for:
- Named resources to avoid basic naming collisions.
- Delegated management authority to trusted users.
- The ability to limit community resource consumption.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.
A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.
Projects can have a separate name, displayName, and description.

- The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
- The optional displayName is how the project is displayed in the web console (defaults to name).
- The optional description can be a more detailed description of the project and is also visible in the web console.
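For example, a cluster administrator can set all three fields when creating a project from the CLI; the project name and values here are illustrative:

$ oc new-project demo --display-name="Demo Project" --description="Scratch space for the demo team"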
Each project scopes its own set of:
| Object | Description |
|---|---|
| Objects | Pods, services, replication controllers, etc. |
| Policies | Rules for which users can or cannot perform actions on objects. |
| Constraints | Quotas for each kind of object that can be limited. |
| Service accounts | Service accounts act automatically with designated access to objects in the project. |
Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.
Developers and administrators can interact with projects by using the CLI or the web console.
8.2.3. Default projects
OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components.

You cannot assign an SCC to pods created in one of the default namespaces: default, kube-system, kube-public, openshift-node, openshift-infra, and openshift. You cannot use these namespaces for running pods or services.
8.2.4. Viewing cluster roles and bindings
You can use the oc CLI to view cluster roles and bindings by using the oc describe command.

Prerequisites

- Install the oc CLI.
- Obtain permission to view the cluster roles and bindings.

Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.
Procedure
To view the cluster roles and their associated rule sets:
$ oc describe clusterrole.rbac

Example output
Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ...To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:
$ oc describe clusterrolebinding.rbac

Example output
Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ...
8.2.5. Viewing local roles and bindings
You can use the oc CLI to view local roles and bindings by using the oc describe command.

Prerequisites

- Install the oc CLI.
- Obtain permission to view the local roles and bindings:
  - Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.
  - Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.
  - Users with the view default cluster role bound locally can view roles and bindings in that project.
Procedure
To view the current set of local role bindings, which shows the users and groups that are bound to various roles for the current project:
$ oc describe rolebinding.rbac

To view the local role bindings for a different project, add the -n flag to the command:

$ oc describe rolebinding.rbac -n joe-project

Example output
Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project
8.2.6. Adding roles to users
You can use the oc adm administrator CLI to manage the roles and bindings.

Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.

You can bind any of the default cluster roles to local users or groups in your project.
Procedure
Add a role to a user in a specific project:
$ oc adm policy add-role-to-user <role> <user> -n <project>

For example, you can add the admin role to the alice user in the joe project by running:

$ oc adm policy add-role-to-user admin alice -n joe

Tip

You can alternatively apply the following YAML to add the role to the user:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-0
  namespace: joe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice

View the local role bindings and verify the addition in the output:
$ oc describe rolebinding.rbac -n <project>

For example, to view the local role bindings for the joe project:

$ oc describe rolebinding.rbac -n joe

Example output
Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe- 1
1 The alice user has been added to the admin-0 RoleBinding.
8.2.7. Creating a local role
You can create a local role for a project and then bind it to a user.
Procedure
To create a local role for a project, run the following command:
$ oc create role <name> --verb=<verb> --resource=<resource> -n <project>

In this command, specify:

- <name>, the local role's name
- <verb>, a comma-separated list of the verbs to apply to the role
- <resource>, the resources that the role applies to
- <project>, the project name

For example, to create a local role that allows a user to view pods in the blue project, run the following command:

$ oc create role podview --verb=get --resource=pod -n blue

To bind the new role to a user, run the following command:

$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
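As a quick check, you can describe the new role and list the bindings in the project; this sketch assumes the blue project and podview role from the example above:

$ oc describe role podview -n blue

$ oc get rolebinding -n blue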
8.2.8. Creating a cluster role
You can create a cluster role.
Procedure
To create a cluster role, run the following command:
$ oc create clusterrole <name> --verb=<verb> --resource=<resource>

In this command, specify:

- <name>, the cluster role's name
- <verb>, a comma-separated list of the verbs to apply to the role
- <resource>, the resources that the role applies to

For example, to create a cluster role that allows a user to view pods, run the following command:

$ oc create clusterrole podviewonly --verb=get --resource=pod
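The command generates a cluster role equivalent to the following YAML; this sketch is illustrative rather than the exact object the CLI produces:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podviewonly
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]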
8.2.9. Local role binding commands
When you manage a user or group's associated roles for local role bindings using the following operations, a project can be specified with the -n flag. If it is not specified, then the current project is used.
You can use the following commands for local RBAC management.
| Command | Description |
|---|---|
| oc adm policy who-can <verb> <resource> | Indicates which users can perform an action on a resource. |
| oc adm policy add-role-to-user <role> <username> | Binds a specified role to specified users in the current project. |
| oc adm policy remove-role-from-user <role> <username> | Removes a given role from specified users in the current project. |
| oc adm policy remove-user <username> | Removes specified users and all of their roles in the current project. |
| oc adm policy add-role-to-group <role> <groupname> | Binds a given role to specified groups in the current project. |
| oc adm policy remove-role-from-group <role> <groupname> | Removes a given role from specified groups in the current project. |
| oc adm policy remove-group <groupname> | Removes specified groups and all of their roles in the current project. |
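For example, to list who can delete pods in the joe project used in the earlier examples:

$ oc adm policy who-can delete pods -n joe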
8.2.10. Cluster role binding commands
You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.
| Command | Description |
|---|---|
| oc adm policy add-cluster-role-to-user <role> <username> | Binds a given role to specified users for all projects in the cluster. |
| oc adm policy remove-cluster-role-from-user <role> <username> | Removes a given role from specified users for all projects in the cluster. |
| oc adm policy add-cluster-role-to-group <role> <groupname> | Binds a given role to specified groups for all projects in the cluster. |
| oc adm policy remove-cluster-role-from-group <role> <groupname> | Removes a given role from specified groups for all projects in the cluster. |
8.2.11. Creating a cluster admin
The cluster-admin role is required to perform administrator level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.
Prerequisites
- You must have created a user to define as the cluster admin.
Procedure
Define the user as a cluster admin:
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
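The command creates a cluster role binding equivalent to the following YAML; the binding name is illustrative, because the CLI generates one automatically:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <user>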
8.3. The kubeadmin user
OpenShift Container Platform creates a cluster administrator, kubeadmin, after the installation process completes.

This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example:
INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>
8.3.1. Removing the kubeadmin user
After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.

Warning

If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites

- You must have configured at least one identity provider.
- You must have added the cluster-admin role to a user.
- You must be logged in as an administrator.
Procedure
Remove the kubeadmin secrets:

$ oc delete secrets kubeadmin -n kube-system
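You can confirm that the secret is gone by querying for it again; the NotFound error is the expected result:

$ oc get secrets kubeadmin -n kube-system
Error from server (NotFound): secrets "kubeadmin" not found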
8.4. Image configuration
Understand and configure image registry settings.
8.4.1. Image controller configuration parameters
The image.config.openshift.io/cluster custom resource holds cluster-wide information about how to handle images. The canonical, and only valid, name is cluster. Its spec offers the following configuration parameters.

Note

Parameters such as DisableScheduledImport, MaxImagesBulkImportedPerRepository, MaxScheduledImportsPerMinute, ScheduledImageImportMinimumIntervalSeconds, and InternalRegistryHostname are not available.
| Parameter | Description |
|---|---|
| allowedRegistriesForImport | Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. |
| additionalTrustedCA | A reference to a config map containing additional CAs that should be trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. |
| externalRegistryHostnames | Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. |
| registrySources | Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods, for instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. Either blockedRegistries or allowedRegistries can be set, but not both. |
When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the allowedRegistries parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment.
The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster:
| Parameter | Description |
|---|---|
| internalRegistryHostname | Set by the Image Registry Operator, which controls the internalRegistryHostname. It sets the hostname for the default internal image registry. The value must be in hostname[:port] format. |
| externalRegistryHostnames | Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. |
8.4.2. Configuring image registry settings
You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries and reboots the nodes when it detects changes.
Procedure
Edit the image.config.openshift.io/cluster custom resource:

$ oc edit image.config.openshift.io/cluster

The following is an example image.config.openshift.io/cluster CR:

apiVersion: config.openshift.io/v1
kind: Image 1
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  allowedRegistriesForImport: 2
  - domainName: quay.io
    insecure: false
  additionalTrustedCA: 3
    name: myconfigmap
  registrySources: 4
    allowedRegistries:
    - example.com
    - quay.io
    - registry.redhat.io
    - image-registry.openshift-image-registry.svc:5000
    - reg1.io/myrepo/myapp:latest
    insecureRegistries:
    - insecure.com
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
Image: Holds cluster-wide information about how to handle images. The canonical, and only valid name iscluster.- 2
allowedRegistriesForImport: Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images orImageStreamMappingsfrom the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions.- 3
additionalTrustedCA: A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull,openshift-image-registrypullthrough, and builds. The namespace for this config map isopenshift-config. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust.- 4
registrySources: Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either theallowedRegistriesparameter or theblockedRegistriesparameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses theallowedRegistriesparameter, which defines the registries that are allowed to be used. The insecure registryinsecure.comis also allowed. TheregistrySourcesparamter does not contain configuration for the internal cluster registry.
Note

When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the allowedRegistries parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list.

When using the allowedRegistries, blockedRegistries, or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest.

Insecure external registries should be avoided to reduce possible security risks.
To check that the changes are applied, list your nodes:
$ oc get nodes

Example output
NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.19.0+7070803
For more information on the allowed, blocked, and insecure registry parameters, see Configuring image registry settings.
8.4.2.1. Configuring additional trust stores for image registry access
The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access.
Prerequisites
- The certificate authorities (CA) must be PEM-encoded.
Procedure
You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries.
The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the base64-encoded certificate is the value, for each additional registry CA to trust.
Image registry CA config map example
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-registry-ca
data:
  registry.example.com: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  registry-with-port.example.com..5000: | 1
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

1 If the registry has the port, such as registry-with-port.example.com:5000, the : should be replaced with .. in the config map key.
You can configure additional CAs with the following procedure.
To configure an additional CA:
$ oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config

$ oc edit image.config.openshift.io cluster

spec:
  additionalTrustedCA:
    name: registry-config
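To confirm that the reference was recorded, you can read back the cluster image configuration; this check is illustrative:

$ oc get image.config.openshift.io/cluster -o yaml | grep -A1 additionalTrustedCA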
8.4.2.2. Configuring image registry repository mirroring
Setting up container registry repository mirroring enables you to do the following:
- Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry.
- Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used.
The attributes of repository mirroring in OpenShift Container Platform include:
- Image pulls are resilient to registry downtimes.
- Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images.
- A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried.
- The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster.
- When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node.
Setting up repository mirroring can be done in the following ways:
At OpenShift Container Platform installation:
By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company’s firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment.
After OpenShift Container Platform installation:
Even if you do not configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object.
The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies:
- The source of the container image repository you want to mirror.
- A separate entry for each mirror repository you want to offer the content requested from the source repository.
You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Configure mirrored repositories by either:
- Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring. Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time.
Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository.

For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example:

$ skopeo copy \
  docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \
  docker://example.io/example/ubi-minimal

In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com. After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository.
- Log in to your OpenShift Container Platform cluster.
Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml), replacing the source and mirrors with your own registry and repository pairs and images:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ubi8repo
spec:
  repositoryDigestMirrors:
  - mirrors:
    - example.io/example/ubi-minimal 1
    source: registry.access.redhat.com/ubi8/ubi-minimal 2
  - mirrors:
    - example.com/example/ubi-minimal
    source: registry.access.redhat.com/ubi8/ubi-minimal
  - mirrors:
    - mirror.example.com/redhat
    source: registry.redhat.io/openshift4 3

1 Indicates the name of the image registry and repository.
2 Indicates the registry and repository containing the content that is mirrored.
3 You can configure a namespace inside a registry to use any image in that namespace. If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry.
Create the new ImageContentSourcePolicy object:

$ oc create -f registryrepomirror.yaml

After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository.

To check that the mirrored configuration settings are applied, do the following on one of the nodes.
List your nodes:
$ oc get node

Example output
NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.21.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.21.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.21.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.21.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.21.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.21.0You can see that scheduling on each worker node is disabled as the change is being applied.
Start the debugging process to access the node:
$ oc debug node/ip-10-0-147-35.ec2.internal

Example output
Starting pod/ip-10-0-147-35ec2internal-debug ...
To use host binaries, run `chroot /host`

Change your root directory to /host:

sh-4.2# chroot /host

Check the /etc/containers/registries.conf file to make sure the changes were made:

sh-4.2# cat /etc/containers/registries.conf
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  location = "registry.access.redhat.com/ubi8/"
  insecure = false
  blocked = false
  mirror-by-digest-only = true
  prefix = ""

  [[registry.mirror]]
    location = "example.io/example/ubi8-minimal"
    insecure = false

  [[registry.mirror]]
    location = "example.com/example/ubi8-minimal"
    insecure = false

Pull an image digest to the node from the source and check if it is resolved by the mirror.
ImageContentSourcePolicy objects support image digests only, not image tags.

sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6
Troubleshooting repository mirroring
If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem.
- The first working mirror is used to supply the pulled image.
- The main registry is only used if no other mirror works.
- From the system context, the Insecure flags are used as fallback.
- The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format.
8.5. About Operator installation with OperatorHub
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a cluster administrator, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
8.5.1. Installing from OperatorHub using the web console
You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator.

You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Note

Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page:
Select one of the following:
- All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
- A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
- Select an Update Channel (if more than one is available).
- Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
Note

For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
8.5.2. Installing from OperatorHub using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Install the oc command to your local system.
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace

Example output
NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ...Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
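For example, to read only the default channel of an Operator (the jaeger package name is illustrative), you can query its package manifest directly:

$ oc get packagemanifests jaeger -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'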
object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.OperatorGroupThe namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the
orAllNamespacesmode. If the Operator you intend to install uses theSingleNamespace, then theAllNamespacesnamespace already has an appropriate Operator group in place.openshift-operatorsHowever, if the Operator uses the
mode and you do not already have an appropriate Operator group in place, you must create one.SingleNamespaceNoteThe web console version of this procedure handles the creation of the
andOperatorGroupobjects automatically behind the scenes for you when choosingSubscriptionmode.SingleNamespaceCreate an
object YAML file, for exampleOperatorGroup:operatorgroup.yamlExample
OperatorGroupobjectapiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>Create the
object:OperatorGroup$ oc apply -f operatorgroup.yaml
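To confirm that the Operator group exists and selects the intended namespaces, you can list the Operator groups in the target namespace. This is an optional check using standard oc commands:

$ oc get operatorgroups -n <namespace> -o yaml

The spec.targetNamespaces field in the output should match the install mode that your Operator requires; an Operator group without targetNamespaces, such as the one in openshift-operators, supports the AllNamespaces mode.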
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar

1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2 Name of the channel to subscribe to.
3 Name of the Operator to subscribe to.
4 Name of the catalog source that provides the Operator.
5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7 The envFrom parameter defines a list of sources to populate environment variables in the container.
8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10 The tolerations parameter defines a list of tolerations for the pod created by OLM.
11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12 The nodeSelector parameter defines a node selector for the pod created by OLM.
Create the Subscription object:

$ oc apply -f sub.yaml

At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
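You can also verify the installation from the CLI. The following is a minimal sketch using standard oc commands; use the namespace that matches your chosen install mode:

$ oc get subscription <subscription_name> -n openshift-operators
$ oc get csv -n openshift-operators

The PHASE column for the CSV should eventually report Succeeded, mirroring the InstallSucceeded status described in the web console procedure.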
Chapter 9. Configuring alert notifications
In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances is present within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems.
9.1. Sending notifications to external systems
In OpenShift Container Platform 4.8, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types:
- PagerDuty
- Webhook
- Email
- Slack
Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review.
Checking that alerting is operational by using the watchdog alert
OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider.
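For illustration, the following is a minimal sketch of what such a dead man's switch route can look like in a raw Alertmanager configuration. The receiver name and URL are hypothetical, and in OpenShift Container Platform this configuration is generated for you by the receiver procedure that follows:

route:
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    receiver: watchdog
    repeat_interval: 5m
receivers:
- name: default
- name: watchdog
  webhook_configs:
  # Hypothetical dead man's switch endpoint that alerts when pings stop arriving
  - url: https://deadmans-switch.example.com/ping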
9.1.1. Configuring alert receivers
You can configure alert receivers to ensure that you learn about important issues with your cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
In the Administrator perspective, navigate to Administration → Cluster Settings → Global Configuration → Alertmanager.
Note: Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.
- Select Create Receiver in the Receivers section of the page.
- In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list.
Edit the receiver configuration:
For PagerDuty receivers:
- Choose an integration type and add a PagerDuty integration key.
- Add the URL of your PagerDuty installation.
- Select Show advanced configuration if you want to edit the client and incident details or the severity specification.
For webhook receivers:
- Add the endpoint to send HTTP POST requests to.
- Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.
For email receivers:
- Add the email address to send notifications to.
- Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.
- Choose whether TLS is required.
- Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration.
For Slack receivers:
- Add the URL of the Slack webhook.
- Add the Slack channel or user name to send notifications to.
- Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.
By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver:
- Add routing label names and values in the Routing Labels section of the form.
- Select Regular Expression if you want to use a regular expression.
- Select Add Label to add further routing labels.
- Select Create to create the receiver.
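If you want to inspect the configuration that the form generates, the receivers are stored in the alertmanager-main secret in the openshift-monitoring namespace. One way to dump it, assuming cluster-admin access:

$ oc -n openshift-monitoring get secret alertmanager-main \
  --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode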
Chapter 10. Configuring additional devices in an IBM Z or LinuxONE environment
After installing OpenShift Container Platform, you can configure additional devices for your cluster in an IBM Z or LinuxONE environment, which is installed with z/VM. The following devices can be configured:
- Fibre Channel Protocol (FCP) host
- FCP LUN
- DASD
- qeth
You can configure devices by adding udev rules by using the Machine Config Operator (MCO), or you can configure devices manually.
The procedures described here apply only to z/VM installations. If you installed your cluster with RHEL KVM on IBM Z or LinuxONE infrastructure, no additional configuration is needed inside the KVM guest after the devices are added to the KVM guests. However, in both z/VM and RHEL KVM environments, the next steps for configuring the Local Storage Operator and the Kubernetes NMState Operator must be applied.
10.1. Configuring additional devices using the Machine Config Operator (MCO)
Tasks in this section describe how to use features of the Machine Config Operator (MCO) to configure additional devices in an IBM Z or LinuxONE environment. Configuring devices with the MCO is persistent but only allows specific configurations for compute nodes. MCO does not allow control plane nodes to have different configurations.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- The device must be available to the z/VM guest.
- The device is already attached.
- The device is not included in the cio_ignore list, which can be set in the kernel parameters.
- You have created a MachineConfigPool object file with the following YAML (a sketch of putting the pool into service follows this list):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker0
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker0: ""
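The pool only selects compute machines that carry the matching node role label. The following is a minimal sketch of putting the pool into service, assuming a hypothetical file name and node name:

# Create the custom machine config pool from the YAML above
$ oc apply -f worker0-pool.yaml

# Label a compute node into the pool; the trailing "=" sets an empty value
$ oc label node <node_name> node-role.kubernetes.io/worker0=

# Confirm that the pool reports its machine counts
$ oc get machineconfigpool worker0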
10.1.1. Configuring a Fibre Channel Protocol (FCP) host
The following is an example of how to configure an FCP host adapter with N_Port Identifier Virtualization (NPIV) by adding a udev rule.
Procedure
Take the following sample udev rule 41-zfcp-host-0.0.8000.rules:

ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.8000", DRIVER=="zfcp", GOTO="cfg_zfcp_host_0.0.8000"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="zfcp", TEST=="[ccw/0.0.8000]", GOTO="cfg_zfcp_host_0.0.8000"
GOTO="end_zfcp_host_0.0.8000"

LABEL="cfg_zfcp_host_0.0.8000"
ATTR{[ccw/0.0.8000]online}="1"

LABEL="end_zfcp_host_0.0.8000"

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file

Copy the following MCO sample profile into a YAML file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker0 1
  name: 99-worker0-devices
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,<encoded_base64_string> 2
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3

1 The role that you defined in the MachineConfigPool object.
2 The Base64 encoded string that you generated in the previous step.
3 The path where the udev rule is created.
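The sample profiles in this chapter stop at the MachineConfig definition. The following is a minimal sketch of the remaining steps, assuming hypothetical file names based on the object names in this section; the -w0 flag keeps GNU base64 output on a single line, which the Ignition data URL requires:

# Encode the rule without line wrapping
$ base64 -w0 41-zfcp-host-0.0.8000.rules

# Create the machine config; the MCO rolls it out to the worker0 pool
$ oc apply -f 99-worker0-devices.yaml

# Watch the pool until it reports the update as complete
$ oc get machineconfigpool worker0

The same apply-and-watch steps conclude the FCP LUN, DASD, and qeth procedures that follow.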
10.1.2. Configuring an FCP LUN
The following is an example of how to configure an FCP LUN by adding a udev rule. You can add new FCP LUNs or add additional paths to LUNs that are already configured with multipathing.
Procedure
Take the following sample udev rule 41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules:

ACTION=="add", SUBSYSTEMS=="ccw", KERNELS=="0.0.8000", GOTO="start_zfcp_lun_0.0.8000"
GOTO="end_zfcp_lun_0.0.8000"

LABEL="start_zfcp_lun_0.0.8000"
SUBSYSTEM=="fc_remote_ports", ATTR{port_name}=="0x500507680d760026", GOTO="cfg_fc_0.0.8000_0x500507680d760026"
GOTO="end_zfcp_lun_0.0.8000"

LABEL="cfg_fc_0.0.8000_0x500507680d760026"
ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}="0x00bc000000000000"
GOTO="end_zfcp_lun_0.0.8000"

LABEL="end_zfcp_lun_0.0.8000"

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file

Copy the following MCO sample profile into a YAML file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker0 1
  name: 99-worker0-devices
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,<encoded_base64_string> 2
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3

1 The role that you defined in the MachineConfigPool object.
2 The Base64 encoded string that you generated in the previous step.
3 The path where the udev rule is created.
10.1.3. Configuring DASD
The following is an example of how to configure a DASD device by adding a udev rule.
Procedure
Take the following sample udev rule 41-dasd-eckd-0.0.4444.rules:

ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.4444", DRIVER=="dasd-eckd", GOTO="cfg_dasd_eckd_0.0.4444"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="dasd-eckd", TEST=="[ccw/0.0.4444]", GOTO="cfg_dasd_eckd_0.0.4444"
GOTO="end_dasd_eckd_0.0.4444"

LABEL="cfg_dasd_eckd_0.0.4444"
ATTR{[ccw/0.0.4444]online}="1"

LABEL="end_dasd_eckd_0.0.4444"

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file

Copy the following MCO sample profile into a YAML file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker0 1
  name: 99-worker0-devices
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,<encoded_base64_string> 2
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3

1 The role that you defined in the MachineConfigPool object.
2 The Base64 encoded string that you generated in the previous step.
3 The path where the udev rule is created.
10.1.4. Configuring qeth
The following is an example of how to configure a qeth device by adding a udev rule.
Procedure
Take the following sample udev rule 41-qeth-0.0.1000.rules:

ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="qeth", GOTO="group_qeth_0.0.1000"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="group_qeth_0.0.1000"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1001", DRIVER=="qeth", GOTO="group_qeth_0.0.1000"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1002", DRIVER=="qeth", GOTO="group_qeth_0.0.1000"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="cfg_qeth_0.0.1000"
GOTO="end_qeth_0.0.1000"

LABEL="group_qeth_0.0.1000"
TEST=="[ccwgroup/0.0.1000]", GOTO="end_qeth_0.0.1000"
TEST!="[ccw/0.0.1000]", GOTO="end_qeth_0.0.1000"
TEST!="[ccw/0.0.1001]", GOTO="end_qeth_0.0.1000"
TEST!="[ccw/0.0.1002]", GOTO="end_qeth_0.0.1000"
ATTR{[drivers/ccwgroup:qeth]group}="0.0.1000,0.0.1001,0.0.1002"
GOTO="end_qeth_0.0.1000"

LABEL="cfg_qeth_0.0.1000"
ATTR{[ccwgroup/0.0.1000]online}="1"

LABEL="end_qeth_0.0.1000"

Convert the rule to Base64 encoding by running the following command:

$ base64 /path/to/file

Copy the following MCO sample profile into a YAML file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker0 1
  name: 99-worker0-devices
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,<encoded_base64_string> 2
        filesystem: root
        mode: 420
        path: /etc/udev/rules.d/41-qeth-0.0.1000.rules 3

1 The role that you defined in the MachineConfigPool object.
2 The Base64 encoded string that you generated in the previous step.
3 The path where the udev rule is created.
10.2. Configuring additional devices manually
Tasks in this section describe how to manually configure additional devices in an IBM Z or LinuxONE environment. This configuration method is persistent across node restarts, but it is not native to OpenShift Container Platform, and you must repeat the steps if you replace the node.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- The device must be available to the node.
- In a z/VM environment, the device must be attached to the z/VM guest.
Procedure
Connect to the node via SSH by running the following command:
$ ssh <user>@<node_ip_address>

You can also start a debug session to the node by running the following command:
$ oc debug node/<node_name>

To enable the devices, enter the chzdev command:

$ sudo chzdev -e 0.0.8000
  sudo chzdev -e 1000-1002
  sudo chzdev -e 4444
  sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000
10.3. RoCE network cards
RoCE (RDMA over Converged Ethernet) network cards do not need to be enabled and their interfaces can be configured with the Kubernetes NMState Operator whenever they are available in the node. For example, RoCE network cards are available if they are attached in a z/VM environment or passed through in a RHEL KVM environment.
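For illustration, the following is a minimal sketch of a NodeNetworkConfigurationPolicy that brings up a RoCE interface with a static address by using the Kubernetes NMState Operator. The policy name, node name, interface name, and address are hypothetical placeholders, and depending on the Operator version the API version might be nmstate.io/v1 or nmstate.io/v1beta1:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: roce-eth1-policy                 # hypothetical policy name
spec:
  nodeSelector:
    kubernetes.io/hostname: <node_name>  # hypothetical target node
  desiredState:
    interfaces:
    - name: eth1                         # hypothetical RoCE interface name
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.0.2.10                 # example address from the documentation range
          prefix-length: 24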
10.4. Enabling multipathing for FCP LUNs
Tasks in this section describe how to enable multipathing for FCP LUNs in an IBM Z or LinuxONE environment. This configuration method is persistent across node restarts, but it is not native to OpenShift Container Platform, and you must repeat the steps if you replace the node.
On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and LinuxONE.
Prerequisites
- You are logged in to the cluster as a user with administrative privileges.
- You have configured multiple paths to a LUN with either method explained above.
Procedure
Connect to the node via SSH by running the following command:
$ ssh <user>@<node_ip_address>

You can also start a debug session to the node by running the following command:

$ oc debug node/<node_name>

To enable multipathing, run the following command:

$ sudo /sbin/mpathconf --enable

To start the multipathd daemon, run the following command:

$ sudo multipath

Optional: To format your multipath device with fdisk, run the following command:
$ sudo fdisk /dev/mapper/mpatha
Verification
To verify that the devices have been grouped, run the following command:
$ sudo multipath -ll

Example output

mpatha (20017380030290197) dm-1 IBM,2810XIV
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
-+- policy='service-time 0' prio=50 status=enabled
  |- 1:0:0:6 sde 68:16 active ready running
  |- 1:0:1:6 sdf 69:24 active ready running
  |- 0:0:0:6 sdg 8:80  active ready running
  `- 0:0:1:6 sdh 66:48 active ready running
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.