Chapter 2. Using machine config objects to configure nodes
You can use the tasks in this section to create MachineConfig objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures, enabling SCTP, and configuring iSCSI initiator names for OpenShift Container Platform.
OpenShift Container Platform supports Ignition specification version 3.2. All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2.
There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection.
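For example, you can spot a degraded node and the reason the MCD flagged it by inspecting the machine config annotations the MCD maintains on the node. This is a minimal sketch; <node_name> is a placeholder:

$ oc describe node/<node_name> | grep machineconfiguration.openshift.io/state
$ oc describe node/<node_name> | grep machineconfiguration.openshift.io/reason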
Use the following "Configuring chrony time service" procedure as a model for adding other configuration files to OpenShift Container Platform nodes.
2.1. Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.
Procedure
Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file.

Note: See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.17.0
metadata:
  name: 99-worker-chrony 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 3
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst 4
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
1 2 On control plane nodes, substitute master for worker in both of these locations.
3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml.
4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org.
Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml, containing the configuration to be delivered to the nodes:

$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
Apply the configurations in one of two ways:

- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:

$ oc apply -f ./99-worker-chrony.yaml
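After the MCO rolls out the change and the nodes reboot, you can optionally confirm that chrony is using the configured time source. This is a hedged spot check; <node_name> is a placeholder:

$ oc debug node/<node_name> -- chroot /host chronyc sources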
2.2. Disabling the chrony time service
You can disable the chrony time service (chronyd) for nodes with a specific role by using a MachineConfig custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create the MachineConfig CR that disables chronyd for the specified node role.

Save the following YAML in the disable-chronyd.yaml file:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: <node_role> 1
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - contents: |
          [Unit]
          Description=NTP client/server
          Documentation=man:chronyd(8) man:chrony.conf(5)
          After=ntpdate.service sntp.service ntpd.service
          Conflicts=ntpd.service systemd-timesyncd.service
          ConditionCapability=CAP_SYS_TIME
          [Service]
          Type=forking
          PIDFile=/run/chrony/chronyd.pid
          EnvironmentFile=-/etc/sysconfig/chronyd
          ExecStart=/usr/sbin/chronyd $OPTIONS
          ExecStartPost=/usr/libexec/chrony-helper update-daemon
          PrivateTmp=yes
          ProtectHome=yes
          ProtectSystem=full
          [Install]
          WantedBy=multi-user.target
        enabled: false
        name: "chronyd.service"
1 Node role where you want to disable chronyd, for example, master.
Create the MachineConfig CR by running the following command:

$ oc create -f disable-chronyd.yaml
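After the nodes pick up the new machine config, you can optionally confirm that chronyd is disabled. A quick hedged check; <node_name> is a placeholder:

$ oc debug node/<node_name> -- chroot /host systemctl is-enabled chronyd

Example output

disabled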
2.3. Adding kernel arguments to nodes
In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and a clear understanding of the implications of the arguments you set.
Improper use of kernel arguments can result in your systems becoming unbootable.
Examples of kernel arguments you could set include:
- nosmt: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
- systemd.unified_cgroup_hierarchy: Enables Linux control group version 2 (cgroup v2). cgroup v2 is the next version of the kernel control group and offers multiple improvements.
Important: cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
- enforcing=0: Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
Warning: Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster.
See Kernel.org kernel parameters for a list and descriptions of kernel arguments.
In the following procedure, you create a MachineConfig object that identifies:
- A set of machines to which you want to add the kernel argument. In this case, machines with a worker role.
- Kernel arguments that are appended to the end of the existing kernel arguments.
- A label that indicates where in the list of machine configs the change is applied.
Prerequisites
- Have administrative privilege to a working OpenShift Container Platform cluster.
Procedure
List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config:

$ oc get MachineConfig
Example output
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-selinuxpermissive
spec:
  kernelArguments:
    - enforcing=0
Create the new machine config:
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Example output
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
05-worker-kernelarg-selinuxpermissive                                                         3.2.0             105s
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
Check the nodes:
$ oc get nodes
Example output
NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.30.3
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.30.3
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.30.3
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.30.3
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.30.3
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.30.3
You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):

$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0
sh-4.2# exit
You should see the enforcing=0 argument added to the other kernel arguments.
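As an additional hedged spot check, you can confirm that SELinux is actually running in permissive mode on the node by using the standard getenforce tool; the node name matches the earlier example:

$ oc debug node/ip-10-0-141-105.ec2.internal -- chroot /host getenforce

Example output

Permissive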
2.4. Enabling multipathing with kernel arguments on RHCOS
Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform 4.8 or later versions. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing post installation" in the Installing on bare metal documentation.
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Postinstallation support is available by activating multipathing via the machine config.
On IBM Z® and IBM® LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE.
When an OpenShift Container Platform 4.17 cluster is installed or configured as a postinstallation activity on a single VIOS host with "vSCSI" storage on IBM Power® with multipath configured, the CoreOS nodes with multipath enabled fail to boot. This behavior is expected, as only one path is available to the node.
Prerequisites
- You have a running OpenShift Container Platform cluster that uses version 4.7 or later.
- You are logged in to the cluster as a user with administrative privileges.
- You have confirmed that the disk is enabled for multipathing. Multipathing is only supported on hosts that are connected to a SAN via an HBA adapter.
Procedure
To enable multipathing postinstallation on control plane nodes:
Create a machine config file, such as 99-master-kargs-mpath.yaml, that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
To enable multipathing postinstallation on worker nodes:
Create a machine config file, such as 99-worker-kargs-mpath.yaml, that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
Create the new machine config by using either the master or worker YAML file you previously created:
$ oc create -f ./99-worker-kargs-mpath.yaml
Check the machine configs to see that the new one was added:
$ oc get MachineConfig
Example output
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-kargs-mpath                              52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             105s
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
Check the nodes:
$ oc get nodes
Example output
NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.30.3
ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.30.3
ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.30.3
ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.30.3
ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.30.3
ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.30.3
You can see that scheduling on each worker node is disabled as the change is being applied.
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host):

$ oc debug node/ip-10-0-141-105.ec2.internal
Example output
Starting pod/ip-10-0-141-105ec2internal-debug ...
To use host binaries, run `chroot /host`
sh-4.2# cat /host/proc/cmdline
... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ...
sh-4.2# exit
You should see the added kernel arguments.
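As an optional further check, you can confirm that the multipath device is assembled on the node by using the standard multipath tool. This is a hedged spot check; the node name matches the earlier example:

$ oc debug node/ip-10-0-141-105.ec2.internal -- chroot /host multipath -ll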
Additional resources
- See Enabling multipathing with kernel arguments on RHCOS for more information about enabling multipathing during installation time.
2.5. Adding a real-time kernel to nodes
Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics.

If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.17, you can make this switch using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime, there are a few other considerations before making the change:
- Currently, real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use.
- The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8.
- Real-time support in OpenShift Container Platform is limited to specific subscriptions.
- The following procedure is also supported for use with Google Cloud Platform.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.4 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes:

$ cat << EOF > 99-worker-realtime.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-realtime
spec:
  kernelType: realtime
EOF
Add the machine config to the cluster. Type the following to add the machine config to the cluster:
$ oc create -f 99-worker-realtime.yaml
Check the real-time kernel: Once each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured:
$ oc get nodes
Example output
NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.30.3
ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.30.3
ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.30.3
$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal
Example output
Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
sh-4.4# uname -a
Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
The kernel name contains rt and the text "PREEMPT RT" indicates that this is a real-time kernel.

To go back to the regular kernel, delete the MachineConfig object:

$ oc delete -f 99-worker-realtime.yaml
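After the nodes reboot again, you can confirm that they have returned to the regular kernel. This is a hedged spot check; the node name matches the earlier example:

$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal -- chroot /host uname -r

The kernel release should no longer contain the rt substring.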
2.6. Configuring journald settings
If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config.

This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information on how to use that file.
Prerequisites
- Have a running OpenShift Container Platform cluster.
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a Butane config file, 40-worker-custom-journald.bu, that includes an /etc/systemd/journald.conf file with the required settings.

Note: See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.17.0
metadata:
  name: 40-worker-custom-journald
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/systemd/journald.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        # Disable rate limiting
        RateLimitInterval=1s
        RateLimitBurst=10000
        Storage=volatile
        Compress=no
        MaxRetentionSec=30s
Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml, containing the configuration to be delivered to the worker nodes:

$ butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml
Apply the machine config to the pool:
$ oc apply -f 40-worker-custom-journald.yaml
Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied:
$ oc get machineconfigpool

NAME     CONFIG               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-35   True      False      False      3              3                   3                     0                      34m
worker   rendered-worker-d8   False     True       False      3              1                   1                     0                      34m
To check that the change was applied, you can log in to a worker node:
$ oc get node | grep worker
ip-10-0-0-1.us-east-2.compute.internal   Ready    worker   39m   v0.0.0-master+$Format:%h$

$ oc debug node/ip-10-0-0-1.us-east-2.compute.internal
Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ...
...
sh-4.2# chroot /host
sh-4.4# cat /etc/systemd/journald.conf
# Disable rate limiting
RateLimitInterval=1s
RateLimitBurst=10000
Storage=volatile
Compress=no
MaxRetentionSec=30s
sh-4.4# exit
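Because Storage=volatile keeps the journal in memory-backed storage, you can also confirm, while still in the debug shell, that journal files now live under /run/log/journal rather than /var/log/journal. This is a hedged extra check based on how systemd maps volatile storage:

sh-4.4# ls /run/log/journal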
2.7. Adding extensions to RHCOS
RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to RHCOS nodes.
Currently, the following extensions are available:
- usbguard: Adding the usbguard extension protects RHCOS systems from attacks from intrusive USB devices. See USBGuard for details.
- kerberos: Adding the kerberos extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. See Using Kerberos for details, including how to set up a Kerberos client and mount a Kerberized NFS share.
The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes.
Prerequisites
- Have a running OpenShift Container Platform cluster (version 4.6 or later).
- Log in to the cluster as a user with administrative privileges.
Procedure
Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension:

$ cat << EOF > 80-extensions.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 80-worker-extensions
spec:
  config:
    ignition:
      version: 3.2.0
  extensions:
    - usbguard
EOF
Add the machine config to the cluster. Type the following to add the machine config to the cluster:
$ oc create -f 80-extensions.yaml
This sets all worker nodes to have rpm packages for usbguard installed.

Check that the extensions were applied:

$ oc get machineconfig 80-worker-extensions

Example output

NAME                   GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
80-worker-extensions                           3.2.0             57s
Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied:
$ oc get machineconfigpool
Example output
NAME     CONFIG               UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-35   True      False      False      3              3                   3                     0                      34m
worker   rendered-worker-d8   False     True       False      3              1                   1                     0                      34m
Check the extensions. To check that the extension was applied, run:
$ oc get node | grep worker
Example output
NAME                                       STATUS   ROLES    AGE    VERSION
ip-10-0-169-2.us-east-2.compute.internal   Ready    worker   102m   v1.30.3
$ oc debug node/ip-10-0-169-2.us-east-2.compute.internal
Example output
...
To use host binaries, run `chroot /host`
sh-4.4# chroot /host
sh-4.4# rpm -q usbguard
usbguard-0.7.4-4.el8.x86_64.rpm
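Note that the extension installs the usbguard packages but does not start the service. If you want usbguard running, one approach is to enable the service with an additional machine config. The following is a sketch, not part of the documented procedure, and the machine config name 81-worker-enable-usbguard is an example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 81-worker-enable-usbguard
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: usbguard.service
        enabled: true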
2.8. Loading custom firmware blobs in the machine config manifest
Because the default location for firmware blobs in /usr/lib is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS.
Procedure
Create a Butane config file, 98-worker-firmware-blob.bu, that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware.

Note: See "Creating machine configs with Butane" for information about Butane.
Butane config file for custom firmware blob
variant: openshift
version: 4.17.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-worker-firmware-blob
storage:
  files:
  - path: /var/lib/firmware/<package_name> 1
    contents:
      local: <package_name> 2
    mode: 0644 3
openshift:
  kernel_arguments:
    - 'firmware_class.path=/var/lib/firmware' 4
1 Sets the path on the node where the firmware package is copied to.
2 Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step.
3 Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions.
4 The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path.
Run Butane to generate a MachineConfig object file, 98-worker-firmware-blob.yaml, that includes a copy of the firmware blob from your local workstation and the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located:

$ butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>
Apply the configurations to the nodes in one of two ways:

- If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:

$ oc apply -f 98-worker-firmware-blob.yaml
A MachineConfig object YAML file is created for you to finish configuring your machines.
Save the Butane config in case you need to update the MachineConfig object in the future.
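After the nodes reboot with the new kernel argument, you can verify that the kernel picked up the customized firmware search path. This is a hedged spot check; <node_name> is a placeholder:

$ oc debug node/<node_name> -- chroot /host cat /sys/module/firmware_class/parameters/path

The output should show /var/lib/firmware.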
2.9. Changing the core user password for node access
By default, Red Hat Enterprise Linux CoreOS (RHCOS) creates a user named core on the nodes in your cluster. You can use the core user to access the node through a cloud provider serial console or a bare metal baseboard management controller (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the oc debug node command. However, by default, there is no password for this user, so you cannot log in without creating one.
You can create a password for the core user by using a machine config. The Machine Config Operator (MCO) assigns the password and injects the password into the /etc/shadow file, allowing you to log in with the core user. The MCO does not examine the password hash. As such, the MCO cannot report if there is a problem with the password.
- The password works only through a cloud provider serial console or a BMC. It does not work with SSH.
- If you have a machine config that includes an /etc/shadow file or a systemd unit that sets a password, it takes precedence over the password hash.
You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account.
Procedure
Using a tool that is supported by your operating system, create a hashed password. For example, create a hashed password using mkpasswd by running the following command:

$ mkpasswd -m SHA-512 testpass

Example output

$6$CBZwA6s6AVFOtiZe$aUKDWpthhJEyR3nnhM02NM1sKCpHn9XN.NPrJNQ3HYewioaorpwL3mKGLxvW0AOb4pJxqoqP4nFX77y0p00.8.
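If mkpasswd is not available on your workstation, a SHA-512 hash in the same format can also be generated with OpenSSL 1.1.1 or later. This is an alternative sketch, not the documented tool:

$ openssl passwd -6 testpass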
Create a machine config file that contains the core username and the hashed password:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: set-core-user-password
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core 1
        passwordHash: <password> 2

1 The user name must be core.
2 The hashed password that you created for the core account.
Create the machine config by running the following command:
$ oc create -f <file-name>.yaml
The nodes do not reboot and should become available in a few moments. You can use the oc get mcp command to watch for the machine config pools to be updated, as shown in the following example:

NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d686a3ffc8fdec47280afec446fce8dd   True      False      False      3              3                   3                     0                      64m
worker   rendered-worker-4605605a5b1f9de1d061e9d350f251e5   False     True       False      3              0                   0                     0                      64m
Verification
After the nodes return to the UPDATED=True state, start a debug session for a node by running the following command:

$ oc debug node/<node_name>
Set /host as the root directory within the debug shell by running the following command:

sh-4.4# chroot /host
Check the contents of the /etc/shadow file:

Example output

...
core:$6$2sE/010goDuRSxxv$o18K52wor.wIwZp:19418:0:99999:7:::
...
The hashed password is assigned to the core user.