Chapter 10. Handling machine configuration for hosted control planes
In a standalone OpenShift Container Platform cluster, a machine config pool manages a set of nodes, and you handle machine configuration by using the MachineConfigPool custom resource (CR).
In hosted control planes, the MachineConfigPool CR does not exist. Instead, a node pool contains a set of compute nodes, and you handle machine configuration by using node pools. You can reference any machineconfiguration.openshift.io resource in the nodepool.spec.config field of the NodePool CR.
In OpenShift Container Platform 4.18 and later, the default container runtime for worker nodes changed from runC to crun.
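Because any machineconfiguration.openshift.io resource can be referenced this way, you can, for example, embed a ContainerRuntimeConfig object in a config map to select the container runtime for a node pool. The following is a minimal sketch only: the config map and object names are placeholders, and the defaultRuntime value of runc is illustrative; verify the supported values for your release.

apiVersion: v1
kind: ConfigMap
metadata:
  name: <runtime_configmap_name>
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: ContainerRuntimeConfig
    metadata:
      name: <containerruntimeconfig_name>
    spec:
      containerRuntimeConfig:
        defaultRuntime: runc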
10.1. Configuring node pools for hosted control planes
On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster.
Procedure
To create a MachineConfig object inside of a config map in the management cluster, enter the following information:

apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name>
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig_name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: ${PATH} 1

1 - Sets the path on the node where the MachineConfig object is stored.
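For example, assuming that you saved the manifest to a file named machineconfig-configmap.yaml (an illustrative name), you can create the config map in the management cluster by running the following command:

$ oc apply -f machineconfig-configmap.yaml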
After you add the object to the config map, you can apply the config map to the node pool as follows:
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> 1
# ...
1 - Replace <configmap_name> with the name of your config map.
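If you prefer a non-interactive update, you can make the same change with oc patch. This is a sketch with placeholder names; note that a merge patch replaces the entire spec.config list, so include every config map that the node pool must reference:

$ oc patch nodepool <nodepool_name> -n <hosted_cluster_namespace> --type merge \
    -p '{"spec":{"config":[{"name":"<configmap_name>"}]}}'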
10.2. Referencing the kubelet configuration in node pools
To reference your kubelet configuration in node pools, you add the kubelet configuration in a config map and then apply the config map in the NodePool resource.
Procedure
Add the kubelet configuration inside of a config map in the management cluster by entering the following information:
Example ConfigMap object with the kubelet configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name> 1
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: <kubeletconfig_name> 2
    spec:
      kubeletConfig:
        registerWithTaints:
        - key: "example.sh/unregistered"
          value: "true"
          effect: "NoExecute"

1 - Replace <configmap_name> with the name of your config map.
2 - Replace <kubeletconfig_name> with the name of your KubeletConfig object.
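After you save the manifest, you can create the config map in the management cluster, for example (the file name is illustrative):

$ oc apply -f kubelet-configmap.yaml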
Apply the config map to the node pool by entering the following command:
$ oc edit nodepool <nodepool_name> --namespace clusters 1
1 - Replace <nodepool_name> with the name of your node pool.
Example NodePool resource configuration

apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> 1
# ...
1 - Replace <configmap_name> with the name of your config map.
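To confirm that the kubelet configuration was applied, you can check the taints that are registered on the hosted cluster nodes. This is a sketch; $HC_KUBECONFIG refers to the kubeconfig of the hosted cluster, as in the node tuning example later in this chapter:

$ oc --kubeconfig="$HC_KUBECONFIG" get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'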
10.3. Configuring node tuning in a hosted cluster
To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools.
Procedure
Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-1
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: tuned-1
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Custom OpenShift profile
          include=openshift-node
          [sysctl]
          vm.dirty_ratio="55"
        name: tuned-1-profile
      recommend:
      - priority: 20
        profile: tuned-1-profile
Note
If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace.

Create the ConfigMap object in the management cluster:

$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool, named nodepool-1, which contains 2 nodes.

apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  ...
  name: nodepool-1
  namespace: clusters
  ...
spec:
  ...
  tuningConfig:
  - name: tuned-1
status:
  ...
Note
You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster.
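For example, to add the spec.tuningConfig entry by editing an existing node pool on the management cluster, you can run a command such as the following, where the names match the example above:

$ oc --kubeconfig="$MGMT_KUBECONFIG" edit nodepool nodepool-1 -n clusters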
Verification
Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool, the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node.
List the Tuned objects in the hosted cluster:

$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io \
    -n openshift-cluster-node-tuning-operator
Example output
NAME       AGE
default    7m36s
rendered   7m36s
tuned-1    65s
List the Profile objects in the hosted cluster:

$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io \
    -n openshift-cluster-node-tuning-operator
Example output
NAME                  TUNED             APPLIED   DEGRADED   AGE
nodepool-1-worker-1   tuned-1-profile   True      False      7m43s
nodepool-1-worker-2   tuned-1-profile   True      False      7m14s
Note
If no custom profiles are created, the openshift-node profile is applied by default.

To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:

$ oc --kubeconfig="$HC_KUBECONFIG" \
    debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
Example output
vm.dirty_ratio = 55
10.4. Deploying the SR-IOV Operator for hosted control planes
After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane.
Prerequisites
You must configure and deploy the hosted cluster on AWS.
Procedure
Create a namespace and an Operator group:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
Create a subscription to the SR-IOV Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: stable
  name: sriov-network-operator
  config:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
  source: redhat-operators
  sourceNamespace: openshift-marketplace
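Assuming that you saved the preceding manifests to files named sriov-namespace.yaml and sriov-subscription.yaml (illustrative names), create the resources in the hosted cluster by running the following commands:

$ oc apply -f sriov-namespace.yaml
$ oc apply -f sriov-subscription.yaml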
Verification
To verify that the SR-IOV Operator is ready, run the following command and view the resulting output:
$ oc get csv -n openshift-sriov-network-operator
Example output
NAME                                         DISPLAY                   VERSION               REPLACES                                     PHASE
sriov-network-operator.4.18.0-202211021237   SR-IOV Network Operator   4.18.0-202211021237   sriov-network-operator.4.18.0-202210290517   Succeeded
To verify that the SR-IOV pods are deployed, run the following command:
$ oc get pods -n openshift-sriov-network-operator
10.5. Configuring the NTP server for hosted clusters
You can configure the Network Time Protocol (NTP) server for your hosted clusters by using Butane.
Procedure
Create a Butane config file, 99-worker-chrony.bu, that includes the contents of the chrony.conf file. For more information about Butane, see "Creating machine configs with Butane".

Example 99-worker-chrony.bu configuration

# ...
variant: openshift
version: 4.18.0
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 1
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst 2
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
# ...
1 - Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode field is converted to a decimal value.
2 - Specify any valid, reachable time source, such as the one provided by your Dynamic Host Configuration Protocol (DHCP) server.
Note
For machine-to-machine communication, NTP uses User Datagram Protocol (UDP) port 123. If you configured an external NTP time server, you must open UDP port 123.

Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml, that contains the configuration to be delivered to the nodes. Run the following command:

$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
Example 99-worker-chrony.yaml configuration

# Generated by Butane; do not edit
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: <machineconfig_name>
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:...
        mode: 420
        overwrite: true
        path: /example/path
Add the contents of the 99-worker-chrony.yaml file inside of a config map in the management cluster:

Example config map

apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name>
  namespace: <namespace> 1
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig_name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: /example/path
# ...
1 - Replace <namespace> with the name of the namespace where you created the node pool, such as clusters.
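For example, assuming that you saved the preceding config map manifest to a file named 99-worker-chrony-configmap.yaml (an illustrative name), create it in the management cluster:

$ oc apply -f 99-worker-chrony-configmap.yaml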
Apply the config map to your node pool by running the following command:
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
Example NodePool configuration

apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> 1
# ...
1 - Replace <configmap_name> with the name of your config map.
Add the list of your NTP servers in the infra-env.yaml file, which defines the InfraEnv custom resource (CR):

Example infra-env.yaml file

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
# ...
spec:
  additionalNTPSources:
  - <ntp_server> 1
  - <ntp_server1>
  - <ntp_server2>
# ...
1 - Replace <ntp_server> with the name of your NTP server. For more details about creating a host inventory and the InfraEnv CR, see "Creating a host inventory".
Apply the InfraEnv CR by running the following command:

$ oc apply -f infra-env.yaml
Verification
Check the following fields to determine the status of your host inventory:

- conditions: The standard Kubernetes conditions indicating whether the image was created successfully.
- isoDownloadURL: The URL to download the Discovery Image.
- createdTime: The time at which the image was last created. If you modify the InfraEnv CR, ensure that you have updated the timestamp before downloading a new image.

Verify that your host inventory is created by running the following command:
$ oc describe infraenv <infraenv_resource_name> -n <infraenv_namespace>
Note
If you modify the InfraEnv CR, confirm that the InfraEnv CR has created a new Discovery Image by looking at the createdTime field. If you already booted hosts, boot them again with the latest Discovery Image.
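For example, to retrieve only the Discovery Image URL from the host inventory status, you can query the isoDownloadURL field directly. The resource name and namespace are placeholders:

$ oc get infraenv <infraenv_resource_name> -n <infraenv_namespace> \
    -o jsonpath='{.status.isoDownloadURL}'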