Chapter 4. Handling a machine configuration for hosted control planes
In a standalone OpenShift Container Platform cluster, a machine config pool manages a set of nodes. You can handle a machine configuration by using the MachineConfigPool custom resource (CR). You can reference any machineconfiguration.openshift.io resources in the nodepool.spec.config field of the NodePool CR.

In hosted control planes, the MachineConfigPool CR does not exist. Instead, a node pool contains a set of compute nodes, and you handle a machine configuration by using node pools.
4.1. Configuring node pools for hosted control planes
On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster.
Procedure
To create a MachineConfig object inside of a config map in the management cluster, enter the following information:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name>
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig_name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: ${PATH} # 1
```

1. Sets the path on the node where the MachineConfig object is stored.
After you add the object to the config map, you can apply the config map to the node pool as follows:
```terminal
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
```

```yaml
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> # 1
# ...
```

1. Replace <configmap_name> with the name of your config map.
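The source field in the MachineConfig example above is shown elided as `data:...`; Ignition expects an RFC 2397 data URL there, and the mode field holds the decimal form of an octal file mode (420 is 0644). As a rough illustration only, and not an official OpenShift helper, the following Python sketch builds such a data URL from file contents:

```python
import base64


def data_url(contents: str) -> str:
    """Build an RFC 2397 data URL of the kind that appears (elided as
    `data:...`) in the `source` field of an Ignition storage.files entry.
    Illustrative sketch only; not part of any OpenShift tooling."""
    encoded = base64.b64encode(contents.encode()).decode()
    return f"data:text/plain;charset=utf-8;base64,{encoded}"


url = data_url("key=value\n")
print(url)  # data:text/plain;charset=utf-8;base64,a2V5PXZhbHVlCg==
```

Decoding the base64 payload of the URL yields the original file contents, which is how the node reconstructs the file at the configured path.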
4.2. Referencing the kubelet configuration in node pools
To reference your kubelet configuration in node pools, you add the kubelet configuration in a config map and then apply the config map in the NodePool resource.
Procedure
Add the kubelet configuration inside of a config map in the management cluster by entering the following information:
Example ConfigMap object with the kubelet configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name> # 1
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: <kubeletconfig_name> # 2
    spec:
      kubeletConfig:
        registerWithTaints:
        - key: "example.sh/unregistered"
          value: "true"
          effect: "NoExecute"
```

1. Replace <configmap_name> with the name of your config map.
2. Replace <kubeletconfig_name> with the name of your kubelet configuration.

Apply the config map to the node pool by entering the following command:
```terminal
$ oc edit nodepool <nodepool_name> --namespace clusters
```

Replace <nodepool_name> with the name of your node pool.
Example NodePool resource configuration

```yaml
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> # 1
# ...
```

1. Replace <configmap_name> with the name of your config map.
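The registerWithTaints entry in the kubelet configuration above causes newly registered nodes to carry a NoExecute taint, so only pods that tolerate that taint are scheduled on (or remain on) those nodes. The following Python sketch illustrates a simplified version of Kubernetes taint-toleration matching; it is for explanation only and is not how the scheduler is implemented:

```python
def tolerates(toleration: dict, taint: dict) -> bool:
    """Simplified Kubernetes matching: an 'Equal' toleration must match the
    taint's key, value, and effect; an 'Exists' toleration ignores the value.
    An empty/absent effect on the toleration matches any taint effect."""
    if toleration.get("effect") not in ("", None, taint["effect"]):
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # 'Exists' with an empty key tolerates everything.
        return toleration.get("key") in (None, "", taint["key"])
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"])


taint = {"key": "example.sh/unregistered", "value": "true", "effect": "NoExecute"}
print(tolerates({"key": "example.sh/unregistered", "value": "true",
                 "effect": "NoExecute"}, taint))            # True
print(tolerates({"key": "other", "value": "true"}, taint))  # False
```

A pod without a matching toleration is evicted from a NoExecute-tainted node, which is why the taint keeps workloads off nodes until they are explicitly admitted.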
4.3. Configuring node tuning in a hosted cluster
To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools.
Procedure
Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-1
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: tuned-1
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Custom OpenShift profile
          include=openshift-node
          [sysctl]
          vm.dirty_ratio="55"
        name: tuned-1-profile
      recommend:
      - priority: 20
        profile: tuned-1-profile
```

Note: If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace.

Create the ConfigMap object in the management cluster:

```terminal
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
```

Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool, named nodepool-1, which contains 2 nodes.

```yaml
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  # ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  # ...
  tuningConfig:
  - name: tuned-1
status:
# ...
```

Note: You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster.
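The hash-suffix naming mentioned in the note can be sketched as follows. The exact hash function the Node Tuning Operator uses is not documented here, so this Python snippet (SHA-256 prefix, hypothetical helper name) only illustrates why hashing the node pool name and namespace yields distinct Tuned CR names per pool:

```python
import hashlib


def synced_tuned_name(base: str, nodepool_name: str, nodepool_namespace: str) -> str:
    """Illustrative only: append a short hash of the node pool's namespace
    and name so that the same config map referenced from two node pools
    produces distinct Tuned CR names in the hosted cluster. The Node Tuning
    Operator's actual hash function may differ."""
    digest = hashlib.sha256(
        f"{nodepool_namespace}/{nodepool_name}".encode()
    ).hexdigest()[:8]
    return f"{base}-{digest}"


a = synced_tuned_name("tuned-1", "nodepool-1", "clusters")
b = synced_tuned_name("tuned-1", "nodepool-2", "clusters")
print(a != b)  # True: same config map, distinct per-pool names
```

Because the suffix is derived from the pool identity, referencing one config map from several node pools cannot produce a name collision.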
Verification
Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool, the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node.

List the Tuned objects in the hosted cluster:

```terminal
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
```

Example output
```
NAME       AGE
default    7m36s
rendered   7m36s
tuned-1    65s
```

List the Profile objects in the hosted cluster:

```terminal
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
```

Example output

```
NAME                  TUNED             APPLIED   DEGRADED   AGE
nodepool-1-worker-1   tuned-1-profile   True      False      7m43s
nodepool-1-worker-2   tuned-1-profile   True      False      7m14s
```

Note: If no custom profiles are created, the openshift-node profile is applied by default.

To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:
```terminal
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
```

Example output

```
vm.dirty_ratio = 55
```
4.4. Deploying the SR-IOV Operator for hosted control planes
Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane.
Prerequisites
You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview).
Procedure
Create a namespace and an Operator group:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
```

Create a subscription to the SR-IOV Operator:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: stable
  name: sriov-network-operator
  config:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```
Verification
To verify that the SR-IOV Operator is ready, run the following command and view the resulting output:
```terminal
$ oc get csv -n openshift-sriov-network-operator
```

Example output

```
NAME                                         DISPLAY                   VERSION               REPLACES                                     PHASE
sriov-network-operator.4.14.0-202211021237   SR-IOV Network Operator   4.14.0-202211021237   sriov-network-operator.4.14.0-202210290517   Succeeded
```

To verify that the SR-IOV pods are deployed, run the following command:

```terminal
$ oc get pods -n openshift-sriov-network-operator
```
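When scripting this check, you can parse the tabular `oc get csv` output and confirm that every ClusterServiceVersion reports the Succeeded phase. This Python sketch (the helper name is hypothetical) assumes whitespace-separated columns with the phase in the last column, as in the sample output above:

```python
def csv_phases(oc_output: str) -> dict:
    """Parse `oc get csv` tabular output into {csv_name: phase}.
    Assumes whitespace-separated columns, name first, phase last."""
    result = {}
    for line in oc_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        result[fields[0]] = fields[-1]
    return result


sample = """NAME                                         DISPLAY                   VERSION               REPLACES                                     PHASE
sriov-network-operator.4.14.0-202211021237   SR-IOV Network Operator   4.14.0-202211021237   sriov-network-operator.4.14.0-202210290517   Succeeded"""
phases = csv_phases(sample)
print(all(p == "Succeeded" for p in phases.values()))  # True
```

Passing `-o jsonpath` or `-o json` to oc and parsing structured output is more robust in production scripts; the tabular parse above is only a quick illustration.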
4.5. Configuring the NTP server for hosted clusters
You can configure the Network Time Protocol (NTP) server for your hosted clusters by using Butane.
Procedure
Create a Butane config file, 99-worker-chrony.bu, that includes the contents of the chrony.conf file. For more information about Butane, see "Creating machine configs with Butane".

Example 99-worker-chrony.bu configuration

```yaml
variant: openshift
version: 4.14.0
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 # 1
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst # 2
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
```

1. Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode field is converted to a decimal value.
2. Specify any valid, reachable time source, such as the one provided by your Dynamic Host Configuration Protocol (DHCP) server.
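The octal-to-decimal conversion that callout 1 describes is ordinary base-8 arithmetic, as this small Python check illustrates:

```python
# The octal mode written in the Butane file (0644) appears as a
# decimal value (420) in the rendered MachineConfig.
octal_mode = "0644"
decimal_mode = int(octal_mode, 8)  # interpret the string in base 8
print(decimal_mode)  # 420
```

So when you see mode: 420 in a generated MachineConfig, it is the same permission set as 0644 (owner read/write, group and others read).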
Note: For machine-to-machine communication, NTP uses User Datagram Protocol (UDP) port 123. If you configured an external NTP time server, you must open UDP port 123.

Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml, that contains a configuration that Butane sends to the nodes. Run the following command:

```terminal
$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
```

Example 99-worker-chrony.yaml configuration

```yaml
# Generated by Butane; do not edit
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: <machineconfig_name>
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:...
        mode: 420
        overwrite: true
        path: /example/path
```

Add the contents of the 99-worker-chrony.yaml file inside of a config map in the management cluster:

Example config map
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name>
  namespace: <namespace> # 1
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig_name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: /example/path
# ...
```

1. Replace <namespace> with the name of the namespace where you created the node pool, such as clusters.
Apply the config map to your node pool by running the following command:
```terminal
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
```

Example NodePool configuration

```yaml
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: <configmap_name> # 1
# ...
```

1. Replace <configmap_name> with the name of your config map.
Add the list of your NTP servers in the infra-env.yaml file, which defines the InfraEnv custom resource (CR):

Example infra-env.yaml file

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
# ...
spec:
  additionalNTPSources:
  - <ntp_server> # 1
  - <ntp_server1>
  - <ntp_server2>
# ...
```

1. Replace <ntp_server> with the name of your NTP server. For more details about creating a host inventory and the InfraEnv CR, see "Creating a host inventory".
Apply the InfraEnv CR by running the following command:

```terminal
$ oc apply -f infra-env.yaml
```
Verification
Check the following fields to know the status of your host inventory:
- conditions: The standard Kubernetes conditions indicating if the image was created successfully.
- isoDownloadURL: The URL to download the Discovery Image.
- createdTime: The time at which the image was last created. If you modify the InfraEnv CR, ensure that you have updated the timestamp before downloading a new image.

Verify that your host inventory is created by running the following command:

```terminal
$ oc describe infraenv <infraenv_resource_name> -n <infraenv_namespace>
```

Note: If you modify the InfraEnv CR, confirm that the InfraEnv CR has created a new Discovery Image by looking at the createdTime field. If you already booted hosts, boot them again with the latest Discovery Image.
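The createdTime check described above amounts to a timestamp comparison: the image is only fresh if it was created after your last modification to the InfraEnv CR. A minimal Python sketch of that comparison, assuming RFC 3339 / ISO 8601 timestamps of the kind Kubernetes resources carry:

```python
from datetime import datetime


def image_is_fresh(created_time: str, modified_time: str) -> bool:
    """Return True if the Discovery Image was (re)created at or after the
    time the InfraEnv CR was last modified. Timestamps are assumed to be
    RFC 3339 strings such as "2024-01-15T10:30:00Z" (dates hypothetical)."""
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return parse(created_time) >= parse(modified_time)


print(image_is_fresh("2024-01-15T10:31:00Z", "2024-01-15T10:30:00Z"))  # True
print(image_is_fresh("2024-01-15T10:29:00Z", "2024-01-15T10:30:00Z"))  # False
```

If the comparison fails, wait for the operator to regenerate the image (the createdTime field updates) before downloading it or rebooting hosts.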