Chapter 17. Deploying distributed units manually on single-node OpenShift
The procedures in this topic describe how to manually deploy a cluster on a small number of single nodes as a distributed unit (DU) during installation.
The procedures do not describe how to install single-node OpenShift. This can be accomplished through many mechanisms. Rather, they are intended to capture the elements that should be configured as part of the installation process:
- Networking is needed to enable connectivity to the single-node OpenShift DU when the installation is complete.
- Workload partitioning, which can only be configured during installation.
- Additional items that help minimize the potential reboots post installation.
17.1. Configuring the distributed units (DUs)
This section describes a set of configurations for an OpenShift Container Platform cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation and other configurations can be applied post-install.
After you have installed the single-node OpenShift DU, apply the configurations in this section to enable the platform to carry a DU workload.
17.1.1. Enabling workload partitioning
A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads. You must configure workload partitioning at cluster installation time.
You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile, and in the related cpuset value in the MachineConfig custom resource (CR).
Procedure
The base64-encoded content below contains the CPU set that the management workloads are constrained to. This content must be adjusted to match the set specified in the performance profile and must be accurate for the number of cores on the cluster.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo=
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
        user:
          name: root
      - contents:
          source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg==
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
        user:
          name: root

The contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this:

[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
[crio.runtime.workloads.management.resources]
cpushares = 0
cpuset = "0-1, 52-53" 1

1. The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads of each core. The value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile.

This content should be base64 encoded and provided in the 01-workload-partitioning-content in the manifest above.
The contents of /etc/kubernetes/openshift-workload-pinning should look like this:

{
  "management": {
    "cpuset": "0-1,52-53" 1
  }
}

1. The cpuset value must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning.
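To regenerate the encoded payloads after adjusting the CPU sets, you can base64-encode the two files. The following is a minimal sketch, assuming you saved the fragments above as local files named 01-workload-partitioning and openshift-workload-pinning (illustrative names, not part of the official procedure):

$ base64 -w 0 01-workload-partitioning    # paste the output into the first source: data:...;base64, field
$ base64 -w 0 openshift-workload-pinning  # paste the output into the second source: data:...;base64, field

You can sanity-check an existing payload by piping the encoded string through base64 -d.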
17.1.2. Configuring the container mount namespace
To reduce the overall management footprint of the platform, a machine configuration is provided that places the container-specific mounts used by kubelet and CRI-O in a mount namespace separate from the host. No configuration changes are needed. Use the provided settings:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: container-mount-namespace-and-kubelet-conf-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
        mode: 493
        path: /usr/local/bin/extractExecStart
      - contents:
          source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
        mode: 493
        path: /usr/local/bin/nsenterCmns
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts
          [Service]
          Type=oneshot
          RemainAfterExit=yes
          RuntimeDirectory=container-mount-namespace
          Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
          Environment=BIND_POINT=%t/container-mount-namespace/mnt
          ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
          ExecStartPre=touch ${BIND_POINT}
          ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
          ExecStop=umount -R ${RUNTIME_DIRECTORY}
        enabled: true
        name: container-mount-namespace.service
      - dropins:
        - contents: |
            [Unit]
            Wants=container-mount-namespace.service
            After=container-mount-namespace.service
            [Service]
            ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
            EnvironmentFile=-/%t/%N-execstart.env
            ExecStart=
            ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                      ${ORIG_EXECSTART}"
          name: 90-container-mount-namespace.conf
        name: crio.service
      - dropins:
        - contents: |
            [Unit]
            Wants=container-mount-namespace.service
            After=container-mount-namespace.service
            [Service]
            ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
            EnvironmentFile=-/%t/%N-execstart.env
            ExecStart=
            ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                      ${ORIG_EXECSTART} --housekeeping-interval=30s"
          name: 90-container-mount-namespace.conf
        - contents: |
            [Service]
            Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
            Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
          name: 30-kubelet-interval-tuning.conf
        name: kubelet.service
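After the node reboots with this configuration, you can check that the shared mount namespace service is active. The following is a minimal verification sketch, not part of the official procedure; <node_name> is a placeholder for your single node:

$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# systemctl status container-mount-namespace.service

If the service is active, the crio and kubelet drop-ins above should be entering the shared namespace through nsenterCmns.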
17.1.3. Enabling Stream Control Transmission Protocol (SCTP)
SCTP is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol.
Procedure
No configuration changes are needed. Use the provided settings:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:,
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/modprobe.d/sctp-blacklist.conf
      - contents:
          source: data:text/plain;charset=utf-8,sctp
        filesystem: root
        mode: 420
        path: /etc/modules-load.d/sctp-load.conf
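To confirm that the module is available after the node reboots, you can probe for it from a debug shell. A minimal sketch, not part of the official procedure; <node_name> is a placeholder:

$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# modprobe sctp
sh-4.4# lsmod | grep sctp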
17.1.4. Creating OperatorGroups for Operators
This configuration enables the addition of the Operators needed to configure the platform post-installation. It adds the Namespace and OperatorGroup objects for the Local Storage Operator, Logging Operator, Performance Addon Operator, PTP Operator, and SRIOV Network Operator.
Procedure
No configuration changes are needed. Use the provided settings:
Local Storage Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage

Logging Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging

Performance Addon Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-performance-addon-operator
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon-operator

PTP Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-ptp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
spec:
  targetNamespaces:
  - openshift-ptp

SRIOV Network Operator

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
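If you save the objects above to a single file, you can apply them together and confirm that the namespaces and OperatorGroups exist. A sketch, with an illustrative file name:

$ oc apply -f operator-groups.yaml
$ oc get operatorgroups -A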
17.1.5. Subscribing to the Operators
The subscription provides the location to download the Operators needed for platform configuration.
Procedure
Use the following example to configure the subscription:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable" 1
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual 2
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "stable" 3
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon-operator
spec:
  channel: "4.10" 4
  name: performance-addon-operator
  source: performance-addon-operator
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
spec:
  channel: "stable" 5
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "stable" 6
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual

1. Specify the channel to get the cluster-logging Operator.
2. Specify Manual or Automatic. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved.
3. Specify the channel to get the local-storage-operator Operator.
4. Specify the channel to get the performance-addon-operator Operator.
5. Specify the channel to get the ptp-operator Operator.
6. Specify the channel to get the sriov-network-operator Operator.
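Because the subscriptions use installPlanApproval: Manual, each Operator installs only after you approve its install plan. A sketch of approving one plan, using the openshift-logging namespace as an example; <install_plan_name> is a placeholder for the name returned by the first command:

$ oc get installplan -n openshift-logging
$ oc patch installplan <install_plan_name> -n openshift-logging \
    --type merge --patch '{"spec":{"approved":true}}'
$ oc get csv -n openshift-logging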
17.1.6. Configuring logging locally and forwarding
To be able to debug a single-node distributed unit (DU), you need to store logs for further analysis.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging 1
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  managementState: Managed
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder 2
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - infrastructure: {}
  outputs:
  - name: kafka-open
    type: kafka
    url: tcp://10.46.55.190:9092/test 3
  pipelines:
  - inputRefs:
    - audit
    name: audit-logs
    outputRefs:
    - kafka-open
  - inputRefs:
    - infrastructure
    name: infrastructure-logs
    outputRefs:
    - kafka-open

1. Configures local log collection for the cluster.
2. Forwards the collected logs to an external destination.
3. The URL and topic of the Kafka broker that receives the forwarded logs. Adjust this value to match your environment.
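After the CRs are applied, you can check that the collector pods are scheduled and that the forwarder configuration was accepted. A minimal sketch, not part of the official procedure:

$ oc get pods -n openshift-logging
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml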
17.1.7. Configuring the Performance Addon Operator
This is a key configuration for the single-node distributed unit (DU). Many of the real-time capabilities and service-assurance features are configured here.
Procedure
Configure the performance addons using the following example:
Recommended performance profile configuration
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile 1
spec:
  additionalKernelArgs:
  - "idle=poll"
  - "rcupdate.rcu_normal_after_boot=0"
  cpu:
    isolated: 2-51,54-103 2
    reserved: 0-1,52-53 3
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - count: 32 4
      size: 1G 5
      node: 0 6
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  net:
    userLevelNetworking: true 7
  nodeSelector:
    node-role.kubernetes.io/master: ''
  numa:
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: true 8

1. Ensure that the value for name matches that specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml.
2. Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.
3. Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.
4. Set the number of huge pages.
5. Set the huge page size.
6. Set node to the NUMA node where the hugepages are allocated.
7. Set userLevelNetworking to true to isolate the CPUs from networking interrupts.
8. Set enabled to true to install the real-time Linux kernel.
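After the profile is applied and the node reboots, you can spot-check the result from a debug shell. A sketch, not part of the official procedure; <node_name> is a placeholder:

$ oc get performanceprofile openshift-node-performance-profile -o yaml
$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# uname -r           # expect a real-time (rt) kernel version string
sh-4.4# cat /proc/cmdline  # expect the additional kernel arguments, for example idle=poll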
17.1.8. Configuring Precision Time Protocol (PTP)
At the far edge, the RAN uses PTP to synchronize the systems.
Procedure
Configure PTP using the following example:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: du-ptp-slave
  namespace: openshift-ptp
spec:
  profile:
  - interface: ens5f0 1
    name: slave
    phc2sysOpts: -a -r -n 24
    ptp4lConf: |
      [global]
      #
      # Default Data Set
      #
      twoStepFlag 1
      slaveOnly 0
      priority1 128
      priority2 128
      domainNumber 24
      #utc_offset 37
      clockClass 248
      clockAccuracy 0xFE
      offsetScaledLogVariance 0xFFFF
      free_running 0
      freq_est_interval 1
      dscp_event 0
      dscp_general 0
      dataset_comparison ieee1588
      G.8275.defaultDS.localPriority 128
      #
      # Port Data Set
      #
      logAnnounceInterval -3
      logSyncInterval -4
      logMinDelayReqInterval -4
      logMinPdelayReqInterval -4
      announceReceiptTimeout 3
      syncReceiptTimeout 0
      delayAsymmetry 0
      fault_reset_interval 4
      neighborPropDelayThresh 20000000
      masterOnly 0
      G.8275.portDS.localPriority 128
      #
      # Run time options
      #
      assume_two_step 0
      logging_level 6
      path_trace_enabled 0
      follow_up_info 0
      hybrid_e2e 0
      inhibit_multicast_service 0
      net_sync_monitor 0
      tc_spanning_tree 0
      tx_timestamp_timeout 50
      unicast_listen 0
      unicast_master_table 0
      unicast_req_duration 3600
      use_syslog 1
      verbose 0
      summary_interval 0
      kernel_leap 1
      check_fup_sync 0
      #
      # Servo Options
      #
      pi_proportional_const 0.0
      pi_integral_const 0.0
      pi_proportional_scale 0.0
      pi_proportional_exponent -0.3
      pi_proportional_norm_max 0.7
      pi_integral_scale 0.0
      pi_integral_exponent 0.4
      pi_integral_norm_max 0.3
      step_threshold 2.0
      first_step_threshold 0.00002
      max_frequency 900000000
      clock_servo pi
      sanity_freq_limit 200000000
      ntpshm_segment 0
      #
      # Transport options
      #
      transportSpecific 0x0
      ptp_dst_mac 01:1B:19:00:00:00
      p2p_dst_mac 01:80:C2:00:00:0E
      udp_ttl 1
      udp6_scope 0x0E
      uds_address /var/run/ptp4l
      #
      # Default interface options
      #
      clock_type OC
      network_transport UDPv4
      delay_mechanism E2E
      time_stamping hardware
      tsproc_mode filter
      delay_filter moving_median
      delay_filter_length 10
      egressLatency 0
      ingressLatency 0
      boundary_clock_jbod 0
      #
      # Clock description
      #
      productDescription ;;
      revisionData ;;
      manufacturerIdentity 00:00:00
      userDescription ;
      timeSource 0xA0
    ptp4lOpts: -2 -s --summary_interval -4
  recommend:
  - match:
    - nodeLabel: node-role.kubernetes.io/master
    priority: 4
    profile: slave

1. Sets the interface used for PTP.
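You can confirm that the slave profile is in effect by checking the linuxptp daemon logs for the configured interface. A sketch; replace the pod name placeholder with the linuxptp-daemon pod running on the node, and note that the container name below is an assumption based on the PTP Operator's usual naming:

$ oc get pods -n openshift-ptp
$ oc logs <linuxptp_daemon_pod> -n openshift-ptp -c linuxptp-daemon-container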
17.1.9. Disabling Network Time Protocol (NTP)
After the system is configured for Precision Time Protocol (PTP), you need to disable NTP to prevent it from impacting the system clock.
Procedure
No configuration changes are needed. Use the provided settings:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: disable-chronyd
spec:
  config:
    systemd:
      units:
      - contents: |
          [Unit]
          Description=NTP client/server
          Documentation=man:chronyd(8) man:chrony.conf(5)
          After=ntpdate.service sntp.service ntpd.service
          Conflicts=ntpd.service systemd-timesyncd.service
          ConditionCapability=CAP_SYS_TIME
          [Service]
          Type=forking
          PIDFile=/run/chrony/chronyd.pid
          EnvironmentFile=-/etc/sysconfig/chronyd
          ExecStart=/usr/sbin/chronyd $OPTIONS
          ExecStartPost=/usr/libexec/chrony-helper update-daemon
          PrivateTmp=yes
          ProtectHome=yes
          ProtectSystem=full
          [Install]
          WantedBy=multi-user.target
        enabled: false
        name: chronyd.service
    ignition:
      version: 2.2.0
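You can verify that chronyd is no longer running on the node once the MachineConfig is applied. A minimal sketch, not part of the official procedure; <node_name> is a placeholder:

$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# systemctl status chronyd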
17.1.10. Configuring single root I/O virtualization (SR-IOV)
SR-IOV is commonly used to enable the fronthaul and the midhaul networks.
Procedure
Use the following configuration to configure SR-IOV on a single-node distributed unit (DU). Note that the first custom resource (CR) is required. The CRs that follow are examples.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/master: ""
  disableDrain: true
  enableInjector: true
  enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-mh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_mh
  vlan: 150 1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-mh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci 2
  isRdma: false
  nicSelector:
    pfNames:
    - ens7f0 3
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 4
  priority: 10
  resourceName: du_mh
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-fh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_fh
  vlan: 140 5
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-fh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice 6
  isRdma: true
  nicSelector:
    pfNames:
    - ens5f0 7
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 8
  priority: 10
  resourceName: du_fh

1. Specifies the VLAN for the midhaul network.
2. Select either vfio-pci or netdevice, as needed.
3. Specifies the interface connected to the midhaul network.
4. Specifies the number of VFs for the midhaul network.
5. Specifies the VLAN for the fronthaul network.
6. Select either vfio-pci or netdevice, as needed.
7. Specifies the interface connected to the fronthaul network.
8. Specifies the number of VFs for the fronthaul network.
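After the node policies are applied, the Operator reports the rendered virtual functions in the node state, and the SriovNetwork objects render network attachment definitions. A verification sketch, not part of the official procedure:

$ oc get sriovnetworknodestates -n openshift-sriov-network-operator -o yaml
$ oc get network-attachment-definitions -n openshift-sriov-network-operator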
17.1.11. Disabling the console Operator
The console-operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and removing it frees resources for application workloads.
Procedure
You can disable the Operator using the following configuration file. No configuration changes are needed. Use the provided settings:
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "false"
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal
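With managementState set to Removed, the console deployments should be removed from the cluster. A verification sketch, assuming the web console normally runs in the openshift-console namespace:

$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
$ oc get pods -n openshift-console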
17.2. Applying the distributed unit (DU) configuration to a single-node OpenShift cluster
Perform the following tasks to configure a single-node cluster for a DU:
- Apply the required extra installation manifests at installation time.
- Apply the post-install configuration custom resources (CRs).
17.2.1. Applying the extra installation manifests
To apply the distributed unit (DU) configuration to the single-node cluster, the following extra installation manifests need to be included during installation:
- Enable workload partitioning.
- Other MachineConfig objects: a set of MachineConfig custom resources (CRs) is included by default. You can choose to include additional MachineConfig CRs that are unique to your environment. It is recommended, but not required, to apply these CRs during installation to minimize the number of reboots that can occur during post-install configuration.
17.2.2. Applying the post-install configuration custom resources (CRs)
- After OpenShift Container Platform is installed on the cluster, use the following command to apply the CRs you configured for the distributed units (DUs):
$ oc apply -f <file_name>.yaml
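If you keep all of the post-install CRs in one directory, you can apply them in a single pass; a sketch that assumes an illustrative directory name of du-config/:

$ oc apply -f du-config/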