Chapter 17. Deploying distributed units manually on single-node OpenShift
The procedures in this topic tell you how to manually deploy clusters on a small number of single nodes as a distributed unit (DU) during installation.
The procedures do not describe how to install single-node OpenShift. This can be accomplished through many mechanisms. Rather, they are intended to capture the elements that should be configured as part of the installation process:
- Networking is needed to enable connectivity to the single-node OpenShift DU when the installation is complete.
- Workload partitioning, which can only be configured during installation.
- Additional items that help minimize potential reboots after installation.
17.1. Configuring the distributed units (DUs)
This section describes a set of configurations for an OpenShift Container Platform cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation; the remaining configurations are applied to the cluster after installation to enable the platform to carry a DU workload.
17.1.1. Enabling workload partitioning
A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads. You must configure workload partitioning at cluster installation time.
You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile and the related cpuset value in the MachineConfig custom resource (CR).
Procedure
The base64-encoded content below contains the CPU set that the management workloads are constrained to. This content must be adjusted to match the CPU set specified in the performance profile and must be accurate for the number of cores on the cluster.
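A minimal sketch of the MachineConfig CR, assuming the reference name 02-master-workload-partitioning and the master role used elsewhere in this chapter; the base64 payloads are placeholders that encode the two files shown in the following steps:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          # Base64 encoding of the CRI-O drop-in shown in the next step
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_crio_dropin>
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
        user:
          name: root
      - contents:
          # Base64 encoding of the pinning file shown in the final step
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_pinning_file>
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
        user:
          name: root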
The contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this.
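The following reconstruction is a sketch that assumes the reference activation annotation and an example reserved CPU set of 0-1,52-53:

[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
[crio.runtime.workloads.management.resources]
cpushares = 0
cpuset = "0-1,52-53"  # 1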
1. The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads of each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile.
This content should be base64 encoded and provided in the 01-workload-partitioning-content in the manifest above.
The contents of /etc/kubernetes/openshift-workload-pinning should look like this:

{
  "management": {
    "cpuset": "0-1,52-53"
  }
}
The cpuset must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning.
17.1.2. Configuring the container mount namespace
To reduce the overall management footprint of the platform, a machine configuration is provided to contain the mount points. No configuration changes are needed. Use the provided settings:
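A condensed sketch of the MachineConfig CR, following the reference container-mount-namespace-and-kubelet-conf-master manifest; the helper-script file entries and the crio.service and kubelet.service drop-ins that run both daemons inside the shared namespace are elided here:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: container-mount-namespace-and-kubelet-conf-master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Manages a mount namespace that both kubelet and CRI-O can use to share their container-specific mounts
          [Service]
          Type=oneshot
          RemainAfterExit=yes
          RuntimeDirectory=container-mount-namespace
          Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
          Environment=BIND_POINT=%t/container-mount-namespace/mnt
          ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
          ExecStartPre=touch ${BIND_POINT}
          ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
          ExecStop=umount -R ${RUNTIME_DIRECTORY}
        enabled: true
        name: container-mount-namespace.service
      # Drop-ins for crio.service and kubelet.service are part of the full manifest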
17.1.3. Enabling Stream Control Transmission Protocol (SCTP)
SCTP is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol.
Procedure
No configuration changes are needed. Use the provided settings:
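A sketch of the MachineConfig object, following the reference load-sctp-module manifest:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            # Empty file that overrides any blacklisting of the sctp module
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            # Loads the sctp kernel module at boot
            source: data:text/plain;charset=utf-8,sctp
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf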
17.1.4. Creating OperatorGroups for Operators
This configuration enables adding the Operators needed to configure the platform post-installation. It adds the Namespace and OperatorGroup objects for the Local Storage Operator, Logging Operator, Performance Addon Operator, PTP Operator, and SRIOV Network Operator.
Procedure
No configuration changes are needed. Use the provided settings:
Local Storage Operator
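A sketch of the Namespace and OperatorGroup objects, assuming the reference openshift-local-storage namespace; the workload.openshift.io/allowed annotation keeps the namespace eligible for workload partitioning:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage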
Logging Operator
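A sketch following the same pattern, assuming the reference openshift-logging namespace:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging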
Performance Addon Operator
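A sketch assuming the reference openshift-performance-addon-operator namespace:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-performance-addon-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon-operator
spec:
  targetNamespaces:
    - openshift-performance-addon-operator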
PTP Operator
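A sketch assuming the reference openshift-ptp namespace:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-ptp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
spec:
  targetNamespaces:
    - openshift-ptp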
SRIOV Network Operator
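A sketch assuming the reference openshift-sriov-network-operator namespace:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    workload.openshift.io/allowed: management
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
    - openshift-sriov-network-operator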
17.1.5. Subscribing to the Operators
The subscription provides the location to download the Operators needed for platform configuration.
Procedure
Use the following example to configure the subscription:
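A sketch of the Subscription objects, assuming the namespaces created above and the redhat-operators catalog source; the channel values are placeholders that must match your release:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "<channel>" # 1
  installPlanApproval: Manual # 2
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "<channel>" # 3
  installPlanApproval: Manual
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon-operator
spec:
  channel: "<channel>" # 4
  installPlanApproval: Manual
  name: performance-addon-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
spec:
  channel: "<channel>" # 5
  installPlanApproval: Manual
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "<channel>" # 6
  installPlanApproval: Manual
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace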
1. Specify the channel to get the cluster-logging Operator.
2. Specify Manual or Automatic. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved.
3. Specify the channel to get the local-storage-operator Operator.
4. Specify the channel to get the performance-addon-operator Operator.
5. Specify the channel to get the ptp-operator Operator.
6. Specify the channel to get the sriov-network-operator Operator.
17.1.6. Configuring logging locally and forwarding
To debug a single-node distributed unit (DU), logs need to be stored for further analysis.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project.
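A sketch of the ClusterLogging CR, following the reference fluentd-based configuration:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  curation:
    curator:
      schedule: "30 3 * * *"
    type: curator
  managementState: Managed

To forward logs off the node for analysis, a ClusterLogForwarder CR can be added; the kafka output name and its URL below are illustrative placeholders:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: kafka-open
      type: kafka
      url: tcp://<remote_host>:<port>
  pipelines:
    - inputRefs:
        - audit
      name: audit-logs
      outputRefs:
        - kafka-open
    - inputRefs:
        - infrastructure
      name: infrastructure-logs
      outputRefs:
        - kafka-open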
17.1.7. Configuring the Performance Addon Operator
This is a key configuration for the single-node distributed unit (DU). Many of the real-time capabilities and service-assurance features are configured here.
Procedure
Configure the performance addons using the following example:
Recommended performance profile configuration
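A sketch of the profile, assuming the 104-thread example topology implied by the reserved set 0-1,52-53 used earlier in this chapter; adjust the CPU sets, huge page count, and NUMA node for your hardware:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile # 1
spec:
  additionalKernelArgs:
    - "idle=poll"
    - "rcupdate.rcu_normal_after_boot=0"
  cpu:
    isolated: 2-51,54-103 # 2
    reserved: 0-1,52-53 # 3
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 32 # 4
        size: 1G # 5
        node: 0 # 6
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  net:
    userLevelNetworking: true # 7
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: true # 8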
1. Ensure that the value for name matches that specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml.
2. Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.
3. Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.
4. Set the number of huge pages.
5. Set the huge page size.
6. Set node to the NUMA node where the hugepages are allocated.
7. Set userLevelNetworking to true to isolate the CPUs from networking interrupts.
8. Set enabled to true to install the real-time Linux kernel.
17.1.8. Configuring Precision Time Protocol (PTP)
In the far edge, the RAN uses PTP to synchronize the systems.
Procedure
Configure PTP using the following example:
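A sketch of the PtpConfig CR for an ordinary clock slave, assuming a placeholder interface name (ens5f0) and the reference profile options; the full ptp4l configuration is elided:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: du-ptp-slave
  namespace: openshift-ptp
spec:
  profile:
    - name: slave
      interface: ens5f0 # 1
      phc2sysOpts: -a -r -n 24
      ptp4lOpts: -2 -s --summary_interval -4
      ptp4lConf: |
        [global]
        # full reference ptp4l configuration elided
  recommend:
    - match:
        - nodeLabel: node-role.kubernetes.io/master
      priority: 4
      profile: slave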
1. Sets the interface used for PTP.
17.1.9. Disabling Network Time Protocol (NTP)
After the system is configured for Precision Time Protocol (PTP), you need to remove NTP to prevent it from impacting the system clock.
Procedure
No configuration changes are needed. Use the provided settings:
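A minimal sketch of the MachineConfig object that disables the chronyd service, following the reference disable-chronyd manifest:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - enabled: false
          name: chronyd.service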
17.1.10. Configuring single root I/O virtualization (SR-IOV)
SR-IOV is commonly used to enable the fronthaul and the midhaul networks.
Procedure
Use the following configuration to configure SR-IOV on a single-node distributed unit (DU). Note that the first custom resource (CR) is required; the CRs that follow are examples.
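A sketch of the CRs, assuming placeholder VLAN IDs, interface names, and VF counts; the SriovOperatorConfig is the required first CR, and the SriovNetwork and SriovNetworkNodePolicy pairs for the midhaul (du_mh) and fronthaul (du_fh) networks are examples:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/master: ""
  disableDrain: true
  enableInjector: true
  enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-mh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_mh
  vlan: 150 # 1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-mh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci # 2
  isRdma: false
  nicSelector:
    pfNames:
      - ens7f0 # 3
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 # 4
  priority: 10
  resourceName: du_mh
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-fh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_fh
  vlan: 140 # 5
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-fh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice # 6
  isRdma: true
  nicSelector:
    pfNames:
      - ens5f0 # 7
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8 # 8
  priority: 10
  resourceName: du_fh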
1. Specifies the VLAN for the midhaul network.
2. Select either vfio-pci or netdevice, as needed.
3. Specifies the interface connected to the midhaul network.
4. Specifies the number of VFs for the midhaul network.
5. Specifies the VLAN for the fronthaul network.
6. Select either vfio-pci or netdevice, as needed.
7. Specifies the interface connected to the fronthaul network.
8. Specifies the number of VFs for the fronthaul network.
17.1.11. Disabling the console Operator
The console-operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and removing it frees resources for application workloads.
Procedure
You can disable the Operator using the following configuration file. No configuration changes are needed. Use the provided settings:
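A sketch of the Console CR; the key setting is managementState: Removed:

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal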
17.2. Applying the distributed unit (DU) configuration to a single-node OpenShift cluster
Perform the following tasks to configure a single-node cluster for a DU:
- Apply the required extra installation manifests at installation time.
- Apply the post-install configuration custom resources (CRs).
17.2.1. Applying the extra installation manifests
To apply the distributed unit (DU) configuration to the single-node cluster, the following extra installation manifests need to be included during installation:
- Enable workload partitioning.
- Other MachineConfig objects: There is a set of MachineConfig custom resources (CRs) included by default. You can choose to include additional MachineConfig CRs that are unique to your environment. It is recommended, but not required, to apply these CRs during installation to minimize the number of reboots that can occur during post-install configuration.
17.2.2. Applying the post-install configuration custom resources (CRs)
After OpenShift Container Platform is installed on the cluster, use the following command to apply the CRs you configured for the distributed units (DUs):

$ oc apply -f <file_name>.yaml