This documentation is for a release that is no longer maintained. See the documentation for the latest supported version.
Chapter 17. Deploying distributed units manually on single-node OpenShift
The procedures in this topic tell you how to manually deploy clusters on a small number of single nodes as a distributed unit (DU) during installation.
The procedures do not describe how to install single-node OpenShift, which can be accomplished through many mechanisms. Rather, they capture the elements that you should configure as part of the installation process:
- Networking is needed to enable connectivity to the single-node OpenShift DU when the installation is complete.
- Workload partitioning, which can only be configured during installation.
- Additional items that help minimize potential reboots after installation.
17.1. Configuring the distributed units (DUs)
This section describes a set of configurations for an OpenShift Container Platform cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation and other configurations can be applied post-install.
After you install the single-node OpenShift DU, apply the configurations in this section to enable the platform to carry a DU workload.
17.1.1. Enabling workload partitioning
A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads. You must configure workload partitioning at cluster installation time.
You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile, and the related cpuset value in the MachineConfig custom resource (CR).
Procedure
The base64-encoded content below contains the CPU set that the management workloads are constrained to. This content must be adjusted to match the CPU set specified in the performance profile and must be accurate for the number of cores on the cluster.

The contents of /etc/crio/crio.conf.d/01-workload-partitioning define which CPUs the management workloads run on. The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads for each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile.

This content must be base64 encoded and provided in the 01-workload-partitioning-content section of the MachineConfig manifest.
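As a sketch of what the decoded /etc/crio/crio.conf.d/01-workload-partitioning file can contain (the cpuset values depend on your hardware, and the annotation names below follow the workload partitioning convention for this release, so verify them against your version):

```ini
# CRI-O drop-in that constrains management workloads to the reserved CPU set.
# The cpuset value here must match spec.cpu.reserved in the performance profile.
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }
```

This file, base64 encoded, is what goes into the 01-workload-partitioning-content field of the MachineConfig manifest.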
The contents of /etc/kubernetes/openshift-workload-pinning should look like this:

```json
{
  "management": {
    "cpuset": "0-1,52-53"
  }
}
```

The cpuset value must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning.
17.1.2. Configuring the container mount namespace
To reduce the overall management footprint of the platform, a machine configuration is provided to contain the mount points. No configuration changes are needed. Use the provided settings:
17.1.3. Enabling Stream Control Transmission Protocol (SCTP)
SCTP is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol.
Procedure
No configuration changes are needed. Use the provided settings:
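A MachineConfig of the following shape loads the sctp kernel module at boot. The Ignition version and exact file layout may differ between releases, so treat this as an illustrative sketch:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        # Override any default blacklist of the sctp module.
        - contents:
            source: data:,
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        # Load the sctp module at boot.
        - contents:
            source: data:text/plain;charset=utf-8,sctp
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf
```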
17.1.4. Creating OperatorGroups for Operators
This configuration is provided to enable addition of the Operators needed to configure the platform post-installation. It adds the Namespace
and OperatorGroup
objects for the Local Storage Operator, Logging Operator, Performance Addon Operator, PTP Operator, and SRIOV Network Operator.
Procedure
No configuration changes are needed. Use the provided settings:
Local Storage Operator
Logging Operator

Performance Addon Operator

PTP Operator

SRIOV Network Operator
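Each Operator follows the same pattern: a Namespace object plus an OperatorGroup object targeting that namespace. For example, for the Local Storage Operator (the namespace name below is the conventional one; verify it for your release):

```yaml
# Namespace and OperatorGroup for the Local Storage Operator.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
spec:
  # Scope the OperatorGroup to its own namespace.
  targetNamespaces:
    - openshift-local-storage
```

The other Operators use the same structure with their own namespaces.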
17.1.5. Subscribing to the Operators
The subscription provides the location to download the Operators needed for platform configuration.
Procedure
Use the following example to configure the subscription:
1. Specify the channel to get the cluster-logging Operator.
2. Specify Manual or Automatic. In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved.
3. Specify the channel to get the local-storage-operator Operator.
4. Specify the channel to get the performance-addon-operator Operator.
5. Specify the channel to get the ptp-operator Operator.
6. Specify the channel to get the sriov-network-operator Operator.
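A representative Subscription for one of these Operators, the cluster-logging Operator, might look like the following; the channel name and catalog source are assumptions to verify against your cluster. The other Operators use the same structure with their own names, namespaces, and channels:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"                # Callout 1: channel for the cluster-logging Operator
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual      # Callout 2: Manual or Automatic update approval
```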
17.1.6. Configuring logging locally and forwarding
To debug a single-node distributed unit (DU), logs need to be stored for further analysis.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project.
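A minimal sketch of the ClusterLogging CR, assuming the fluentd collector; a full DU configuration may also set curation options and add a ClusterLogForwarder CR to forward logs off the node:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  collection:
    logs:
      # Collect node and container logs with fluentd.
      type: fluentd
      fluentd: {}
```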
17.1.7. Configuring the Performance Addon Operator
This is a key configuration for the single-node distributed unit (DU). Many of the real-time capabilities and service assurance features are configured here.
Procedure
Configure the performance addons using the following example:
Recommended performance profile configuration
1. Ensure that the value for name matches that specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml.
2. Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.
3. Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.
4. Set the number of huge pages.
5. Set the huge page size.
6. Set node to the NUMA node where the hugepages are allocated.
7. Set userLevelNetworking to true to isolate the CPUs from networking interrupts.
8. Set enabled to true to install the real-time Linux kernel.
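The callouts above map onto a PerformanceProfile CR of roughly the following shape. All CPU ranges, huge page counts, and the profile name are illustrative values for a hypothetical 104-thread node and must be adapted to your hardware:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile   # 1: must match TunedPerformancePatch.yaml
spec:
  cpu:
    isolated: "2-51,54-103"                  # 2: isolated CPUs, full Hyper-Threading pairs
    reserved: "0-1,52-53"                    # 3: reserved CPUs for platform services
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 32                            # 4: number of huge pages
        size: 1G                             # 5: huge page size
        node: 0                              # 6: NUMA node for the hugepages
  net:
    userLevelNetworking: true                # 7: isolate CPUs from networking interrupts
  realTimeKernel:
    enabled: true                            # 8: install the real-time Linux kernel
  nodeSelector:
    node-role.kubernetes.io/master: ""
```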
17.1.8. Configuring Precision Time Protocol (PTP)
In the far edge, the RAN uses PTP to synchronize the systems.
Procedure
Configure PTP using the following example:
1. Sets the interface used for PTP.
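A sketch of a PtpConfig CR configured as an ordinary (slave) clock; the interface name, ptp4l/phc2sys options, and priority are illustrative assumptions:

```yaml
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: du-ptp-slave
  namespace: openshift-ptp
spec:
  profile:
    - name: slave
      interface: ens5f0        # 1: interface used for PTP
      ptp4lOpts: "-2 -s"       # layer-2 transport, slave-only
      phc2sysOpts: "-a -r -n 24"
  recommend:
    - profile: slave
      priority: 4
      match:
        - nodeLabel: node-role.kubernetes.io/master
```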
17.1.9. Disabling Network Time Protocol (NTP)
After the system is configured for Precision Time Protocol (PTP), you need to remove NTP to prevent it from impacting the system clock.
Procedure
No configuration changes are needed. Use the provided settings:
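Disabling NTP typically amounts to a MachineConfig that disables the chronyd systemd unit; a sketch, with the Ignition version as an assumption:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        # Stop chronyd from adjusting the system clock so PTP owns timing.
        - name: chronyd.service
          enabled: false
```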
17.1.10. Configuring single root I/O virtualization (SR-IOV)
SR-IOV is commonly used to enable the fronthaul and the midhaul networks.
Procedure
Use the following configuration to configure SR-IOV on a single-node distributed unit (DU). Note that the first custom resource (CR) is required. The following CRs are examples.
1. Specifies the VLAN for the midhaul network.
2. Select either vfio-pci or netdevice, as needed.
3. Specifies the interface connected to the midhaul network.
4. Specifies the number of VFs for the midhaul network.
5. Specifies the VLAN for the fronthaul network.
6. Select either vfio-pci or netdevice, as needed.
7. Specifies the interface connected to the fronthaul network.
8. Specifies the number of VFs for the fronthaul network.
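A sketch of the required SriovOperatorConfig together with an example SriovNetwork and SriovNetworkNodePolicy for the midhaul network; resource names, the VLAN ID, interface name, and VF count are illustrative, and the fronthaul network would repeat the same pair of CRs with its own values:

```yaml
# Required: enables the SR-IOV Operator on the single (master) node.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/master: ""
  enableInjector: true
  enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-mh
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: openshift-sriov-network-operator
  resourceName: du_mh
  vlan: 150                  # 1: VLAN for the midhaul network
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-mh
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci       # 2: vfio-pci or netdevice, as needed
  isRdma: false
  nicSelector:
    pfNames:
      - ens7f0               # 3: interface connected to the midhaul network
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8                  # 4: number of VFs for the midhaul network
  priority: 10
  resourceName: du_mh
```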
17.1.11. Disabling the console Operator
The console-operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and removing it frees resources for application workloads.
Procedure
No configuration changes are needed. Use the provided settings:
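A sketch of the Console operator CR with the management state set to Removed, which disables the web console:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  # Removed uninstalls the web console and frees its resources.
  managementState: Removed
```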
17.2. Applying the distributed unit (DU) configuration to a single-node OpenShift cluster
Perform the following tasks to configure a single-node cluster for a DU:
- Apply the required extra installation manifests at installation time.
- Apply the post-install configuration custom resources (CRs).
17.2.1. Applying the extra installation manifests
To apply the distributed unit (DU) configuration to the single-node cluster, the following extra installation manifests need to be included during installation:
- Enable workload partitioning.
- Other MachineConfig objects: A set of MachineConfig custom resources (CRs) is included by default. You can choose to include additional MachineConfig CRs that are unique to your environment. It is recommended, but not required, to apply these CRs during installation to minimize the number of reboots that can occur during post-install configuration.
17.2.2. Applying the post-install configuration custom resources (CRs)
After OpenShift Container Platform is installed on the cluster, use the following command to apply the CRs that you configured for the distributed units (DUs):

$ oc apply -f <file_name>.yaml