Chapter 8. Configuring low latency
8.1. Configuring low latency
You can configure and tune low latency capabilities to improve application performance on edge devices.
8.1.1. Lowering latency in MicroShift applications
Latency is defined as the time from an event to the response to that event. You can use low latency configurations and tuning in a MicroShift cluster running in an operational or software-defined control system where an edge device has to respond quickly to an external event. You can fully optimize low latency performance by combining MicroShift configurations with operating system tuning and workload partitioning.
The CPU set reserved for management applications, such as the MicroShift service, OVS, CRI-O, and MicroShift pods, combined with the isolated cores, must contain all online CPUs.
8.1.1.1. Workflow for configuring low latency for MicroShift applications
To configure low latency for applications running in a MicroShift cluster, you must complete the following tasks:
- Required:
  - Install the microshift-low-latency RPM.
  - Configure workload partitioning.
  - Configure the kubelet section of the config.yaml file in the /etc/microshift/ directory.
  - Configure and activate a TuneD profile. TuneD is a Red Hat Enterprise Linux (RHEL) service that monitors the host system and optimizes performance under certain workloads.
  - Restart the host.
- Optional:
  - If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time 9.
Additional resources
- About low latency (OpenShift Container Platform documentation)
8.1.2. Installing the MicroShift low latency RPM package
When you install MicroShift, the low latency RPM package is not installed by default. You can install the low latency RPM as an optional package.
Prerequisites
- You installed the MicroShift RPM.
- You configured workload partitioning for MicroShift.
Procedure
Install the low latency RPM package by running the following command:
$ sudo dnf install -y microshift-low-latency
Tip: Wait to restart the host until after you activate your TuneD profile. Restarting the host restarts MicroShift and CRI-O, which applies the low latency manifests and activates the TuneD profile.
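For example, you can confirm that the package is installed by querying the RPM database:
$ rpm -q microshift-low-latency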
Next steps
- Configure the kubelet parameters for low latency in the MicroShift config.yaml file.
- Tune your operating system, for example, configure and activate a TuneD profile.
- Optional: Configure automatic activation of your TuneD profile.
- Optional: If you are using the x86_64 architecture, install Red Hat Enterprise Linux for Real Time (real-time kernel).
- Prepare your workloads for low latency.
8.1.3. Configuring kubelet parameters and values in MicroShift
The first step in enabling low latency for a MicroShift cluster is to add configurations to the MicroShift config.yaml file.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have root access to the cluster.
- You made a copy of the provided config.yaml.default file in the /etc/microshift/ directory and renamed it config.yaml.
Procedure
Add the kubelet configuration to the MicroShift config.yaml file:
Example passthrough kubelet configuration
apiServer:
# ...
kubelet: 1
  cpuManagerPolicy: static 2
  cpuManagerPolicyOptions:
    full-pcpus-only: "true" 3
  cpuManagerReconcilePeriod: 5s
  memoryManagerPolicy: Static 4
  topologyManagerPolicy: single-numa-node
  reservedSystemCPUs: 0-1 5
  reservedMemory:
  - limits:
      memory: 1100Mi 6
    numaNode: 0
  kubeReserved:
    memory: 500Mi
  systemReserved:
    memory: 500Mi
  evictionHard: 7
    imagefs.available: "15%" 8
    memory.available: "100Mi" 9
    nodefs.available: "10%" 10
    nodefs.inodesFree: "5%" 11
  evictionPressureTransitionPeriod: 0s
# ...
- 1: If you change the CPU or memory managers in the kubelet configuration, you must remove the files that cache the previous configuration. Restart the host to remove them automatically, or manually remove the /var/lib/kubelet/cpu_manager_state and /var/lib/kubelet/memory_manager_state files.
- 2: The name of the policy to use. Valid values are none and static. Requires the CPUManager feature gate to be enabled. The default value is none.
- 3: A set of key=value pairs for setting extra options that fine-tune the behavior of the CPUManager policies. The default value is null. Requires both the CPUManager and CPUManagerPolicyOptions feature gates to be enabled.
- 4: The name of the policy used by the Memory Manager. Case-sensitive. The default value is none. Requires the MemoryManager feature gate to be enabled.
- 5: Required. The reservedSystemCPUs value must be the inverse of the offlined CPUs because both values combined must account for all of the CPUs on the system. This parameter is essential to dividing the management and application workloads. Use this parameter to define a static CPU set for the host-level system and Kubernetes daemons, plus interrupts and timers. The rest of the CPUs on the system can then be used exclusively for workloads.
- 6: The value in reservedMemory[0].limits.memory, 1100Mi in this example, is equal to kubeReserved.memory + systemReserved.memory + evictionHard.memory.available.
- 7: The evictionHard parameters define the conditions under which the kubelet evicts pods. When you change the default value of only one parameter in the evictionHard stanza, the default values of the other parameters are not inherited and are set to zero. Provide all the threshold values even when you want to change just one.
- 8: The imagefs is a filesystem that container runtimes use to store container images and container writable layers. In this example, the evictionHard.imagefs.available parameter means that pods are evicted when the available space of the image filesystem is less than 15%.
- 9: In this example, the evictionHard.memory.available parameter means that pods are evicted when the available memory of the node drops below 100MiB.
- 10: In this example, the evictionHard.nodefs.available parameter means that pods are evicted when the main filesystem of the node has less than 10% available space.
- 11: In this example, the evictionHard.nodefs.inodesFree parameter means that pods are evicted when less than 5% of the node's main filesystem inodes are free.
Verification
- After you complete the next steps and restart the host, you can use a root-access account to check that your settings are in the config.yaml file in the /var/lib/microshift/resources/kubelet/config/ directory.
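For example, you can view the rendered kubelet configuration as root by running the following command:
$ sudo cat /var/lib/microshift/resources/kubelet/config/config.yaml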
Next steps
- Enable workload partitioning.
- Tune your operating system. For example, configure and activate a TuneD profile.
- Optional: Configure automatic enablement of your TuneD profile.
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
- Prepare your MicroShift workloads for low latency.
Additional resources
- Using a YAML configuration file
- KubeletConfiguration reference (Kubernetes upstream documentation)
8.1.4. Tuning Red Hat Enterprise Linux 9
As a Red Hat Enterprise Linux (RHEL) system administrator, you can use the TuneD service to optimize the performance profile of RHEL for a variety of use cases. TuneD monitors and optimizes system performance under certain workloads, including latency performance.
- Use TuneD profiles to tune your system for different use cases, such as deploying a low-latency MicroShift cluster.
- You can modify the rules defined for each profile and customize tuning for a specific device.
- When you switch to another profile or deactivate TuneD, all changes made to the system settings by the previous profile revert to their original state.
- You can also configure TuneD to react to changes in device usage and adjust settings to improve the performance of active devices and reduce the power consumption of inactive devices.
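For example, you can list the available TuneD profiles and check which profile is currently active by running the following commands:
$ sudo tuned-adm list
$ sudo tuned-adm active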
8.1.4.1. Configuring the MicroShift TuneD profile
Configure a TuneD profile for your host to use low latency with MicroShift workloads by using the microshift-baseline-variables.conf configuration file, which is provided in the /etc/tuned/ directory of the Red Hat Enterprise Linux (RHEL) host after you install the microshift-low-latency RPM package.
Prerequisites
- You have root access to the cluster.
- You installed the microshift-low-latency RPM package.
- Your RHEL host has TuneD installed. See Getting started with TuneD (RHEL documentation).
Procedure
You can use the default microshift-baseline-variables.conf TuneD profile in the /etc/tuned/ directory, or create your own to add more tunings.
Example microshift-baseline-variables.conf TuneD profile
# Isolate cores 2-7 for running application workloads
isolated_cores=2-7 1

# Size of the hugepages
hugepages_size=2M 2

# Number of hugepages
hugepages=0

# Additional kernel arguments
additional_args= 3

# CPU set to be offlined
offline_cpu_set= 4
- 1: Controls which cores should be isolated. By default, 1 core per socket is reserved in MicroShift for housekeeping. The other cores are isolated. Valid values are a core list or range. You can isolate any range, for example: isolated_cores=2,4-7 or isolated_cores=2-23.
  Important: You must keep only one isolated_cores= variable.
  Note: The Kubernetes CPU manager can use any CPU to run the workload except the reserved CPUs defined in the kubelet configuration. For this reason it is best that:
  - The sum of the kubelet's reserved CPUs and the isolated cores includes all online CPUs.
  - The isolated cores are complementary to the reserved CPUs defined in the kubelet configuration.
- 2: Size of the hugepages. Valid values are 2M or 1G.
- 3: Additional kernel arguments, for example, additional_args=console=tty0 console=ttyS0,115200.
- 4: The CPU set to be offlined.
  Important: Must not overlap with isolated_cores.
Enable the profile or make your changes active by running the following command:
$ sudo tuned-adm profile microshift-baseline
- Reboot the host to make kernel arguments active.
Verification
Optional: You can read the /proc/cmdline file, which contains the arguments given to the currently running kernel at start time:
$ cat /proc/cmdline
Example output
BOOT_IMAGE=(hd0,msdos2)/ostree/rhel-7f82ccd9595c3c70af16525470e32c6a81c9138c4eae6c79ab86d5a2d108d7fc/vmlinuz-5.14.0-427.31.1.el9_4.x86_64+rt crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M rd.lvm.lv=rhel/root fips=0 console=ttyS0,115200n8 root=/dev/mapper/rhel-root rw ostree=/ostree/boot.1/rhel/7f82ccd9595c3c70af16525470e32c6a81c9138c4eae6c79ab86d5a2d108d7fc/0 skew_tick=1 tsc=reliable rcupdate.rcu_normal_after_boot=1 nohz=on nohz_full=2,4-5 rcu_nocbs=2,4-5 tuned.non_isolcpus=0000000b intel_pstate=disable nosoftlockup hugepagesz=2M hugepages=10
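Optional: You can also spot-check the applied settings through standard kernel interfaces, for example by listing the offlined CPUs and the allocated hugepages:
$ cat /sys/devices/system/cpu/offline
$ grep -i hugepages /proc/meminfo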
Next steps
- Prepare your MicroShift workloads for low latency.
- Optional: Configure automatic enablement of your TuneD profile.
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
Additional resources
- Getting started with TuneD (RHEL documentation)
- How to manage tuning profiles in Linux (Red Hat blog)
8.1.4.2. Automatically enabling the MicroShift TuneD profile
Included in the microshift-low-latency RPM package is a systemd service that you can configure to automatically enable a TuneD profile when the system starts. This ability is particularly useful if you are installing MicroShift in a large fleet of devices.
Prerequisites
- You installed the microshift-low-latency RPM package on the host.
- You enabled low latency in the MicroShift config.yaml file.
- You created a TuneD profile.
- You configured the microshift-baseline-variables.conf file.
Procedure
Configure the tuned.yaml file in the /etc/microshift/ directory, for example:
Example tuned.yaml
profile: microshift-baseline 1
reboot_after_apply: True 2
- 1: Controls which TuneD profile is activated. In this example, the name of the profile is microshift-baseline.
- 2: Controls whether the host must be rebooted after applying the profile. Valid values are True and False. For example, use the True setting to automatically restart the host after a new ostree commit is deployed.
Important: The host is restarted when the microshift-tuned.service runs, but the system is not restarted when a new commit is deployed. You must restart the host to enable a new commit; the system then restarts again when the microshift-tuned.service runs on that boot and detects changes to profiles and variables.
This double boot can affect rollbacks. Ensure that you adjust the number of reboots allowed in greenboot before a rollback when using automatic profile activation. For example, if 3 reboots are allowed before a rollback in greenboot, increase that number to 4. See the "Additional resources" list for more information.
Enable the microshift-tuned.service to run on each system start by entering the following command:
$ sudo systemctl enable microshift-tuned.service
Important: If you set reboot_after_apply to True, ensure that a TuneD profile is active and that no other profiles have been activated outside of the MicroShift service. Otherwise, starting the microshift-tuned.service results in a host reboot.
Start the microshift-tuned.service by running the following command:
$ sudo systemctl start microshift-tuned.service
Note: The microshift-tuned.service uses collected checksums to detect changes to selected TuneD profiles and variables. If there are no checksums on the disk, the service activates the TuneD profile and restarts the host. Expect a host restart when first starting the microshift-tuned.service.
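For example, you can check the state of the service and review its activity by running the following commands:
$ sudo systemctl status microshift-tuned.service
$ sudo journalctl -u microshift-tuned.service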
Next steps
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
Additional resources
8.1.5. Using Red Hat Enterprise Linux for Real Time
If your workload has stringent low-latency determinism requirements for core kernel features such as interrupt handling and process scheduling in the microsecond (μs) range, you can use the Red Hat Enterprise Linux for Real Time (real-time kernel). The goal of the real-time kernel is consistent, low-latency determinism that offers predictable response times.
Consider the following factors when you tune your system:
- System tuning is just as important when using the real-time kernel as it is for the standard kernel.
- Installing the real-time kernel on an untuned system running the standard kernel supplied as part of the RHEL 9 release is not likely to result in any noticeable benefit.
- Tuning the standard kernel yields 90% of possible latency gains.
- The real-time kernel provides the last 10% of latency reduction required by the most demanding workloads.
8.1.5.1. Installing the Red Hat Enterprise Linux for Real Time (real-time kernel)
Although the real-time kernel is not necessary for low latency workloads, using the real-time kernel can optimize low latency performance. You can install it on a host using RPM packages, and include it in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image deployment.
Prerequisites
- You have a Red Hat subscription that includes Red Hat Enterprise Linux for Real Time (real-time kernel). For example, your host machine is registered and Red Hat Enterprise Linux (RHEL) is attached to a RHEL for Real Time subscription.
- You are using x86_64 architecture.
Procedure
Enable the real-time kernel repository by running the following command:
$ sudo subscription-manager repos --enable rhel-9-for-x86_64-rt-rpms
Install the real-time kernel by running the following command:
$ sudo dnf install -y kernel-rt
Query the real-time kernel version by running the following command:
$ RTVER=$(rpm -q --queryformat '%{version}-%{release}.%{arch}' kernel-rt | sort | tail -1)
Make a persistent change in GRUB that designates the real-time kernel as the default kernel by running the following command:
$ sudo grubby --set-default="/boot/vmlinuz-${RTVER}+rt"
- Restart the host to activate the real-time kernel.
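After the host restarts, you can verify that the real-time kernel is running, for example by checking the kernel release string, which typically ends in +rt for the kernel-rt package:
$ uname -r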
Next steps
- Prepare your MicroShift workloads for low latency.
- Optional: Use a blueprint to install the real-time kernel in a RHEL for Edge image.
8.1.5.2. Installing the Red Hat Enterprise Linux for Real Time (real-time kernel) in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image
You can include the real-time kernel in a RHEL for Edge image deployment using image builder. The following example blueprint sections include references gathered from the previous steps required to configure low latency for a MicroShift cluster.
Prerequisites
- You have a Red Hat subscription enabled on the host that includes Red Hat Enterprise Linux for Real Time (real-time kernel).
- You are using the x86_64 architecture.
- You configured osbuild to use the kernel-rt repository.
A subscription that includes the real-time kernel must be enabled on the host used to build the commit.
Procedure
Add the following example blueprint sections to your complete installation blueprint for installing the real-time kernel in a RHEL for Edge image:
Example blueprint snippet for the real-time kernel
[[packages]]
name = "microshift-low-latency"
version = "*"

# Kernel RT is supported only on the x86_64 architecture
[customizations.kernel]
name = "kernel-rt"

[customizations.services]
enabled = ["microshift", "microshift-tuned"]

[[customizations.files]]
path = "/etc/microshift/config.yaml"
data = """
kubelet:
  cpuManagerPolicy: static
  cpuManagerPolicyOptions:
    full-pcpus-only: "true"
  cpuManagerReconcilePeriod: 5s
  memoryManagerPolicy: Static
  topologyManagerPolicy: single-numa-node
  reservedSystemCPUs: 0-1
  reservedMemory:
  - limits:
      memory: 1100Mi
    numaNode: 0
  kubeReserved:
    memory: 500Mi
  systemReserved:
    memory: 500Mi
  evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
  evictionPressureTransitionPeriod: 0s
"""

[[customizations.files]]
path = "/etc/tuned/microshift-baseline-variables.conf"
data = """
# Isolated cores should be complementary to the kubelet configuration reserved CPUs.
# Isolated and reserved CPUs must contain all online CPUs.
# Core #3 is for testing offlining, therefore it is skipped.
isolated_cores=2,4-5
hugepages_size=2M
hugepages=10
additional_args=test1=on test2=true dummy
offline_cpu_set=3
"""

[[customizations.files]]
path = "/etc/microshift/tuned.yaml"
data = """
profile: microshift-baseline
reboot_after_apply: True
"""
Next steps
- Complete the image building process.
- If you have not completed the previous steps for enabling low latency for your MicroShift cluster, do so now. Update the blueprint with the information gathered in those steps.
- If you have not configured workload partitioning, do so now.
- Prepare your MicroShift workloads for low latency.
8.1.6. Building the Red Hat Enterprise Linux for Edge (RHEL for Edge) image with the real-time kernel
Complete the build process by starting with the following procedure to embed MicroShift in a RHEL for Edge image. Then complete the remaining steps in the installation documentation for installing MicroShift in a RHEL for Edge image.
Additional resources
- Red Hat Enterprise Linux for Real Time 9 (RHEL documentation)
- Using repositories that require subscription (osbuild documentation)
- Building RHEL images by using the real-time kernel
- Post installation instructions (RHEL for Real Time documentation)
- Embedding in a RHEL for Edge image
- FAQ about RHEL for Real Time (kernel-rt)
8.1.7. Preparing a MicroShift workload for low latency
To take advantage of low latency, workloads running on MicroShift must have the microshift-low-latency container runtime configuration set by using the RuntimeClass feature. The CRI-O RuntimeClass object is installed with the microshift-low-latency RPM, so only the pod annotations need to be configured.
Prerequisites
- You installed the microshift-low-latency RPM package.
- You configured workload partitioning.
Procedure
Use the following example to set these annotations in the pod spec:
cpu-load-balancing.crio.io: "disable"
irq-load-balancing.crio.io: "disable"
cpu-quota.crio.io: "disable"
cpu-c-states.crio.io: "disable"
cpu-freq-governor.crio.io: "<governor>"
Example pod that runs the oslat test:
apiVersion: v1
kind: Pod
metadata:
  name: oslat
  annotations:
    cpu-load-balancing.crio.io: "disable" 1
    irq-load-balancing.crio.io: "disable" 2
    cpu-quota.crio.io: "disable" 3
    cpu-c-states.crio.io: "disable" 4
    cpu-freq-governor.crio.io: "<governor>" 5
spec:
  runtimeClassName: microshift-low-latency 6
  containers:
  - name: oslat
    image: quay.io/container-perf-tools/oslat
    imagePullPolicy: Always
    resources:
      requests:
        memory: "400Mi"
        cpu: "2"
      limits:
        memory: "400Mi"
        cpu: "2"
    env:
    - name: tool
      value: "oslat"
    - name: manual
      value: "n"
    - name: PRIO
      value: "1"
    - name: delay
      value: "0"
    - name: RUNTIME_SECONDS
      value: "60"
    - name: TRACE_THRESHOLD
      value: ""
    - name: EXTRA_ARGS
      value: ""
    securityContext:
      privileged: true
      capabilities:
        add:
        - SYS_NICE
        - IPC_LOCK
- 1: Disables the CPU load balancing for the pod.
- 2: Opts the pod out of interrupt handling (IRQ).
- 3: Disables the CPU completely fair scheduler (CFS) quota at the pod run time.
- 4: Enables or disables C-states for each CPU. Set the value to disable to provide the best performance for a high-priority pod.
- 5: Sets the cpufreq governor for each CPU. The performance governor is recommended for high-priority workloads.
- 6: The runtimeClassName must match the name of the performance profile configured in the cluster. For example, microshift-low-latency.
Note: Disable CPU load balancing only when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster.
Important: For the pod to have the Guaranteed QoS class, it must have the same values of CPU and memory in requests and limits. See Guaranteed (Kubernetes upstream documentation).
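For example, you can confirm that the pod received the Guaranteed QoS class (assuming the oslat pod from the previous example) by running the following command:
$ oc get pod oslat -o jsonpath='{.status.qosClass}'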
Additional resources
- Disabling power saving mode for high priority pods (Red Hat OpenShift Container Platform documentation)
- Disabling CPU CFS quota (Red Hat OpenShift Container Platform documentation)
- Disabling interrupt processing for CPUs where pinned containers are running (Red Hat OpenShift Container Platform documentation)
8.1.8. Reference blueprint for installing Red Hat Enterprise Linux for Real Time (real-time kernel) in a RHEL for Edge image
An image blueprint is a persistent definition of the required image customizations that enable you to create multiple builds. Instead of reconfiguring the blueprint for each image build, you can edit, rebuild, delete, and save the blueprint so that you can keep rebuilding images from it.
Example blueprint used to install the real-time kernel in a RHEL for Edge image
name = "microshift-low-latency"
description = "RHEL 9.4 and MicroShift configured for low latency"
version = "0.0.1"
modules = []
groups = []
distro = "rhel-94"

[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "microshift-greenboot"
version = "*"

[[packages]]
name = "microshift-networking"
version = "*"

[[packages]]
name = "microshift-selinux"
version = "*"

[[packages]]
name = "microshift-low-latency"
version = "*"

# Kernel RT is only available for x86_64
[customizations.kernel]
name = "kernel-rt"

[customizations.services]
enabled = ["microshift", "microshift-tuned"]

[customizations.firewall]
ports = ["22:tcp", "80:tcp", "443:tcp", "5353:udp", "6443:tcp", "30000-32767:tcp", "30000-32767:udp"]

[customizations.firewall.services]
enabled = ["mdns", "ssh", "http", "https"]

[[customizations.firewall.zones]]
name = "trusted"
sources = ["10.42.0.0/16", "169.254.169.1"]

[[customizations.files]]
path = "/etc/microshift/config.yaml"
data = """
kubelet:
  cpuManagerPolicy: static
  cpuManagerPolicyOptions:
    full-pcpus-only: "true"
  cpuManagerReconcilePeriod: 5s
  memoryManagerPolicy: Static
  topologyManagerPolicy: single-numa-node
  reservedSystemCPUs: 0-1
  reservedMemory:
  - limits:
      memory: 1100Mi
    numaNode: 0
  kubeReserved:
    memory: 500Mi
  systemReserved:
    memory: 500Mi
  evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
  evictionPressureTransitionPeriod: 0s
"""

[[customizations.files]]
path = "/etc/tuned/microshift-baseline-variables.conf"
data = """
# Isolated cores should be complementary to the kubelet configuration reserved CPUs.
# Isolated and reserved CPUs must contain all online CPUs.
# Core #3 is for testing offlining, therefore it is skipped.
isolated_cores=2,4-5
hugepages_size=2M
hugepages=10
additional_args=test1=on test2=true dummy
offline_cpu_set=3
"""

[[customizations.files]]
path = "/etc/microshift/tuned.yaml"
data = """
profile: microshift-baseline
reboot_after_apply: True
"""
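As a sketch of how this blueprint might be used with image builder (the blueprint.toml file name is an assumption), you can push it and start an ostree commit build by running the following commands:
$ sudo composer-cli blueprints push blueprint.toml
$ sudo composer-cli compose start-ostree microshift-low-latency edge-commit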
8.2. Workload partitioning
Workload partitioning divides the node CPU resources into distinct CPU sets. The primary objective is to limit the amount of CPU usage by all control plane components, which reserves the rest of the device CPU resources for user workloads.
Workload partitioning allocates a reserved set of CPUs to MicroShift services, cluster management workloads, and infrastructure pods, ensuring that the remaining CPUs in the cluster deployment are untouched and available exclusively for non-platform workloads.
8.2.1. Enabling workload partitioning
To enable workload partitioning on MicroShift, make the following configuration changes:
- Update the MicroShift config.yaml file to include the kubelet configuration file.
- Create the CRI-O systemd and configuration files.
- Create and update the systemd configuration files for the MicroShift and OVS services.
Procedure
Update the MicroShift config.yaml file to include the kubelet configuration file to enable and configure CPU Manager for the workloads:
Create the kubelet configuration file in the path /etc/kubernetes/openshift-workload-pinning. The kubelet configuration directs the kubelet to modify the node resources based on the capacity and allocatable CPUs.
kubelet configuration example
# ...
{
  "management": {
    "cpuset": "0,6,7" 1
  }
}
# ...
- 1: The cpuset applies to a machine with 8 vCPUs (4 cores) and is valid throughout the document.
Update the MicroShift config.yaml file in the path /etc/microshift/config.yaml. Embed the kubelet configuration in the MicroShift config.yaml file to enable and configure CPU Manager for the workloads.
MicroShift config.yaml example
# ...
kubelet:
  reservedSystemCPUs: 0,6,7 1
  cpuManagerPolicy: static
  cpuManagerPolicyOptions:
    full-pcpus-only: "true" 2
  cpuManagerReconcilePeriod: 5s
# ...
Create the CRI-O systemd and configuration files:
Create the CRI-O configuration file in the path /etc/crio/crio.conf.d/20-microshift-workload-partition.conf, which overrides the default configuration that already exists in the 11-microshift-ovn.conf file.
CRI-O configuration example
# ...
[crio.runtime]
infra_ctr_cpuset = "0,6,7"

[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0,6,7" }
# ...
Create the systemd file for CRI-O in the path /etc/systemd/system/crio.service.d/microshift-cpuaffinity.conf.
CRI-O systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Create and update the systemd configuration files with the CPUAffinity value for the MicroShift and OVS services:
Create the MicroShift services systemd file in the path /etc/systemd/system/microshift.service.d/microshift-cpuaffinity.conf. MicroShift is pinned by using the systemd CPUAffinity value.
MicroShift services systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Update the CPUAffinity value in the MicroShift ovs-vswitchd systemd file in the path /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf.
MicroShift ovs-vswitchd systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Update the CPUAffinity value in the MicroShift ovsdb-server systemd file in the path /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf.
MicroShift ovsdb-server systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
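After you create and update these files and restart the host, you can verify that the CPU affinity took effect, for example by querying the standard systemd CPUAffinity property for the pinned services:
$ sudo systemctl show --property=CPUAffinity microshift.service crio.service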