Chapter 3. Node Feature Discovery Operator
Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration.
The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on.
The NFD Operator can be found in the OperatorHub by searching for “Node Feature Discovery”.
3.1. Installing the Node Feature Discovery Operator
The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console.
3.1.1. Installing the NFD Operator using the CLI
As a cluster administrator, you can install the NFD Operator using the CLI.
Prerequisites
- An OpenShift Container Platform cluster
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a namespace for the NFD Operator.
Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true". Create the namespace by running the following command:
$ oc create -f nfd-namespace.yaml
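The nfd-namespace.yaml file referenced above might look like the following sketch; the openshift.io/cluster-monitoring label key is an assumption based on how cluster monitoring is commonly enabled for a namespace:

```yaml
# Namespace CR for the NFD Operator; the cluster-monitoring
# label key shown here is an assumption.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
  labels:
    name: openshift-nfd
    openshift.io/cluster-monitoring: "true"
```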
Install the NFD Operator in the namespace you created in the previous step by creating the following objects:
Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file.

Create the OperatorGroup CR by running the following command:

$ oc create -f nfd-operatorgroup.yaml

Create the following Subscription CR and save the YAML in the nfd-sub.yaml file.

Example Subscription

Create the Subscription object by running the following command:

$ oc create -f nfd-sub.yaml

Change to the openshift-nfd project:

$ oc project openshift-nfd
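The OperatorGroup and Subscription CRs referenced in the steps above might look like the following sketch; the channel and catalog source names are assumptions based on typical OLM subscriptions for Red Hat operators:

```yaml
# nfd-operatorgroup.yaml: scope the Operator to the openshift-nfd namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  targetNamespaces:
    - openshift-nfd
---
# nfd-sub.yaml: subscribe to the NFD Operator.
# The channel and source values are assumptions; check your catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```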
Verification
To verify that the Operator deployment is successful, run:
$ oc get pods

Example output

NAME                                      READY   STATUS    RESTARTS   AGE
nfd-controller-manager-7f86ccfb58-vgr4x   2/2     Running   0          10m

A successful deployment shows a Running status.
3.1.2. Installing the NFD Operator using the web console
As a cluster administrator, you can install the NFD Operator using the web console.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Choose Node Feature Discovery from the list of available Operators, and then click Install.
- On the Install Operator page, select A specific namespace on the cluster, and then click Install. You do not need to create a namespace because it is created for you.
Verification
To verify that the NFD Operator installed successfully:
- Navigate to the Operators → Installed Operators page. Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded.

Note: During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
Troubleshooting
If the Operator does not appear as installed, troubleshoot further:
- Navigate to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
- Navigate to the Workloads → Pods page and check the logs for pods in the openshift-nfd project.
3.2. Using the Node Feature Discovery Operator
The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options.
As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI (oc) or the web console.
Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly.
3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI
As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI (oc).
The spec.operand.image setting requires a -rhel9 image to be defined for use with OpenShift Container Platform releases 4.13 and later.
The following example shows the use of -rhel9 to acquire the correct image.
Prerequisites
- You have access to an OpenShift Container Platform cluster
- You installed the OpenShift CLI (oc).
- You logged in as a user with cluster-admin privileges.
- You installed the NFD Operator.
Procedure
Create a NodeFeatureDiscovery CR. The operand.image field is mandatory.

Create the NodeFeatureDiscovery CR by running the following command:

$ oc apply -f <filename>
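A NodeFeatureDiscovery CR along these lines could be used; the image path, tag, and worker configuration shown are illustrative assumptions, with operand.image set explicitly as required and a -rhel9 image used per the note above:

```yaml
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec:
  operand:
    # Mandatory; use a -rhel9 image for OpenShift Container Platform
    # 4.13 and later. The exact registry path and tag are assumptions.
    image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.13
    imagePullPolicy: Always
  workerConfig:
    configData: |
      core:
        sleepInterval: 60s
      sources:
        pci:
          deviceClassWhitelist:
            - "0200"
            - "03"
```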
Verification
Check that the NodeFeatureDiscovery CR was created by running the following command:

$ oc get pods

A successful deployment shows a Running status.
3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment
As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI (oc).
Prerequisites
- You have access to an OpenShift Container Platform cluster
- You installed the OpenShift CLI (oc).
- You logged in as a user with cluster-admin privileges.
- You installed the NFD Operator.
- You have access to a mirror registry with the required images.
- You installed the skopeo CLI tool.
Procedure
Determine the digest of the registry image:
Run the following command:

$ skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>

Example command

$ skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12

Inspect the output to identify the image digest:

Example output

{
  ...
  "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
  ...
}
Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry by running the following command:

$ skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>

Example command

$ skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef

Create a NodeFeatureDiscovery CR. The operand.image field is mandatory.
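For reference, a disconnected-environment NodeFeatureDiscovery CR might look like the following sketch, with operand.image pointing at the mirrored image by digest:

```yaml
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec:
  operand:
    # Mandatory; reference the mirrored image by digest so that no
    # connection to registry.redhat.io is needed at run time.
    image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>
    imagePullPolicy: Always
```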
Create the NodeFeatureDiscovery CR by running the following command:

$ oc apply -f <filename>
Verification
Check the status of the NodeFeatureDiscovery CR by running the following command:

$ oc get nodefeaturediscovery nfd-instance -o yaml

Check that the pods are running without ImagePullBackOff errors by running the following command:

$ oc get pods -n <nfd_namespace>
3.2.3. Creating a NodeFeatureDiscovery CR by using the web console
As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster
- You logged in as a user with cluster-admin privileges.
- You installed the NFD Operator.
Procedure
- Navigate to the Operators → Installed Operators page.
- In the Node Feature Discovery section, under Provided APIs, click Create instance.
- Edit the values of the NodeFeatureDiscovery CR.
- Click Create.
Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly.
3.3. Configuring the Node Feature Discovery Operator
3.3.1. core
The core section contains common configuration settings that are not specific to any particular feature source.
3.3.1.1. core.sleepInterval
core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done.
This value is overridden by the deprecated --sleep-interval command-line flag, if specified.
Example usage
core:
  sleepInterval: 60s
The default value is 60s.
3.3.1.2. core.sources
core.sources specifies the list of enabled feature sources. A special value all enables all feature sources.
This value is overridden by the deprecated --sources command-line flag, if specified.
Default: [all]
Example usage
core:
  sources:
    - system
    - custom
3.3.1.3. core.labelWhiteList
core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published.
The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted.
This value is overridden by the deprecated --label-whitelist command-line flag, if specified.
Default: null
Example usage
core:
  labelWhiteList: '^cpu-cpuid'
3.3.1.4. core.noPublish
Setting core.noPublish to true disables all communication with the nfd-master. It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master.
This value is overridden by the --no-publish command-line flag, if specified.
Example usage

core:
  noPublish: true
The default value is false.
3.3.2. core.klog
The following options specify the logger configuration, most of which can be dynamically adjusted at run-time.
The logger options can also be specified using command-line flags, which take precedence over any corresponding config file options.
3.3.2.1. core.klog.addDirHeader
If set to true, core.klog.addDirHeader adds the file directory to the header of the log messages.
Default: false
Run-time configurable: yes
3.3.2.2. core.klog.alsologtostderr
Log to standard error as well as files.
Default: false
Run-time configurable: yes
3.3.2.3. core.klog.logBacktraceAt
When logging hits line file:N, emit a stack trace.
Default: empty
Run-time configurable: yes
3.3.2.4. core.klog.logDir
If non-empty, write log files in this directory.
Default: empty
Run-time configurable: no
3.3.2.5. core.klog.logFile
If not empty, use this log file.
Default: empty
Run-time configurable: no
3.3.2.6. core.klog.logFileMaxSize
core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
Default: 1800
Run-time configurable: no
3.3.2.7. core.klog.logtostderr
Log to standard error instead of files.
Default: true
Run-time configurable: yes
3.3.2.8. core.klog.skipHeaders
If core.klog.skipHeaders is set to true, avoid header prefixes in the log messages.
Default: false
Run-time configurable: yes
3.3.2.9. core.klog.skipLogHeaders
If core.klog.skipLogHeaders is set to true, avoid headers when opening log files.
Default: false
Run-time configurable: no
3.3.2.10. core.klog.stderrthreshold
Logs at or above this threshold go to stderr.
Default: 2
Run-time configurable: yes
3.3.2.11. core.klog.v
core.klog.v is the number for the log level verbosity.
Default: 0
Run-time configurable: yes
3.3.2.12. core.klog.vmodule
core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging.
Default: empty
Run-time configurable: yes
3.3.3. sources
The sources section contains feature source specific configuration parameters.
3.3.3.1. sources.cpu.cpuid.attributeBlacklist
Prevent publishing cpuid features listed in this option.
This value is overridden by sources.cpu.cpuid.attributeWhitelist, if specified.
Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3]
Example usage
sources:
  cpu:
    cpuid:
      attributeBlacklist: [MMX, MMXEXT]
3.3.3.2. sources.cpu.cpuid.attributeWhitelist
Only publish the cpuid features listed in this option.
sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist.
Default: empty
Example usage
sources:
  cpu:
    cpuid:
      attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]
3.3.3.3. sources.kernel.kconfigFile
sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations.
Default: empty
Example usage
sources:
  kernel:
    kconfigFile: "/path/to/kconfig"
3.3.3.4. sources.kernel.configOpts
sources.kernel.configOpts represents kernel configuration options to publish as feature labels.
Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT]
Example usage
sources:
  kernel:
    configOpts: [NO_HZ, X86, DMI]
3.3.3.5. sources.pci.deviceClassWhitelist
sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03) or full class-subclass combination (for example 0300). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields.
Default: ["03", "0b40", "12"]
Example usage
sources:
  pci:
    deviceClassWhitelist: ["0200", "03"]
3.3.3.6. sources.pci.deviceLabelFields
sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class, vendor, device, subsystem_vendor and subsystem_device.
Default: [class, vendor]
Example usage
sources:
  pci:
    deviceLabelFields: [class, vendor, device]
With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true.
3.3.3.7. sources.usb.deviceClassWhitelist
sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields.
Default: ["0e", "ef", "fe", "ff"]
Example usage
sources:
  usb:
    deviceClassWhitelist: ["ef", "ff"]
3.3.3.8. sources.usb.deviceLabelFields
sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class, vendor, and device.
Default: [class, vendor, device]
Example usage
sources:
  usb:
    deviceLabelFields: [class, vendor]
With the example config above, NFD would publish labels like: feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true.
3.3.3.9. sources.custom
sources.custom is the list of rules to process in the custom feature source to create user-specific labels.
Default: empty
Example usage
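A sketch of custom rules, following the upstream NFD custom feature source format; the label names, kernel module names, and PCI IDs here are illustrative assumptions:

```yaml
sources:
  custom:
    # Label nodes on which either of the listed kernel modules is loaded.
    - name: "my.kernel.feature"
      matchOn:
        - loadedKMod: ["example_kmod1", "example_kmod2"]
    # Label nodes that have a matching PCI vendor/device combination.
    - name: "my.pci.feature"
      matchOn:
        - pciId:
            vendor: ["15b3"]
            device: ["1014", "1017"]
```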
3.4. About the NodeFeatureRule custom resource
NodeFeatureRule objects are Node Feature Discovery custom resources designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices.

NodeFeatureRule objects provide a flexible rule-based mechanism for creating vendor- or application-specific labels and, optionally, taints based on node features.
3.5. Using the NodeFeatureRule custom resource
Create a NodeFeatureRule object to label nodes if a set of rules match the conditions.
Procedure
Create a custom resource file named nodefeaturerule.yaml that contains the following text. This custom resource specifies that labeling occurs when the veth module is loaded and any PCI device with vendor code 8086 exists in the cluster.

Apply the nodefeaturerule.yaml file to your cluster by running the following command:

$ oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml

The example applies the feature label on nodes with the veth module loaded and any PCI device with vendor code 8086 present.

Note: A relabeling delay of up to 1 minute might occur.
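The nodefeaturerule.yaml file referenced above might look like the following sketch, modeled on the upstream node-feature-discovery example; the rule and label names are illustrative:

```yaml
apiVersion: nfd.k8s-sigs.io/v1alpha1
kind: NodeFeatureRule
metadata:
  name: my-rule
spec:
  rules:
    - name: "example rule"
      labels:
        "example-custom-feature": "true"
      # The label is created only if both matchers below match.
      matchFeatures:
        # Match if the veth kernel module is loaded.
        - feature: kernel.loadedmodule
          matchExpressions:
            veth: {op: Exists}
        # Match if any PCI device with vendor code 8086 exists.
        - feature: pci.device
          matchExpressions:
            vendor: {op: In, value: ["8086"]}
```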
3.6. Using the NFD Topology Updater
The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pods on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to each of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster.
To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator.
3.6.1. NodeResourceTopology CR
When run with NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as:
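A NodeResourceTopology instance might look like the following sketch; the zone names, topology policy, and resource figures are illustrative assumptions:

```yaml
apiVersion: topology.node.k8s.io/v1alpha1
kind: NodeResourceTopology
metadata:
  name: node1
topologyPolicies: ["SingleNUMANodeContainerLevel"]
zones:
  # One zone per NUMA node, with per-resource accounting.
  - name: node-0
    type: Node
    resources:
      - name: cpu
        capacity: 20
        allocatable: 16
        available: 10
  - name: node-1
    type: Node
    resources:
      - name: cpu
        capacity: 30
        allocatable: 30
        available: 15
```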
3.6.2. NFD Topology Updater command-line flags
To view available command-line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command:
$ podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help
3.6.2.1. -ca-file
The -ca-file flag is one of the three flags, together with the -cert-file and -key-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master.
Default: empty
The -ca-file flag must be specified together with the -cert-file and -key-file flags.
Example
$ nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key
3.6.2.2. -cert-file
The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests.
Default: empty
The -cert-file flag must be specified together with the -ca-file and -key-file flags.
Example
$ nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt
3.6.2.3. -h, -help
Print usage and exit.
3.6.2.4. -key-file
The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding to the given certificate file (-cert-file) that is used for authenticating outgoing requests.
Default: empty
The -key-file flag must be specified together with the -ca-file and -cert-file flags.
Example
$ nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt
3.6.2.5. -kubelet-config-file
The -kubelet-config-file flag specifies the path to the kubelet’s configuration file.
Default: /host-var/lib/kubelet/config.yaml
Example
$ nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml
3.6.2.6. -no-publish
The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master.
Default: false
Example
$ nfd-topology-updater -no-publish
3.6.2.7. -oneshot
The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection.
Default: false
Example
$ nfd-topology-updater -oneshot -no-publish
3.6.2.8. -podresources-socket
The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them.
Default: /host-var/lib/kubelet/pod-resources/kubelet.sock
Example
$ nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock
3.6.2.9. -server
The -server flag specifies the address of the nfd-master endpoint to connect to.
Default: localhost:8080
Example
$ nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443
3.6.2.10. -server-name-override
The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes.
Default: empty
Example
$ nfd-topology-updater -server-name-override=localhost
3.6.2.11. -sleep-interval
The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies infinite sleep interval and no re-detection is done.
Default: 60s
Example
$ nfd-topology-updater -sleep-interval=1h
3.6.2.12. -version
Print version and exit.
3.6.2.13. -watch-namespace
The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting. This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process.
Default: *
Example
$ nfd-topology-updater -watch-namespace=rte