Kubernetes NMState
Observing and updating node network state and configuration using Kubernetes NMState in OpenShift Container Platform
Chapter 1. Observing and updating the node network state and configuration
To observe and update the node network state and configuration in your cluster, you can use the Kubernetes NMState Operator. You can view network states, create and manage network configuration policies, and configure interfaces on cluster nodes.
For more information about how to install the NMState Operator, see Kubernetes NMState Operator.
You cannot modify an existing br-ex bridge configuration by using the Kubernetes NMState Operator. To customize the br-ex bridge, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document.
1.1. Viewing the network state of a node by using the CLI
Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
List all the NodeNetworkState objects in the cluster:

$ oc get nns

Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

$ oc get nns node01 -o yaml

Example output

apiVersion: nmstate.io/v1
kind: NodeNetworkState
metadata:
  name: node01
status:
  currentState:
    dns-resolver:
# ...
    interfaces:
# ...
    route-rules:
# ...
    routes:
# ...
  lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z"

metadata.name
- The name of the NodeNetworkState object is taken from the node.
status.currentState
- The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
status.lastSuccessfulUpdateTime
- Timestamp of the last successful update. This is updated periodically if the node is reachable and can be used to evaluate the freshness of the report.
1.2. Viewing a graphical representation of the network state of a node (NNS) topology from the web console
To make the configuration of the node network in the cluster easier to understand, you can view it in the form of a diagram. The NNS topology diagram displays all node components (network interface controllers, bridges, bonds, and VLANs), their properties and configurations, and connections between the nodes.
To open the topology view of the cluster, use the following steps:
In the Administrator view of the OpenShift Container Platform web console, navigate to Networking → Node Network Configuration.
The NNS topology diagram opens. Each group of components represents a single node.
- To display the configuration and properties of a node, click inside the border of the node.
- To display the features or the YAML file of a specific component (for example, an interface or a bridge), click the icon of the component.
- The icons of active components have green borders; the icons of disconnected components have red borders.
1.3. Viewing the list of NodeNetworkState resources
As an administrator, you can use the OpenShift Container Platform web console to view the list of NodeNetworkState resources and the interfaces that are created on the nodes.
Procedure
- Navigate to Networking → Node Network Configuration.
- Click the List icon. You can now view the list of NodeNetworkState resources and the corresponding interfaces that are created on the nodes.
- You can use Filter based on Interface state, Interface type, and IP, or the search bar based on criteria Name or Label, to narrow down the displayed NodeNetworkState resources.
- To access the detailed information about a NodeNetworkState resource, click the resource name listed in the Name column.
- To expand and view the Network Details section for the NodeNetworkState resource, click the greater than (>) symbol. Alternatively, you can click each interface type under the Network interface column to view the network details.
1.4. About the NodeNetworkConfigurationPolicy manifest file
A NodeNetworkConfigurationPolicy (NNCP) manifest file describes the requested network configuration for nodes in your cluster.
If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for newly created NNCP CRs and instantly applies each CR to the node. Consider the following logical-order issue example:
- You create NNCP 1 for defining the bridge interface that listens on a VLAN port, such as eth1.1000.
- You create NNCP 2 for defining the VLAN interface and specify the port for this interface, such as eth1.1000.
- You apply NNCP 1 before you apply NNCP 2 to the node.

The node experiences a connectivity issue because NNCP 1 references port eth1.1000, which does not exist until NNCP 2 creates the VLAN interface.
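Based on the alphanumeric ordering rule above, one way to avoid this issue is to name the policies so that sorting applies the VLAN definition before the bridge that uses it. The following sketch uses hypothetical policy names (01-vlan-eth1-1000 and 02-br1-vlan):

```yaml
# Hypothetical names: "01-..." sorts before "02-...", so the VLAN
# interface exists before the bridge that uses it is configured.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 01-vlan-eth1-1000     # applied first: creates the VLAN interface
spec:
  desiredState:
    interfaces:
    - name: eth1.1000
      type: vlan
      state: up
      vlan:
        base-iface: eth1
        id: 1000
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 02-br1-vlan           # applied second: bridge uses the VLAN port
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      bridge:
        port:
        - name: eth1.1000
```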
After you apply a node network policy to a node, the Kubernetes NMState Operator configures the networking configuration for nodes according to the node network policy details.
The following interface names are reserved, and you cannot use them in NMState configurations:
- br-ext
- br-int
- br-local
- br-nexthop
- br0
- ext-vxlan
- ext
- genev_sys_*
- int
- k8s-*
- ovn-k8s-*
- patch-br-*
- tun0
- vxlan_sys_*
You can create an NNCP by using either the OpenShift CLI (oc) or the web console.
Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" section.
If you want to delete an NNCP, you can use the oc delete nncp command.
Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator re-adds the removed interface whenever a pod or a node is restarted.
Effectively deleting the NNCP, the node network policy, and any interfaces typically requires the following actions:
- Edit the NNCP and remove interface details from the file. Ensure that you do not remove the name, type, and state parameters from the file.
- Add state: absent under the interfaces.state section of the NNCP.
- Run oc apply -f <nncp_file_name>. After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, any interface that exists on each node is now marked as absent.
- Run oc delete nncp to delete the NNCP.
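As a minimal sketch of the first two actions above, the edited NNCP might look like the following; the policy and interface names are hypothetical:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy      # hypothetical policy name
spec:
  desiredState:
    interfaces:
    - name: br1              # keep the name, type, and state parameters
      type: linux-bridge
      state: absent          # marks the interface for removal
```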
Additional resources
1.5. Managing policy from the web console
You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster.
1.5.1. Monitoring the policy status
You can monitor the policy status from the NodeNetworkConfigurationPolicy page. This page displays all the policies created in the cluster in a tabular format, with the following columns:
- Name
- The name of the policy created.
- Matched nodes
- The count of nodes where the policies are applied. This could be either a subset of nodes based on the node selector or all the nodes on the cluster.
- Node network state
- The enactment state of the matched nodes. You can click on the enactment state and view detailed information on the status.
To find the desired policy, you can filter the list either based on enactment state by using the Filter option, or by using the search option.
1.5.2. Creating a policy
You can create a policy by using either a form or YAML in the web console. When creating a policy using a form, you can see how the new policy changes the topology of the nodes in your cluster in real time.
Procedure
- Navigate to Networking → Node Network Configuration.
On the Node Network Configuration page, click Create and select the From Form option.
Note: To create a policy by using YAML, click Create → With YAML. However, the following steps apply only to the form method.
- Optional: Check the Apply this NodeNetworkConfigurationPolicy only to specific subsets of nodes using the node selector checkbox to specify the nodes where the policy must be applied.
- Enter the policy name in the Policy name field.
- Optional: Enter the description of the policy in the Description field.
- Click Next to move to the Policy Interfaces section.
In the Bridging part of the Policy Interfaces section, a bridge interface named br0 is added by default with preset values in editable fields. If required, edit the values by performing the following steps:
- Enter the name of the interface in the Interface name field.
- Select the required network state. The default selected state is Up.
Select the type of interface. The available types are Bridge, Bonding, and Ethernet. The default selected value is Bridge.
Note: Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. After it is added, you cannot edit the policy by using the form.
Optional: In the IP configuration section, select the IPv4 checkbox to assign an IPv4 address to the interface, and configure the IP address assignment details:
- Click IP address to configure the interface with a static IP address, or DHCP to auto-assign an IP address.
- If you selected the IP address option, enter the IPv4 address in the IPV4 address field, and enter the prefix length in the Prefix length field.
- If you selected the DHCP option, uncheck the options that you want to disable. The available options are Auto-DNS, Auto-routes, and Auto-gateway. All the options are selected by default.
- Optional: Enter the port number in Port field.
- Optional: Check the checkbox Enable STP to enable STP.
- Optional: To add an interface to the policy, click Add another interface to the policy.
- Optional: To remove an interface from the policy, click the icon next to the interface.
Note: Alternatively, you can click Edit YAML at the top of the page to continue editing the form by using YAML.
- Click Next to go to the Review section of the form.
- Verify the settings and click Create to create the policy.
1.6. Updating the NodeNetworkConfigurationPolicy manifest file
To modify the network configuration for nodes in your OpenShift Container Platform cluster, you can update the NodeNetworkConfigurationPolicy manifest file for the relevant policy.
1.6.1. Updating the policy by using form
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the icon next to the policy that you want to edit, and click Edit.
- Edit the fields that you want to update.
- Click Save.
Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. After it is added, you cannot edit the policy by using the form.
1.6.2. Updating the policy by using YAML
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the policy name under the Name column for the policy you want to edit.
- Click the YAML tab, and edit the YAML.
- Click Save.
1.6.3. Deleting the policy
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the icon next to the policy that you want to delete, and click Delete.
- In the pop-up window, enter the policy name to confirm deletion, and click Delete.
1.7. Managing the NodeNetworkConfigurationPolicy manifest file
To configure network interfaces on nodes in your OpenShift Container Platform cluster, you can manage NodeNetworkConfigurationPolicy manifest files.
1.7.1. Creating an interface on nodes
Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the suitable <key>:<value> for your node selector.
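For example, the following is a sketch of a nodeSelector stanza that limits a policy to a single node; the hostname is a placeholder:

```yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: node01   # placeholder node name
```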
You can configure multiple nmstate-enabled nodes concurrently. By default, the configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field in the NodeNetworkConfigurationPolicy manifest. For example, if you have two nodes and you apply an NNCP manifest with the maxUnavailable parameter set to 50%, the policy applies to one node at a time.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
        auto-dns: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
    dns-resolver:
      config:
        search:
        - example.com
        - example.org
        server:
        - 8.8.8.8

- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 4
- Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%", or an absolute value (number), such as 3.
- 5
- Optional: Human-readable description for the interface.
- 6
- Optional: Specifies the search and server settings for the DNS server.
Create the node network policy:
$ oc apply -f br1-eth1-policy.yaml

- 1
- File name of the node network configuration policy manifest.
1.7.2. Confirming node network policy updates on nodes
When you apply a node network policy, a NodeNetworkConfigurationEnactment (NNCE) object is created for every node in the cluster. The node network configuration enactment is a read-only object that reports the status of the policy execution on that node.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To confirm that a policy has been applied to the cluster, list the policies and their status:

$ oc get nncp

Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy:

$ oc get nncp <policy> -o yaml

Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster:

$ oc get nnce

Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration:

$ oc get nnce <node>.<policy> -o yaml
1.7.3. Removing an interface from nodes
You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.
Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you must define that node network configuration in the policy.

If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a state of up and either DHCP or a static IP address.
Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator re-adds the removed interface whenever a pod or a node is restarted.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <br1-eth1-policy>
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: absent
    - name: eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true

- metadata.name defines the name of the policy.
- spec.nodeSelector defines the nodeSelector parameter. This parameter is optional. If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- spec.desiredState.interfaces defines the name, type, and desired state of an interface. This example creates both Linux bridge and Ethernet networking interfaces. Setting state: absent removes the interface.
- spec.desiredState.interfaces.ipv4 defines ipv4 settings for the interface. These settings are optional. If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. Setting enabled: true enables ipv4 in this example.

Update the policy on the node and remove the interface:

$ oc apply -f <filename.yaml>

where <filename.yaml> is the file name of the policy manifest.
1.8. Example policy configurations for different interfaces
Before you read the example NodeNetworkConfigurationPolicy (NNCP) manifests for different interfaces, consider the following factors when you apply a policy to nodes:
- If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for newly created NNCP CRs and instantly applies each CR to the node.
- When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application.
If you set the parameter to
maxUnavailablein two NNCP manifests, the policy configuration coverage applies to 100% of the nodes in your cluster.50% - When a node restarts, the Kubernetes NMState Operator cannot control the order to which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object.
- Consider specifying all related network configurations in a single policy.
1.8.1. Example: Ethernet interface node network configuration policy
Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: eth1
description: Configuring eth1 on node01
type: ethernet
state: up
ipv4:
dhcp: true
enabled: true
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses a hostname node selector.
- 4
- Name of the interface.
- 5
- Optional: Human-readable description of the interface.
- 6
- The type of interface. This example creates an Ethernet networking interface.
- 7
- The requested state for the interface after creation.
- 8
- Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9
- Enables ipv4 in this example.
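As the last two callouts note, you can set a static IP instead of using DHCP. A minimal sketch of that variant of the ipv4 stanza, with a placeholder address:

```yaml
ipv4:
  enabled: true
  dhcp: false
  address:
  - ip: 192.0.2.10        # placeholder static address
    prefix-length: 24
```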
1.8.2. Example: Linux bridge interface node network configuration policy
Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br1-eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: br1
description: Linux bridge with eth1 as a port
type: linux-bridge
state: up
ipv4:
dhcp: true
enabled: true
bridge:
options:
stp:
enabled: false
port:
- name: eth1
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses a hostname node selector.
- 4
- Name of the interface.
- 5
- Optional: Human-readable description of the interface.
- 6
- The type of interface. This example creates a bridge.
- 7
- The requested state for the interface after creation.
- 8
- Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9
- Enables ipv4 in this example.
- 10
- Disables stp in this example.
- 11
- The node NIC to which the bridge attaches.
1.8.3. Example: VLAN interface node network configuration policy
Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface and its related base interface in the same NodeNetworkConfigurationPolicy manifest.
When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object.
The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: vlan-eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: eth1.102
description: VLAN using eth1
type: vlan
state: up
vlan:
base-iface: eth1
id: 102
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses a hostname node selector.
- 4
- Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported.
- 5
- Optional: Human-readable description of the interface.
- 6
- The type of interface. This example creates a VLAN.
- 7
- The requested state for the interface after creation.
- 8
- The node NIC to which the VLAN is attached.
- 9
- The VLAN tag.
Additional resources
1.8.4. Example: Bond interface node network configuration policy
Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
OpenShift Container Platform only supports the following bond modes:

- active-backup
- balance-xor
- 802.3ad

Other bond modes are not supported.
The balance-xor and 802.3ad modes generally require link-aggregation support on the connected network switch, whereas the active-backup mode does not.
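The following sketch shows a link-aggregation stanza for the 802.3ad mode. The xmit_hash_policy option is optional, its name here is an assumption, and 802.3ad also requires LACP configuration on the connected switch:

```yaml
link-aggregation:
  mode: 802.3ad
  options:
    miimon: '140'
    xmit_hash_policy: layer3+4   # assumed option name; tune for your traffic
  port:
  - eth1
  - eth2
```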
The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: bond0-eth1-eth2-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: bond0
description: Bond with ports eth1 and eth2
type: bond
state: up
ipv4:
dhcp: true
enabled: true
link-aggregation:
mode: active-backup
options:
miimon: '140'
port:
- eth1
- eth2
mtu: 1450
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses a hostname node selector.
- 4
- Name of the interface.
- 5
- Optional: Human-readable description of the interface.
- 6
- The type of interface. This example creates a bond.
- 7
- The requested state for the interface after creation.
- 8
- Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9
- Enables ipv4 in this example.
- 10
- The driver mode for the bond. This example uses active-backup.
- 11
- Optional: This example uses miimon to inspect the bond link every 140ms.
- 12
- The subordinate node NICs in the bond.
- 13
- Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.
1.8.5. Example: Multiple interfaces in the same node network configuration policy
You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.
If multiple interfaces use the same default configuration, a single NetworkManager connection profile activates on multiple interfaces simultaneously, and this causes connections to have the same universally unique identifier (UUID). To avoid this issue, ensure that each interface has a specific configuration that is different from the default configuration.
The following example YAML file creates a bond that is named bond10 and a VLAN interface that is named bond10.103, which uses the bond as its base interface.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: bond-vlan
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: bond10
description: Bonding eth2 and eth3
type: bond
state: up
link-aggregation:
mode: balance-xor
options:
miimon: '140'
port:
- eth2
- eth3
- name: bond10.103
description: vlan using bond10
type: vlan
state: up
vlan:
base-iface: bond10
id: 103
ipv4:
dhcp: true
enabled: true
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example uses a hostname node selector.
- 4 11
- Name of the interface.
- 5 12
- Optional: Human-readable description of the interface.
- 6 13
- The type of interface.
- 7 14
- The requested state for the interface after creation.
- 8
- The driver mode for the bond.
- 9
- Optional: This example uses miimon to inspect the bond link every 140ms.
- 10
- The subordinate node NICs in the bond.
- 15
- The node NIC to which the VLAN is attached.
- 16
- The VLAN tag.
- 17
- Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 18
- Enables ipv4 in this example.
1.8.6. Example: Node network configuration policy for virtual functions
Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VFs) in an existing cluster by applying a NodeNetworkConfigurationPolicy manifest.

You can apply a NodeNetworkConfigurationPolicy manifest to an existing cluster to complete the following tasks:
- Configure QoS host network settings for VFs to optimize performance.
- Add, remove, or update VFs for a network interface.
- Manage VF bonding configurations.
To update host network settings for SR-IOV VFs by using NMState on physical functions that are also managed by the SR-IOV Network Operator, you must set the externallyManaged parameter in the SriovNetworkNodePolicy resource to true.
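A minimal sketch of that setting in a SriovNetworkNodePolicy resource; the policy name, resource name, and PF name are placeholders:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp                      # hypothetical policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic               # placeholder resource name
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 3
  nicSelector:
    pfNames:
    - ens1f0                           # placeholder PF name
  externallyManaged: true              # VFs are configured outside the Operator, for example by NMState
```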
The following YAML file is an example of a manifest that defines QoS policies for a VF. This YAML includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: qos
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
desiredState:
interfaces:
- name: ens1f0
description: Change QOS on VF0
type: ethernet
state: up
ethernet:
sr-iov:
total-vfs: 3
vfs:
- id: 0
max-tx-rate: 200
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- This example applies to all nodes with the worker role.
- 4
- Name of the physical function (PF) network interface.
- 5
- Optional: Human-readable description of the interface.
- 6
- The type of interface.
- 7
- The requested state for the interface after configuration.
- 8
- The total number of VFs.
- 9
- Identifies the VF with an ID of 0.
- 10
- Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps.
The following YAML file is an example of a manifest that adds a VF for a network interface.
In this sample configuration, the ens1f1v0 VF is created from the ens1f1 network interface and attached to the bond0 bond interface, which uses the active-backup mode.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: addvf
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
maxUnavailable: 3
desiredState:
interfaces:
- name: ens1f1
type: ethernet
state: up
ethernet:
sr-iov:
total-vfs: 1
vfs:
- id: 0
trust: true
vlan-id: 477
- name: bond0
description: Attach VFs to bond
type: bond
state: up
link-aggregation:
mode: active-backup
options:
primary: ens1f0v0
port:
- ens1f0v0
- ens1f1v0
- 1
- Name of the policy.
- 2
- Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3
- The example applies to all nodes with the worker role.
- 4
- Name of the VF network interface.
- 5
- Number of VFs to create.
- 6
- Setting to allow failover bonding between the active and backup VFs.
- 7
- ID of the VLAN. The example uses hardware offloading to define a VLAN directly on the VF.
- 8
- Name of the bonding network interface.
- 9
- Optional: Human-readable description of the interface.
- 10
- The type of interface.
- 11
- The requested state for the interface after configuration.
- 12
- The bonding policy for the bond.
- 13
- The primary attached bonding port.
- 14
- The ports for the bonded network interface.
- 15
- In this example, the VLAN network interface is added as an additional interface to the bonded network interface.
1.8.7. Example: Network interface with a VRF instance node network configuration policy
Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a NodeNetworkConfigurationPolicy manifest.
By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources.
When configuring Virtual Routing and Forwarding (VRF), you must change the VRF value to a table ID lower than 1000, because the table ID 1000 is reserved.
In a bare-metal environment, you can announce load balancer services through interfaces belonging to a VRF instance by using MetalLB. For more information, see the Additional resources section.
The following YAML file is an example of associating a VRF instance to a network interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: vrfpolicy
spec:
nodeSelector:
vrf: "true"
maxUnavailable: 3
desiredState:
interfaces:
- name: ens4vrf
type: vrf
state: up
vrf:
port:
- ens4
route-table-id: 2
Additional resources
1.9. Creating an IP over InfiniBand interface on nodes
On the OpenShift Container Platform web console, you can install a Red Hat certified third-party Operator, such as the NVIDIA Network Operator, that supports IP over InfiniBand (IPoIB) mode. Typically, you would use the third-party Operator with other vendor infrastructure to manage resources in an OpenShift Container Platform cluster. To create an IPoIB interface on nodes in your cluster, you must define an IPoIB interface in a NodeNetworkConfigurationPolicy (NNCP) manifest file.
If you need to attach IPoIB to a bond interface, only the active-backup bond mode is supported.
The OpenShift Container Platform documentation describes defining only the IPoIB interface configuration in a NodeNetworkConfigurationPolicy (NNCP) manifest file.
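Because only the active-backup mode is supported when attaching IPoIB interfaces to a bond, such a bond might be sketched as follows; the bond and interface names are hypothetical:

```yaml
interfaces:
- name: bond-ib0                # hypothetical bond name
  type: bond
  state: up
  link-aggregation:
    mode: active-backup         # the only supported mode for IPoIB ports
    port:
    - ibp27s0                   # hypothetical IPoIB interface names
    - ibp27s0d1
```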
For more information about the NVIDIA Operator, see Getting Started with Red Hat OpenShift (NVIDIA Docs Hub).
Prerequisites
- You installed a Red Hat certified third-party Operator that supports an IPoIB interface.
- You have installed the OpenShift CLI (oc).
Procedure
Create or edit a NodeNetworkConfigurationPolicy (NNCP) manifest file, and then specify an IPoIB interface in the file.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-0-ipoib
spec:
# ...
  interfaces:
  - description: ""
    infiniband:
      mode: datagram
      pkey: "0xffff"
    ipv4:
      address:
      - ip: 100.125.3.4
        prefix-length: 16
      dhcp: false
      enabled: true
    ipv6:
      enabled: false
    name: ibp27s0
    state: up
    identifier: mac-address
    mac-address: 20:00:55:04:01:FE:80:00:00:00:00:00:00:00:02:C9:02:00:23:13:92
    type: infiniband
# ...

where:
- <mode>: datagram is the default mode for an IPoIB interface. This mode provides improved CPU performance and low-latency capabilities for pod-to-pod communication. The connected mode is also supported, but consider using it only when you need to adjust the maximum transmission unit (MTU) value to improve node connectivity with surrounding network devices.
- <pkey>: Supports a string or an integer value. The parameter defines the protection key, or P-key, for the interface for the purposes of authentication and encrypted communications with a third-party vendor, such as NVIDIA. The values None and 0xffff indicate the protection key for the base interface in an InfiniBand system.
- <identifier>: Supported values include name, the default value, and mac-address. The name value applies a configuration to an interface that holds a specified interface name.
- <mac-address>: Holds the MAC address of an interface. For an IP-over-InfiniBand (IPoIB) interface, the address is a 20-byte string.
- <type>: Sets the type of interface to infiniband.
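Because the pkey parameter accepts a string or an integer, the same partition key can be written in either form. The following sketch shows the equivalence of the two spellings of the base-interface P-key using standard shell arithmetic, which accepts the 0x prefix:

```shell
# The base-interface P-key can be written as "0xffff" (string) or 65535 (integer).
pkey_hex="0xffff"
pkey_int=$(( pkey_hex ))   # shell arithmetic evaluates the 0x-prefixed value

echo "hex form:     $pkey_hex"
echo "integer form: $pkey_int"
```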
Apply the NNCP configuration to each node in your cluster by running the following command. The Kubernetes NMState Operator can then create an IPoIB interface on each node.

$ oc apply -f <nncp_file_name>

where:

- <nncp_file_name>: Replace <nncp_file_name> with the name of your NNCP file.
1.10. Example policy configurations that use dynamic matching and templating
The following example configuration snippets show node network policies that use dynamic matching and templating.
Applying node network configuration policies that use dynamic matching and templating is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-copy-ipv4-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  capture:
    eth1-nic: interfaces.name=="eth1"
    eth1-routes: routes.running.next-hop-interface=="eth1"
    br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1"
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge
      state: up
      ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}"
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
    routes:
      config: "{{ capture.br1-routes.routes.running }}"
- metadata.name: The name of the policy.
- spec.nodeSelector: Optional. If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- spec.capture.eth1-nic: The reference to the node NIC to which the bridge attaches.
- interfaces.type: The type of interface. This example creates a bridge.
- interfaces.ipv4: The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry.
- bridge.port.name: The node NIC to which the bridge attaches.
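The br1-routes capture takes the routes captured for eth1 and rewrites their next-hop interface to br1. As a rough offline illustration of that substitution only (this is not how the NMState Operator evaluates capture expressions), assuming a small sample routes dump:

```shell
# Sample running routes, as a NodeNetworkState might report them.
cat > routes-sample.yaml <<'EOF'
routes:
  running:
  - destination: 0.0.0.0/0
    next-hop-address: 192.0.2.1
    next-hop-interface: eth1
EOF

# Imitate the capture pipeline: select eth1 routes, retarget them to br1.
sed 's/next-hop-interface: eth1/next-hop-interface: br1/' routes-sample.yaml
```

On the cluster, the Operator performs this rewrite from the live route state, so the bridge inherits the routes without you restating them.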
1.10.2. Example: Node network configuration policy to enable LLDP reporting
The following YAML file is an example of a NodeNetworkConfigurationPolicy manifest that enables LLDP reporting on all Ethernet interfaces that are up:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: enable-lldp-ethernets-up
spec:
  capture:
    ethernets: interfaces.type=="ethernet"
    ethernets-up: capture.ethernets | interfaces.state=="up"
    ethernets-lldp: capture.ethernets-up | interfaces.lldp.enabled:=true
  desiredState:
    interfaces: "{{ capture.ethernets-lldp.interfaces }}"
# ...
Additional resources
1.11. Examples: IP management
The following example configuration snippets show different methods of IP management.
These examples use the ethernet interface type.
1.11.1. Static
The following snippet statically configures an IP address on the Ethernet interface:
# ...
interfaces:
- name: eth1
  description: static IP on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: false
    address:
    - ip: 192.168.122.250
      prefix-length: 24
    enabled: true
# ...
- address.ip: Replace this value with the static IP address for the interface.
1.11.2. No IP address
The following snippet ensures that the interface has no IP address:
# ...
interfaces:
- name: eth1
  description: No IP on eth1
  type: ethernet
  state: up
  ipv4:
    enabled: false
# ...
Always set the state parameter to up and disable IP by setting the ipv4.enabled and ipv6.enabled parameters to false, rather than setting state: down.
1.11.3. Dynamic host configuration
The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS:
# ...
interfaces:
- name: eth1
  description: DHCP on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    enabled: true
# ...
The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS:
# ...
interfaces:
- name: eth1
  description: DHCP without gateway or DNS on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    auto-gateway: false
    auto-dns: false
    enabled: true
# ...
1.11.4. Media Access Control (MAC) address
You can use a MAC address to identify a network interface instead of using the name of the network interface. A network interface name can change for various reasons, such as an operating system configuration change. However, every network interface has a unique MAC address that does not change. This means that using a MAC address is a more permanent way to identify a specific network interface.
Supported values for the identifier parameter are name and mac-address. The default value is name.
Using a mac-address value for the identifier parameter means that the interface is identified by its MAC address. When you set the identifier parameter to mac-address, you must also set the mac-address parameter.
You can still specify a value for the name parameter when you use identifier: mac-address, but the nmstate provider identifies the interface by the MAC address.
The following snippet specifies a MAC address as the primary identifier for an Ethernet device named eth1 with the MAC address 8A:8C:92:1A:F6:98:
# ...
interfaces:
- name: eth1
  profile-name: wan0
  type: ethernet
  state: up
  identifier: mac-address
  mac-address: 8A:8C:92:1A:F6:98
# ...
1.11.5. DNS
By default, the nmstate API stores DNS values globally instead of storing them in a network interface.
Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file.
To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the YAML configuration file of the network interface. To apply the configuration to your cluster, run the oc apply -f <nncp_file_name> command.
The following example shows a default situation that stores DNS values globally:
Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the NodeNetworkConfigurationPolicy (NNCP) manifest.

Example of a DNS configuration for a network interface that globally stores DNS values
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-0-dns-testing
spec:
  nodeSelector:
    kubernetes.io/hostname: <target_node>
  desiredState:
    dns-resolver:
      config:
        server:
        - 2001:db8:f::1
        - 192.0.2.251
        search:
        - example.com
        - example.org
# ...

Important

You can specify DNS options under the dns-resolver.config section of your NNCP file as demonstrated in the following example:

# ...
desiredState:
  dns-resolver:
    config:
      options:
      - timeout:2
      - attempts:3
# ...

If you want to remove the DNS options from your network interface, apply the following configuration to your NNCP and then run the oc apply -f <nncp_file_name> command:

# ...
dns-resolver:
  config: {}
interfaces: []
# ...
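As a mental model for what the global dns-resolver configuration produces, the following sketch renders the example server and search values in /etc/resolv.conf syntax. This is only an illustration of the file format; on cluster nodes, the Operator, not a script like this, manages the resolver configuration:

```shell
# Render dns-resolver.config values in /etc/resolv.conf syntax.
servers="2001:db8:f::1 192.0.2.251"
search_domains="example.com example.org"

{
  echo "search $search_domains"
  for s in $servers; do
    echo "nameserver $s"
  done
} > resolv.conf.preview

cat resolv.conf.preview
```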
The following examples show situations that require configuring a network interface to store DNS values:
If you want to rank a static DNS name server over a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration (autoconf) mechanism in the network interface YAML configuration file.

Example configuration that adds 192.0.2.1 to the DNS name servers retrieved from the DHCPv4 network protocol

# ...
dns-resolver:
  config:
    server:
    - 192.0.2.1
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: true
    auto-dns: true
# ...

If you need to configure a network interface to store DNS values instead of adopting the default method, which uses the nmstate API to store DNS values globally, you can set static DNS values and static IP addresses in the network interface YAML file.

Important

Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond.
Example configuration that stores DNS values at the interface level
# ...
dns-resolver:
  config:
    server:
    - 2001:db8:1::d1
    - 2001:db8:1::d2
    - 192.0.2.1
    search:
    - example.com
    - example.org
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    address:
    - ip: 192.0.2.251
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    address:
    - ip: 2001:db8:1::1
      prefix-length: 64
    dhcp: false
    enabled: true
    autoconf: false
# ...

If you want to set static DNS search domains and static DNS name servers for your network interface, define the static interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration (autoconf) mechanism in the network interface YAML configuration file.

Important

Specifying the following dns-resolver configurations in the network interface YAML file might cause a race condition at reboot that prevents the NodeNetworkConfigurationPolicy (NNCP) from applying to pods that run in your cluster:

- Setting static DNS search domains and dynamic DNS name servers for your network interface.
- Specifying domain suffixes for the search parameter and not setting IP addresses for the server parameter.
Example configuration that sets the example.com and example.org static DNS search domains along with static DNS name server settings

# ...
dns-resolver:
  config:
    server:
    - 2001:db8:f::1
    - 192.0.2.251
    search:
    - example.com
    - example.org
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: true
    auto-dns: true
  ipv6:
    enabled: true
    dhcp: true
    autoconf: true
    auto-dns: true
# ...
1.12. Routes and route rules
After you configure an IP address for a network interface, you can configure routes and route rules in the NMState configuration for cluster nodes.
You cannot use the OVN-Kubernetes br-ex bridge in route or route rule configurations, and you cannot modify an existing br-ex bridge by using NMState. Instead, you can create a manifest object that includes a customized br-ex bridge as a cluster installation task.
For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document.
The routes section of an NMState configuration supports a running state and a config state. The running state shows the routes that are currently active on a node, whereas the config state defines the routes that you want the policy to apply.
After you apply an NMState configuration to cluster nodes and you want to change existing routes, you must specify the old route with the state: absent setting and define the replacement route with the state: present setting.
Setting the state parameter of an interface to ignore means that the NMState Operator does not manage that interface.
The route-rules section defines rules that determine which routing table handles matching traffic.
The following YAML configuration shows a static route and a static IP configuration on interface eth1:
dns-resolver:
  config:
    # ...
interfaces:
- name: eth1
  description: Static routing on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: false
    enabled: true
    address:
    - ip: 192.0.2.251
      prefix-length: 24
route-rules:
  config:
  - ip-from: 198.51.100.0/24
    priority: 1000
    route-table: 200
routes:
  config:
  - destination: 198.51.100.0/24
    next-hop-interface: eth1
    next-hop-address: 192.0.2.1
    metric: 150
    table-id: 200
# ...
- config.ip-from: Applies a rule to any network packet that originates from the specified IP address.
- config.priority: Sets the priority order for the rule.
- config.route-table: Specifies the routing table that the Operator uses to check that network traffic matches the ip-from condition.
- address.ip: The static IP address for the Ethernet interface.
- config.next-hop-address: The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface.
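The ip-from selector matches a packet's source address against a CIDR prefix. The following is a local sketch of that prefix match in plain shell arithmetic (the kernel performs the real lookup on the node; this only illustrates which sources the example rule would steer to table 200):

```shell
# Return success when an IPv4 address falls inside a CIDR prefix, such as
# the ip-from value 198.51.100.0/24 from the route-rules example above.
in_prefix() {
  ip=$1; cidr=$2
  net=${cidr%/*}; bits=${cidr#*/}
  IFS=. read -r a b c d <<EOF
$ip
EOF
  ip_int=$(( (a<<24) + (b<<16) + (c<<8) + d ))
  IFS=. read -r a b c d <<EOF
$net
EOF
  net_int=$(( (a<<24) + (b<<16) + (c<<8) + d ))
  mask=$(( 0xffffffff << (32-bits) & 0xffffffff ))
  [ $(( ip_int & mask )) -eq $(( net_int & mask )) ]
}

in_prefix 198.51.100.42 198.51.100.0/24 && echo "matches ip-from: uses table 200"
in_prefix 192.0.2.5 198.51.100.0/24 || echo "no match: uses the main table"
```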
Chapter 2. Troubleshooting node network configuration
If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as:
- The configuration fails to be applied on the host.
- The host loses connection to the default gateway.
- The host loses connection to the API server.
2.1. Troubleshooting an incorrect node network configuration policy configuration
You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy.
If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface.
To find an error, you need to investigate the available NMState resources. You can then update the policy with the correct configuration.
Prerequisites
- You installed the OpenShift CLI (oc).
- You ensured that an ens01 interface does not exist on your Linux system.
Procedure
Create a policy on your cluster. The following example creates a simple bridge, br1, that has ens01 as its member:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens01-bridge-testfail
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with the wrong port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens01
# ...

Apply the policy to your network interface:
$ oc apply -f ens01-bridge-testfail.yaml

Example output

nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created

Verify the status of the policy by running the following command:

$ oc get nncp

The output shows that the policy failed:

Example output

NAME                    STATUS
ens01-bridge-testfail   FailedToConfigure

The policy status alone does not indicate if it failed on all nodes or a subset of nodes.
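To distinguish a policy-wide failure from a node-specific one at a glance, you can count the enactment statuses. The following sketch works over a saved sample of enactment output (sample data only; on a live cluster, you would capture the output of oc get nnce instead):

```shell
# Sample enactment listing, as oc get nnce might print it.
cat > nnce-sample.txt <<'EOF'
NAME                                    STATUS
control-plane-1.ens01-bridge-testfail   FailedToConfigure
control-plane-2.ens01-bridge-testfail   FailedToConfigure
compute-1.ens01-bridge-testfail         FailedToConfigure
compute-2.ens01-bridge-testfail         Available
EOF

# Count the enactments (skip the header) and the failed ones.
total=$(tail -n +2 nnce-sample.txt | wc -l)
failed=$(grep -c FailedToConfigure nnce-sample.txt)

if [ "$failed" -eq "$total" ]; then
  echo "all $total enactments failed: suspect the policy itself"
else
  echo "$failed of $total enactments failed: suspect node-specific configuration"
fi
```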
List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, the output suggests that the problem is with a specific node configuration. If the policy failed on all nodes, the output suggests that the problem is with the policy.

$ oc get nnce

The output shows that the policy failed on all nodes:

Example output

NAME                                    STATUS
control-plane-1.ens01-bridge-testfail   FailedToConfigure
control-plane-2.ens01-bridge-testfail   FailedToConfigure
control-plane-3.ens01-bridge-testfail   FailedToConfigure
compute-1.ens01-bridge-testfail         FailedToConfigure
compute-2.ens01-bridge-testfail         FailedToConfigure
compute-3.ens01-bridge-testfail         FailedToConfigure

View one of the failed enactments. The following command uses the output tool jsonpath to filter the output:

$ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'

Example output

[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37
NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01

The previous example shows the output from an InvalidArgument error that indicates that ens01 is an unknown port. For this example, you might need to change the port configuration in the policy configuration file.

To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node:

$ oc get nns control-plane-1 -o yaml

The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01:

Example output

- ipv4:
# ...
  name: ens1
  state: up
  type: ethernet

Correct the error by editing the existing policy:

$ oc edit nncp ens01-bridge-testfail

# ...
      port:
      - name: ens1

Save the policy to apply the correction.
Check the status of the policy to ensure it updated successfully:
$ oc get nncp

Example output

NAME                    STATUS
ens01-bridge-testfail   SuccessfullyConfigured

The updated policy is successfully configured on all nodes in the cluster.
2.2. Troubleshooting DNS connectivity issues in a disconnected environment
If you experience health check probe issues when configuring nmstate in a disconnected environment, the probes might be failing because your DNS server cannot resolve the root-servers.net domain.
Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone.
2.2.1. Configuring the bind9 DNS named server
For a cluster configured to query a bind9 DNS server, add the root-servers.net zone to a configuration file that contains at least one name server (NS) record. For example, you can use the /var/named/named.localhost zone file because it already matches this criteria.
Procedure
Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command:

$ cat >> /etc/named.conf <<EOF
zone "root-servers.net" IN {
    type master;
    file "named.localhost";
};
EOF

Restart the named service by running the following command:

$ systemctl restart named

Confirm that the root-servers.net zone is present by running the following command:

$ journalctl -u named | grep root-servers.net

Example output
Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0
Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0

Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

$ host -t NS root-servers.net. 127.0.0.1

Example output

Using domain server:
Name: 127.0.0.1
Address: 127.0.0.53
Aliases:

root-servers.net name server root-servers.net.
2.2.2. Configuring the dnsmasq DNS server
If you are using dnsmasq as your DNS server, you can delegate resolution of the root-servers.net domain to another DNS server so that queries for the root-servers.net zone are resolvable in your environment.
Procedure
Create a configuration file that delegates the root-servers.net domain to another DNS server by running the following command:

$ echo 'server=/root-servers.net/<DNS_server_IP>' > /etc/dnsmasq.d/delegate-root-servers.net.conf

Restart the dnsmasq service by running the following command:

$ systemctl restart dnsmasq

Confirm that the root-servers.net domain is delegated to another DNS server by running the following command:

$ journalctl -u dnsmasq | grep root-servers.net

Example output

Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net

Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

$ host -t NS root-servers.net. 127.0.0.1

Example output

Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

root-servers.net name server root-servers.net.
2.2.3. Creating a custom DNS host name to resolve DNS connectivity issues
In a disconnected environment where the external DNS server cannot be reached, you can resolve Kubernetes NMState Operator health probe issues by specifying a custom DNS host name in the NMState custom resource definition (CRD) of your cluster.
Procedure
Add the DNS host name configuration to the NMState CRD of your cluster:

apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
spec:
  probeConfiguration:
    dns:
      host: redhat.com
# ...

Apply the DNS host name configuration to your cluster network by running the following command. Ensure that you replace <filename> with the name of your CRD file.

$ oc apply -f <filename>.yaml
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.