Chapter 10. Node networking
10.1. Observing node network state
Node network state is the network configuration for all nodes in the cluster.
10.1.1. About nmstate
Container-native virtualization uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, for example by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.
Node networking is monitored and updated by the following objects:
NodeNetworkState
- Reports the state of the network on that node.
NodeNetworkConfigurationPolicy
- Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
NodeNetworkConfigurationEnactment
- Reports the network policies enacted upon each node.
10.1.2. Viewing the network state of a node
A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node.
Procedure
List all the NodeNetworkState objects in the cluster:

$ oc get nns

Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

$ oc get nns node01 -o yaml

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkState
metadata:
  name: node01 1
status:
  currentState: 2
    dns-resolver:
      ...
    interfaces:
      ...
    route-rules:
      ...
    routes:
      ...
  lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3
1 - The name of the NodeNetworkState object is taken from the node.
2 - The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
3 - Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report.
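If you only need part of the report, you can filter the currentState with a jsonpath expression instead of reading the full YAML. The following command is a minimal sketch, reusing the node01 node from the example above, that prints only the names of the interfaces reported by that node:

$ oc get nns node01 -o jsonpath='{.status.currentState.interfaces[*].name}'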
10.2. Updating node network configuration
You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster.
10.2.1. About nmstate
Container-native virtualization uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, for example by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.
Node networking is monitored and updated by the following objects:
NodeNetworkState
- Reports the state of the network on that node.
NodeNetworkConfigurationPolicy
- Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
NodeNetworkConfigurationEnactment
- Reports the network policies enacted upon each node.
10.2.2. Creating an interface on nodes
Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface.
By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.
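If you want to target the Policy at a custom label instead of a built-in label such as node-role.kubernetes.io/worker, label the nodes first. The label key and value in this sketch are arbitrary examples, not values defined by the feature:

$ oc label node <node01> bridge-network=enabled

You can then reference bridge-network: "enabled" as the <key>:<value> under spec: nodeSelector in the manifest.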
Procedure
Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes:

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <br1-eth1-policy> 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with eth1 as a port 4
        type: linux-bridge
        state: up
        ipv4:
          dhcp: true
          enabled: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1

1 - Name of the Policy.
2 - Optional. If you do not include the nodeSelector, the Policy applies to all nodes in the cluster.
3 - This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
4 - Optional. Human-readable description of the interface.

Create the Policy:

$ oc apply -f <br1-eth1-policy.yaml> 1

1 - File name of the Policy manifest.
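Instead of repeatedly checking the Policy status, you can block until the Policy finishes applying. The following command is a sketch that assumes your version of the nmstate operator exposes an Available condition on the Policy object; verify the condition types with oc get nncp <br1-eth1-policy> -o yaml before relying on it:

$ oc wait nncp <br1-eth1-policy> --for condition=Available --timeout=120s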
10.2.3. Confirming Policy updates on nodes
A NodeNetworkConfigurationPolicy manifest describes your requested network configuration for nodes in the cluster. The Policy object includes your requested network configuration and the status of execution of the Policy on the cluster as a whole.
When you apply a Policy, a NodeNetworkConfigurationEnactment is created for every node in the cluster. The Enactment is a read-only object that represents the status of execution of the Policy on that node. If the Policy fails to be applied on the node, the Enactment for that node includes a traceback for troubleshooting.
Procedure
To confirm that a Policy has been applied to the cluster, list the Policies and their status:
$ oc get nncp
(Optional) If a Policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular Policy:
$ oc get nncp <policy> -o yaml
(Optional) If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the Enactments on the cluster:
$ oc get nnce
(Optional) To view the configuration of a particular Enactment, including any error reporting for a failed configuration:
$ oc get nnce <node>.<policy> -o yaml
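To get a per-node summary without opening each Enactment individually, you can combine oc get nnce with a jsonpath template. This sketch prints each Enactment name together with the status of its Failing condition; a value of True indicates that the Policy failed on that node:

$ oc get nnce -o jsonpath='{range .items[*]}{.metadata.name}{": Failing="}{.status.conditions[?(@.type=="Failing")].status}{"\n"}{end}'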
10.2.4. Removing an interface from nodes
Remove an interface from nodes by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.
Deleting the Policy that added an interface does not change the configuration of the network policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration.
Similarly, removing an interface does not delete the Policy.
Procedure
Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge:

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <br1-eth1-policy> 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: absent 4

1 - Name of the Policy.
2 - Optional. If you do not include the nodeSelector, the Policy applies to all nodes in the cluster.
3 - This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
4 - Changing the state to absent removes the interface.

Update the Policy on the node and remove the interface:

$ oc apply -f <br1-eth1-policy.yaml> 1

1 - File name of the Policy manifest.
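If you no longer need the Policy after the interface has been removed, you can delete it. As noted above, deleting the Policy by itself does not change the configuration that is already applied on the nodes:

$ oc delete nncp <br1-eth1-policy>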
10.2.5. Restoring node network configuration after removing an interface
Removing an interface from a node does not automatically restore the node network configuration to a previous state. After you remove an interface, any of the node NICs throughout the cluster that were previously attached or subordinate to the interface are placed in a down state. Restore the NICs by applying a new NodeNetworkConfigurationPolicy manifest to the cluster.
Procedure
Create a NodeNetworkConfigurationPolicy manifest that specifies the NIC and the desired state of up:

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1
spec:
  desiredState:
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        ipv4:
          dhcp: true
          enabled: true
Apply the manifest to the cluster:
$ oc apply -f <eth1.yaml> 1
1 - File name of the Policy manifest.
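To confirm that the NIC was restored, you can check the interface state that the node reports in its NodeNetworkState. This sketch assumes the eth1 interface from the manifest above and a node named <node01>; the command prints up once the restored configuration is active on that node:

$ oc get nns <node01> -o jsonpath='{.status.currentState.interfaces[?(@.name=="eth1")].state}'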
10.2.6. Example: Linux bridge interface NodeNetworkConfigurationPolicy
Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
      - name: br1 4
        description: Linux bridge with eth1 as a port 5
        type: linux-bridge 6
        state: up 7
        ipv4:
          dhcp: true 8
          enabled: true 9
        bridge:
          options:
            stp:
              enabled: false 10
          port:
            - name: eth1 11
1 - Name of the Policy.
2 - Optional. If you do not include the nodeSelector, the Policy applies to all nodes in the cluster.
3 - This example uses a hostname node selector.
4 - Name of the interface.
5 - Optional. Human-readable description of the interface.
6 - The type of interface. This example creates a bridge.
7 - The requested state for the interface after creation.
8 - Optional. If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9 - Enables ipv4 in this example.
10 - Disables stp in this example.
11 - The node NIC to which the bridge attaches.
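If you prefer a static address over DHCP (see callout 8), the ipv4 section of the interface can list explicit addresses instead. The following fragment is a minimal sketch that uses the documentation range 192.0.2.0/24; the address and prefix length are example values that you must replace with your own:

ipv4:
  enabled: true
  dhcp: false
  address:
    - ip: 192.0.2.10
      prefix-length: 24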
10.2.7. Example: VLAN interface NodeNetworkConfigurationPolicy
Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
      - name: eth1.102 4
        description: VLAN using eth1 5
        type: vlan 6
        state: up 7
        vlan:
          base-iface: eth1 8
          id: 102 9
1 - Name of the Policy.
2 - Optional. If you do not include the nodeSelector, the Policy applies to all nodes in the cluster.
3 - This example uses a hostname node selector.
4 - Name of the interface.
5 - Optional. Human-readable description of the interface.
6 - The type of interface. This example creates a VLAN.
7 - The requested state for the interface after creation.
8 - The node NIC to which the VLAN is attached.
9 - The VLAN tag.
10.2.8. Example: Bond interface NodeNetworkConfigurationPolicy
Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Container-native virtualization only supports the following bond modes:
- mode=1 active-backup
- mode=5 balance-tlb
- mode=6 balance-alb
The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-eth1-eth2-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
      - name: bond0 4
        description: Bond enslaving eth1 and eth2 5
        type: bond 6
        state: up 7
        ipv4:
          dhcp: true 8
          enabled: true 9
        link-aggregation:
          mode: active-backup 10
          options:
            miimon: '140' 11
          slaves: 12
            - eth1
            - eth2
        mtu: 1450 13
1 - Name of the Policy.
2 - Optional. If you do not include the nodeSelector, the Policy applies to all nodes in the cluster.
3 - This example uses a hostname node selector.
4 - Name of the interface.
5 - Optional. Human-readable description of the interface.
6 - The type of interface. This example creates a bond.
7 - The requested state for the interface after creation.
8 - Optional. If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9 - Enables ipv4 in this example.
10 - The driver mode for the bond. This example uses an active-backup mode.
11 - Optional. This example uses miimon to inspect the bond link every 140ms.
12 - The subordinate node NICs in the bond.
13 - Optional. The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.
10.3. Troubleshooting node network configuration
If the node network configuration encounters an issue, the Policy is automatically rolled back and the Enactments report failure. This includes issues such as:
- The configuration fails to be applied on the host.
- The host loses connection to the default gateway.
- The host loses connection to the API server.
10.3.1. Troubleshooting an incorrect NodeNetworkConfigurationPolicy configuration
You can apply changes to the node network configuration across your entire cluster by applying a NodeNetworkConfigurationPolicy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed network Policy.
In this example, a Linux bridge Policy is applied to an example cluster that has 3 master nodes and 3 worker nodes. The Policy fails to be applied because it references an incorrect interface. To find the error, investigate the available nmstate resources. You can then update the Policy with the correct configuration.
Procedure
Create a Policy and apply it to your cluster. The following example creates a simple bridge on the ens01 interface:

apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens01-bridge-testfail
spec:
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with the wrong port
        type: linux-bridge
        state: up
        ipv4:
          dhcp: true
          enabled: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens01
$ oc apply -f ens01-bridge-testfail.yaml
nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created
Verify the status of the Policy by running the following command:
$ oc get nncp
The output shows that the Policy failed:
NAME                    STATUS
ens01-bridge-testfail   FailedToConfigure
However, the Policy status alone does not indicate whether it failed on all nodes or only a subset of nodes.
List the Enactments to see if the Policy was successful on any of the nodes. If the Policy failed on only a subset of nodes, it suggests that the problem is with the configuration of those specific nodes. If the Policy failed on all nodes, it suggests that the problem is with the Policy itself.
$ oc get nnce
The output shows that the Policy failed on all nodes:
NAME                             STATUS
master-1.ens01-bridge-testfail   FailedToConfigure
master-2.ens01-bridge-testfail   FailedToConfigure
master-3.ens01-bridge-testfail   FailedToConfigure
worker-1.ens01-bridge-testfail   FailedToConfigure
worker-2.ens01-bridge-testfail   FailedToConfigure
worker-3.ens01-bridge-testfail   FailedToConfigure
View one of the failed Enactments and look at the traceback. The following command uses jsonpath to filter the output:

$ oc get nnce worker-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'
This command returns a large traceback that has been edited for brevity:
error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' ''
...
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: br1
type: linux-bridge
state: up
bridge:
  options:
    group-forward-mask: 0
    mac-ageing-time: 300
    multicast-snooping: true
    stp:
      enabled: false
      forward-delay: 15
      hello-time: 2
      max-age: 20
      priority: 32768
  port:
  - name: ens01
description: Linux bridge with the wrong port
ipv4:
  address: []
  auto-dns: true
  auto-gateway: true
  auto-routes: true
  dhcp: true
  enabled: true
ipv6:
  enabled: false
mac-address: 01-23-45-67-89-AB
mtu: 1500
current
=======
---
name: br1
type: linux-bridge
state: up
bridge:
  options:
    group-forward-mask: 0
    mac-ageing-time: 300
    multicast-snooping: true
    stp:
      enabled: false
      forward-delay: 15
      hello-time: 2
      max-age: 20
      priority: 32768
  port: []
description: Linux bridge with the wrong port
ipv4:
  address: []
  auto-dns: true
  auto-gateway: true
  auto-routes: true
  dhcp: true
  enabled: true
ipv6:
  enabled: false
mac-address: 01-23-45-67-89-AB
mtu: 1500
difference
==========
--- desired
+++ current
@@ -13,8 +13,7 @@
       hello-time: 2
       max-age: 20
       priority: 32768
-  port:
-  - name: ens01
+  port: []
 description: Linux bridge with the wrong port
 ipv4:
   address: []
  line 651, in _assert_interfaces_equal\n    current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError:
The NmstateVerificationError lists the desired Policy configuration, the current configuration of the Policy on the node, and the difference highlighting the parameters that do not match. In this example, the port is included in the difference, which suggests that the problem is the port configuration in the Policy.
To ensure that the Policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the master-1 node:

$ oc get nns master-1 -o yaml
The output shows that the interface name on the nodes is ens1 but the failed Policy incorrectly uses ens01:

- ipv4:
  ...
  name: ens1
  state: up
  type: ethernet
Correct the error by editing the existing Policy:
$ oc edit nncp ens01-bridge-testfail
...
          port:
            - name: ens1
Save the Policy to apply the correction.
Check the status of the Policy to ensure it updated successfully:
$ oc get nncp
NAME                    STATUS
ens01-bridge-testfail   SuccessfullyConfigured
The updated Policy is successfully configured on all nodes in the cluster.
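You can also confirm that every node converged by listing the Enactments again. Each Enactment should now report a successful status instead of FailedToConfigure; the exact status string can vary between versions of the nmstate operator:

$ oc get nnce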