Chapter 23. Kubernetes NMState
23.1. Observing and updating the node network state and configuration
After you install the Kubernetes NMState Operator, you can use the Operator to observe and update your cluster’s node network state and network configuration.
For more information about how to install the NMState Operator, see Kubernetes NMState Operator.
23.1.1. Viewing the network state of a node by using the CLI
Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node.
Procedure
List all the NodeNetworkState objects in the cluster:

$ oc get nns
Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

$ oc get nns node01 -o yaml
Example output
apiVersion: nmstate.io/v1
kind: NodeNetworkState
metadata:
  name: node01 1
status:
  currentState: 2
    dns-resolver:
      # ...
    interfaces:
      # ...
    route-rules:
      # ...
    routes:
      # ...
  lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3
- 1: The name of the NodeNetworkState object is taken from the node.
- 2: The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
- 3: Timestamp of the last successful update. It is updated periodically as long as the node is reachable, and can be used to evaluate the freshness of the report.
23.1.2. Viewing the network state of a node from the web console
As an administrator, you can use the OpenShift Container Platform web console to observe NodeNetworkState resources and network interfaces, and access network details.
Procedure
- Navigate to Networking → NodeNetworkState. In the NodeNetworkState page, you can view the list of NodeNetworkState resources and the corresponding interfaces that are created on the nodes. To narrow down the displayed NodeNetworkState resources, use the Filter option, based on Interface state, Interface type, and IP, or the search bar, based on the Name or Label criteria.
- To access detailed information about a NodeNetworkState resource, click the NodeNetworkState resource name listed in the Name column.
- To expand and view the Network Details section for a NodeNetworkState resource, click the > icon. Alternatively, you can click each interface type under the Network interface column to view the network details.
23.1.3. Managing policy from the web console
You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster. Manage the policy from the web console by accessing the list of created policies in the NodeNetworkConfigurationPolicy page under the Networking menu. This page enables you to create, update, monitor, and delete the policies.
23.1.3.1. Monitoring the policy status
You can monitor the policy status from the NodeNetworkConfigurationPolicy page. This page displays all the policies created in the cluster in a tabular format, with the following columns:
- Name
- The name of the policy created.
- Matched nodes
- The number of nodes where the policy is applied. This can be either a subset of the nodes based on the node selector or all the nodes in the cluster.
- Node network state
- The enactment state of the matched nodes. You can click on the enactment state and view detailed information on the status.
To find the desired policy, you can filter the list either based on enactment state by using the Filter option, or by using the search option.
23.1.3.2. Creating a policy
You can create a policy by using either a form or YAML in the web console.
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy. In the NodeNetworkConfigurationPolicy page, click Create, and select the From Form option.
  If there are no existing policies, you can alternatively click Create NodeNetworkConfigurationPolicy to create a policy by using the form.
  Note: To create a policy by using YAML, click Create, and select the With YAML option. The following steps apply only to creating a policy by using the form.
- Optional: Check the Apply this NodeNetworkConfigurationPolicy only to specific subsets of nodes using the node selector checkbox to specify the nodes where the policy must be applied.
- Enter the policy name in the Policy name field.
- Optional: Enter the description of the policy in the Description field.
Optional: In the Policy Interface(s) section, a bridge interface is added by default with preset values in editable fields. Edit the values by completing the following steps:
- Enter the name of the interface in the Interface name field.
- Select the network state from the Network state dropdown. The default selected value is Up.
- Select the type of interface from the Type dropdown. The available values are Bridge, Bonding, and Ethernet. The default selected value is Bridge.
  Note: Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy by using the form.
Optional: In the IP configuration section, check the IPv4 checkbox to assign an IPv4 address to the interface, and configure the IP address assignment details:
- Click IP address to configure the interface with a static IP address, or DHCP to auto-assign an IP address.
- If you selected the IP address option, enter the IPv4 address in the IPv4 address field, and enter the prefix length in the Prefix length field.
- If you selected the DHCP option, uncheck the options that you want to disable. The available options are Auto-DNS, Auto-routes, and Auto-gateway. All the options are selected by default.
- Optional: Enter the port number in the Port field.
- Optional: Check the Enable STP checkbox to enable STP.
- Optional: To add an interface to the policy, click Add another interface to the policy.
- Optional: To remove an interface from the policy, click the icon next to the interface.
  Note: Alternatively, you can click Edit YAML on the top of the page to continue editing the form by using YAML.
- Click Create to complete policy creation.
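For reference, a policy created through the form with the default bridge interface resembles the following manifest. This is a minimal sketch, not generated by the console itself: the policy name, node selector, and interface values are placeholders that correspond to the fields described in the previous steps.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: example-form-policy        # value entered in the Policy name field (placeholder)
spec:
  nodeSelector:                    # present only if the node selector checkbox was checked
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1                    # value entered in the Interface name field
      type: linux-bridge           # Bridge selected in the Type dropdown
      state: up                    # Up selected in the Network state dropdown
      ipv4:
        dhcp: true                 # DHCP selected in the IP configuration section
        enabled: true
      bridge:
        options:
          stp:
            enabled: false         # reflects the Enable STP checkbox (unchecked here)
        port:
        - name: eth1               # bridge port, from the Port field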
23.1.3.3. Updating the policy
23.1.3.3.1. Updating the policy by using form
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the icon next to the policy that you want to edit, and click Edit.
- Edit the fields that you want to update.
- Click Save.
Addition of a VLAN interface using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy using form.
23.1.3.3.2. Updating the policy by using YAML
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the name of the policy that you want to edit under the Name column.
- Click the YAML tab, and edit the YAML.
- Click Save.
23.1.3.4. Deleting the policy
Procedure
- Navigate to Networking → NodeNetworkConfigurationPolicy.
- In the NodeNetworkConfigurationPolicy page, click the icon next to the policy that you want to delete, and click Delete.
- In the pop-up window, enter the policy name to confirm deletion, and click Delete.
23.1.4. Managing policy by using the CLI
23.1.4.1. Creating an interface on nodes
Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface.
By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.
You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  maxUnavailable: 3 4
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port 5
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
        auto-dns: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
    dns-resolver: 6
      config:
        search:
        - example.com
        - example.org
        server:
        - 8.8.8.8
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 4: Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%", or an absolute value (number), such as 3.
- 5: Optional: Human-readable description for the interface.
- 6: Optional: Specifies the search and server settings for the DNS server.
Create the node network policy:
$ oc apply -f br1-eth1-policy.yaml 1
- 1: File name of the node network configuration policy manifest.
Additional resources
23.1.4.2. Confirming node network policy updates on nodes
A NodeNetworkConfigurationPolicy manifest describes your requested network configuration for nodes in the cluster. The node network policy includes your requested network configuration and the status of execution of the policy on the cluster as a whole.
When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting.
Procedure
To confirm that a policy has been applied to the cluster, list the policies and their status:
$ oc get nncp
Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy:
$ oc get nncp <policy> -o yaml
Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster:
$ oc get nnce
Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration, run the following command:
$ oc get nnce <node>.<policy> -o yaml
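For reference, the following output is an illustrative sketch of what the policy and enactment listings might look like for the br1-eth1-policy example from the previous section. The node names and status strings are samples; the exact values depend on your cluster.

$ oc get nncp

NAME              STATUS
br1-eth1-policy   SuccessfullyConfigured

$ oc get nnce

NAME                     STATUS
node01.br1-eth1-policy   SuccessfullyConfigured
node02.br1-eth1-policy   SuccessfullyConfigured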
23.1.4.3. Removing an interface from nodes
You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.
Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you will need to define that node network configuration in the policy.
If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address.
Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration.
Similarly, removing an interface does not delete the policy.
Procedure
Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <br1-eth1-policy> 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: absent 4
    - name: eth1 5
      type: ethernet 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 4: Changing the state to absent removes the interface.
- 5: The name of the interface that is to be unattached from the bridge interface.
- 6: The type of interface. This example creates an Ethernet networking interface.
- 7: The requested state for the interface.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
Update the policy on the node and remove the interface:
$ oc apply -f <br1-eth1-policy.yaml> 1
- 1: File name of the policy manifest.
23.1.5. Example policy configurations for different interfaces
Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifest configurations, consider the following factors when you apply a policy so that your cluster runs at its best performance:
- When you need to apply a policy to more than one node, create a NodeNetworkConfigurationPolicy manifest for each target node. The Kubernetes NMState Operator applies the policy to each node that has an NNCP in an unspecified order. Scoping a policy with this approach reduces the length of time for policy application, but it risks a cluster-wide outage if there is an error in the cluster's configuration. To avoid this type of error, initially apply an NNCP to some nodes, and after you confirm that they are configured correctly, apply the policy to the remaining nodes.
- When you need to apply a policy to many nodes but you want to create only a single NNCP for all target nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the policy configuration, as shown in the snippet after this list. By setting a lower percentage value for the parameter, you reduce the risk of a cluster-wide outage, because any outage impacts only the small percentage of nodes that are receiving the policy at the time.
- Consider specifying all related network configurations in a single policy.
- When a node restarts, the Kubernetes NMState Operator cannot control the order in which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object.
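As an illustration of the maxUnavailable guidance above, the following fragment is a sketch based on the earlier br1-eth1-policy example; it limits the parallel rollout to 10% of the matched nodes. The percentage is a sample value.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  maxUnavailable: "10%"   # percentage value (string); an absolute number such as 3 also works
  desiredState:
    interfaces: []        # interface definitions omitted for brevity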
23.1.5.1. Example: Linux bridge interface node network configuration policy
Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: br1 4
      description: Linux bridge with eth1 as a port 5
      type: linux-bridge 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
      bridge:
        options:
          stp:
            enabled: false 10
        port:
        - name: eth1 11
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a bridge.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
- 10: Disables stp in this example.
- 11: The node NIC to which the bridge attaches.
23.1.5.2. Example: VLAN interface node network configuration policy
Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest.
When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object.
The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: eth1.102 4
      description: VLAN using eth1 5
      type: vlan 6
      state: up 7
      vlan:
        base-iface: eth1 8
        id: 102 9
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a VLAN.
- 7: The requested state for the interface after creation.
- 8: The node NIC to which the VLAN is attached.
- 9: The VLAN tag.
23.1.5.3. Example: Node network configuration policy for virtual functions
Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster by applying a NodeNetworkConfigurationPolicy manifest.
You can apply a NodeNetworkConfigurationPolicy manifest to an existing cluster to complete the following tasks:
- Configure QoS host network settings for VFs to optimize performance.
- Add, remove, or update VFs for a network interface.
- Manage VF bonding configurations.
To update host network settings for SR-IOV VFs by using NMState on physical functions that are also managed through the SR-IOV Network Operator, you must set the externallyManaged parameter in the relevant SriovNetworkNodePolicy resource to true. For more information, see the Additional resources section.
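The following fragment is a minimal sketch, not taken from this document, of how the externallyManaged parameter might appear in a SriovNetworkNodePolicy resource. The resource name, resourceName, and NIC selector values are placeholders that depend on your SR-IOV Network Operator configuration.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-vf-policy                      # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource               # placeholder resource name
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 3
  nicSelector:
    pfNames:
    - ens1f0                                   # PF whose VFs are also configured through NMState
  externallyManaged: true                      # allows NMState to manage host network settings for the VFs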
The following YAML file is an example of a manifest that defines QoS policies for a VF. This YAML includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: qos 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  desiredState:
    interfaces:
    - name: ens1f0 4
      description: Change QOS on VF0 5
      type: ethernet 6
      state: up 7
      ethernet:
        sr-iov:
          total-vfs: 3 8
          vfs:
          - id: 0 9
            max-tx-rate: 200 10
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example applies to all nodes with the worker role.
- 4: Name of the physical function (PF) network interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface.
- 7: The requested state for the interface after configuration.
- 8: The total number of VFs.
- 9: Identifies the VF with an ID of 0.
- 10: Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps.
The following YAML file is an example of a manifest that adds a VF for a network interface.
In this sample configuration, the ens1f1v0 VF is created on the ens1f1 physical interface, and this VF is added to a bonded network interface bond0. The bond uses active-backup mode for redundancy. In this example, the VF is configured to use hardware offloading to manage the VLAN directly on the physical interface.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: addvf 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: ens1f1 4
      type: ethernet
      state: up
      ethernet:
        sr-iov:
          total-vfs: 1 5
          vfs:
          - id: 0
            trust: true 6
            vlan-id: 477 7
    - name: bond0 8
      description: Attach VFs to bond 9
      type: bond 10
      state: up 11
      link-aggregation:
        mode: active-backup 12
        options:
          primary: ens1f0v0 13
        port: 14
        - ens1f0v0
        - ens1f1v0 15
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: The example applies to all nodes with the worker role.
- 4: Name of the physical network interface on which the VF is created.
- 5: Number of VFs to create.
- 6: Setting to allow failover bonding between the active and backup VFs.
- 7: ID of the VLAN. The example uses hardware offloading to define a VLAN directly on the VF.
- 8: Name of the bonding network interface.
- 9: Optional: Human-readable description of the interface.
- 10: The type of interface.
- 11: The requested state for the interface after configuration.
- 12: The bonding policy for the bond.
- 13: The primary attached bonding port.
- 14: The ports for the bonded network interface.
- 15: In this example, the VLAN network interface is added as an additional interface to the bonded network interface.
Additional resources
23.1.5.4. Example: Bond interface node network configuration policy
Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
OpenShift Container Platform only supports the following bond modes:
- mode=1 active-backup
- mode=2 balance-xor
- mode=4 802.3ad
Other bond modes are not supported.
The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-eth1-eth2-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: bond0 4
      description: Bond with ports eth1 and eth2 5
      type: bond 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
      link-aggregation:
        mode: active-backup 10
        options:
          miimon: '140' 11
        port: 12
        - eth1
        - eth2
      mtu: 1450 13
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a bond.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
- 10: The driver mode for the bond. This example uses an active backup mode.
- 11: Optional: This example uses miimon to inspect the bond link every 140ms.
- 12: The subordinate node NICs in the bond.
- 13: Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.
23.1.5.5. Example: Ethernet interface node network configuration policy
Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: eth1 4
      description: Configuring eth1 on node01 5
      type: ethernet 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates an Ethernet networking interface.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
23.1.5.6. Example: Multiple interfaces in the same node network configuration policy
You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.
The following example YAML file creates a bond that is named bond10 across two NICs and a VLAN that is named bond10.103 that connects to the bond.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-vlan 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: bond10 4
      description: Bonding eth2 and eth3 5
      type: bond 6
      state: up 7
      link-aggregation:
        mode: balance-xor 8
        options:
          miimon: '140' 9
        port: 10
        - eth2
        - eth3
    - name: bond10.103 11
      description: vlan using bond10 12
      type: vlan 13
      state: up 14
      vlan:
        base-iface: bond10 15
        id: 103 16
      ipv4:
        dhcp: true 17
        enabled: true 18
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4, 11: Name of the interface.
- 5, 12: Optional: Human-readable description of the interface.
- 6, 13: The type of interface.
- 7, 14: The requested state for the interface after creation.
- 8: The driver mode for the bond.
- 9: Optional: This example uses miimon to inspect the bond link every 140ms.
- 10: The subordinate node NICs in the bond.
- 15: The node NIC to which the VLAN is attached.
- 16: The VLAN tag.
- 17: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 18: Enables ipv4 in this example.
23.1.5.7. Example: Network interface with a VRF instance node network configuration policy
Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a NodeNetworkConfigurationPolicy custom resource (CR).
Associating a VRF instance with a network interface is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources.
In a bare-metal environment, you can announce load balancer services through interfaces belonging to a VRF instance by using MetalLB. For more information, see the Additional resources section.
The following YAML file is an example of associating a VRF instance with a network interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrfpolicy 1
spec:
  nodeSelector:
    vrf: "true" 2
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: ens4vrf 3
      type: vrf 4
      state: up
      vrf:
        port:
        - ens4 5
        route-table-id: 2 6
Additional resources
23.1.6. Capturing the static IP of a NIC attached to a bridge
Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
23.1.6.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge
Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-copy-ipv4-policy 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: ""
  capture:
    eth1-nic: interfaces.name=="eth1" 3
    eth1-routes: routes.running.next-hop-interface=="eth1"
    br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1"
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge 4
      state: up
      ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1 6
    routes:
      config: "{{ capture.br1-routes.routes.running }}"
- 1: The name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 3: The reference to the node NIC to which the bridge attaches.
- 4: The type of interface. This example creates a bridge.
- 5: The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry.
- 6: The node NIC to which the bridge attaches.
Additional resources
23.1.7. Examples: IP management
The following example configuration snippets show different methods of IP management.
These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types.
23.1.7.1. Static
The following snippet statically configures an IP address on the Ethernet interface:
# ...
interfaces:
- name: eth1
  description: static IP on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: false
    address:
    - ip: 192.168.122.250 1
      prefix-length: 24
    enabled: true
# ...
- 1: Replace this value with the static IP address for the interface.
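For comparison, a static IPv6 address can be configured in the same way by using the ipv6 section. The following snippet is a sketch with a sample address, consistent with the interface-level DNS example later in this section:

# ...
interfaces:
- name: eth1
  description: static IPv6 on eth1
  type: ethernet
  state: up
  ipv6:
    dhcp: false
    autoconf: false
    address:
    - ip: 2001:db8:1::1
      prefix-length: 64
    enabled: true
# ...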
23.1.7.2. No IP address
The following snippet ensures that the interface has no IP address:
# ...
interfaces:
- name: eth1
  description: No IP on eth1
  type: ethernet
  state: up
  ipv4:
    enabled: false
# ...
23.1.7.3. Dynamic host configuration
The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS:
# ...
interfaces:
- name: eth1
  description: DHCP on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    enabled: true
# ...
The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS:
# ...
interfaces:
- name: eth1
  description: DHCP without gateway or DNS on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: true
    auto-gateway: false
    auto-dns: false
    enabled: true
# ...
23.1.7.4. DNS
By default, the nmstate API stores DNS values globally rather than storing them in a network interface. For certain situations, you must configure a network interface to store DNS values.
Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file.
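For example, the global DNS policy shown later in this section would produce resolver settings roughly equivalent to the following /etc/resolv.conf content. This is a sketch for illustration; the actual file is managed by the host.

search example.com example.org
nameserver 2001:db8:f::1
nameserver 192.0.2.251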
To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the network interface's YAML configuration file.
You cannot use the br-ex bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized br-ex bridge.
For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document.
The following example shows a default situation that stores DNS values globally:
Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the NodeNetworkConfigurationPolicy (NNCP) manifest.
Example of a DNS configuration that stores DNS values globally
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-0-dns-testing
spec:
  nodeSelector:
    kubernetes.io/hostname: <target_node>
  desiredState:
    dns-resolver:
      config:
        search:
        - example.com
        - example.org
        server:
        - 2001:db8:f::1
        - 192.0.2.251
# ...
The following examples show situations that require configuring a network interface to store DNS values:
If you want to rank a static DNS name server over a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration (autoconf) mechanism in the network interface YAML configuration file.
Example configuration that adds 192.0.2.1 to the DNS name servers retrieved from the DHCPv4 network protocol
to DNS name servers retrieved from the DHCPv4 network protocol# ... dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true # ...
If you need to configure a network interface to store DNS values instead of adopting the default method, which uses the nmstate API to store DNS values globally, you can set static DNS values and static IP addresses in the network interface YAML file.
Important: Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond.
Example configuration that stores DNS values at the interface level
# ...
dns-resolver:
  config:
    search:
    - example.com
    - example.org
    server:
    - 2001:db8:1::d1
    - 2001:db8:1::d2
    - 192.0.2.1
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    address:
    - ip: 192.0.2.251
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    address:
    - ip: 2001:db8:1::1
      prefix-length: 64
    dhcp: false
    enabled: true
    autoconf: false
# ...
If you want to set static DNS search domains and dynamic DNS name servers for your network interface, define the dynamic interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration (autoconf) mechanism in the network interface YAML configuration file.
Example configuration that sets the example.com and example.org static DNS search domains along with dynamic DNS name server settings
static DNS search domains along with dynamic DNS name server settings# ... dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true # ...
23.1.7.5. Static routing
The following snippet configures a static route and a static IP on interface eth1.
dns-resolver:
  config:
# ...
interfaces:
- name: eth1
  description: Static routing on eth1
  type: ethernet
  state: up
  ipv4:
    dhcp: false
    enabled: true
    address:
    - ip: 192.0.2.251 1
      prefix-length: 24
routes:
  config:
  - destination: 198.51.100.0/24
    metric: 150
    next-hop-address: 192.0.2.1 2
    next-hop-interface: eth1
    table-id: 254
# ...
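- 1: The static IP address for the Ethernet interface. This is a sample value.
- 2: The next hop address for the static route. This is a sample value.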
23.2. Troubleshooting node network configuration
If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as:
- The configuration fails to be applied on the host.
- The host loses connection to the default gateway.
- The host loses connection to the API server.
23.2.1. Troubleshooting an incorrect node network configuration policy configuration
You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy.
If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface.
To find an error, you need to investigate the available NMState resources. You can then update the policy with the correct configuration.
Prerequisites
- You ensured that an ens01 interface does not exist on your Linux system.
Procedure
Create a policy on your cluster. The following example creates a simple bridge, br1, that has ens01 as its member:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens01-bridge-testfail
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with the wrong port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens01
# ...
Apply the policy to your network interface:
$ oc apply -f ens01-bridge-testfail.yaml
Example output
nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created
Verify the status of the policy by running the following command:
$ oc get nncp
The output shows that the policy failed:
Example output
NAME                    STATUS
ens01-bridge-testfail   FailedToConfigure
The policy status alone does not indicate if it failed on all nodes or a subset of nodes.
List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, the output suggests that the problem is with a specific node configuration. If the policy failed on all nodes, the output suggests that the problem is with the policy.
$ oc get nnce
The output shows that the policy failed on all nodes:
Example output
NAME                                    STATUS
control-plane-1.ens01-bridge-testfail   FailedToConfigure
control-plane-2.ens01-bridge-testfail   FailedToConfigure
control-plane-3.ens01-bridge-testfail   FailedToConfigure
compute-1.ens01-bridge-testfail         FailedToConfigure
compute-2.ens01-bridge-testfail         FailedToConfigure
compute-3.ens01-bridge-testfail         FailedToConfigure
View one of the failed enactments. The following command uses the output tool jsonpath to filter the output:

$ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'
Example output
[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01
The previous example shows the output from an InvalidArgument error that indicates that ens01 is an unknown port. For this example, you might need to change the port configuration in the policy configuration file.
To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node:

$ oc get nns control-plane-1 -o yaml

The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01:
Example output

- ipv4:
# ...
  name: ens1
  state: up
  type: ethernet
Correct the error by editing the existing policy:
$ oc edit nncp ens01-bridge-testfail
# ...
    port:
    - name: ens1
Save the policy to apply the correction.
Check the status of the policy to ensure it updated successfully:
$ oc get nncp
Example output
NAME                    STATUS
ens01-bridge-testfail   SuccessfullyConfigured
The updated policy is successfully configured on all nodes in the cluster.
23.2.2. Troubleshooting DNS connectivity issues in a disconnected environment
If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the domain root-servers.net.
Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone. The DNS server does not need to forward a query to an upstream resolver, but the server must return a correct answer for the NS query.
23.2.2.1. Configuring the bind9 DNS named server
For a cluster configured to query a bind9 DNS server, you can add the root-servers.net zone to a configuration file that contains at least one NS record. For example, you can use the /var/named/named.localhost file as a zone file that already meets this criterion.
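On a typical RHEL bind installation, the stock /var/named/named.localhost zone file looks roughly like the following sketch. The exact contents can vary; the NS record is what satisfies the requirement.

$TTL 1D
@       IN SOA  @ rname.invalid. (
                0       ; serial
                1D      ; refresh
                1H      ; retry
                1W      ; expire
                3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1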
Procedure
Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command:

$ cat >> /etc/named.conf <<EOF
zone "root-servers.net" IN {
    type master;
    file "named.localhost";
};
EOF
Restart the named service by running the following command:

$ systemctl restart named
Confirm that the root-servers.net zone is present by running the following command:

$ journalctl -u named|grep root-servers.net
Example output
Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0
Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0
Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

$ host -t NS root-servers.net. 127.0.0.1
Example output
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.53
Aliases:

root-servers.net name server root-servers.net.
23.2.2.2. Configuring the dnsmasq DNS server
If you are using dnsmasq as the DNS server, you can delegate resolution of the root-servers.net domain to another DNS server, for example, by creating a new configuration file that resolves root-servers.net using a DNS server that you specify.
Procedure
Create a configuration file that delegates the root-servers.net domain to another DNS server by running the following command:

$ echo 'server=/root-servers.net/<DNS_server_IP>' > /etc/dnsmasq.d/delegate-root-servers.net.conf
Restart the dnsmasq service by running the following command:

$ systemctl restart dnsmasq
Confirm that the root-servers.net domain is delegated to another DNS server by running the following command:

$ journalctl -u dnsmasq|grep root-servers.net
Example output
Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net
Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

$ host -t NS root-servers.net. 127.0.0.1
Example output
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

root-servers.net name server root-servers.net.