Chapter 27. Kubernetes NMState


27.1. About the Kubernetes NMState Operator

The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node’s network interfaces to the API server.

Important

Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power, IBM Z, IBM® LinuxONE, VMware vSphere, and OpenStack installations.

Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator.

Note

The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge.

OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.

Node networking is monitored and updated by the following objects:

NodeNetworkState
Reports the state of the network on that node.
NodeNetworkConfigurationPolicy
Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
NodeNetworkConfigurationEnactment
Reports the network policies enacted upon each node.
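You can inspect each of these objects with the OpenShift CLI (oc) through their short resource names, which the procedures later in this chapter also use:

$ oc get nns   # NodeNetworkState: one report per node
$ oc get nncp  # NodeNetworkConfigurationPolicy: requested configuration
$ oc get nnce  # NodeNetworkConfigurationEnactment: per-node policy status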

27.1.1. Installing the Kubernetes NMState Operator

You can install the Kubernetes NMState Operator by using the web console or the CLI.

27.1.1.1. Installing the Kubernetes NMState Operator by using the web console

You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Select Operators → OperatorHub.
  2. In the search field below All Items, enter nmstate and press Enter to search for the Kubernetes NMState Operator.
  3. Click the Kubernetes NMState Operator search result.
  4. Click Install to open the Install Operator window.
  5. Click Install to install the Operator.
  6. After the Operator finishes installing, click View Operator.
  7. Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.
  8. In the Name field of the dialog box, ensure that the name of the instance is nmstate.

    Note

    The name restriction is a known issue. The instance is a singleton for the entire cluster.

  9. Accept the default settings and click Create to create the instance.

Summary

Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
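To verify the deployment from a terminal, you can list the pods in the openshift-nmstate namespace; this is a hedged check because the exact pod names vary by Operator version:

$ oc get pods -n openshift-nmstate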

27.1.1.2. Installing the Kubernetes NMState Operator by using the CLI

You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Create the nmstate Operator namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-nmstate
    spec:
      finalizers:
      - kubernetes
    EOF
  2. Create the OperatorGroup:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-nmstate
      namespace: openshift-nmstate
    spec:
      targetNamespaces:
      - openshift-nmstate
    EOF
  3. Subscribe to the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: kubernetes-nmstate-operator
      namespace: openshift-nmstate
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: kubernetes-nmstate-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
  4. Create an instance of the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: nmstate.io/v1
    kind: NMState
    metadata:
      name: nmstate
    EOF

Verification

  • Confirm that the deployment for the nmstate Operator is running:

    $ oc get clusterserviceversion -n openshift-nmstate \
      -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                             Phase
    kubernetes-nmstate-operator.4.12.0-202210210157   Succeeded

27.1.2. Uninstalling the Kubernetes NMState Operator

You can use the Operator Lifecycle Manager (OLM) to uninstall the Kubernetes NMState Operator, but by design OLM does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services.

Before you uninstall the Kubernetes NMState Operator from the Subscription resource used by OLM, identify which Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster.

If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console".

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Unsubscribe the Kubernetes NMState Operator from the Subscription resource by running the following command:

    $ oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator
  2. Find the ClusterServiceVersion (CSV) resource that is associated with the Kubernetes NMState Operator:

    $ oc get --namespace openshift-nmstate clusterserviceversion

    Example output that lists a CSV resource

    NAME                                  DISPLAY                       VERSION   REPLACES   PHASE
    kubernetes-nmstate-operator.v4.18.0   Kubernetes NMState Operator   4.18.0               Succeeded

  3. Delete the CSV resource. After you delete the CSV, OLM deletes certain resources, such as RBAC objects, that it created for the Operator:

    $ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0
  4. Delete the nmstate CR and any associated Deployment resources by running the following commands:

    $ oc -n openshift-nmstate delete nmstate nmstate
    $ oc delete --all deployments --namespace=openshift-nmstate
  5. Delete all the custom resource definitions (CRDs), such as nmstates, that exist in the nmstate.io API group by running the following commands:

    $ oc delete crd nmstates.nmstate.io
    $ oc delete crd nodenetworkconfigurationenactments.nmstate.io
    $ oc delete crd nodenetworkstates.nmstate.io
    $ oc delete crd nodenetworkconfigurationpolicies.nmstate.io
  6. Delete the namespace:

    $ oc delete namespace openshift-nmstate
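As an optional verification, confirm that no NMState resources remain. This is a hedged check: the first command prints nothing and the second reports NotFound after a successful cleanup:

$ oc get crd | grep nmstate.io
$ oc get namespace openshift-nmstate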

27.2. Observing and updating the node network state and configuration

For more information about how to install the NMState Operator, see Kubernetes NMState Operator.

27.2.1. Viewing the network state of a node

Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists for every node in the cluster. This object is periodically updated and captures the state of the network for that node.

Procedure

  1. List all the NodeNetworkState objects in the cluster:

    $ oc get nns
  2. Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

    $ oc get nns node01 -o yaml

    Example output

    apiVersion: nmstate.io/v1
    kind: NodeNetworkState
    metadata:
      name: node01 # 1
    status:
      currentState: # 2
        dns-resolver:
    ...
        interfaces:
    ...
        route-rules:
    ...
        routes:
    ...
      lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" # 3

    1 The name of the NodeNetworkState object is taken from the node.
    2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
    3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report.

27.2.2. The NodeNetworkConfigurationPolicy manifest file

A NodeNetworkConfigurationPolicy (NNCP) manifest file defines policies that the Kubernetes NMState Operator uses to configure networking for nodes that exist in an OpenShift Container Platform cluster.

Important

If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for newly created NNCP CRs so that the Operator can instantly apply each CR to nodes. Consider the following logical order issue example:

  1. You create NNCP 1 for defining the bridge interface that listens on a VLAN port, such as eth1.1000.
  2. You create NNCP 2 for defining the VLAN interface and specify the port for this interface, such as eth1.1000.
  3. You apply NNCP 1 before you apply NNCP 2 to the node.

The node experiences a connectivity issue because port eth1.1000 does not exist when NNCP 1 is applied. As a result, the cluster fails.
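One way to avoid this issue is to name the policies so that their alphanumeric order matches the dependency order. The following is a hedged sketch with hypothetical policy names, modeled on the VLAN and bridge examples later in this chapter:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 01-vlan-eth1-1000 # hypothetical name; sorts first and creates the VLAN interface
spec:
  desiredState:
    interfaces:
    - name: eth1.1000
      type: vlan
      state: up
      vlan:
        base-iface: eth1
        id: 1000
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 02-br1-vlan # hypothetical name; sorts second and bridges the VLAN port
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      bridge:
        port:
        - name: eth1.1000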

After you apply a node network policy to a node, the Kubernetes NMState Operator configures the networking configuration for nodes according to the node network policy details.

You can create an NNCP by using either the OpenShift CLI (oc) or the OpenShift Container Platform web console. As a postinstallation task, you can create an NNCP or edit an existing NNCP.

Note

Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" section.

If you want to delete an NNCP, you can use the oc delete nncp command to complete this action. However, this command does not delete any objects that the policy created, such as a bridge interface.

Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator re-adds the removed interface whenever a pod or a node is restarted.

Effectively deleting the NNCP, the node network policy, and any created interfaces typically requires the following actions:

  1. Edit the NNCP and remove interface details from the file. Ensure that you do not remove the name, state, and type parameters from the file.
  2. Set state: absent for each interface entry in the interfaces section of the NNCP, as shown in the snippet after this list.
  3. Run oc apply -f <nncp_file_name>. After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, any interface that previously existed on each node is now marked as absent.
  4. Run oc delete nncp to delete the NNCP.
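A minimal example of an interface entry after step 2, based on the removal example in "Removing an interface from nodes":

...
    interfaces:
    - name: br1
      type: linux-bridge
      state: absent
...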


27.2.3. Managing policy by using the CLI

27.2.3.1. Creating an interface on nodes

Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy (NNCP) manifest to the cluster. The manifest details the requested configuration for the interface.

By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.

You can configure multiple nmstate-enabled nodes concurrently. By default, the configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable parameter in the NodeNetworkConfigurationPolicy manifest configuration file.

Note

If you have two nodes and you apply an NNCP manifest with the maxUnavailable parameter set to 50% to these nodes, one node at a time receives the NNCP configuration. If you then introduce an additional NNCP manifest file with the maxUnavailable parameter set to 50%, this NNCP is independent of the initial NNCP. This means that if both NNCP manifests apply a bad configuration to nodes, you can no longer guarantee that half of your cluster is functional.
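For reference, the maxUnavailable parameter accepts either form. The following is a minimal sketch with a hypothetical policy name that shows the percentage form as a string value:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: example-policy # hypothetical name
spec:
  maxUnavailable: "50%" # a percentage string; an absolute number such as 3 is also valid
  desiredState:
    ...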

Procedure

  1. Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy # 1
    spec:
      nodeSelector: # 2
        node-role.kubernetes.io/worker: "" # 3
      maxUnavailable: 3 # 4
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with eth1 as a port # 5
            type: linux-bridge
            state: up
            ipv4:
              dhcp: true
              enabled: true
              auto-dns: false
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1
        dns-resolver: # 6
          config:
            search:
            - example.com
            - example.org
            server:
            - 8.8.8.8
    1 Name of the policy.
    2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
    3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
    4 Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%", or an absolute value (number), such as 3.
    5 Optional: Human-readable description for the interface.
    6 Optional: Specifies the search and server settings for the DNS server.
  2. Create the node network policy:

    $ oc apply -f br1-eth1-policy.yaml # 1
    1 File name of the node network configuration policy manifest.


27.2.4. Confirming node network policy updates on nodes

When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting.

Procedure

  1. To confirm that a policy has been applied to the cluster, list the policies and their status:

    $ oc get nncp
  2. Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy:

    $ oc get nncp <policy> -o yaml
  3. Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster:

    $ oc get nnce
  4. Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration:

    $ oc get nnce <node>.<policy> -o yaml
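For example, to read only the failure message of a hypothetical enactment named node01.br1-eth1-policy, you can filter the output with jsonpath, in the same way as the troubleshooting section later in this chapter:

$ oc get nnce node01.br1-eth1-policy -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'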

27.2.5. Removing an interface from nodes

You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.

Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you need to define that node network configuration in the policy.

If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address.

Note

Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, the object only represents the requested configuration. Similarly, removing an interface does not delete the policy.

Procedure

  1. Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: <br1-eth1-policy>
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      desiredState:
        interfaces:
        - name: br1
          type: linux-bridge
          state: absent
        - name: eth1
          type: ethernet
          state: up
          ipv4:
            dhcp: true
            enabled: true
    • metadata.name defines the name of the policy.
    • spec.nodeSelector defines the nodeSelector parameter. This parameter is optional. If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
    • spec.desiredState.interfaces defines the name, type, and desired state of an interface. This example configures both Linux bridge and Ethernet networking interfaces. Setting state: absent removes the interface.
    • spec.desiredState.interfaces.ipv4 defines ipv4 settings for the interface. These settings are optional. If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. Setting enabled: true enables ipv4 in this example.
  2. Update the policy on the node and remove the interface:

    $ oc apply -f <filename.yaml>

    Where <filename.yaml> is the file name of the policy manifest.

27.2.6. Example policy configurations for different interfaces

Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions:

  • If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for newly created NNCP CRs so that the Operator can instantly apply each CR to nodes.
  • When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application.
  • If you set the maxUnavailable parameter to 50% in each of two NNCP manifests, the two policies together can apply configuration to up to 100% of the nodes in your cluster at the same time.
  • When a node restarts, the Kubernetes NMState Operator cannot control the order in which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object.
  • Consider specifying all related network configurations in a single policy.

27.2.6.1. Example: Linux bridge interface node network configuration policy

Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy # 1
spec:
  nodeSelector: # 2
    kubernetes.io/hostname: <node01> # 3
  desiredState:
    interfaces:
      - name: br1 # 4
        description: Linux bridge with eth1 as a port # 5
        type: linux-bridge # 6
        state: up # 7
        ipv4:
          dhcp: true # 8
          enabled: true # 9
        bridge:
          options:
            stp:
              enabled: false # 10
          port:
            - name: eth1 # 11
1 Name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3 This example uses a hostname node selector.
4 Name of the interface.
5 Optional: Human-readable description of the interface.
6 The type of interface. This example creates a bridge.
7 The requested state for the interface after creation.
8 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9 Enables ipv4 in this example.
10 Disables stp in this example.
11 The node NIC to which the bridge attaches.

27.2.6.2. Example: VLAN interface node network configuration policy

Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Note

Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest.

When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object.

The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-eth1-policy # 1
spec:
  nodeSelector: # 2
    kubernetes.io/hostname: <node01> # 3
  desiredState:
    interfaces:
    - name: eth1.102 # 4
      description: VLAN using eth1 # 5
      type: vlan # 6
      state: up # 7
      vlan:
        base-iface: eth1 # 8
        id: 102 # 9
1 Name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3 This example uses a hostname node selector.
4 Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported.
5 Optional: Human-readable description of the interface.
6 The type of interface. This example creates a VLAN.
7 The requested state for the interface after creation.
8 The node NIC to which the VLAN is attached.
9 The VLAN tag.

27.2.6.3. Example: Bond interface node network configuration policy

Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Note

OpenShift Container Platform only supports the following bond modes:

  • active-backup

  • balance-xor

  • 802.3ad

Other bond modes are not supported.

The balance-xor and 802.3ad bond modes require switch configuration to establish an EtherChannel or similar port grouping. Those two modes also require additional load-balancing configuration, depending on the source and destination of the traffic being passed through the interface. The active-backup bond mode does not require any switch configuration.

The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-eth1-eth2-policy # 1
spec:
  nodeSelector: # 2
    kubernetes.io/hostname: <node01> # 3
  desiredState:
    interfaces:
    - name: bond0 # 4
      description: Bond with ports eth1 and eth2 # 5
      type: bond # 6
      state: up # 7
      ipv4:
        dhcp: true # 8
        enabled: true # 9
      link-aggregation:
        mode: active-backup # 10
        options:
          miimon: '140' # 11
        port: # 12
        - eth1
        - eth2
      mtu: 1450 # 13
1 Name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3 This example uses a hostname node selector.
4 Name of the interface.
5 Optional: Human-readable description of the interface.
6 The type of interface. This example creates a bond.
7 The requested state for the interface after creation.
8 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9 Enables ipv4 in this example.
10 The driver mode for the bond. This example uses active backup.
11 Optional: This example uses miimon to inspect the bond link every 140ms.
12 The subordinate node NICs in the bond.
13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.
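If you use the 802.3ad mode instead, only the link-aggregation section of the manifest changes. The following is a minimal sketch; remember that this mode also requires matching port grouping on the connected switch:

      link-aggregation:
        mode: 802.3ad
        port:
        - eth1
        - eth2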

27.2.6.4. Example: Ethernet interface node network configuration policy

Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-policy # 1
spec:
  nodeSelector: # 2
    kubernetes.io/hostname: <node01> # 3
  desiredState:
    interfaces:
    - name: eth1 # 4
      description: Configuring eth1 on node01 # 5
      type: ethernet # 6
      state: up # 7
      ipv4:
        dhcp: true # 8
        enabled: true # 9
1 Name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3 This example uses a hostname node selector.
4 Name of the interface.
5 Optional: Human-readable description of the interface.
6 The type of interface. This example creates an Ethernet networking interface.
7 The requested state for the interface after creation.
8 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9 Enables ipv4 in this example.

27.2.6.5. Example: Multiple interfaces in the same node network configuration policy

You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.

The following example YAML file creates a bond that is named bond10 across two NICs and a VLAN that is named bond10.103 that connects to the bond.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-vlan # 1
spec:
  nodeSelector: # 2
    kubernetes.io/hostname: <node01> # 3
  desiredState:
    interfaces:
    - name: bond10 # 4
      description: Bonding eth2 and eth3 # 5
      type: bond # 6
      state: up # 7
      link-aggregation:
        mode: balance-xor # 8
        options:
          miimon: '140' # 9
        port: # 10
        - eth2
        - eth3
    - name: bond10.103 # 11
      description: vlan using bond10 # 12
      type: vlan # 13
      state: up # 14
      vlan:
        base-iface: bond10 # 15
        id: 103 # 16
      ipv4:
        dhcp: true # 17
        enabled: true # 18
1 Name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3 This example uses a hostname node selector.
4 11 Name of the interface.
5 12 Optional: Human-readable description of the interface.
6 13 The type of interface.
7 14 The requested state for the interface after creation.
8 The driver mode for the bond.
9 Optional: This example uses miimon to inspect the bond link every 140ms.
10 The subordinate node NICs in the bond.
15 The node NIC to which the VLAN is attached.
16 The VLAN tag.
17 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
18 Enables ipv4 in this example.

27.2.7. Capturing the static IP of a NIC attached to a bridge

Important

Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-copy-ipv4-policy # 1
spec:
  nodeSelector: # 2
    node-role.kubernetes.io/worker: ""
  capture:
    eth1-nic: interfaces.name=="eth1" # 3
    eth1-routes: routes.running.next-hop-interface=="eth1"
    br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1"
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with eth1 as a port
        type: linux-bridge # 4
        state: up
        ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" # 5
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1 # 6
    routes:
      config: "{{ capture.br1-routes.routes.running }}"
1 The name of the policy.
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
3 The reference to the node NIC to which the bridge attaches.
4 The type of interface. This example creates a bridge.
5 The IP address of the bridge interface. This value matches the IP address of the NIC that is referenced by the spec.capture.eth1-nic entry.
6 The node NIC to which the bridge attaches.

27.2.8. Examples: IP management

The following example configuration snippets demonstrate different methods of IP management.

These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types.

27.2.8.1. Static

The following snippet statically configures an IP address on the Ethernet interface:

...
    interfaces:
    - name: eth1
      description: static IP on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: false
        address:
        - ip: 192.168.122.250 # 1
          prefix-length: 24
        enabled: true
...
1 Replace this value with the static IP address for the interface.
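An equivalent static IPv6 configuration follows the same pattern. The following snippet is a hedged sketch that uses a documentation-range address that you must replace with your own value:

...
    interfaces:
    - name: eth1
      description: static IPv6 on eth1
      type: ethernet
      state: up
      ipv6:
        enabled: true
        dhcp: false
        autoconf: false
        address:
        - ip: 2001:db8::10 # example documentation address; replace with your own
          prefix-length: 64
...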

27.2.8.2. No IP address

The following snippet ensures that the interface has no IP address:

...
    interfaces:
    - name: eth1
      description: No IP on eth1
      type: ethernet
      state: up
      ipv4:
        enabled: false
...
Important

Always set the state parameter to up when you set both the ipv4.enabled and the ipv6.enabled parameters to false to disable an interface. If you set state: down with this configuration, the interface receives a DHCP IP address because of automatic DHCP assignment.

27.2.8.3. Dynamic host configuration

The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS:

...
    interfaces:
    - name: eth1
      description: DHCP on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true
...

The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS:

...
    interfaces:
    - name: eth1
      description: DHCP without gateway or DNS on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        auto-gateway: false
        auto-dns: false
        enabled: true
...
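The nmstate schema also provides an auto-routes setting, which appears in the traceback output later in this chapter. As a hedged variant, the following snippet additionally ignores DHCP-provided routes:

...
    interfaces:
    - name: eth1
      description: DHCP without gateway, DNS, or automatic routes on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        auto-gateway: false
        auto-dns: false
        auto-routes: false
        enabled: true
...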

27.2.8.4. DNS

Setting the DNS configuration is analogous to modifying the /etc/resolv.conf file. The following snippet sets the DNS configuration on the host.

...
    interfaces: # 1
       ...
       ipv4:
         ...
         auto-dns: false
         ...
    dns-resolver:
      config:
        search:
        - example.com
        - example.org
        server:
        - 8.8.8.8
...
1 You must configure an interface with auto-dns: false, or you must use static IP configuration on an interface, in order for Kubernetes NMState to store custom DNS settings.
Important

You cannot use br-ex, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers.

27.2.8.5. Static routing

The following snippet configures a static route and a static IP on interface eth1.

...
    interfaces:
    - name: eth1
      description: Static routing on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: false
        address:
        - ip: 192.0.2.251 # 1
          prefix-length: 24
        enabled: true
    routes:
      config:
      - destination: 198.51.100.0/24
        metric: 150
        next-hop-address: 192.0.2.1 # 2
        next-hop-interface: eth1
        table-id: 254
...
1 The static IP address for the Ethernet interface.
2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface.
Important

You cannot use the OVN-Kubernetes br-ex bridge as the next hop interface when configuring a static route unless you manually configured a customized br-ex bridge.

27.3. Troubleshooting node network configuration

If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as:

  • The configuration fails to be applied on the host.
  • The host loses connection to the default gateway.
  • The host loses connection to the API server.

27.3.1. Troubleshooting an incorrect node network configuration policy configuration

You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy.

In this example, a Linux bridge policy is applied to an example cluster that has three control plane nodes and three compute nodes. The policy fails to be applied because it references an incorrect interface. To find the error, investigate the available NMState resources. You can then update the policy with the correct configuration.

Procedure

  1. Create a policy and apply it to your cluster. The following example creates a simple bridge on the ens01 interface:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: ens01-bridge-testfail
    spec:
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with the wrong port
            type: linux-bridge
            state: up
            ipv4:
              dhcp: true
              enabled: true
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: ens01

    $ oc apply -f ens01-bridge-testfail.yaml

    Example output

    nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created

  2. Verify the status of the policy by running the following command:

    $ oc get nncp

    The output shows that the policy failed:

    Example output

    NAME                    STATUS
    ens01-bridge-testfail   FailedToConfigure

    However, the policy status alone does not indicate whether it failed on all nodes or on a subset of nodes.

  3. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, it suggests that the problem is with a specific node configuration. If the policy failed on all nodes, it suggests that the problem is with the policy.

    $ oc get nnce

    The output shows that the policy failed on all nodes:

    Example output

    NAME                                         STATUS
    control-plane-1.ens01-bridge-testfail        FailedToConfigure
    control-plane-2.ens01-bridge-testfail        FailedToConfigure
    control-plane-3.ens01-bridge-testfail        FailedToConfigure
    compute-1.ens01-bridge-testfail              FailedToConfigure
    compute-2.ens01-bridge-testfail              FailedToConfigure
    compute-3.ens01-bridge-testfail              FailedToConfigure

  4. View one of the failed enactments and look at the traceback. The following command uses the output tool jsonpath to filter the output:

    $ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'

    This command returns a large traceback that has been edited for brevity:

    Example output

    error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' ''
    ...
    libnmstate.error.NmstateVerificationError:
    desired
    =======
    ---
    name: br1
    type: linux-bridge
    state: up
    bridge:
      options:
        group-forward-mask: 0
        mac-ageing-time: 300
        multicast-snooping: true
        stp:
          enabled: false
          forward-delay: 15
          hello-time: 2
          max-age: 20
          priority: 32768
      port:
      - name: ens01
    description: Linux bridge with the wrong port
    ipv4:
      address: []
      auto-dns: true
      auto-gateway: true
      auto-routes: true
      dhcp: true
      enabled: true
    ipv6:
      enabled: false
    mac-address: 01-23-45-67-89-AB
    mtu: 1500
    
    current
    =======
    ---
    name: br1
    type: linux-bridge
    state: up
    bridge:
      options:
        group-forward-mask: 0
        mac-ageing-time: 300
        multicast-snooping: true
        stp:
          enabled: false
          forward-delay: 15
          hello-time: 2
          max-age: 20
          priority: 32768
      port: []
    description: Linux bridge with the wrong port
    ipv4:
      address: []
      auto-dns: true
      auto-gateway: true
      auto-routes: true
      dhcp: true
      enabled: true
    ipv6:
      enabled: false
    mac-address: 01-23-45-67-89-AB
    mtu: 1500
    
    difference
    ==========
    --- desired
    +++ current
    @@ -13,8 +13,7 @@
           hello-time: 2
           max-age: 20
           priority: 32768
    -  port:
    -  - name: ens01
    +  port: []
     description: Linux bridge with the wrong port
     ipv4:
       address: []
      line 651, in _assert_interfaces_equal\n    current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError:

    The NmstateVerificationError lists the desired policy configuration, the current configuration of the policy on the node, and the difference highlighting the parameters that do not match. In this example, the port is included in the difference, which suggests that the problem is the port configuration in the policy.

  5. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node:

    $ oc get nns control-plane-1 -o yaml

    The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01:

    Example output

       - ipv4:
    ...
          name: ens1
          state: up
          type: ethernet

  6. Correct the error by editing the existing policy:

    $ oc edit nncp ens01-bridge-testfail
    ...
              port:
                - name: ens1

    Save the policy to apply the correction.

  7. Check the status of the policy to ensure it updated successfully:

    $ oc get nncp

    Example output

    NAME                    STATUS
    ens01-bridge-testfail   SuccessfullyConfigured

The updated policy is successfully configured on all nodes in the cluster.

27.3.2. Troubleshooting DNS connectivity issues in a disconnected environment

If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the domain root-servers.net.

Important

Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone. The DNS server does not need to forward a query to an upstream resolver, but the server must return a correct answer for the NS query.

27.3.2.1. Configuring the bind9 DNS named server

For a cluster configured to query a bind9 DNS server, you can add the root-servers.net zone to a configuration file that contains at least one NS record. For example, you can use the /var/named/named.localhost file as a zone file that already matches this criterion.

Procedure

  1. Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command:

    $ cat >> /etc/named.conf <<EOF
    zone "root-servers.net" IN {
        type master;
        file "named.localhost";
    };
    EOF
  2. Restart the named service by running the following command:

    $ systemctl restart named
  3. Confirm that the root-servers.net zone is present by running the following command:

    $ journalctl -u named | grep root-servers.net

    Example output

    Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0
    Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0

  4. Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

    $ host -t NS root-servers.net. 127.0.0.1

    Example output

    Using domain server:
    Name: 127.0.0.1
    Address: 127.0.0.53
    Aliases:
    root-servers.net name server root-servers.net.

27.3.2.2. Configuring the dnsmasq DNS server

If you are using dnsmasq as the DNS server, you can delegate resolution of the root-servers.net domain to another DNS server, for example, by creating a new configuration file that resolves root-servers.net by using a DNS server that you specify.

Procedure

  1. Create a configuration file that delegates the domain root-servers.net to another DNS server by running the following command:

    $ echo 'server=/root-servers.net/<DNS_server_IP>' > /etc/dnsmasq.d/delegate-root-servers.net.conf
  2. Restart the dnsmasq service by running the following command:

    $ systemctl restart dnsmasq
  3. Confirm that the root-servers.net domain is delegated to another DNS server by running the following command:

    $ journalctl -u dnsmasq | grep root-servers.net

    Example output

    Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net

  4. Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

    $ host -t NS root-servers.net. 127.0.0.1

    Example output

    Using domain server:
    Name: 127.0.0.1
    Address: 127.0.0.1#53
    Aliases:
    root-servers.net name server root-servers.net.
