Chapter 30. Kubernetes NMState

30.1. About the Kubernetes NMState Operator

The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node’s network interfaces to the API server.

Important

Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power®, IBM Z®, IBM® LinuxONE, VMware vSphere, and OpenStack installations.

Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator.

Note

The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge.

OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.

Node networking is monitored and updated by the following objects:

NodeNetworkState
Reports the state of the network on that node.
NodeNetworkConfigurationPolicy
Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
NodeNetworkConfigurationEnactment
Reports the network policies enacted upon each node.
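
You can list each of these objects by using the OpenShift CLI (oc) and the short resource names that the procedures later in this chapter use. The following commands are a quick reference and assume that the Kubernetes NMState Operator and an NMState instance are already installed:

$ oc get nns   # NodeNetworkState, one object per node
$ oc get nncp  # NodeNetworkConfigurationPolicy, the requested configuration
$ oc get nnce  # NodeNetworkConfigurationEnactment, one object per node for each applied policy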

30.1.1. Installing the Kubernetes NMState Operator

You can install the Kubernetes NMState Operator by using the web console or the CLI.

30.1.1.1. Installing the Kubernetes NMState Operator by using the web console

You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Select Operators → OperatorHub.
  2. In the search field below All Items, enter nmstate and press Enter to search for the Kubernetes NMState Operator.
  3. Click on the Kubernetes NMState Operator search result.
  4. Click Install to open the Install Operator window.
  5. Click Install to install the Operator.
  6. After the Operator finishes installing, click View Operator.
  7. Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.
  8. In the Name field of the dialog box, ensure the name of the instance is nmstate.

    Note

    The name restriction is a known issue. The instance is a singleton for the entire cluster.

  9. Accept the default settings and click Create to create the instance.

Summary

Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
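
To confirm the deployment from the CLI, you can list the daemon sets and pods in the openshift-nmstate namespace. This is a verification sketch; the openshift-nmstate namespace and the pod listing match the CLI installation procedure that follows:

$ oc get daemonset -n openshift-nmstate
$ oc get pod -n openshift-nmstate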

30.1.1.2. Installing the Kubernetes NMState Operator using the CLI

You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Create the nmstate Operator namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        kubernetes.io/metadata.name: openshift-nmstate
        name: openshift-nmstate
      name: openshift-nmstate
    spec:
      finalizers:
      - kubernetes
    EOF
  2. Create the OperatorGroup:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      annotations:
        olm.providedAPIs: NMState.v1.nmstate.io
      name: openshift-nmstate
      namespace: openshift-nmstate
    spec:
      targetNamespaces:
      - openshift-nmstate
    EOF
  3. Subscribe to the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      labels:
        operators.coreos.com/kubernetes-nmstate-operator.openshift-nmstate: ""
      name: kubernetes-nmstate-operator
      namespace: openshift-nmstate
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: kubernetes-nmstate-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
  4. Confirm the ClusterServiceVersion (CSV) status for the nmstate Operator deployment equals Succeeded:

    $ oc get clusterserviceversion -n openshift-nmstate \
     -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                             Phase
    kubernetes-nmstate-operator.4.16.0-202210210157   Succeeded

  5. Create an instance of the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: nmstate.io/v1
    kind: NMState
    metadata:
      name: nmstate
    EOF
  6. Verify that the pods for the NMState Operator are running:

    $ oc get pod -n openshift-nmstate

    Example output

    Name                                      Ready   Status  Restarts  Age
    pod/nmstate-cert-manager-5b47d8dddf-5wnb5   1/1   Running  0         77s
    pod/nmstate-console-plugin-d6b76c6b9-4dcwm  1/1   Running  0         77s
    pod/nmstate-handler-6v7rm                   1/1   Running  0         77s
    pod/nmstate-handler-bjcxw                   1/1   Running  0         77s
    pod/nmstate-handler-fv6m2                   1/1   Running  0         77s
    pod/nmstate-handler-kb8j6                   1/1   Running  0         77s
    pod/nmstate-handler-wn55p                   1/1   Running  0         77s
    pod/nmstate-operator-f6bb869b6-v5m92        1/1   Running  0        4m51s
    pod/nmstate-webhook-66d6bbd84b-6n674        1/1   Running  0         77s
    pod/nmstate-webhook-66d6bbd84b-vlzrd        1/1   Running  0         77s

30.2. Observing and updating the node network state and configuration

30.2.1. Viewing the network state of a node by using the CLI

Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node.

Procedure

  1. List all the NodeNetworkState objects in the cluster:

    $ oc get nns
  2. Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

    $ oc get nns node01 -o yaml

    Example output

    apiVersion: nmstate.io/v1
    kind: NodeNetworkState
    metadata:
      name: node01 1
    status:
      currentState: 2
        dns-resolver:
    # ...
        interfaces:
    # ...
        route-rules:
    # ...
        routes:
    # ...
      lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3

    1
    The name of the NodeNetworkState object is taken from the node.
    2
    The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
    3
    Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report.

30.2.2. Viewing the network state of a node from the web console

As an administrator, you can use the OpenShift Container Platform web console to observe NodeNetworkState resources and network interfaces, and access network details.

Procedure

  1. Navigate to Networking → NodeNetworkState.

    On the NodeNetworkState page, you can view the list of NodeNetworkState resources and the corresponding interfaces that are created on the nodes. To narrow down the displayed NodeNetworkState resources, you can use the Filter option, based on Interface state, Interface type, and IP, or the search bar, based on the Name or Label criteria.

  2. To access detailed information about a NodeNetworkState resource, click the NodeNetworkState resource name listed in the Name column.
  3. To expand and view the Network Details section for the NodeNetworkState resource, click the > icon. Alternatively, you can click each interface type under the Network interface column to view the network details.

30.2.3. Managing policy from the web console

You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster. Manage the policy from the web console by accessing the list of created policies in the NodeNetworkConfigurationPolicy page under the Networking menu. This page enables you to create, update, monitor, and delete the policies.

30.2.3.1. Monitoring the policy status

You can monitor the policy status from the NodeNetworkConfigurationPolicy page. This page displays all the policies created in the cluster in a tabular format, with the following columns:

Name
The name of the policy created.
Matched nodes
The count of nodes where the policies are applied. This could be either a subset of nodes based on the node selector or all the nodes in the cluster.
Node network state
The enactment state of the matched nodes. You can click the enactment state to view detailed information about the status.

To find the desired policy, you can filter the list by enactment state by using the Filter option, or use the search option.

30.2.3.2. Creating a policy

You can create a policy by using either a form or YAML in the web console.

Procedure

  1. Navigate to Networking → NodeNetworkConfigurationPolicy.
  2. In the NodeNetworkConfigurationPolicy page, click Create, and select the From Form option.

    If there are no existing policies, you can alternatively click Create NodeNetworkConfigurationPolicy to create a policy by using the form.

    Note

    To create a policy by using YAML, click Create, and select the With YAML option. The following steps apply only to creating a policy by using the form.

  3. Optional: Check the Apply this NodeNetworkConfigurationPolicy only to specific subsets of nodes using the node selector checkbox to specify the nodes where the policy must be applied.
  4. Enter the policy name in the Policy name field.
  5. Optional: Enter the description of the policy in the Description field.
  6. Optional: In the Policy Interface(s) section, a bridge interface is added by default with preset values in editable fields. Edit the values by executing the following steps:

    1. Enter the name of the interface in the Interface name field.
    2. Select the network state from the Network state dropdown list. The default selected value is Up.
    3. Select the type of interface from the Type dropdown list. The available values are Bridge, Bonding, and Ethernet. The default selected value is Bridge.

      Note

      Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy by using the form.

    4. Optional: In the IP configuration section, check the IPv4 checkbox to assign an IPv4 address to the interface, and configure the IP address assignment details:

      1. Click IP address to configure the interface with a static IP address, or DHCP to auto-assign an IP address.
      2. If you selected the IP address option, enter the IPv4 address in the IPV4 address field, and enter the prefix length in the Prefix length field.

        If you selected the DHCP option, uncheck the options that you want to disable. The available options are Auto-DNS, Auto-routes, and Auto-gateway. All the options are selected by default.

    5. Optional: Enter the port number in the Port field.
    6. Optional: Check the Enable STP checkbox to enable STP.
    7. Optional: To add an interface to the policy, click Add another interface to the policy.
    8. Optional: To remove an interface from the policy, click the icon next to the interface.
    Note

    Alternatively, you can click Edit YAML on the top of the page to continue editing the form using YAML.

  7. Click Create to complete policy creation.

30.2.3.3. Updating the policy

30.2.3.3.1. Updating the policy by using form

Procedure

  1. Navigate to Networking → NodeNetworkConfigurationPolicy.
  2. In the NodeNetworkConfigurationPolicy page, click the kebab icon next to the policy that you want to edit, and click Edit.
  3. Edit the fields that you want to update.
  4. Click Save.
Note

Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy by using the form.

30.2.3.3.2. Updating the policy by using YAML

Procedure

  1. Navigate to Networking → NodeNetworkConfigurationPolicy.
  2. In the NodeNetworkConfigurationPolicy page, click the policy name under the Name column for the policy you want to edit.
  3. Click the YAML tab, and edit the YAML.
  4. Click Save.

30.2.3.4. Deleting the policy

Procedure

  1. Navigate to Networking → NodeNetworkConfigurationPolicy.
  2. In the NodeNetworkConfigurationPolicy page, click the kebab icon next to the policy that you want to delete, and click Delete.
  3. In the pop-up window, enter the policy name to confirm deletion, and click Delete.

30.2.4. Managing policy by using the CLI

30.2.4.1. Creating an interface on nodes

Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface.

By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.

You can configure multiple nmstate-enabled nodes concurrently. By default, the configuration is applied to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field.

Procedure

  1. Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy 1
    spec:
      nodeSelector: 2
        node-role.kubernetes.io/worker: "" 3
      maxUnavailable: 3 4
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with eth1 as a port 5
            type: linux-bridge
            state: up
            ipv4:
              dhcp: true
              enabled: true
              auto-dns: false
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1
        dns-resolver: 6
          config:
            search:
            - example.com
            - example.org
            server:
            - 8.8.8.8
    1
    Name of the policy.
    2
    Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
    3
    This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
    4
    Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%", or an absolute value (number), such as 3.
    5
    Optional: Human-readable description for the interface.
    6
    Optional: Specifies the search and server settings for the DNS server.
  2. Create the node network policy:

    $ oc apply -f br1-eth1-policy.yaml 1
    1
    File name of the node network configuration policy manifest.

30.2.4.2. Confirming node network policy updates on nodes

A NodeNetworkConfigurationPolicy manifest describes your requested network configuration for nodes in the cluster. The node network policy includes your requested network configuration and the status of execution of the policy on the cluster as a whole.

When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting.

Procedure

  1. To confirm that a policy has been applied to the cluster, list the policies and their status:

    $ oc get nncp
  2. Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy:

    $ oc get nncp <policy> -o yaml
  3. Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster:

    $ oc get nnce
  4. Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration:

    $ oc get nnce <node>.<policy> -o yaml

30.2.4.3. Removing an interface from nodes

You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.

Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you must define that node network configuration in the policy.

If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address.

Note

Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration.
Similarly, removing an interface does not delete the policy.

Procedure

  1. Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: <br1-eth1-policy> 1
    spec:
      nodeSelector: 2
        node-role.kubernetes.io/worker: "" 3
      desiredState:
        interfaces:
        - name: br1
          type: linux-bridge
          state: absent 4
        - name: eth1 5
          type: ethernet 6
          state: up 7
          ipv4:
            dhcp: true 8
            enabled: true 9
    1
    Name of the policy.
    2
    Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
    3
    This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
    4
    Changing the state to absent removes the interface.
    5
    The name of the interface that is to be unattached from the bridge interface.
    6
    The type of interface. This example creates an Ethernet networking interface.
    7
    The requested state for the interface.
    8
    Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
    9
    Enables ipv4 in this example.
  2. Update the policy on the node and remove the interface:

    $ oc apply -f <br1-eth1-policy.yaml> 1
    1
    File name of the policy manifest.
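
As noted earlier, removing an interface does not delete the policy. If you no longer need the policy after the interface is removed, you can delete it. Because the policy only represents the requested configuration, deleting it does not change the network configuration on the node. The policy name in the following command is an example:

$ oc delete nncp <br1-eth1-policy>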

30.2.5. Example policy configurations for different interfaces

The following examples show different NodeNetworkConfigurationPolicy manifest configurations.

For best performance, consider the following factors when applying a policy:

  • When you need to apply a policy to more than one node, create a NodeNetworkConfigurationPolicy manifest for each target node, as shown in the sketch after this list. Scoping a policy to a single node reduces the overall length of time for the Kubernetes NMState Operator to apply the policies.

    In contrast, if a single policy includes configurations for several nodes, the Kubernetes NMState Operator applies the policy to each node in sequence, which increases the overall length of time for policy application.

  • All related network configurations should be specified in a single policy.

    When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, the Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object.
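
The following sketch illustrates the first recommendation by scoping the same bridge configuration to two nodes with one policy per node. The node names, policy names, and interface names are placeholder values that you must replace with your own information:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy-node01
spec:
  nodeSelector:
    kubernetes.io/hostname: node01
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy-node02
spec:
  nodeSelector:
    kubernetes.io/hostname: node02
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1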

30.2.5.1. Example: Linux bridge interface node network configuration policy

Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
      - name: br1 4
        description: Linux bridge with eth1 as a port 5
        type: linux-bridge 6
        state: up 7
        ipv4:
          dhcp: true 8
          enabled: true 9
        bridge:
          options:
            stp:
              enabled: false 10
          port:
            - name: eth1 11
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example uses a hostname node selector.
4
Name of the interface.
5
Optional: Human-readable description of the interface.
6
The type of interface. This example creates a bridge.
7
The requested state for the interface after creation.
8
Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9
Enables ipv4 in this example.
10
Disables stp in this example.
11
The node NIC to which the bridge attaches.

30.2.5.2. Example: VLAN interface node network configuration policy

Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Note

Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest.

When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object.

The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: eth1.102 4
      description: VLAN using eth1 5
      type: vlan 6
      state: up 7
      vlan:
        base-iface: eth1 8
        id: 102 9
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example uses a hostname node selector.
4
Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported.
5
Optional: Human-readable description of the interface.
6
The type of interface. This example creates a VLAN.
7
The requested state for the interface after creation.
8
The node NIC to which the VLAN is attached.
9
The VLAN tag.

30.2.5.3. Example: Node network configuration policy for virtual functions (Technology Preview)

Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster by applying a NodeNetworkConfigurationPolicy manifest.

Important

Updating host network settings for SR-IOV network VFs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can apply a NodeNetworkConfigurationPolicy manifest to an existing cluster to complete the following tasks:

  • Configure QoS or MTU host network settings for VFs to optimize performance.
  • Add, remove, or update VFs for a network interface.
  • Manage VF bonding configurations.
Note

To update host network settings for SR-IOV VFs by using NMState on physical functions that are also managed through the SR-IOV Network Operator, you must set the externallyManaged parameter in the relevant SriovNetworkNodePolicy resource to true. For more information, see the Additional resources section.
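
The following snippet is a sketch of an SriovNetworkNodePolicy resource with the externallyManaged parameter set to true. This resource belongs to the SR-IOV Network Operator API rather than the Kubernetes NMState Operator, and the policy name, resource name, VF count, and PF name are placeholder values:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-policy-example
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 3
  nicSelector:
    pfNames:
    - ens1f0
  deviceType: netdevice
  externallyManaged: true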

The following YAML file is an example of a manifest that defines QoS policies for a VF. This file includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: qos 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  desiredState:
    interfaces:
      - name: ens1f0 4
        description: Change QOS on VF0 5
        type: ethernet 6
        state: up 7
        ethernet:
          sr-iov:
            total-vfs: 3 8
            vfs:
            - id: 0 9
              max-tx-rate: 200 10
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example applies to all nodes with the worker role.
4
Name of the physical function (PF) network interface.
5
Optional: Human-readable description of the interface.
6
The type of interface.
7
The requested state for the interface after configuration.
8
The total number of VFs.
9
Identifies the VF with an ID of 0.
10
Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps.

The following YAML file is an example of a manifest that creates a VLAN interface on top of a VF and adds it to a bonded network interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: addvf 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: "" 3
  maxUnavailable: 3
  desiredState:
    interfaces:
      - name: ens1f0v1 4
        type: ethernet
        state: up
      - name: ens1f0v1.477 5
        type: vlan
        state: up
        vlan:
          base-iface: ens1f0v1 6
          id: 477
      - name: bond0 7
        description: Add vf 8
        type: bond 9
        state: up 10
        link-aggregation:
          mode: active-backup 11
          options:
            primary: ens1f1v0.477 12
          port: 13
            - ens1f1v0.477
            - ens1f0v0.477
            - ens1f0v1.477 14
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example applies to all nodes with the worker role.
4
Name of the VF network interface.
5
Name of the VLAN network interface.
6
The VF network interface to which the VLAN interface is attached.
7
Name of the bonding network interface.
8
Optional: Human-readable description of the interface.
9
The type of interface.
10
The requested state for the interface after configuration.
11
The bonding policy for the bond.
12
The primary attached bonding port.
13
The ports for the bonded network interface.
14
In this example, this VLAN network interface is added as an additional interface to the bonded network interface.

30.2.5.4. Example: Bond interface node network configuration policy

Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Note

OpenShift Container Platform only supports the following bond modes:

  • mode=1 active-backup
  • mode=2 balance-xor
  • mode=4 802.3ad
  • mode=5 balance-tlb
  • mode=6 balance-alb

The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-eth1-eth2-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: bond0 4
      description: Bond with ports eth1 and eth2 5
      type: bond 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
      link-aggregation:
        mode: active-backup 10
        options:
          miimon: '140' 11
        port: 12
        - eth1
        - eth2
      mtu: 1450 13
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example uses a hostname node selector.
4
Name of the interface.
5
Optional: Human-readable description of the interface.
6
The type of interface. This example creates a bond.
7
The requested state for the interface after creation.
8
Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9
Enables ipv4 in this example.
10
The driver mode for the bond. This example uses an active backup mode.
11
Optional: This example uses miimon to inspect the bond link every 140ms.
12
The subordinate node NICs in the bond.
13
Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.

30.2.5.5. Example: Ethernet interface node network configuration policy

Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-policy 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: eth1 4
      description: Configuring eth1 on node01 5
      type: ethernet 6
      state: up 7
      ipv4:
        dhcp: true 8
        enabled: true 9
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example uses a hostname node selector.
4
Name of the interface.
5
Optional: Human-readable description of the interface.
6
The type of interface. This example creates an Ethernet networking interface.
7
The requested state for the interface after creation.
8
Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
9
Enables ipv4 in this example.

30.2.5.6. Example: Multiple interfaces in the same node network configuration policy

You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.

The following example YAML file creates a bond named bond10 across two NICs and a VLAN named bond10.103 that connects to the bond.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond-vlan 1
spec:
  nodeSelector: 2
    kubernetes.io/hostname: <node01> 3
  desiredState:
    interfaces:
    - name: bond10 4
      description: Bonding eth2 and eth3 5
      type: bond 6
      state: up 7
      link-aggregation:
        mode: balance-rr 8
        options:
          miimon: '140' 9
        port: 10
        - eth2
        - eth3
    - name: bond10.103 11
      description: vlan using bond10 12
      type: vlan 13
      state: up 14
      vlan:
         base-iface: bond10 15
         id: 103 16
      ipv4:
        dhcp: true 17
        enabled: true 18
1
Name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
3
This example uses a hostname node selector.
4 11
Name of the interface.
5 12
Optional: Human-readable description of the interface.
6 13
The type of interface.
7 14
The requested state for the interface after creation.
8
The driver mode for the bond.
9
Optional: This example uses miimon to inspect the bond link every 140ms.
10
The subordinate node NICs in the bond.
15
The node NIC to which the VLAN is attached.
16
The VLAN tag.
17
Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
18
Enables ipv4 in this example.

30.2.5.7. Example: Network interface with a VRF instance node network configuration policy

Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a NodeNetworkConfigurationPolicy custom resource (CR).

Important

Associating a VRF instance with a network interface is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources.

In a bare-metal environment, you can announce load balancer services through interfaces belonging to a VRF instance by using MetalLB. For more information, see the Additional resources section.

The following YAML file is an example of associating a VRF instance with a network interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrfpolicy 1
spec:
  nodeSelector:
    vrf: "true" 2
  maxUnavailable: 3
  desiredState:
    interfaces:
      - name: ens4vrf 3
        type: vrf 4
        state: up
        vrf:
          port:
            - ens4 5
          route-table-id: 2 6
1
The name of the policy.
2
This example applies the policy to all nodes with the label vrf:true.
3
The name of the interface.
4
The type of interface. This example creates a VRF instance.
5
The node interface to which the VRF attaches.
6
The route table ID for the VRF.

30.2.6. Capturing the static IP of a NIC attached to a bridge

Important

Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

30.2.6.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge

Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.

The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-copy-ipv4-policy 1
spec:
  nodeSelector: 2
    node-role.kubernetes.io/worker: ""
  capture:
    eth1-nic: interfaces.name=="eth1" 3
    eth1-routes: routes.running.next-hop-interface=="eth1"
    br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1"
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with eth1 as a port
        type: linux-bridge 4
        state: up
        ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1 6
    routes:
      config: "{{ capture.br1-routes.routes.running }}"
1
The name of the policy.
2
Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
3
The reference to the node NIC to which the bridge attaches.
4
The type of interface. This example creates a bridge.
5
The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry.
6
The node NIC to which the bridge attaches.

30.2.7. Examples: IP management

The following example configuration snippets demonstrate different methods of IP management.

These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types.

30.2.7.1. Static

The following snippet statically configures an IP address on the Ethernet interface:

# ...
    interfaces:
    - name: eth1
      description: static IP on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: false
        address:
        - ip: 192.168.122.250 1
          prefix-length: 24
        enabled: true
# ...
1
Replace this value with the static IP address for the interface.

30.2.7.2. No IP address

The following snippet ensures that the interface has no IP address:

# ...
    interfaces:
    - name: eth1
      description: No IP on eth1
      type: ethernet
      state: up
      ipv4:
        enabled: false
# ...

30.2.7.3. Dynamic host configuration

The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS:

# ...
    interfaces:
    - name: eth1
      description: DHCP on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true
# ...

The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS:

# ...
    interfaces:
    - name: eth1
      description: DHCP without gateway or DNS on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        auto-gateway: false
        auto-dns: false
        enabled: true
# ...

30.2.7.4. DNS

By default, the nmstate API stores DNS values globally rather than in a specific network interface. For certain situations, you must configure a network interface to store DNS values. To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the network interface’s YAML configuration file.

Tip

Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file.
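
For example, the dns-resolver settings that are used later in this section correspond approximately to the following /etc/resolv.conf entries. This is an illustration only; the exact file contents depend on the host resolver configuration:

search example.com example.org
nameserver 192.0.2.251
nameserver 2001:db8:f::1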

Important

You cannot use the br-ex bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers.

The following example shows a default situation that stores DNS values globally:

  • Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface (IPv4 or IPv6) in the NodeNetworkConfigurationPolicy (NNCP) manifest.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: worker-0-dns-testing
    spec:
      nodeSelector:
        kubernetes.io/hostname: <target_node>
      desiredState:
        dns-resolver:
          config:
            search:
            - example.com
            - example.org
            server:
            - 2001:db8:f::1
            - 192.0.2.251
    # ...

The following examples show situations that require configuring a network interface to store DNS values:

  • Configure a static DNS for a network interface with an automatic IP configuration. Note that for this configuration, you must set the auto-dns parameter to false, so that the Kubernetes NMState Operator can store custom DNS settings for the network interface.

    dns-resolver:
      config:
        search:
        - example.com
        - example.org
        server:
        - 2001:db8:f::1
        - 192.0.2.251
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: true
          auto-dns: false
        ipv6:
          enabled: true
          dhcp: true
          autoconf: true
          auto-dns: false
    # ...
  • Configure a static DNS for a network interface with a static IP configuration. Note that for this configuration, you must set the dhcp parameter to false and the autoconf parameter to false.

    dns-resolver:
      config:
    # ...
        server:
        - 2001:4860:4860::8844
        - 192.0.2.251
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
          - ip: 192.0.2.251
            prefix-length: 24
        ipv6:
          enabled: true
          dhcp: false
          autoconf: false
          address:
          - ip: 2001:db8:1::1
            prefix-length: 64
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.0.2.1
        next-hop-interface: eth1
      - destination: ::/0
        next-hop-address: 2001:db8:1::3
        next-hop-interface: eth1
    # ...
  • Configure a static DNS name server to append to Dynamic Host Configuration Protocol (DHCP) and IPv6 Stateless Address AutoConfiguration (SLAAC) servers.

    dns-resolver:
      config:
    # ...
        server:
        - 192.0.2.251
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: true
          auto-dns: true
        ipv6:
          enabled: true
          dhcp: true
          autoconf: true
          auto-dns: true
    # ...

30.2.7.5. Static routing

The following snippet configures a static route and a static IP on interface eth1.

dns-resolver:
  config:
# ...
interfaces:
  - name: eth1
    description: Static routing on eth1
    type: ethernet
    state: up
    ipv4:
      dhcp: false
      enabled: true
      address:
      - ip: 192.0.2.251 1
        prefix-length: 24
routes:
  config:
  - destination: 198.51.100.0/24
    metric: 150
    next-hop-address: 192.0.2.1 2
    next-hop-interface: eth1
    table-id: 254
# ...
1
The static IP address for the Ethernet interface.
2
Next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface.

30.3. Troubleshooting node network configuration

If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as:

  • The configuration fails to be applied on the host.
  • The host loses connection to the default gateway.
  • The host loses connection to the API server.

30.3.1. Troubleshooting an incorrect node network configuration policy configuration

You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy.

In this example, a Linux bridge policy is applied to an example cluster that has three control plane nodes and three compute nodes. The policy fails to be applied because it references an incorrect interface. To find the error, investigate the available NMState resources. You can then update the policy with the correct configuration.

Procedure

  1. Create a policy and apply it to your cluster. The following example creates a simple bridge on the ens01 interface:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: ens01-bridge-testfail
    spec:
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with the wrong port
            type: linux-bridge
            state: up
            ipv4:
              dhcp: true
              enabled: true
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: ens01
    $ oc apply -f ens01-bridge-testfail.yaml

    Example output

    nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created

  2. Verify the status of the policy by running the following command:

    $ oc get nncp

    The output shows that the policy failed:

    Example output

    NAME                    STATUS
    ens01-bridge-testfail   FailedToConfigure

    However, the policy status alone does not indicate if it failed on all nodes or a subset of nodes.

  3. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, it suggests that the problem is with a specific node configuration. If the policy failed on all nodes, it suggests that the problem is with the policy.

    $ oc get nnce

    The output shows that the policy failed on all nodes:

    Example output

    NAME                                         STATUS
    control-plane-1.ens01-bridge-testfail        FailedToConfigure
    control-plane-2.ens01-bridge-testfail        FailedToConfigure
    control-plane-3.ens01-bridge-testfail        FailedToConfigure
    compute-1.ens01-bridge-testfail              FailedToConfigure
    compute-2.ens01-bridge-testfail              FailedToConfigure
    compute-3.ens01-bridge-testfail              FailedToConfigure

  4. View one of the failed enactments and look at the traceback. The following command uses the output tool jsonpath to filter the output:

    $ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'

    This command returns a large traceback that has been edited for brevity:

    Example output

    error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' ''
    ...
    libnmstate.error.NmstateVerificationError:
    desired
    =======
    ---
    name: br1
    type: linux-bridge
    state: up
    bridge:
      options:
        group-forward-mask: 0
        mac-ageing-time: 300
        multicast-snooping: true
        stp:
          enabled: false
          forward-delay: 15
          hello-time: 2
          max-age: 20
          priority: 32768
      port:
      - name: ens01
    description: Linux bridge with the wrong port
    ipv4:
      address: []
      auto-dns: true
      auto-gateway: true
      auto-routes: true
      dhcp: true
      enabled: true
    ipv6:
      enabled: false
    mac-address: 01-23-45-67-89-AB
    mtu: 1500
    
    current
    =======
    ---
    name: br1
    type: linux-bridge
    state: up
    bridge:
      options:
        group-forward-mask: 0
        mac-ageing-time: 300
        multicast-snooping: true
        stp:
          enabled: false
          forward-delay: 15
          hello-time: 2
          max-age: 20
          priority: 32768
      port: []
    description: Linux bridge with the wrong port
    ipv4:
      address: []
      auto-dns: true
      auto-gateway: true
      auto-routes: true
      dhcp: true
      enabled: true
    ipv6:
      enabled: false
    mac-address: 01-23-45-67-89-AB
    mtu: 1500
    
    difference
    ==========
    --- desired
    +++ current
    @@ -13,8 +13,7 @@
           hello-time: 2
           max-age: 20
           priority: 32768
    -  port:
    -  - name: ens01
    +  port: []
     description: Linux bridge with the wrong port
     ipv4:
       address: []
      line 651, in _assert_interfaces_equal\n    current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError:

    The NmstateVerificationError lists the desired policy configuration, the current configuration of the policy on the node, and the difference highlighting the parameters that do not match. In this example, the port is included in the difference, which suggests that the problem is the port configuration in the policy.

  5. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node:

    $ oc get nns control-plane-1 -o yaml

    The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01:

    Example output

       - ipv4:
    # ...
          name: ens1
          state: up
          type: ethernet

  6. Correct the error by editing the existing policy:

    $ oc edit nncp ens01-bridge-testfail
    # ...
              port:
                - name: ens1

    Save the policy to apply the correction.

  7. Check the status of the policy to ensure it updated successfully:

    $ oc get nncp

    Example output

    NAME                    STATUS
    ens01-bridge-testfail   SuccessfullyConfigured

The updated policy is successfully configured on all nodes in the cluster.
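
Optionally, you can list the enactments again to confirm that every node reports a successful configuration rather than FailedToConfigure. The exact status strings depend on the Operator version:

$ oc get nnce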
