Chapter 27. Kubernetes NMState
27.1. About the Kubernetes NMState Operator
The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node’s network interfaces to the API server.
Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power, IBM Z, IBM® LinuxONE, VMware vSphere, and OpenStack installations.
Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator.
The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge.
OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.
Node networking is monitored and updated by the following objects:
- NodeNetworkState: Reports the state of the network on that node.
- NodeNetworkConfigurationPolicy: Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
- NodeNetworkConfigurationEnactment: Reports the network policies enacted upon each node.
27.1.1. Installing the Kubernetes NMState Operator
You can install the Kubernetes NMState Operator by using the web console or the CLI.
27.1.1.1. Installing the Kubernetes NMState Operator using the web console
You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
Prerequisites
- You are logged in as a user with cluster-admin privileges.
Procedure
- Select Operators → OperatorHub.
- In the search field below All Items, enter nmstate and click Enter to search for the Kubernetes NMState Operator.
- Click on the Kubernetes NMState Operator search result.
- Click on Install to open the Install Operator window.
- Click Install to install the Operator.
- After the Operator finishes installing, click View Operator.
- Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.
- In the Name field of the dialog box, ensure the name of the instance is nmstate.
  Note: The name restriction is a known issue. The instance is a singleton for the entire cluster.
- Accept the default settings and click Create to create the instance.
Summary
Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
27.1.1.2. Installing the Kubernetes NMState Operator by using the CLI
You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc).
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
Procedure
Create the nmstate Operator namespace:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
spec:
  finalizers:
  - kubernetes
EOF

Create the OperatorGroup:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
  - openshift-nmstate
EOF

Subscribe to the nmstate Operator:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: stable
  installPlanApproval: Automatic
  name: kubernetes-nmstate-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

Create an instance of the nmstate Operator:

$ cat << EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
EOF
Verification
Confirm that the deployment for the nmstate Operator is running:

$ oc get clusterserviceversion -n openshift-nmstate \
  -o custom-columns=Name:.metadata.name,Phase:.status.phase

Example output

Name                                              Phase
kubernetes-nmstate-operator.4.12.0-202210210157   Succeeded
27.1.2. Uninstalling the Kubernetes NMState Operator
You can use the Operator Lifecycle Manager (OLM) to uninstall the Kubernetes NMState Operator, but by design OLM does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services.
Before you uninstall the Kubernetes NMState Operator, you must unsubscribe the Operator from the Subscription resource and then manually delete the remaining resources, as described in the following procedure.
If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console".
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
Procedure
Unsubscribe the Kubernetes NMState Operator from the Subscription resource by running the following command:

$ oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator

Find the ClusterServiceVersion (CSV) resource that associates with the Kubernetes NMState Operator:

$ oc get --namespace openshift-nmstate clusterserviceversion

Example output that lists a CSV resource

NAME                                  DISPLAY                       VERSION   REPLACES   PHASE
kubernetes-nmstate-operator.v4.18.0   Kubernetes NMState Operator   4.18.0               Succeeded

Delete the CSV resource. After you delete the resource, OLM deletes certain resources, such as RBAC, that it created for the Operator:

$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0

Delete the nmstate CR and any associated Deployment resources by running the following commands:

$ oc -n openshift-nmstate delete nmstate nmstate

$ oc delete --all deployments --namespace=openshift-nmstate

Delete all the custom resource definitions (CRDs), such as nmstates.nmstate.io, that exist in the cluster by running the following commands:

$ oc delete crd nmstates.nmstate.io

$ oc delete crd nodenetworkconfigurationenactments.nmstate.io

$ oc delete crd nodenetworkstates.nmstate.io

$ oc delete crd nodenetworkconfigurationpolicies.nmstate.io

Delete the namespace:

$ oc delete namespace openshift-nmstate
27.2. Observing and updating the node network state and configuration
For more information about how to install the NMState Operator, see Kubernetes NMState Operator.
27.2.1. Viewing the network state of a node
Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node.
Procedure
List all the NodeNetworkState objects in the cluster:

$ oc get nns

Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity:

$ oc get nns node01 -o yaml

Example output

apiVersion: nmstate.io/v1
kind: NodeNetworkState
metadata:
  name: node01
status:
  currentState:
    dns-resolver:
    ...
    interfaces:
    ...
    route-rules:
    ...
    routes:
    ...
  lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z"

- 1: The name of the NodeNetworkState object is taken from the node.
- 2: The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes.
- 3: Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report.
27.2.2. The NodeNetworkConfigurationPolicy manifest file
A NodeNetworkConfigurationPolicy (NNCP) manifest file describes the requested network configuration on nodes in your cluster.
If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for a newly created NNCP CR so that the Operator can instantly apply the CR to the node. Consider the following logical order issue example:
- You create NNCP 1 for defining the bridge interface that listens on a VLAN port, such as eth1.1000.
- You create NNCP 2 for defining the VLAN interface and specify the port for this interface, such as eth1.1000.
- You apply NNCP 1 before you apply NNCP 2 to the node.

The node experiences a connectivity issue because the bridge in NNCP 1 references port eth1.1000, which does not exist until NNCP 2 creates the VLAN interface.
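One way to avoid the ordering issue described above is to let the alphanumeric sorting work in your favor by naming the VLAN policy so that it sorts before the bridge policy. The following sketch uses hypothetical policy names (01-vlan-eth1 and 02-br1-vlan) and the example eth1.1000 interface; it illustrates the naming technique rather than reproducing a configuration from this document:

```yaml
# Sketch: this policy name sorts first alphanumerically, so the VLAN
# interface exists before any policy that depends on it.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 01-vlan-eth1   # hypothetical name, chosen to sort before the bridge policy
spec:
  desiredState:
    interfaces:
    - name: eth1.1000
      type: vlan
      state: up
      vlan:
        base-iface: eth1
        id: 1000
---
# Sketch: this policy name sorts second, so the bridge finds its
# eth1.1000 port already present on the node.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: 02-br1-vlan    # hypothetical name
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      bridge:
        port:
        - name: eth1.1000
```

Alternatively, as noted later in this chapter, you can define both interfaces in a single policy so that the Operator applies them together.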
After you apply a node network policy to a node, the Kubernetes NMState Operator configures the networking configuration for nodes according to the node network policy details.
You can create an NNCP by using either the OpenShift CLI (oc) or the web console.
Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" document.
If you want to delete an NNCP, you can use the oc delete nncp command.
Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator re-adds the removed interface whenever a pod or a node is restarted.
Effectively deleting the NNCP, the node network policy, and any created interfaces typically requires the following actions:
- Edit the NNCP and remove interface details from the file. Ensure that you do not remove the name, type, and state parameters from the file.
- Add state: absent under the interfaces.state section of the NNCP.
- Run oc apply -f <nncp_file_name>. After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, any interface that exists on each node is now marked as absent.
- Run oc delete nncp to delete the NNCP.
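As a sketch of the steps above, an NNCP edited for deletion might look like the following. The br1 bridge name is borrowed from the examples elsewhere in this chapter, and any other fields of the original policy are assumed:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy   # example policy name from this chapter
spec:
  desiredState:
    interfaces:
    - name: br1           # keep the name parameter
      type: linux-bridge  # keep the type parameter
      state: absent       # changed so the Operator removes the interface
```

After applying this manifest and waiting for the interface to be removed on all matching nodes, you can safely run oc delete nncp.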
Additional resources
27.2.3. Managing policy by using the CLI
27.2.3.1. Creating an interface on nodes
Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.
You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field in the NodeNetworkConfigurationPolicy manifest.
For example, if you have two nodes and you apply an NNCP manifest with the maxUnavailable parameter set to 50%, the policy configuration applies to one node at a time. The default value of the maxUnavailable parameter is 50%.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
        auto-dns: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
    dns-resolver:
      config:
        search:
        - example.com
        - example.org
        server:
        - 8.8.8.8

- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 4: Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example "10%", or an absolute value (number), such as 3.
- 5: Optional: Human-readable description for the interface.
- 6: Optional: Specifies the search and server settings for the DNS server.
Create the node network policy:

$ oc apply -f br1-eth1-policy.yaml

where br1-eth1-policy.yaml is the file name of the node network configuration policy manifest.
Additional resources
27.2.4. Confirming node network policy updates on nodes
When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that reports the status of the policy on that node.
Procedure
To confirm that a policy has been applied to the cluster, list the policies and their status:

$ oc get nncp

Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy:

$ oc get nncp <policy> -o yaml

Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster:

$ oc get nnce

Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration:

$ oc get nnce <node>.<policy> -o yaml
27.2.5. Removing an interface from nodes
You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent.
Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you will need to define that node network configuration in the policy.
If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address.
Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration.
Procedure
Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <br1-eth1-policy>
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: absent
    - name: eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true

where:

- metadata.name defines the name of the policy.
- spec.nodeSelector defines the nodeSelector parameter. This parameter is optional. If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- spec.desiredState.interfaces defines the name, type, and desired state of an interface. This example creates both Linux bridge and Ethernet networking interfaces. Setting state: absent removes the interface.
- spec.desiredState.interfaces.ipv4 defines ipv4 settings for the interface. These settings are optional. If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. Setting enabled: true enables ipv4 in this example.

Update the policy on the node and remove the interface:

$ oc apply -f <filename.yaml>

where <filename.yaml> is the file name of the policy manifest.
27.2.6. Example policy configurations for different interfaces
Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifests, consider the following factors when you apply a policy to nodes:
- If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for a newly created NNCP CR so that the Operator can instantly apply the CR to the node.
- When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application.
- If you set the maxUnavailable parameter to 50% in two NNCP manifests, the policy configuration coverage applies to 100% of the nodes in your cluster.
- Consider specifying all related network configurations in a single policy.
27.2.6.1. Example: Linux bridge interface node network configuration policy
Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br1-eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: br1
description: Linux bridge with eth1 as a port
type: linux-bridge
state: up
ipv4:
dhcp: true
enabled: true
bridge:
options:
stp:
enabled: false
port:
- name: eth1
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a bridge.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
- 10: Disables stp in this example.
- 11: The node NIC to which the bridge attaches.
27.2.6.2. Example: VLAN interface node network configuration policy
Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. Avoid spreading related configurations across several NodeNetworkConfigurationPolicy manifests.
When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object.
The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: vlan-eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: eth1.102
description: VLAN using eth1
type: vlan
state: up
vlan:
base-iface: eth1
id: 102
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a VLAN.
- 7: The requested state for the interface after creation.
- 8: The node NIC to which the VLAN is attached.
- 9: The VLAN tag.
27.2.6.3. Example: Bond interface node network configuration policy
Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
OpenShift Container Platform only supports the following bond modes:
-
active-backup
-
balance-xor
-
802.3ad
Other bond modes are not supported.
The balance-xor and 802.3ad modes require a compatible link-aggregation configuration on the connected switch. The active-backup mode does not require any switch configuration.
The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: bond0-eth1-eth2-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: bond0
description: Bond with ports eth1 and eth2
type: bond
state: up
ipv4:
dhcp: true
enabled: true
link-aggregation:
mode: active-backup
options:
miimon: '140'
port:
- eth1
- eth2
mtu: 1450
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates a bond.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
- 10: The driver mode for the bond. This example uses active-backup.
- 11: Optional: This example uses miimon to inspect the bond link every 140ms.
- 12: The subordinate node NICs in the bond.
- 13: Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default.
27.2.6.4. Example: Ethernet interface node network configuration policy
Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: eth1-policy
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: eth1
description: Configuring eth1 on node01
type: ethernet
state: up
ipv4:
dhcp: true
enabled: true
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4: Name of the interface.
- 5: Optional: Human-readable description of the interface.
- 6: The type of interface. This example creates an Ethernet networking interface.
- 7: The requested state for the interface after creation.
- 8: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 9: Enables ipv4 in this example.
27.2.6.5. Example: Multiple interfaces in the same node network configuration policy
You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.
The following example YAML file creates a bond that is named bond10 and a VLAN interface that is named bond10.103, which uses the bond as its base interface.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: bond-vlan
spec:
nodeSelector:
kubernetes.io/hostname: <node01>
desiredState:
interfaces:
- name: bond10
description: Bonding eth2 and eth3
type: bond
state: up
link-aggregation:
mode: balance-xor
options:
miimon: '140'
port:
- eth2
- eth3
- name: bond10.103
description: vlan using bond10
type: vlan
state: up
vlan:
base-iface: bond10
id: 103
ipv4:
dhcp: true
enabled: true
- 1: Name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster.
- 3: This example uses a hostname node selector.
- 4, 11: Name of the interface.
- 5, 12: Optional: Human-readable description of the interface.
- 6, 13: The type of interface.
- 7, 14: The requested state for the interface after creation.
- 8: The driver mode for the bond.
- 9: Optional: This example uses miimon to inspect the bond link every 140ms.
- 10: The subordinate node NICs in the bond.
- 15: The node NIC to which the VLAN is attached.
- 16: The VLAN tag.
- 17: Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address.
- 18: Enables ipv4 in this example.
27.2.7. Capturing the static IP of a NIC attached to a bridge
Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
27.2.7.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge
Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: br1-eth1-copy-ipv4-policy
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
capture:
eth1-nic: interfaces.name=="eth1"
eth1-routes: routes.running.next-hop-interface=="eth1"
br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1"
desiredState:
interfaces:
- name: br1
description: Linux bridge with eth1 as a port
type: linux-bridge
state: up
ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}"
bridge:
options:
stp:
enabled: false
port:
- name: eth1
routes:
config: "{{ capture.br1-routes.routes.running }}"
- 1: The name of the policy.
- 2: Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster.
- 3: The reference to the node NIC to which the bridge attaches.
- 4: The type of interface. This example creates a bridge.
- 5: The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry.
- 6: The node NIC to which the bridge attaches.
27.2.8. Examples: IP management
The following example configuration snippets demonstrate different methods of IP management.
These examples use ethernet as the interface type.
27.2.8.1. Static
The following snippet statically configures an IP address on the Ethernet interface:
...
interfaces:
- name: eth1
description: static IP on eth1
type: ethernet
state: up
ipv4:
dhcp: false
address:
- ip: 192.168.122.250
prefix-length: 24
enabled: true
...
- 1: Replace this value with the static IP address for the interface.
27.2.8.2. No IP address
The following snippet ensures that the interface has no IP address:
...
interfaces:
- name: eth1
description: No IP on eth1
type: ethernet
state: up
ipv4:
enabled: false
...
Always set the state parameter to up when you set the ipv4.enabled and ipv6.enabled parameters to false, so that the interface stays attached to the network without an IP address. Do not use state: down for this purpose.
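Following that note, a minimal sketch of an interface that stays up with both IP stacks disabled:

```yaml
interfaces:
- name: eth1
  description: eth1 attached to the network with no IP address
  type: ethernet
  state: up          # keep the interface up; do not use state: down
  ipv4:
    enabled: false   # no IPv4 address
  ipv6:
    enabled: false   # no IPv6 address
```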
27.2.8.3. Dynamic host configuration
The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS:
...
interfaces:
- name: eth1
description: DHCP on eth1
type: ethernet
state: up
ipv4:
dhcp: true
enabled: true
...
The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS:
...
interfaces:
- name: eth1
description: DHCP without gateway or DNS on eth1
type: ethernet
state: up
ipv4:
dhcp: true
auto-gateway: false
auto-dns: false
enabled: true
...
27.2.8.4. DNS
Setting the DNS configuration is analogous to modifying the /etc/resolv.conf file. The following snippet sets a static DNS configuration:
...
interfaces:
...
ipv4:
...
auto-dns: false
...
dns-resolver:
config:
search:
- example.com
- example.org
server:
- 8.8.8.8
...
- 1: You must configure an interface with auto-dns: false, or you must use static IP configuration on an interface, in order for Kubernetes NMState to store custom DNS settings.
You cannot use br-ex, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers.
27.2.8.5. Static routing
The following snippet configures a static route and a static IP on interface eth1:
...
interfaces:
- name: eth1
description: Static routing on eth1
type: ethernet
state: up
ipv4:
dhcp: false
address:
- ip: 192.0.2.251
prefix-length: 24
enabled: true
routes:
config:
- destination: 198.51.100.0/24
metric: 150
next-hop-address: 192.0.2.1
next-hop-interface: eth1
table-id: 254
...
You cannot use the OVN-Kubernetes br-ex bridge as the next-hop interface (next-hop-interface: br-ex) for a static route.
27.3. Troubleshooting node network configuration
If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as:
- The configuration fails to be applied on the host.
- The host loses connection to the default gateway.
- The host loses connection to the API server.
27.3.1. Troubleshooting an incorrect node network configuration policy configuration
You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy.
In this example, a Linux bridge policy is applied to an example cluster that has three control plane nodes and three compute nodes. The policy fails to be applied because it references an incorrect interface. To find the error, investigate the available NMState resources. You can then update the policy with the correct configuration.
Procedure
Create a policy and apply it to your cluster. The following example creates a simple bridge on the ens01 interface:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens01-bridge-testfail
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with the wrong port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens01

Apply the policy by running the following command:

$ oc apply -f ens01-bridge-testfail.yaml
nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created

Verify the status of the policy by running the following command:
$ oc get nncp

The output shows that the policy failed:
Example output
NAME                    STATUS
ens01-bridge-testfail   FailedToConfigure

However, the policy status alone does not indicate if it failed on all nodes or a subset of nodes.
List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, it suggests that the problem is with a specific node configuration. If the policy failed on all nodes, it suggests that the problem is with the policy.
$ oc get nnce

The output shows that the policy failed on all nodes:
Example output
NAME                                    STATUS
control-plane-1.ens01-bridge-testfail   FailedToConfigure
control-plane-2.ens01-bridge-testfail   FailedToConfigure
control-plane-3.ens01-bridge-testfail   FailedToConfigure
compute-1.ens01-bridge-testfail         FailedToConfigure
compute-2.ens01-bridge-testfail         FailedToConfigure
compute-3.ens01-bridge-testfail         FailedToConfigure

View one of the failed enactments and look at the traceback. The following command uses the output tool jsonpath to filter the output:

$ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'

This command returns a large traceback that has been edited for brevity:
Example output
error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' ''
...
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: br1
type: linux-bridge
state: up
bridge:
  options:
    group-forward-mask: 0
    mac-ageing-time: 300
    multicast-snooping: true
    stp:
      enabled: false
      forward-delay: 15
      hello-time: 2
      max-age: 20
      priority: 32768
  port:
  - name: ens01
description: Linux bridge with the wrong port
ipv4:
  address: []
  auto-dns: true
  auto-gateway: true
  auto-routes: true
  dhcp: true
  enabled: true
ipv6:
  enabled: false
mac-address: 01-23-45-67-89-AB
mtu: 1500

current
=======
---
name: br1
type: linux-bridge
state: up
bridge:
  options:
    group-forward-mask: 0
    mac-ageing-time: 300
    multicast-snooping: true
    stp:
      enabled: false
      forward-delay: 15
      hello-time: 2
      max-age: 20
      priority: 32768
  port: []
description: Linux bridge with the wrong port
ipv4:
  address: []
  auto-dns: true
  auto-gateway: true
  auto-routes: true
  dhcp: true
  enabled: true
ipv6:
  enabled: false
mac-address: 01-23-45-67-89-AB
mtu: 1500

difference
==========
--- desired
+++ current
@@ -13,8 +13,7 @@
       hello-time: 2
       max-age: 20
       priority: 32768
-  port:
-  - name: ens01
+  port: []
 description: Linux bridge with the wrong port
 ipv4:
   address: []
  line 651, in _assert_interfaces_equal\n    current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError:
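The difference section at the end of this traceback is, in effect, a unified diff of the desired state against the current state. As an illustration only, a similar diff can be reproduced with Python's difflib over hypothetical, abbreviated excerpts of the two states:

```python
import difflib

# Hypothetical, abbreviated "desired" and "current" bridge states,
# mirroring the failed enactment above (not taken from a live cluster).
desired = """\
bridge:
  port:
  - name: ens01
"""

current = """\
bridge:
  port: []
"""

# Build a unified diff like the "difference" section of the error.
diff = "\n".join(
    difflib.unified_diff(
        desired.splitlines(),
        current.splitlines(),
        fromfile="desired",
        tofile="current",
        lineterm="",
    )
)
print(diff)
```

The removed `- name: ens01` line and the added `port: []` line correspond directly to the mismatch that nmstate reports.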
The NmstateVerificationError lists the desired policy configuration, the current configuration of the policy on the node, and the difference highlighting the parameters that do not match. In this example, the port is included in the difference, which suggests that the problem is the port configuration in the policy.

To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node:

$ oc get nns control-plane-1 -o yaml

The output shows that the interface name on the nodes is ens1, but the failed policy incorrectly uses ens01:

Example output
- ipv4:
  ...
  name: ens1
  state: up
  type: ethernet

Correct the error by editing the existing policy:

$ oc edit nncp ens01-bridge-testfail

...
          port:
            - name: ens1

Save the policy to apply the correction.
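The root cause in this example was a policy that referenced ens01 while the node interface is named ens1. When scripting policy rollouts, a pre-flight comparison of the policy's port names against the interface names reported by NodeNetworkState can catch this class of error before the policy is applied. A minimal sketch with hypothetical data:

```python
# Hypothetical inputs: port names requested by a policy, and interface
# names as reported by a node's NodeNetworkState object.
policy_ports = ["ens01"]
node_interfaces = ["lo", "ens1", "br-ex"]

# Any port that does not exist on the node will make the policy fail
# with FailedToConfigure, as shown in this procedure.
missing = [port for port in policy_ports if port not in node_interfaces]
print(missing)  # → ['ens01']
```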
Check the status of the policy to ensure it updated successfully:
$ oc get nncp

Example output

NAME                    STATUS
ens01-bridge-testfail   SuccessfullyConfigured
The updated policy is successfully configured on all nodes in the cluster.
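When automating this troubleshooting flow, the Failing condition message that the jsonpath filter extracted earlier can also be selected from plain JSON output (for example, from oc get nnce <name> -o json). A minimal sketch over a hypothetical, abbreviated enactment status:

```python
import json

# Hypothetical, abbreviated enactment status (illustrative only).
enactment = json.loads("""
{
  "status": {
    "conditions": [
      {"type": "Available", "status": "False", "message": ""},
      {"type": "Failing", "status": "True",
       "message": "error reconciling NodeNetworkConfigurationPolicy ..."}
    ]
  }
}
""")

# Equivalent of the jsonpath filter
# {.status.conditions[?(@.type=="Failing")].message}
failing_messages = [
    condition["message"]
    for condition in enactment["status"]["conditions"]
    if condition["type"] == "Failing"
]
print(failing_messages[0])
```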
27.3.2. Troubleshooting DNS connectivity issues in a disconnected environment
If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the root-servers.net domain.

Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone.
27.3.2.1. Configuring the bind9 DNS named server
For a cluster configured to query a bind9 DNS server, add the root-servers.net zone to the server so that it contains at least one NS record for the zone. For example, you can use the /var/named/named.localhost zone file, which already contains a suitable NS entry.
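For reference, a stock named.localhost zone file on a RHEL host looks similar to the following; the NS record is what satisfies the requirement (exact contents can vary by release):

```text
$TTL 1D
@       IN SOA  @ rname.invalid. (
                        0       ; serial
                        1D      ; refresh
                        1H      ; retry
                        1W      ; expire
                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
```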
Procedure
Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command:

$ cat >> /etc/named.conf <<EOF
zone "root-servers.net" IN {
    type master;
    file "named.localhost";
};
EOF

Restart the named service by running the following command:

$ systemctl restart named

Confirm that the root-servers.net zone is present by running the following command:

$ journalctl -u named | grep root-servers.net

Example output
Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0
Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0

Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command:

$ host -t NS root-servers.net. 127.0.0.1

Example output
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

root-servers.net name server root-servers.net.
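If you script this verification, the name server can be extracted from the host output rather than checked by eye. A minimal sketch, assuming the captured output shown above:

```python
# Hypothetical captured output of: host -t NS root-servers.net. 127.0.0.1
host_output = """\
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

root-servers.net name server root-servers.net.
"""

# An NS answer line has the form: "<zone> name server <nameserver>."
ns_records = [
    line.split(" name server ")[1].rstrip(".")
    for line in host_output.splitlines()
    if " name server " in line
]
print(ns_records)  # → ['root-servers.net']
```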
27.3.2.2. Configuring the dnsmasq DNS server
If you are using dnsmasq as the DNS server, you can configure the delegation of the root-servers.net domain to another DNS server that has an NS record for the root-servers.net zone.

Procedure
Create a configuration file that delegates the root-servers.net domain to another DNS server by running the following command:

$ echo 'server=/root-servers.net/<DNS_server_IP>' > /etc/dnsmasq.d/delegate-root-servers.net.conf

Restart the dnsmasq service by running the following command:

$ systemctl restart dnsmasq

Confirm that the root-servers.net domain is delegated to another DNS server by running the following command:

$ journalctl -u dnsmasq | grep root-servers.net

Example output
Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.netVerify that the DNS server can resolve the NS record for the
domain by running the following command:root-servers.net$ host -t NS root-servers.net. 127.0.0.1Example output
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

root-servers.net name server root-servers.net.