Chapter 5. Multiple networks


5.1. About using multiple networks

In addition to the default OVN-Kubernetes Container Network Interface (CNI) plugin, the MicroShift Multus CNI is available to chain other CNI plugins. Installing and using MicroShift Multus is optional.

5.1.1. Additional networks in MicroShift

During cluster installation, the default pod network is configured with default values unless you customize the configuration. The default network handles all ordinary network traffic for the cluster. Using the MicroShift Multus CNI plugin, you can add additional interfaces to pods from other networks. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.

5.1.1.1. Supported additional networks for network isolation

The following additional networks are supported in MicroShift 4.17:

  • Bridge: Allows pods on the same host to communicate with each other and the host.
  • IPVLAN: Allows pods on a host to communicate with other hosts.

    • This is similar to a MACVLAN-based additional network.
    • Each pod shares the same MAC address as the parent physical network interface, unlike a MACVLAN-based additional network.
  • MACVLAN: Allows pods on a host to communicate with other hosts and the pods on those other hosts by using a physical network interface. Each pod that is attached to a MACVLAN-based additional network is provided with a unique MAC address.
Note

Setting network policies for additional networks is not supported.

5.1.1.2. Use case: Additional networks for network isolation

You can use an additional network in situations where network isolation is needed, including control plane and data plane separation. For example, you can configure an additional interface if you want pods to access a network on the host and also communicate with devices deployed to the edge. These edge devices might be on an isolated operator network or are periodically disconnected.

Isolating network traffic is useful for the following performance and security reasons:

Performance
You can send traffic on two different planes to manage the amount of traffic on each plane.
Security
You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
Important

The Multus CNI plugin is deployed when the MicroShift service starts up. Therefore, a host restart is required if the microshift-multus RPM package is added after MicroShift has started. Restarting ensures that all containers are re-created with Multus annotations.

5.1.1.3. How additional networks are implemented

All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network.

  • You can view the interfaces for a pod by using the oc get pod <pod_name> -o=jsonpath='{ .metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status }' command.
  • If you add additional network interfaces that use the MicroShift Multus CNI, they are named net1, net2, ..., netN.
  • The CNI configuration is created when the MicroShift Multus DaemonSet starts. This configuration is autogenerated and includes the primary CNI that is the default delegate. For MicroShift, the default CNI is OVN-Kubernetes.
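For reference, you can inspect the autogenerated CNI configuration on the host after the DaemonSet starts. The file name shown below is the common Multus default and might vary by version:

    $ sudo cat /etc/cni/net.d/00-multus.conf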

5.1.1.4. How to attach additional networks to pods

To attach additional network interfaces to a pod, you must create and apply configurations that define how the interfaces are attached.

  • You must configure any additional networks you want to use. Because of individual differences in networks, no default configuration is provided.
  • You must apply a YAML manifest to specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A configuration inside each of these CRs defines how that interface is created.
  • CRI-O must be configured to use Multus. A default configuration is included in the microshift-multus RPM.

    • If the Multus CNI is installed on an existing MicroShift instance, the host must be restarted.
    • If the Multus CNI is installed alongside MicroShift, you can add CRs and pods and then start the MicroShift service. Restarting the host in this scenario is not needed.

5.1.1.5. Configurations for additional network types

The specific configuration fields for additional networks are described in the following sections.

5.1.2. Installing the Multus CNI plugin on a running cluster

If you want to attach additional networks to a pod for high-performance network configurations, you can install the MicroShift Multus RPM package. After installation, a host restart is required to recreate all the pods with the Multus annotation.

Important

Uninstalling the Multus CNI plugin is not supported.

Prerequisites

  1. You have root access to the host.

Procedure

  1. Install the Multus RPM package by running the following command:

    $ sudo dnf install microshift-multus
    Tip

    If you create your custom resources (CRs) for additional networks now, you can complete your installation and apply configurations with one restart.

  2. To apply the package manifest to an active cluster, restart the host by running the following command:

    $ sudo systemctl reboot

Verification

  1. After restarting, ensure that the Multus CNI plugin components are created by running the following command:

    $ oc get pod -A | grep multus

    Example output

    openshift-multus      dhcp-daemon-ktzqf     1/1   Running   0     45h
    openshift-multus      multus-4frf4          1/1   Running   0     45h

Next steps

  1. If you have not done so, configure and apply the additional networks you want to use.
  2. Deploy your applications that use the created CRs.

5.1.3. Configuration for a bridge additional network

The following object describes the configuration parameters for the Bridge CNI plugin:

Table 5.1. Bridge CNI plugin JSON configuration object
  • cniVersion (string): The CNI specification version. The 0.4.0 value is required.
  • type (string): The name of the CNI plugin to configure: bridge.
  • ipam (object): The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
  • bridge (string): Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0.
  • ipMasq (boolean): Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge’s IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false.
  • isGateway (boolean): Optional: Set to true to assign an IP address to the bridge. The default value is false.
  • isDefaultGateway (boolean): Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically.
  • forceAddress (boolean): Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false.
  • hairpinMode (boolean): Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false.
  • promiscMode (boolean): Optional: Set to true to enable promiscuous mode on the bridge. The default value is false.
  • mtu (integer): Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
  • enabledad (boolean): Optional: Enables duplicate address detection for the container-side veth. The default value is false.
  • macspoofchk (boolean): Optional: Enables MAC spoof checking, limiting the traffic originating from the container to the MAC address of the interface. The default value is false.

5.1.3.1. Bridge CNI plugin configuration example

The following example configures an additional network named bridge-conf for use with the MicroShift Multus CNI:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
      "cniVersion": "0.4.0",
      "type": "bridge",
      "bridge": "test-bridge",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "10.10.0.0/16",
              "rangeStart": "10.10.1.20",
              "rangeEnd": "10.10.3.50",
              "gateway": "10.10.0.254"
            }
          ]
        ],
        "dataDir": "/var/lib/cni/test-bridge"
      }
    }'

5.1.4. Configuration for an ipvlan additional network

The following object describes the configuration parameters for the IPVLAN CNI plugin:

Table 5.2. IPVLAN CNI plugin JSON configuration object
  • cniVersion (string): The CNI specification version. The 0.3.1 value is required.
  • name (string): The value for the name parameter you provided previously for the CNO configuration.
  • type (string): The name of the CNI plugin to configure: ipvlan.
  • ipam (object): The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained.
  • mode (string): Optional: The operating mode for the virtual network. The value must be l2, l3, or l3s. The default value is l2.
  • master (string): Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.
  • mtu (integer): Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
  • linkInContainer (boolean): Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface.

Important
  • The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container is not able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).
  • A single master interface cannot simultaneously be configured to use both macvlan and ipvlan.
  • For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the previous result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the previous result is used to configure the ipvlan interface.
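The following is a minimal sketch of such a chained configuration, not a definitive implementation. It assumes the host-device plugin as the earlier plugin that moves eth1 into the pod network namespace, and it omits both master and a first-plugin ipam so that the ipvlan plugin enslaves the interface from the previous result, as described above. The interface and address values are placeholders; validate the plugin combination in your environment before use:

{
  "cniVersion": "0.3.1",
  "name": "chained-ipvlan-net",
  "plugins": [
    {
      "type": "host-device",
      "device": "eth1"
    },
    {
      "type": "ipvlan",
      "mode": "l3",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.20.10/24"
          }
        ]
      }
    }
  ]
}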

5.1.4.1. IPVLAN CNI plugin configuration example

The following example configures an additional network named ipvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
       {
         "address": "192.168.10.10/24"
       }
    ]
  }
}
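Like the bridge example earlier, this JSON is applied to the cluster by embedding it in the config field of a NetworkAttachmentDefinition CR; the same pattern applies to the macvlan example in the next section. A minimal sketch:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "ipvlan-net",
      "type": "ipvlan",
      "master": "eth1",
      "linkInContainer": false,
      "mode": "l3",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.10.10/24"
          }
        ]
      }
    }'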

5.1.5. Configuration for a macvlan additional network

The following object describes the configuration parameters for the MACVLAN CNI plugin:

Table 5.3. MACVLAN CNI plugin JSON configuration object
  • cniVersion (string): The CNI specification version. The 0.3.1 value is required.
  • name (string): The value for the name parameter you provided previously for the CNO configuration.
  • type (string): The name of the CNI plugin to configure: macvlan.
  • ipam (object): The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
  • mode (string): Optional: Configures traffic visibility on the virtual network. Must be either bridge, passthru, private, or vepa. If a value is not provided, the default value is bridge.
  • master (string): Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used.
  • mtu (integer): Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
  • linkInContainer (boolean): Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface.

Note

If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.

5.1.5.1. MACVLAN CNI plugin configuration example

The following example configures an additional network named macvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
    }
}

5.1.6. Additional resources

5.2. Configuring and using multiple networks

After you have installed the MicroShift Multus Container Network Interface (CNI) plugin, you can configure and use other networking plugins by applying additional network configurations.

5.2.1. IP address management types and additional networks

IP addresses are provisioned for an additional network through an IP Address Management (IPAM) CNI plugin that you configure. Supported IP address provisioning types in MicroShift are host-local, static, and dhcp.
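For reference, the three types produce ipam stanzas of the following general forms inside a NetworkAttachmentDefinition config; the subnet and address values shown here are placeholders:

"ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.10.0.0/24" }]] }

"ipam": { "type": "static", "addresses": [{ "address": "192.168.10.10/24" }] }

"ipam": { "type": "dhcp" }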

5.2.1.1. bridge interface specifics

When using the bridge type interface and the dhcp IPAM, a DHCP server listening on the bridged network is required. If you are using a firewall, you must also configure the firewalld service to allow DHCP traffic on the network zone, for example by running the firewall-cmd --add-service=dhcp command.
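A minimal sketch, assuming the bridged network is covered by the default firewalld zone; add the --zone=<zone> option to target a specific zone:

    $ sudo firewall-cmd --permanent --add-service=dhcp
    $ sudo firewall-cmd --reload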

5.2.1.2. macvlan interface specifics

The macvlan type interface accesses the network that the host is connected to. This means that the interface can receive an IP address from the DHCP server on the host network if the dhcp IPAM plugin is used.

5.2.1.3. ipvlan interface specifics

The ipvlan type interface also has direct access to the host network, but it shares a MAC address with the host interface. The ipvlan interface cannot be used with the dhcp IPAM plugin because of the shared MAC address: the IPAM plugin does not support the DHCP protocol with a ClientID.

5.2.2. Creating a NetworkAttachmentDefinition for an additional network

Use the following procedure to create a NetworkAttachmentDefinition configuration file for an additional network. In this example, a bridge-type interface is used. You can also adapt this example workflow, which uses host-local IP address management (IPAM), to configure other supported additional network types.

Important

If you use bridge and the dhcp IPAM, a DHCP server listening on the bridged network is required. If you are also using a firewall, you must configure the firewalld service to allow DHCP traffic on the network zone. You can run the firewall-cmd --add-service=dhcp command in this case.

Prerequisites

  • The MicroShift Multus CNI is installed.
  • The OpenShift CLI (oc) is installed.
  • The cluster is running.

Procedure

  1. Optional: Verify that the MicroShift cluster is running with the Multus CNI by running the following command:

    $ oc get pods -n openshift-multus

    Example output

    NAME                READY   STATUS    RESTARTS   AGE
    dhcp-daemon-dfbzw   1/1     Running   0          5h
    multus-rz8xc        1/1     Running   0          5h

  2. Create a NetworkAttachmentDefinition configuration file, and then apply it by running the following command, using the following example file for reference:

    $ oc apply -f network-attachment-definition.yaml

    Example NetworkAttachmentDefinition file

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-conf
    spec:
      config: '{
          "cniVersion": "0.4.0",
          "type": "bridge", 1
          "bridge": "br-test", 2
          "mode": "bridge",
          "ipam": {
            "type": "host-local", 3
            "ranges": [
              [
                {
                  "subnet": "10.10.0.0/24",
                  "rangeStart": "10.10.0.20",
                  "rangeEnd": "10.10.0.50",
                  "gateway": "10.10.0.254"
                 }
              ],
              [
                {
                  "subnet": "fd00:IJKL:MNOP:10::0/64", 4
                  "rangeStart": "fd00:IJKL:MNOP:10::1",
                  "rangeEnd": "fd00:IJKL:MNOP:10::9"
            "dataDir": "/var/lib/cni/br-test"
          }
        }'

    1
    The type value specifies a name of the CNI plugin. This example uses the bridge type.
    2
    The bridge value is the name of the bridge on the MicroShift host that is used. The additional interface of the pod is connected to that bridge. If the interface does not exist on the host, the Bridge CNI creates it. If the interface already exists, it is reused. In this example, the name of the interface is br-test.
    3
    The IPAM type.
    4
    IPv6 addresses can be added to the secondary interface.
    Note

    Using the name of the bridge is specific to the bridge type of plugin. Other plugins use different fields in their NetworkAttachmentDefinitions. For example, the macvlan and ipvlan configurations use master to specify the host interface to attach.

5.2.3. Adding a pod to an additional network

You can add a pod to an additional network. At the time a pod is created, additional networks are attached to it. The pod continues to send normal cluster-related network traffic over the default network.

If you want to attach additional networks to a pod that is already running, you must restart the pod.
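For example, a pod defined by a standalone manifest can be restarted by deleting it and re-applying the manifest; pods managed by a controller, such as a Deployment, are recreated automatically after deletion. The names here are placeholders:

    $ oc delete pod <pod_name>
    $ oc apply -f <pod_manifest>.yaml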

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • The cluster is running.
  • A network defined by a NetworkAttachmentDefinition object that you want to attach the pod to exists.

Procedure

  1. Add an annotation to a Pod YAML file. Only one of the following annotation formats can be used:

    1. To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod:

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1
      # ...
      1
      Replace <network> with the name of the additional network to associate with the pod. To specify more than one additional network, separate each network with a comma without whitespace, as shown in the second example that follows. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network.

      Example annotation for a bridge-type additional network

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: bridge-conf
      # ...
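      For example, assuming that a second NetworkAttachmentDefinition named ipvlan-net also exists, the following annotation attaches the pod to both additional networks:

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: bridge-conf,ipvlan-net
      # ...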

    2. To attach an additional network with customizations, add an annotation with the following format:

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: |-
            [
              {
                "name": "<network>", 1
                "namespace": "<namespace>", 2
                "default-route": ["<default-route>"] 3
              }
            ]
      # ...
      1
      Specify the name of the additional network defined by a NetworkAttachmentDefinition object.
      2
      Specify the namespace where the NetworkAttachmentDefinition object is defined.
      3
      Optional: Specify an override for the default route, such as 192.168.17.1.
  2. Create a Pod YAML file that includes the NetworkAttachmentDefinition annotation for an additional network, and then apply it by running the following command, using the example YAML for reference:

    $ oc apply -f ./<test-bridge>.yaml 1
    1
    Replace <test-bridge> with the name of the pod YAML file that you want to use.

    Example output

    pod/test-bridge created

    Example test-bridge pod YAML

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-bridge
      annotations:
        k8s.v1.cni.cncf.io/networks: bridge-conf
      labels:
        app: test-bridge
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: hello-microshift
        image: quay.io/microshift/busybox:1.36
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo -ne \"HTTP/1.0 200 OK\r\nContent-Length: 16\r\n\r\nHello MicroShift\" | nc -l -p 8080 ; done"]
        ports:
        - containerPort: 8080
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1001
          runAsGroup: 1001
          seccompProfile:
            type: RuntimeDefault

  3. Make sure that the NetworkAttachmentDefinition annotation is correct:

    Example NetworkAttachmentDefinition annotation

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: bridge-conf
    # ...

  4. Optional: To confirm that the NetworkAttachmentDefinition annotation exists in a Pod YAML, run the following command, replacing <name> with the name of the pod.

    $ oc get pod <name> -o yaml 1
    1
    Replace <name> with the pod name that you want to use. In the following example, test-bridge is used.

    In the following example, the test-bridge pod is attached to the net1 additional network:

    $ oc get pod test-bridge -o yaml

    Example output

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: bridge-conf
        k8s.v1.cni.cncf.io/network-status: |- 1
          [{
              "name": "ovn-kubernetes",
              "interface": "eth0",
              "ips": [
                  "10.42.0.18"
              ],
              "default": true,
              "dns": {}
          },{
              "name": "bridge-conf",
              "interface": "net1",
              "ips": [
                  "20.2.2.100"
              ],
              "mac": "22:2f:60:a5:f8:00",
              "dns": {}
          }]
      name: test-bridge
      namespace: default
    spec:
    # ...
    status:
    # ...

    1
    The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value.
  5. Verify that the pod is running by running the following command:

    $ oc get pod

    Example output

    NAME          READY   STATUS    RESTARTS   AGE
    test-bridge   1/1     Running   0          81s

5.2.4. Configuring an additional network

After you have created the NetworkAttachmentDefinition object and applied it, use the following example procedure to configure an additional network. In this example, the bridge type additional network is used. You can also use this workflow for other additional network types.

Prerequisite

  1. You created and applied the NetworkAttachmentDefinition object configuration.

Procedure

  1. Verify that the bridge was created on the host by running the following command:

    $ ip a show br-test

    Example output

    22: br-test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 96:bf:ca:be:1d:15 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::34e2:bbff:fed2:31f2/64 scope link
           valid_lft forever preferred_lft forever

  2. Configure an IP address for the bridge by running the following command:

    $ sudo ip addr add 10.10.0.10/24 dev br-test
  3. Verify that the IP address configuration is added to the bridge by running the following command:

    $ ip a show br-test

    Example output

    22: br-test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 96:bf:ca:be:1d:15 brd ff:ff:ff:ff:ff:ff
        inet 10.10.0.10/24 scope global br-test 1
           valid_lft forever preferred_lft forever
        inet6 fe80::34e2:bbff:fed2:31f2/64 scope link
           valid_lft forever preferred_lft forever

    1
    The IP address is configured as expected.
  4. Verify the IP address of the pod by running the following command:

    $ oc get pod test-bridge --output=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'

    Example output

    [{
        "name": "ovn-kubernetes",
        "interface": "eth0",
        "ips": [
            "10.42.0.17"
        ],
        "mac": "0a:58:0a:2a:00:11",
        "default": true,
        "dns": {}
    },{
        "name": "default/bridge-conf", 1
        "interface": "net1",
        "ips": [
            "10.10.0.20"
        ],
        "mac": "82:01:98:e5:0c:b7",
        "dns": {}

    1
    The bridge additional network is attached as expected.
  5. Optional: You can use oc exec to access the pod and confirm its interfaces by using the ip command:

    $ oc exec -ti test-bridge -- ip a

    Example output

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
        link/ether 0a:58:0a:2a:00:11 brd ff:ff:ff:ff:ff:ff
        inet 10.42.0.17/24 brd 10.42.0.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::858:aff:fe2a:11/64 scope link
           valid_lft forever preferred_lft forever
    3: net1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
        link/ether 82:01:98:e5:0c:b7 brd ff:ff:ff:ff:ff:ff
        inet 10.10.0.20/24 brd 10.10.0.255 scope global net1 1
           valid_lft forever preferred_lft forever
        inet6 fe80::8001:98ff:fee5:cb7/64 scope link
           valid_lft forever preferred_lft forever

    1
    The pod is attached to the 10.10.0.20 IP address on the net1 interface as expected.
  6. Confirm that the connection is working as expected by accessing the HTTP server in the pod from the MicroShift host. Use the following command:

    $ curl 10.10.0.20:8080

    Example output

    Hello MicroShift

5.2.5. Removing a pod from an additional network

You can remove a pod from an additional network only by deleting the pod.

Prerequisites

  • An additional network is attached to the pod.
  • Install the OpenShift CLI (oc).
  • Log in to the cluster.

Procedure

  • To delete the pod, enter the following command:

    $ oc delete pod <name> -n <namespace>
    • <name> is the name of the pod.
    • <namespace> is the namespace that contains the pod.

5.2.6. Troubleshooting Multus networking

If the settings for multiple networks are not configured properly, pods can fail to start. The following steps can help you resolve a couple of common scenarios.

5.2.6.1. Pod networking cannot be configured

If the Multus CNI plugin cannot apply networking annotations to a pod, the pod does not start. Pods can also fail to start if any of the additional network CNIs fail.

Example error

Warning  NoNetworkFound     0s     multus    cannot find a network-attachment-definition (bad-ref-doesnt-exist) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io "bad-ref-doesnt-exist" not found
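Errors like this appear as events on the failing pod. One way to view them, replacing <pod_name> with the name of the pod:

$ oc describe pod <pod_name>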

In this case, you can take the following steps to troubleshoot CNI failures:

  • Verify the values in both the NetworkAttachmentDefinitions and the annotations.
  • Remove the annotation to verify whether the pod is created successfully with just the default network. If not, this might indicate a networking problem other than the Multus configuration.
  • If you are a device administrator, you can inspect the crio.service or microshift.service logs, paying special attention to those that are generated by the kubelet.

    For example, the following error from the kubelet shows that the primary CNI is not running. This situation can be caused by pods not starting or by a CRI-O misconfiguration, such as an incorrect cni_default_network setting.

    Example kubelet-generated error

    Feb 06 13:47:31 dev microshift[1494]: kubelet E0206 13:47:31.163290    1494 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/samplepod" podUID="fe0f7f7a-8c47-4488-952b-8abc0d8e2602"
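    One way to retrieve such messages is to search the system journal, as in the following sketch; adjust the unit names and time window as needed:

    $ sudo journalctl -u microshift -u crio --since "30 minutes ago" | grep -i cni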

5.2.6.2. Missing configuration file

Sometimes a pod cannot be created because the annotations reference a NetworkAttachmentDefinition configuration YAML that does not exist. In this case, an error such as the following is usually produced:

Example log

cannot find a network-attachment-definition (bad-conf) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io "bad-conf" not found" pod="default/samplepod"

Example error output

"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_samplepod_default_5fa13105-1bfb-4c6b-aee7-3437cfb50e25_0(7517818bd8e85f07b551f749c7529be88b4e7daef0dd572d049aa636950c76c6): error adding pod default_samplepod to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [default/samplepod/5fa13105-1bfb-4c6b-aee7-3437cfb50e25]: error loading k8s delegates k8s args: TryLoadPodDelegates: error in getting k8s network for pod: GetNetworkDelegates: failed getting the delegate: getKubernetesDelegate: cannot find a network-attachment-definition (bad-conf) in namespace (default): network-attachment-definitions.k8s.cni.cncf.io \"bad-conf\" not found" pod="default/samplepod"

To fix this error, create and apply the missing NetworkAttachmentDefinition YAML.

5.2.7. Additional resources
