Chapter 23. Multiple networks


23.1. Understanding multiple networks

In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI).

OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.

23.1.1. Usage scenarios for an additional network

You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:

Performance
You can send traffic on two different planes to manage how much traffic flows along each plane.
Security
You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.

All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1, net2, …​, netN.
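
For example, for a pod with one additional network attached, the command output might resemble the following trimmed, illustrative listing. The interface names are the significant part; the addresses shown are placeholders:

$ oc exec -it example-pod -- ip a

Example output (illustrative)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    inet 127.0.0.1/8 scope host lo
3: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 state UP
    inet 10.128.2.55/23 brd 10.128.3.255 scope global eth0
4: net1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 192.0.2.20/24 brd 192.0.2.255 scope global net1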

To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created.
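
The following minimal sketch shows that relationship, assuming a hypothetical macvlan-based additional network named example-net and a pod that references it by name; the configuration options are described in detail later in this chapter:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-net
  namespace: example-namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "dhcp" }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: example-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: example-net
spec:
  containers:
  - name: example-container
    image: registry.example.com/example:latest
    command: ["sleep", "infinity"]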

23.1.2. Additional networks in OpenShift Container Platform

OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster:

23.2. Configuring an additional network

As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported:

23.2.1. Approaches to managing an additional network

You can manage the life cycle of an additional network by using one of two approaches. The approaches are mutually exclusive, and you can use only one approach to manage an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure.

For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment.

  • Modifying the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address.
  • Applying a YAML manifest: You can manage the additional network directly by creating a NetworkAttachmentDefinition object. This approach allows for the chaining of CNI plugins.
Note

When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface:

$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>

23.2.2. Configuration for an additional network attachment

An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group.

Important

Do not store any sensitive information or a secret in the NetworkAttachmentDefinition object because this information is accessible by the project administration user.

The configuration for the API is described in the following table:

Table 23.1. NetworkAttachmentDefinition API fields
Field | Type | Description

metadata.name

string

The name for the additional network.

metadata.namespace

string

The namespace that the object is associated with.

spec.config

string

The CNI plugin configuration in JSON format.

23.2.2.1. Configuration of an additional network through the Cluster Network Operator

The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration.

The following YAML describes the configuration parameters for managing an additional network with the CNO:

Cluster Network Operator configuration

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks: 1
  - name: <name> 2
    namespace: <namespace> 3
    rawCNIConfig: |- 4
      {
        ...
      }
    type: Raw

1
An array of one or more additional network configurations.
2
The name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
3
The namespace to create the network attachment in. If you do not specify a value then the default namespace is used.
Important

To prevent namespace issues for the OVN-Kubernetes network plugin, do not name your additional network attachment default, because this name is reserved for the default additional network attachment.

4
A CNI plugin configuration in JSON format.

23.2.2.2. Configuration of an additional network from a YAML manifest

The configuration for an additional network is specified from a YAML configuration file, such as in the following example:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name> 1
spec:
  config: |- 2
    {
      ...
    }
1
The name for the additional network attachment that you are creating.
2
A CNI plugin configuration in JSON format.

23.2.3. Configurations for additional network types

The specific configuration fields for additional networks are described in the following sections.

23.2.3.1. Configuration for a bridge additional network

The following object describes the configuration parameters for the bridge CNI plugin:

Table 23.2. Bridge CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: bridge.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

bridge

string

Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0.

ipMasq

boolean

Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge’s IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false.

isGateway

boolean

Optional: Set to true to assign an IP address to the bridge. The default value is false.

isDefaultGateway

boolean

Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically.

forceAddress

boolean

Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false.

hairpinMode

boolean

Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false.

promiscMode

boolean

Optional: Set to true to enable promiscuous mode on the bridge. The default value is false.

vlan

integer

Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.

preserveDefaultVlan

boolean

Optional: Indicates whether the default VLAN must be preserved on the veth end that is connected to the bridge. Defaults to true.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

enabledad

boolean

Optional: Enables duplicate address detection for the container side veth. The default value is false.

macspoofchk

boolean

Optional: Enables MAC spoof check, limiting the traffic originating from the container to the MAC address of the interface. The default value is false.

Note

The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.

Note

To configure the uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

$ bridge vlan add vid VLAN_ID dev DEV
23.2.3.1.1. bridge configuration example

The following example configures an additional network named bridge-net:

{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
    }
}

23.2.3.2. Configuration for a host device additional network

Note

Specify your network device by setting only one of the following parameters: device, hwaddr, kernelpath, or pciBusID.

The following object describes the configuration parameters for the host-device CNI plugin:

Table 23.3. Host device CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: host-device.

device

string

Optional: The name of the device, such as eth0.

hwaddr

string

Optional: The device hardware MAC address.

kernelpath

string

Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6.

pciBusID

string

Optional: The PCI address of the network device, such as 0000:00:1f.6.

23.2.3.2.1. host-device configuration example

The following example configures an additional network named hostdev-net:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1"
}
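
As an alternative sketch, the same plugin can select the device by its PCI address instead of its interface name. The following example reuses the placeholder PCI address from the table above:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-pci-net",
  "type": "host-device",
  "pciBusID": "0000:00:1f.6"
}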

23.2.3.3. Configuration for a VLAN additional network

The following object describes the configuration parameters for the VLAN CNI plugin:

Table 23.4. VLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: vlan.

master

string

The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.

vlanId

integer

Set the ID of the VLAN.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

dns

object

Optional: DNS information to return, for example, a priority-ordered list of DNS nameservers.

linkInContainer

boolean

Optional: Specifies if the master interface is in the container network namespace or the main network namespace.

23.2.3.3.1. vlan configuration example

The following example configures an additional network named vlan-net:

{
  "name": "vlan-net",
  "cniVersion": "0.3.1",
  "type": "vlan",
  "master": "eth0",
  "mtu": 1500,
  "vlanId": 5,
  "linkInContainer": false,
  "ipam": {
      "type": "host-local",
      "subnet": "10.1.1.0/24"
  },
  "dns": {
      "nameservers": [ "10.1.1.1", "8.8.8.8" ]
  }
}

23.2.3.4. Configuration for an IPVLAN additional network

The following object describes the configuration parameters for the IPVLAN CNI plugin:

Table 23.5. IPVLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: ipvlan.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained.

mode

string

Optional: The operating mode for the virtual network. The value must be l2, l3, or l3s. The default value is l2.

master

string

Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

Note
  • The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container will not be able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).
  • A single master interface cannot simultaneously be configured to use both macvlan and ipvlan.
  • For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the previous result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the previous result is used to configure the ipvlan interface.
23.2.3.4.1. ipvlan configuration example

The following example configures an additional network named ipvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
       {
         "address": "192.168.10.10/24"
       }
    ]
  }
}

23.2.3.5. Configuration for a MACVLAN additional network

The following object describes the configuration parameters for the macvlan CNI plugin:

Table 23.6. MACVLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: macvlan.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

mode

string

Optional: Configures traffic visibility on the virtual network. Must be bridge, passthru, private, or vepa. If a value is not provided, the default value is bridge.

master

string

Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

Note

If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.

23.2.3.5.1. macvlan configuration example

The following example configures an additional network named macvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
    }
}

23.2.3.6. Configuration for an OVN-Kubernetes additional network

The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD).

Important

Configuration for an OVN-Kubernetes additional network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CR.

The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.

Note

Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported.

23.2.3.6.1. OVN-Kubernetes network plugin JSON configuration table

The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin:

Table 23.7. OVN-Kubernetes network plugin JSON configuration table
Field | Type | Description

cniVersion

string

The CNI specification version. The required value is 0.3.1.

name

string

The name of the network. These networks are not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinitions that exist in two different namespaces. This ensures that pods using the NetworkAttachmentDefinition in their own namespaces can communicate over the same secondary network. However, those two NetworkAttachmentDefinitions must also share the same network-specific parameters, such as topology, subnets, mtu, and excludeSubnets.

type

string

The name of the CNI plugin to configure. The required value is ovn-k8s-cni-overlay.

topology

string

The topological configuration for the network. The required value is layer2.

subnets

string

The subnet to use for the network across the cluster.

For "topology":"layer2" deployments, IPv6 (2001:DBB::/64) and dual-stack (192.168.100.0/24,2001:DBB::/64) subnets are supported.

mtu

integer

The maximum transmission unit (MTU). The default value is 1300.

netAttachDefName

string

The metadata namespace and name of the network attachment definition object where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition in namespace ns1 named l2-network, this should be set to ns1/l2-network.

excludeSubnets

string

A comma-separated list of CIDRs and IPs. IPs are removed from the assignable IP pool, and are never passed to the pods. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IPs for the pods. Port security only prevents MAC spoofing.

23.2.3.6.2. Configuration for a switched topology

The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.

Note

Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.

The following JSON describes the configuration fields that are needed for a switched (layer 2) secondary network. A complete NetworkAttachmentDefinition object that embeds this configuration is shown after the example.

    {
            "cniVersion": "0.3.1",
            "name": "l2-network",
            "type": "ovn-k8s-cni-overlay",
            "topology":"layer2",
            "subnets": "10.100.200.0/24",
            "mtu": 1300,
            "netAttachDefName": "ns1/l2-network",
            "excludeSubnets": "10.100.200.0/29"
    }
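
For reference, the following sketch embeds the same configuration in a complete NetworkAttachmentDefinition object. Note that the metadata.namespace and metadata.name values together must match the netAttachDefName value, ns1/l2-network in this example:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns1/l2-network",
      "excludeSubnets": "10.100.200.0/29"
    }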
23.2.3.6.3. Configuring pods for additional networks

You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation.

The following example provisions a pod with a secondary attachment that uses the layer 2 attachment configuration presented in this guide.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: l2-network
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
23.2.3.6.4. Configuring pods with a static IP address

The following example provisions a pod with a static IP address.

Note
  • You can only specify the IP address for a pod’s secondary network attachment for layer 2 attachments.
  • Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "l2-network", 1
        "mac": "02:03:04:05:06:07", 2
        "interface": "myiface1", 3
        "ips": [
          "192.0.2.20/24"
          ] 4
      }
    ]'
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
1
The name of the network. This value must be unique across all NetworkAttachmentDefinitions.
2
The MAC address to be assigned for the interface.
3
The name of the network interface to be created for the pod.
4
The IP addresses to be assigned to the network interface.

23.2.4. Configuration of IP address assignment for an additional network

The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins.

You can use the following IP address assignment types:

  • Static assignment.
  • Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network.
  • Dynamic assignment through the Whereabouts IPAM CNI plugin.

23.2.4.1. Static IP address assignment configuration

The following table describes the configuration for static IP address assignment:

Table 23.8. ipam static configuration object
Field | Type | Description

type

string

The IPAM address type. The value static is required.

addresses

array

An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.

routes

array

An array of objects specifying routes to configure inside the pod.

dns

array

Optional: An array of objects specifying the DNS configuration.

The addresses array requires objects with the following fields:

Table 23.9. ipam.addresses[] array
Field | Type | Description

address

string

An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24, then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0.

gateway

string

The default gateway to route egress network traffic to.

Table 23.10. ipam.routes[] array
Field | Type | Description

dst

string

The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route.

gw

string

The gateway where network traffic is routed.

Table 23.11. ipam.dns object
Field | Type | Description

nameservers

array

An array of one or more IP addresses to send DNS queries to.

domain

string

The default domain to append to a hostname. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.

search

array

An array of domain names to append to an unqualified hostname, such as example-host, during a DNS lookup query.

Static IP address assignment configuration example

{
  "ipam": {
    "type": "static",
      "addresses": [
        {
          "address": "191.168.1.7/24"
        }
      ]
  }
}
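
A fuller sketch that also sets a gateway, a default route, and DNS information, using placeholder addresses, might look like the following:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.1.7/24",
        "gateway": "192.168.1.1"
      }
    ],
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "192.168.1.1"
      }
    ],
    "dns": {
      "nameservers": ["192.168.1.1"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}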

23.2.4.2. Dynamic IP address (DHCP) assignment configuration

The following JSON describes the configuration for dynamic IP address assignment with DHCP.

Renewal of DHCP leases

A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }
  # ...

Table 23.12. ipam DHCP configuration object
Field | Type | Description

type

string

The IPAM address type. The value dhcp is required.

Dynamic IP address (DHCP) assignment configuration example

{
  "ipam": {
    "type": "dhcp"
  }
}

23.2.4.3. Dynamic IP address assignment configuration with Whereabouts

The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.

The following table describes the configuration for dynamic IP address assignment with Whereabouts:

Table 23.13. ipam whereabouts configuration object
Field | Type | Description

type

string

The IPAM address type. The value whereabouts is required.

range

string

An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses.

exclude

array

Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned.

Dynamic IP address assignment configuration example that uses Whereabouts

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/27",
    "exclude": [
       "192.0.2.192/30",
       "192.0.2.196/32"
    ]
  }
}

23.2.4.4. Creating a whereabouts-reconciler daemon set

The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down.

Note

You can also use a NetworkAttachmentDefinition custom resource (CR) for dynamic IP address assignment.

The whereabouts-reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest.

To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file.

Use the following procedure to deploy the whereabouts-reconciler daemon set.

Procedure

  1. Edit the Network.operator.openshift.io custom resource (CR) by running the following command:

    $ oc edit network.operator.openshift.io cluster
  2. Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR):

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    # ...
    spec:
      additionalNetworks:
      - name: whereabouts-shim
        namespace: default
        rawCNIConfig: |-
          {
           "name": "whereabouts-shim",
           "cniVersion": "0.3.1",
           "type": "bridge",
           "ipam": {
             "type": "whereabouts"
           }
          }
        type: Raw
    # ...
  3. Save the file and exit the text editor.
  4. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command:

    $ oc get all -n openshift-multus | grep whereabouts-reconciler

    Example output

    pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s
    pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s
    pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s
    pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s
    pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s
    pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s
    daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s

23.2.4.5. Configuring the Whereabouts IP reconciler schedule

The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might result in exhausting IPs and therefore prevent new pods from getting an IP allocated to them.

Use this procedure to change the frequency at which the IP reconciler runs.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running.

Procedure

  1. Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler:

    $ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *"

    This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements.

    Note

    The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five fields. The sixth field, which is used to denote seconds, is currently not supported.

  2. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command:

    $ oc get all -n openshift-multus | grep whereabouts-reconciler

    Example output

    pod/whereabouts-reconciler-2p7hw                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-76jk7                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-94zw6                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-mfh68                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-pgshz                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-xn5xz                   1/1     Running   0             4m14s
    daemonset.apps/whereabouts-reconciler          6         6         6       6            6           kubernetes.io/os=linux   4m16s

  3. Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval:

    $ oc -n openshift-multus logs whereabouts-reconciler-2p7hw

    Example output

    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME
    2024-02-02T16:33:54Z [verbose] using expression: */15 * * * *
    2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". New cron expression: */15 * * * *
    2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * *
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE
    2024-02-02T16:45:00Z [verbose] starting reconciler run
    2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data
    2024-02-02T16:45:00Z [debug] listing IP pools
    2024-02-02T16:45:00Z [debug] no IP addresses to cleanup
    2024-02-02T16:45:00Z [verbose] reconciler success

23.2.5. Creating an additional network attachment with the Cluster Network Operator

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically.

Important

Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Optional: Create the namespace for the additional networks:

    $ oc create namespace <namespace_name>
  2. To edit the CNO configuration, enter the following command:

    $ oc edit networks.operator.openshift.io cluster
  3. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      # ...
      additionalNetworks:
      - name: tertiary-net
        namespace: namespace2
        type: Raw
        rawCNIConfig: |-
          {
            "cniVersion": "0.3.1",
            "name": "tertiary-net",
            "type": "ipvlan",
            "master": "eth1",
            "mode": "l2",
            "ipam": {
              "type": "static",
              "addresses": [
                {
                  "address": "192.168.1.23/24"
                }
              ]
            }
          }
  4. Save your changes and quit the text editor to commit your changes.

Verification

  • Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object.

    $ oc get network-attachment-definitions -n <namespace>

    where:

    <namespace>
    Specifies the namespace for the network attachment that you added to the CNO configuration.

    Example output

    NAME                 AGE
    test-network-1       14m
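
    Optionally, inspect the rendered CNI configuration by retrieving the object in YAML format, for example:

    $ oc get network-attachment-definitions <name> -n <namespace> -o yaml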

23.2.6. Creating an additional network attachment by applying a YAML manifest

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file with your additional network configuration, such as in the following example:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: next-net
    spec:
      config: |-
        {
          "cniVersion": "0.3.1",
          "name": "work-network",
          "type": "host-device",
          "device": "eth1",
          "ipam": {
            "type": "dhcp"
          }
        }
  2. To create the additional network, enter the following command:

    $ oc apply -f <file>.yaml

    where:

    <file>
    Specifies the name of the file that contains the YAML manifest.

23.3. About virtual routing and forwarding

23.3.1. About virtual routing and forwarding

Virtual routing and forwarding (VRF) devices, combined with IP rules, provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by a cloud-native network function (CNF), and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways.

Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules, such as policy-based routing, to take precedence over the VRF device rules directing specific traffic.

23.3.1.1. Benefits of secondary networks for pods for telecommunications operators

In telecommunications use cases, each CNF can potentially be connected to multiple different networks that share the same address space. These secondary networks can potentially conflict with the cluster’s main network CIDR. Using the CNI VRF plugin, network functions can connect to the infrastructure of different customers by using the same IP address, keeping the customers isolated. The IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by a CNF and increases the visibility of network topologies of secondary networks.
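
The following sketch shows how the CNI VRF plugin is typically chained after another interface-creating plugin, such as macvlan, in a network attachment configuration. The interface, address, VRF name, and routing table values are placeholders:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-vrf",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.1.23/24"
          }
        ]
      }
    },
    {
      "type": "vrf",
      "vrfname": "vrf-1",
      "table": 1001
    }
  ]
}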

23.4. Configuring multi-network policy

As a cluster administrator, you can configure multi-network policies for additional networks. You can specify a multi-network policy for SR-IOV and macvlan additional networks. Macvlan additional networks are fully supported. Other types of additional networks, such as ipvlan, are not supported.

Important

Support for configuring multi-network policies for SR-IOV additional networks is a Technology Preview feature and is only supported with kernel network interface cards (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

Configured network policies are ignored in IPv6 networks.

23.4.1. Differences between multi-network policy and network policy

Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences:

  • You must use the MultiNetworkPolicy API:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
  • You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy.
  • You must specify an annotation with the name of the network attachment definition that defines the macvlan or SR-IOV additional network:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <network_name>

    where:

    <network_name>
    Specifies the name of a network attachment definition.

23.4.2. Enabling multi-network policy for the cluster

As a cluster administrator, you can enable multi-network policy support on your cluster.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  1. Create the multinetwork-enable-patch.yaml file with the following YAML:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      useMultiNetworkPolicy: true
  2. Configure the cluster to enable multi-network policy:

    $ oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml

    Example output

    network.operator.openshift.io/cluster patched
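
    To confirm that the setting was applied, you can inspect the field in the Network operator configuration, for example:

    $ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.useMultiNetworkPolicy}{"\n"}'

    The command prints true when multi-network policy support is enabled.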

23.4.3. Working with multi-network policy

As a cluster administrator, you can create, edit, view, and delete multi-network policies.

23.4.3.1. Prerequisites

  • You have enabled multi-network policy support for your cluster.

23.4.3.2. Creating a multi-network policy using the CLI

To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy rule:

    1. Create a <policy_name>.yaml file:

      $ touch <policy_name>.yaml

      where:

      <policy_name>
      Specifies the multi-network policy file name.
    2. Define a multi-network policy in the file that you just created, such as in the following examples:

      Deny ingress from all pods in all namespaces

      This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: deny-by-default
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <network_name>
      spec:
        podSelector:
        ingress: []

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Allow ingress from all pods in the same namespace

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: allow-same-namespace
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <network_name>
      spec:
        podSelector:
        ingress:
        - from:
          - podSelector: {}

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Allow ingress traffic to one pod from a particular namespace

       This policy allows traffic to pods labeled pod-a from pods running in namespace-y.

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: allow-traffic-pod
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <network_name>
      spec:
        podSelector:
         matchLabels:
            pod: pod-a
        policyTypes:
        - Ingress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                 kubernetes.io/metadata.name: namespace-y

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Restrict traffic to a service

       When applied, this policy ensures that every pod with both labels app=bookstore and role=api can be accessed only by pods with the label app=bookstore. In this example the application could be a REST API server, marked with labels app=bookstore and role=api.

      This example addresses the following use cases:

      • Restricting the traffic to a service to only the other microservices that need to use it.
      • Restricting the connections to a database to only permit the application using it.

        apiVersion: k8s.cni.cncf.io/v1beta1
        kind: MultiNetworkPolicy
        metadata:
          name: api-allow
          annotations:
            k8s.v1.cni.cncf.io/policy-for: <network_name>
        spec:
          podSelector:
            matchLabels:
              app: bookstore
              role: api
          ingress:
          - from:
              - podSelector:
                  matchLabels:
                    app: bookstore

        where:

        <network_name>
        Specifies the name of a network attachment definition.
  2. To create the multi-network policy object, enter the following command:

    $ oc apply -f <policy_name>.yaml -n <namespace>

    where:

    <policy_name>
    Specifies the multi-network policy file name.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created

Note

If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

23.4.3.3. Editing a multi-network policy

You can edit a multi-network policy in a namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  1. Optional: To list the multi-network policy objects in a namespace, enter the following command:

    $ oc get multi-networkpolicy

    where:

    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  2. Edit the multi-network policy object.

    • If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command.

      $ oc apply -n <namespace> -f <policy_file>.yaml

      where:

      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
      <policy_file>
      Specifies the name of the file containing the network policy.
    • If you need to update the multi-network policy object directly, enter the following command:

      $ oc edit multi-networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the network policy.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  3. Confirm that the multi-network policy object is updated.

    $ oc describe multi-networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the multi-network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Note

If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

23.4.3.4. Viewing multi-network policies using the CLI

You can examine the multi-network policies in a namespace.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  • List multi-network policies in a namespace:

    • To view multi-network policy objects defined in a namespace, enter the following command:

      $ oc get multi-networkpolicy
    • Optional: To examine a specific multi-network policy, enter the following command:

      $ oc describe multi-networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the multi-network policy to inspect.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Note

If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

23.4.3.5. Deleting a multi-network policy using the CLI

You can delete a multi-network policy in a namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  • To delete a multi-network policy object, enter the following command:

    $ oc delete multi-networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the multi-network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted

Note

If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

23.4.3.6. Creating a default deny all multi-network policy

This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a deny-by-default policy.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: deny-by-default
      namespace: default 1
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <network_name> 2
    spec:
      podSelector: {} 3
      ingress: [] 4
    1
    namespace: default deploys this policy to the default namespace.
    2
    k8s.v1.cni.cncf.io/policy-for: <network_name> specifies the name of a network attachment definition.
    3
    podSelector: is empty, which means it matches all pods. Therefore, the policy applies to all pods in the default namespace.
    4
    There are no ingress rules specified. This causes incoming traffic to all pods to be dropped.
  2. Apply the policy by entering the following command:

    $ oc apply -f deny-by-default.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created

23.4.3.7. Creating a multi-network policy to allow traffic from external clients

With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows external traffic from the public Internet, either directly or by using a load balancer, to access the pod. Traffic is allowed only to a pod with the label app=web.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-external
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <network_name>
    spec:
      policyTypes:
      - Ingress
      podSelector:
        matchLabels:
          app: web
      ingress:
        - {}
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-external.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created

This policy allows traffic from all resources, including external traffic as illustrated in the following diagram:

Allow traffic from external clients

23.4.3.8. Creating a multi-network policy allowing traffic to an application from all namespaces

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-all-namespaces
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <network_name>
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector: {} 2
    1
    Applies the policy only to app:web pods in the default namespace.
    2
    Selects all pods in all namespaces.
    Note

    By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to.

  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-all-namespaces.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to deploy an alpine image in the secondary namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
  3. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

23.4.3.9. Creating a multi-network policy allowing traffic to an application from a namespace

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:

  • Restrict traffic to a production database only to namespaces where production workloads are deployed.
  • Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in namespaces with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-prod
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <network_name>
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              purpose: production 2
    1
    Applies the policy only to app:web pods in the default namespace.
    2
    Restricts traffic to only pods in namespaces that have the label purpose=production.
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-prod.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to create the prod namespace:

    $ oc create namespace prod
  3. Run the following command to label the prod namespace:

    $ oc label namespace/prod purpose=production
  4. Run the following command to create the dev namespace:

    $ oc create namespace dev
  5. Run the following command to label the dev namespace:

    $ oc label namespace/dev purpose=testing
  6. Run the following command to deploy an alpine image in the dev namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
  7. Run the following command in the shell and observe that the request is blocked:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    wget: download timed out

  8. Run the following command to deploy an alpine image in the prod namespace and start a shell:

    $ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
  9. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

23.4.4. Additional resources

23.5. Attaching a pod to an additional network

As a cluster user, you can attach a pod to an additional network.

23.5.1. Adding a pod to an additional network

You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network.

Additional networks are attached to a pod only when the pod is created. You cannot attach an additional network to a pod that already exists.

The pod must be in the same namespace as the additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster.

Procedure

  1. Add an annotation to the Pod object. Only one of the following annotation formats can be used:

    1. To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1
      1
      To specify more than one additional network, separate each network name with a comma. Do not include whitespace before or after the commas. If you specify the same additional network multiple times, the pod has multiple network interfaces attached to that network. A complete example manifest that uses this annotation format appears at the end of this procedure.
    2. To attach an additional network with customizations, add an annotation with the following format:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: |-
            [
              {
                "name": "<network>", 1
                "namespace": "<namespace>", 2
                "default-route": ["<default-route>"] 3
              }
            ]
      1
      Specify the name of the additional network defined by a NetworkAttachmentDefinition object.
      2
      Specify the namespace where the NetworkAttachmentDefinition object is defined.
      3
      Optional: Specify an override for the default route, such as 192.168.17.1.
  2. To create the pod, enter the following command. Replace <name> with the name of the pod.

    $ oc create -f <name>.yaml
  3. Optional: To confirm that the annotation exists in the Pod CR, enter the following command. Replace <name> with the name of the pod.

    $ oc get pod <name> -o yaml

    In the following example, the example-pod pod is attached to the net1 additional network:

    $ oc get pod example-pod -o yaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-bridge
        k8s.v1.cni.cncf.io/network-status: |- 1
          [{
              "name": "openshift-sdn",
              "interface": "eth0",
              "ips": [
                  "10.128.2.14"
              ],
              "default": true,
              "dns": {}
          },{
              "name": "macvlan-bridge",
              "interface": "net1",
              "ips": [
                  "20.2.2.100"
              ],
              "mac": "22:2f:60:a5:f8:00",
              "dns": {}
          }]
      name: example-pod
      namespace: default
    spec:
      ...
    status:
      ...
    1
    The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value.
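
For reference, the following is a minimal sketch of a complete Pod manifest that uses the plain annotation format from this procedure. The macvlan-bridge network name and the example-pod name are taken from the preceding example output, and the container image and command match examples used elsewhere in this chapter; substitute the names for your own additional network and pod.

Example Pod manifest with an additional network annotation (sketch)

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: default
  annotations:
    # Attach the pod to the macvlan-bridge additional network in addition to the default network
    k8s.v1.cni.cncf.io/networks: macvlan-bridge
spec:
  containers:
  - name: example-pod
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]

Save the manifest and create the pod by running the oc create -f <name>.yaml command, as described in step 2.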

23.5.1.1. Specifying pod-specific addressing and routing options

When attaching a pod to an additional network, you might want to specify further properties about that network for a particular pod, such as routing changes, a static IP address, or a MAC address. You can specify these properties by using JSON-formatted annotations.

Prerequisites

  • The pod must be in the same namespace as the additional network.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster.

Procedure

To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps:

  1. Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit.

    $ oc edit pod <name>
  2. In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks annotation accepts a JSON string of a list of objects that reference NetworkAttachmentDefinition custom resource (CR) names and specify additional properties.

    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1
    1
    Replace <network> with a JSON object as shown in the following examples. The single quotes are required.
  3. In the following example, the annotation uses the default-route parameter to specify which network attachment has the default route.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
        {
          "name": "net1"
        },
        {
          "name": "net2", 1
          "default-route": ["192.0.2.1"] 2
        }]'
    spec:
      containers:
      - name: example-pod
        command: ["/bin/bash", "-c", "sleep 2000000000000"]
        image: centos/tools
    1
    The name key is the name of the additional network to associate with the pod.
    2
    The default-route key specifies a gateway for traffic to use when no other routing entry matches in the routing table. If you specify more than one default-route key, the pod fails to become active.

The default route will cause any traffic that is not specified in other routes to be routed to the gateway.

Important

Setting the default route to an interface other than the default network interface for OpenShift Container Platform might cause traffic that is expected to travel between pods to be routed over another interface.

To verify the routing properties of a pod, you can use the oc command to run the ip command within the pod.

$ oc exec -it <pod_name> -- ip route
Note

You can also check the pod's k8s.v1.cni.cncf.io/network-status annotation to see which additional network is assigned the default route, indicated by the presence of the default-route key in the JSON-formatted list of objects.
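
For example, the following command prints only that annotation. This is a convenience sketch that assumes the example-pod name used earlier in this section:

$ oc get pod example-pod -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'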

To set a static IP address or MAC address for a pod, you can use JSON-formatted annotations. This requires that you create networks that specifically allow this functionality, which you can specify in a rawCNIConfig for the CNO.

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plugin configuration in JSON format, which is based on the following template.

The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin:

macvlan CNI plugin JSON configuration object using static IP and MAC address

{
  "cniVersion": "0.3.1",
  "name": "<name>", 1
  "plugins": [{ 2
      "type": "macvlan",
      "capabilities": { "ips": true }, 3
      "master": "eth0", 4
      "mode": "bridge",
      "ipam": {
        "type": "static"
      }
    }, {
      "capabilities": { "mac": true }, 5
      "type": "tuning"
    }]
}

1
Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace.
2
Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.
3
Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.
4
Specifies the interface that the macvlan plugin uses.
5
Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.

You can reference the preceding network attachment in a JSON-formatted annotation, along with keys that specify which static IP address and MAC address to assign to a given pod.

Edit the pod by running the following command:

$ oc edit pod <name>

Example pod annotation that uses a static IP address and a MAC address

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>", 1
        "ips": [ "192.0.2.205/24" ], 2
        "mac": "CA:FE:C0:FF:EE:00" 3
      }
    ]'

1
Use the same <name> that you provided when creating the rawCNIConfig above.
2
Provide an IP address including the subnet mask.
3
Provide the MAC address.
Note

Static IP addresses and MAC addresses do not have to be used at the same time. You can use them individually or together.

To verify the IP address and MAC address properties of a pod with additional networks, use the oc command to run the ip command within the pod.

$ oc exec -it <pod_name> -- ip a

23.6. Removing a pod from an additional network

As a cluster user, you can remove a pod from an additional network.

23.6.1. Removing a pod from an additional network

You can remove a pod from an additional network only by deleting the pod.

Prerequisites

  • An additional network is attached to the pod.
  • Install the OpenShift CLI (oc).
  • Log in to the cluster.

Procedure

  • To delete the pod, enter the following command:

    $ oc delete pod <name> -n <namespace>
    • <name> is the name of the pod.
    • <namespace> is the namespace that contains the pod.
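
If you still need the workload, a minimal approach is to recreate the pod from a manifest that omits the k8s.v1.cni.cncf.io/networks annotation, so that the new pod attaches only to the default network. The following sketch assumes a hypothetical example-pod in the default namespace:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: default
  # No k8s.v1.cni.cncf.io/networks annotation: the pod uses only the default network
spec:
  containers:
  - name: example-pod
    image: centos/tools
    command: ["/bin/bash", "-c", "sleep 9000000"]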

23.7. Editing an additional network

As a cluster administrator, you can modify the configuration for an existing additional network.

23.7.1. Modifying an additional network attachment definition

As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated.

Prerequisites

  • You have configured an additional network for your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To edit an additional network for your cluster, complete the following steps:

  1. Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor:

    $ oc edit networks.operator.openshift.io cluster
  2. In the additionalNetworks collection, update the additional network with your changes. For reference, a sketch of this collection appears after this procedure.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes.

    $ oc get network-attachment-definitions <network-name> -o yaml

    For example, the following console output displays a NetworkAttachmentDefinition object that is named net1:

    $ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
    { "cniVersion": "0.3.1", "type": "macvlan",
    "master": "ens5",
    "mode": "bridge",
    "ipam":       {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} }

23.8. Removing an additional network

As a cluster administrator, you can remove an additional network attachment.

23.8.1. Removing an additional network attachment definition

As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To remove an additional network from your cluster, complete the following steps:

  1. Edit the Cluster Network Operator (CNO) in your default text editor by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: [] 1
    1
    If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the additional network CR was deleted by running the following command:

    $ oc get network-attachment-definition --all-namespaces

23.9. Assigning a secondary network to a VRF

As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify.

Using a secondary network with a VRF instance has the following advantages:

Workload isolation
Isolate workload traffic by configuring a VRF instance for the additional network.
Improved security
Enable improved security through isolated network paths in the VRF domain.
Multi-tenancy support
Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant.
Note

Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1. To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities.

Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface.

23.9.1. Creating an additional network attachment with the CNI VRF plugin

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically.

Note

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

To create an additional network attachment with the CNI VRF plugin, perform the following procedure.

Prerequisites

  • Install the OpenShift Container Platform CLI (oc).
  • Log in to the OpenShift cluster as a user with cluster-admin privileges.

Procedure

  1. Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
        - name: test-network-1
          namespace: additional-network-1
          type: Raw
          rawCNIConfig: '{
            "cniVersion": "0.3.1",
            "name": "macvlan-vrf",
            "plugins": [  1
            {
              "type": "macvlan",
              "master": "eth1",
              "ipam": {
                  "type": "static",
                  "addresses": [
                  {
                      "address": "191.168.1.23/24"
                  }
                  ]
              }
            },
            {
              "type": "vrf", 2
              "vrfname": "vrf-1",  3
              "table": 1001   4
            }]
          }'
    1
    plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration.
    2
    type must be set to vrf.
    3
    vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created.
    4
    Optional. table is the routing table ID. If you do not specify a value, the CNI plugin assigns a free routing table ID to the VRF.
    Note

    VRF functions correctly only when the resource is of type netdevice.

  2. Create the Network resource:

    $ oc create -f additional-network-attachment.yaml
  3. Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                       AGE
    additional-network-1       14m

    Note

    There might be a delay before the CNO creates the CR.

Verification

  1. Create a pod and assign it to the additional network with the VRF instance:

    1. Create a YAML file that defines the Pod resource:

      Example pod-additional-net.yaml file

      apiVersion: v1
      kind: Pod
      metadata:
       name: pod-additional-net
       annotations:
         k8s.v1.cni.cncf.io/networks: '[
             {
                     "name": "test-network-1" 1
             }
       ]'
      spec:
       containers:
       - name: example-pod-1
         command: ["/bin/bash", "-c", "sleep 9000000"]
         image: centos:8

      1
      Specify the name of the additional network with the VRF instance.
    2. Create the Pod resource by running the following command:

      $ oc create -f pod-additional-net.yaml

      Example output

      pod/test-pod created

  2. Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command:

    $ ip vrf show

    Example output

    Name              Table
    -----------------------
    vrf-1             1001

  3. Confirm that the VRF interface is the controller for the additional interface:

    $ ip link

    Example output

    5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-1 state UP mode
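
If you prefer not to open an interactive session, you can run the same checks through oc exec. This is a convenience sketch that assumes the pod-additional-net pod created earlier in this procedure:

$ oc exec -it pod-additional-net -- ip vrf show
$ oc exec -it pod-additional-net -- ip link show net1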
