Chapter 4. Multiple networks


4.1. Understanding multiple networks

By default, OVN-Kubernetes serves as the Container Network Interface (CNI) of an OpenShift Container Platform cluster. With OVN-Kubernetes as the default CNI of a cluster, OpenShift Container Platform administrators or users can leverage user-defined networks (UDNs) or NetworkAttachmentDefinitions (NADs) to create one or more networks that handle all ordinary network traffic of the cluster. Both user-defined networks and network attachment definitions can serve as the following network types:

  • Primary networks: Act as the primary network for the pod. By default, all traffic passes through the primary network unless a pod route is configured to send traffic through other networks.
  • Secondary networks: Act as secondary, non-default networks for a pod. Secondary networks provide separate interfaces dedicated to specific traffic types or purposes. Only pod traffic that is explicitly configured to use a secondary network is routed through its interface.

Additionally, during cluster installation, OpenShift Container Platform administrators can configure additional secondary pod networks by leveraging the Multus CNI plugin. With Multus, multiple CNI plugins, such as ipvlan, macvlan, or NetworkAttachmentDefinition objects, can be used together to serve as secondary networks for pods.

Note

User-defined networks are only available when OVN-Kubernetes is used as the CNI. They are not supported for use with other CNIs.

You can define a secondary network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one secondary network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.

For a complete list of supported CNI plugins, see "Secondary networks in OpenShift Container Platform".

For information about user-defined networks, see About user-defined networks (UDNs).

For information about Network Attachment Definitions, see Creating primary networks using a NetworkAttachmentDefinition.

4.1.1. Usage scenarios for a secondary network

You can use a secondary network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:

  1. Performance

    Traffic management: You can send traffic on two different planes to manage how much traffic flows along each plane.

  2. Security

    Network isolation: You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.

All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add secondary network interfaces that use Multus CNI, they are named net1, net2, …​, netN.
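
For example, for a pod that has one Multus secondary interface attached, the interface listing might look similar to the following sketch; the pod name, interface indexes, and IP addresses shown here are illustrative only:

    $ oc exec -it example-pod -- ip a

    Example output (illustrative)

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
        inet 127.0.0.1/8 scope host lo
    2: eth0@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 state UP
        inet 10.128.2.15/23 brd 10.128.3.255 scope global eth0
    3: net1@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
        inet 192.168.10.5/24 brd 192.168.10.255 scope global net1

In this sketch, eth0 is the interface on the cluster-wide default network and net1 is the first secondary interface added through Multus CNI.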

To attach secondary network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using either a UserDefinedNetwork custom resource (CR) or a NetworkAttachmentDefinition CR. A CNI configuration inside each of these CRs defines how that interface is created.

For more information about creating a UserDefinedNetwork CR, see About user-defined networks.

For more information about creating a NetworkAttachmentDefinition CR, see Creating primary networks using a NetworkAttachmentDefinition.
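
As an illustration of the NetworkAttachmentDefinition case, the following is a minimal sketch of a NetworkAttachmentDefinition CR that defines a secondary network by embedding a CNI configuration. The name, namespace, bridge name, and static address are illustrative values, not defaults:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: example-secondary-net      # illustrative name
      namespace: example-namespace     # illustrative namespace
    spec:
      config: |-
        {
          "cniVersion": "0.3.1",
          "name": "example-secondary-net",
          "type": "bridge",
          "bridge": "br-example",
          "ipam": {
            "type": "static",
            "addresses": [
              { "address": "192.0.2.10/24" }
            ]
          }
        }

A pod can then request this interface by referencing the object in its k8s.v1.cni.cncf.io/networks annotation, which causes Multus to add the corresponding netN interface to the pod.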

4.1.2. Secondary networks in OpenShift Container Platform

OpenShift Container Platform provides the following CNI plugins for creating secondary networks in your cluster:

4.1.3. UserDefinedNetwork and NetworkAttachmentDefinition support matrix

The UserDefinedNetwork and NetworkAttachmentDefinition custom resources (CRs) provide cluster administrators and users the ability to create customizable network configurations, define their own network topologies, ensure network isolation, manage IP addressing for workloads, and configure advanced network features. A third CR, ClusterUserDefinedNetwork, is also available, which allows administrators to create and define networks that span multiple namespaces at the cluster level.

User-defined networks and network attachment definitions can serve as both the primary and secondary network interface, and each supports layer2 and layer3 topologies. A third network topology, Localnet, is also supported with network attachment definitions for secondary networks.

Note

As of OpenShift Container Platform 4.19, the Localnet topology is generally available for the ClusterUserDefinedNetwork CRs and is the preferred method for connecting physical networks to virtual networks. Alternatively, the NetworkAttachmentDefinition CR can also be used to create secondary networks with Localnet topologies.

The following section highlights the supported features of the UserDefinedNetwork and NetworkAttachmentDefinition CRs when they are used as either the primary or secondary network. A separate table for the ClusterUserDefinedNetwork CR is also included.

Table 4.1. Primary network support matrix for UserDefinedNetwork and NetworkAttachmentDefinition CRs

| Network feature             | Layer2 topology | Layer3 topology |
| --------------------------- | --------------- | --------------- |
| east-west traffic           | ✓               | ✓               |
| north-south traffic         | ✓               | ✓               |
| Persistent IPs              | ✓               | X               |
| Services                    | ✓               | ✓               |
| Routes                      | X               | X               |
| EgressIP resource           | ✓               | ✓               |
| Multicast [1]               | X               | ✓               |
| NetworkPolicy resource [2]  | ✓               | ✓               |
| MultinetworkPolicy resource | X               | X               |

  1. Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information about multicast, see "Enabling multicast for a project".
  2. When creating a UserDefinedNetwork CR with a primary network type, network policies must be created after the UserDefinedNetwork CR.
Table 4.2. Secondary network support matrix for UserDefinedNetwork and NetworkAttachmentDefinition CRs

| Network feature             | Layer2 topology | Layer3 topology | Localnet topology [1]                    |
| --------------------------- | --------------- | --------------- | ---------------------------------------- |
| east-west traffic           | ✓               | ✓               | ✓ (NetworkAttachmentDefinition CR only)  |
| north-south traffic         | X               | X               | ✓ (NetworkAttachmentDefinition CR only)  |
| Persistent IPs              | ✓               | X               | ✓ (NetworkAttachmentDefinition CR only)  |
| Services                    | X               | X               | X                                        |
| Routes                      | X               | X               | X                                        |
| EgressIP resource           | X               | X               | X                                        |
| Multicast                   | X               | X               | X                                        |
| NetworkPolicy resource      | X               | X               | X                                        |
| MultinetworkPolicy resource | ✓               | ✓               | ✓ (NetworkAttachmentDefinition CR only)  |

  1. The Localnet topology is unavailable for use with the UserDefinedNetwork CR. It is only supported on secondary networks for NetworkAttachmentDefinition CRs.
Table 4.3. Support matrix for ClusterUserDefinedNetwork CRs

| Network feature             | Layer2 topology | Layer3 topology | Localnet topology |
| --------------------------- | --------------- | --------------- | ----------------- |
| east-west traffic           | ✓               | ✓               | ✓                 |
| north-south traffic         | ✓               | ✓               | ✓                 |
| Persistent IPs              | ✓               | X               | ✓                 |
| Services                    | ✓               | ✓               |                   |
| Routes                      | X               | X               |                   |
| EgressIP resource           | ✓               | ✓               |                   |
| Multicast [1]               | X               | ✓               |                   |
| MultinetworkPolicy resource | X               | X               | ✓                 |
| NetworkPolicy resource [2]  | ✓               | ✓               |                   |

  1. Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information, see "About multicast".
  2. When creating a ClusterUserDefinedNetwork CR with a primary network type, network policies must be created after the UserDefinedNetwork CR.

Additional resources

4.2. Primary networks

4.2.1. About user-defined networks

Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin for OpenShift Container Platform only supported a Layer 3 topology on the primary, or main, network. In keeping with Kubernetes design principles, all pods are attached to the main network, all pods communicate with each other by their IP addresses, and inter-pod traffic is restricted according to network policy.

While the Kubernetes design is useful for simple deployments, this Layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments.

UDN improves the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2 and Layer 3 network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.

A cluster administrator can use a UDN to create and define primary or secondary networks that span multiple namespaces at the cluster level by leveraging the ClusterUserDefinedNetwork custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define secondary networks at the namespace level with the UserDefinedNetwork CR.

The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a ClusterUserDefinedNetwork or UserDefinedNetwork CR, how to create the CR, and additional configuration details that might be relevant to your deployment.

4.2.1.1. Benefits of a user-defined network

User-defined networks provide the following benefits:

  1. Enhanced network isolation for security

    • Tenant isolation: Namespaces can have their own isolated primary network, similar to how tenants are isolated in Red Hat OpenStack Platform (RHOSP). This improves security by reducing the risk of cross-tenant traffic.
  2. Network flexibility

    • Layer 2 and layer 3 support: Cluster administrators can configure primary networks as layer 2 or layer 3 network types.
  3. Simplified network management

    • Reduced network configuration complexity: With user-defined networks, the need for complex network policies is eliminated because isolation can be achieved by grouping workloads in different networks.
  4. Advanced capabilities

    • Consistent and selectable IP addressing: Users can specify and reuse IP subnets across different namespaces and clusters, providing a consistent networking environment.
    • Support for multiple networks: The user-defined networking feature allows administrators to connect multiple namespaces to a single network, or to create distinct networks for different sets of namespaces.
  5. Simplification of application migration from Red Hat OpenStack Platform (RHOSP)

    • Network parity: With user-defined networking, the migration of applications from OpenStack to OpenShift Container Platform is simplified by providing similar network isolation and configuration options.

Developers and administrators can create a user-defined network that is namespace scoped using the custom resource. An overview of the process is as follows:

  1. An administrator creates a namespace for a user-defined network with the k8s.ovn.org/primary-user-defined-network label.
  2. The UserDefinedNetwork CR is created by either the cluster administrator or the user.
  3. The user creates pods in the namespace.

4.2.1.2. Limitations of a user-defined network

While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a UDN.

  • DNS limitations:

    • DNS lookups for pods resolve to the pod’s IP address on the cluster default network. Even if a pod is part of a user-defined network, DNS lookups will not resolve to the pod’s IP address on that user-defined network. However, DNS lookups for services and external entities will function as expected.
    • When a pod is assigned to a primary UDN, it can access the Kubernetes API (KAPI) and DNS services on the cluster’s default network.
  • Initial network assignment: You must create the namespace and network before creating pods. Assigning a namespace that already contains pods to a new network, or creating a UDN in such a namespace, is not accepted by OVN-Kubernetes.
  • Health check limitations: Kubelet health checks are performed by the cluster default network, which does not confirm the network connectivity of the primary interface on the pod. Consequently, scenarios where a pod appears healthy by the default network, but has broken connectivity on the primary interface, are possible with user-defined networks.
  • Network policy limitations: Network policies that enable traffic between namespaces connected to different user-defined primary networks are not effective. These traffic policies do not take effect because there is no connectivity between these isolated networks.
  • Creation and modification limitation: The ClusterUserDefinedNetwork CR and the UserDefinedNetwork CR cannot be modified after being created.
  • Default network service access: A user-defined network pod is isolated from the default network, which means that most default network services are inaccessible. For example, a user-defined network pod cannot currently access the OpenShift Container Platform image registry. Because of this limitation, source-to-image builds do not work in a user-defined network namespace. Additionally, other functions do not work, including functions to create applications based on the source code in a Git repository, such as oc new-app <command>, and functions to create applications from an OpenShift Container Platform template that use source-to-image builds. This limitation might also affect other openshift-*.svc services.
  • Connectivity limitation: NodePort services on user-defined networks have connectivity limitations. For example, NodePort traffic from a pod to a service endpoint on the same node is not accessible, whereas traffic from a pod on a different node succeeds.

4.2.1.3. Layer 2 and layer 3 topologies

A flat layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. A flat layer 2 topology is useful for live migration of virtual machines across nodes that exist in a cluster. The following diagram shows a flat layer 2 topology with two nodes that use the virtual switch for live migration purposes:

Figure 4.1. A flat layer 2 topology that uses a virtual switch for component communication

A flat layer 2 topology with a virtual switch so that virtual machines on node-1 and node-2 can communicate with each other

If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When you do not specify a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, where the topology might cause a broadcast storm that can degrade network performance.

To access more configurable options for your network, you can integrate a layer 2 topology with a user-defined network (UDN). The following diagram shows two nodes that use a UDN with a layer 2 topology that includes pods that exist on each node. Each node includes two interfaces:

  • A node interface, which connects networking components to the node.
  • An Open vSwitch (OVS) bridge, such as br-ex, which creates a layer 2 OVN switch so that pods can communicate with each other and share resources.

An external switch connects these two interfaces, while the gateway or router handles routing traffic between the external switch and the layer 2 OVN switch. VMs and pods in a node can use the UDN to communicate with each other. The layer 2 OVN switch handles node traffic over a UDN so that live migration of a VM from one node to another is possible.

Figure 4.2. A user-defined network (UDN) that uses a layer 2 topology

A UDN that uses a layer 2 topology for migrating a VM from node-1 to node-2

A layer 3 topology creates a unique layer 2 segment for each node in a cluster. The layer 3 routing mechanism interconnects these segments so that virtual machines and pods that are hosted on different nodes can communicate with each other. A layer 3 topology can effectively manage large broadcast domains by assigning each domain to a specific node, so that broadcast traffic has a reduced scope. To configure a layer 3 topology, you must configure cidr and hostSubnet parameters.
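
For example, in the following fragment, which uses illustrative addresses, the 10.100.0.0/16 cluster subnet is split into per-node /24 subnets:

      subnets:
        - cidr: 10.100.0.0/16   # cluster-wide subnet for this user-defined network
          hostSubnet: 24        # each node is assigned one /24 slice, for example 10.100.5.0/24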

4.2.1.4. About the ClusterUserDefinedNetwork CR

The ClusterUserDefinedNetwork (CUDN) custom resource (CR) provides cluster-scoped network segmentation and isolation for administrators only.

The following diagram demonstrates how a cluster administrator can use the ClusterUserDefinedNetwork CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, udn-1 and udn-2. These networks are not connected and the spec.namespaceSelector.matchLabels field is used to select different namespaces. For example, udn-1 configures and isolates communication for namespace-1 and namespace-2, while udn-2 configures and isolates communication for namespace-3 and namespace-4. Isolated tenants (Tenant 1 and Tenant 2) are created by separating namespaces while also allowing pods in the same namespace to communicate.

Figure 4.3. Tenant isolation using a ClusterUserDefinedNetwork CR

The tenant isolation concept in a user-defined network (UDN)
4.2.1.4.1. Best practices for ClusterUserDefinedNetwork CRs

Before setting up a ClusterUserDefinedNetwork custom resource (CR), users should consider the following information:

  • A ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
  • ClusterUserDefinedNetwork CRs should not select the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
  • ClusterUserDefinedNetwork CRs should not select openshift-* namespaces.
  • OpenShift Container Platform administrators should be aware that all namespaces of a cluster are selected when one of the following conditions is met:

    • The matchLabels selector is left empty.
    • The matchExpressions selector is left empty.
    • The namespaceSelector is initialized but does not specify matchExpressions or matchLabels. For example: namespaceSelector: {}.
  • For primary networks, the namespace used for the ClusterUserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label:

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network.
    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR is created that matches the namespace, an error is reported and the network is not created.
    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network.
    • If the namespace has the label, and a primary ClusterUserDefinedNetwork CR does not exist, a pod in the namespace is not created until the ClusterUserDefinedNetwork CR is created.
  • When using the ClusterUserDefinedNetwork CR to create localnet topology, the following are best practices for administrators:

    • You must make sure that the spec.network.localnet.physicalNetworkName parameter matches the parameter that you configured in the Open vSwitch (OVS) bridge mapping when you create your CUDN CR. This ensures that you are bridging to the intended segment of your physical network. If you intend to deploy multiple CUDN CRs by using the same bridge mapping, you must ensure that the same physicalNetworkName parameter is used. For an example of a bridge mapping, see the sketch that follows this list.
    • Avoid overlapping subnets between your physical network and your other network interfaces. Overlapping network subnets can cause routing conflicts and network instability. To prevent conflicts when using the spec.network.localnet.subnets parameter, you might use the spec.network.localnet.excludeSubnets parameter.
    • When you configure a Virtual Local Area Network (VLAN), you must ensure that both your underlying physical infrastructure (switches, routers, and so on) and your nodes are properly configured to accept VLAN IDs (VIDs). This means that you configure the physical network interface, for example eth1, as an access port for the VLAN, for example VLAN 20, that you are connecting to through the physical switch. In addition, you must verify that an OVS bridge mapping exists on your nodes for the physical interface, for example eth1, to ensure that the physical interface is properly connected with OVN-Kubernetes.
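
The following is a minimal sketch of how an OVS bridge mapping might be declared with the Kubernetes NMState Operator, which associates the physicalNetworkName value with an OVS bridge on each node. The policy name, node selector, network name (physnet-example), and bridge (br-ex) are illustrative assumptions; the documented procedure is described in "Configuration for a localnet switched topology".

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br-ex-network-mapping            # illustrative policy name
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # apply to worker nodes (assumption)
      desiredState:
        ovn:
          bridge-mappings:
          - localnet: physnet-example        # must match spec.network.localnet.physicalNetworkName
            bridge: br-ex                    # existing OVS bridge on the node
            state: present
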
4.2.1.4.2. Creating a ClusterUserDefinedNetwork CR by using the CLI

The following procedure creates a ClusterUserDefinedNetwork custom resource (CR) by using the CLI. Based upon your use case, create your request using either the cluster-layer-two-udn.yaml example for a Layer2 topology type or the cluster-layer-three-udn.yaml example for a Layer3 topology type.

Important
  • The ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
  • OpenShift Virtualization only supports the Layer2 and Localnet topologies.

Prerequisites

  • You have logged in as a user with cluster-admin privileges.

Procedure

  1. Optional: For a ClusterUserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <cudn_namespace_name>
      labels:
        k8s.ovn.org/primary-user-defined-network: ""
    EOF
  2. Create a request for either a Layer2 or Layer3 topology type cluster-wide user-defined network:

    1. Create a YAML file, such as cluster-layer-two-udn.yaml, to define your request for a Layer2 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: <cudn_name> # 1
      spec:
        namespaceSelector: # 2
          matchLabels: # 3
            "<label_1_key>": "<label_1_value>" # 4
            "<label_2_key>": "<label_2_value>" # 5
        network: # 6
          topology: Layer2 # 7
          layer2: # 8
            role: Primary # 9
            subnets:
              - "2001:db8::/64"
              - "10.100.0.0/16" # 10

      1. Name of your ClusterUserDefinedNetwork CR.
      2. A label query over the set of namespaces that the cluster UDN CR applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces.
      3. Uses the matchLabels selector type, where terms are evaluated with an AND relationship.
      4, 5. Because the matchLabels selector type is used, provisions namespaces that contain both <label_1_key>=<label_1_value> and <label_2_key>=<label_2_value> labels.
      6. Describes the network configuration.
      7. The topology field describes the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.
      8. This field specifies the topology configuration. It can be layer2 or layer3.
      9. Specifies Primary or Secondary. Primary is the only role specification supported in 4.19.
      10. For Layer2 topology types the following specifies config details for the subnet field:
        • The subnets field is optional.
        • The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.
        • The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.
        • Layer2 subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address".
    2. Create a YAML file, such as cluster-layer-three-udn.yaml, to define your request for a Layer3 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: <cudn_name> # 1
      spec:
        namespaceSelector: # 2
          matchExpressions: # 3
          - key: kubernetes.io/metadata.name # 4
            operator: In # 5
            values: ["<example_namespace_one>", "<example_namespace_two>"] # 6
        network: # 7
          topology: Layer3 # 8
          layer3: # 9
            role: Primary # 10
            subnets: # 11
              - cidr: 10.100.0.0/16
                hostSubnet: 24

      1. Name of your ClusterUserDefinedNetwork CR.
      2. A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces.
      3. Uses the matchExpressions selector type, where terms are evaluated with an OR relationship.
      4. Specifies the label key to match.
      5. Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist.
      6. Because the matchExpressions type is used, provisions namespaces matching either <example_namespace_one> or <example_namespace_two>.
      7. Describes the network configuration.
      8. The topology field describes the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
      9. This field specifies the topology configuration. Valid values are layer2 or layer3.
      10. Specifies a Primary or Secondary role. Primary is the only role specification supported in 4.19.
      11. For Layer3 topology types the following specifies config details for the subnet field:
        • The subnets field is mandatory.
        • The type for the subnets field is cidr and hostSubnet:
          • cidr is the cluster subnet and accepts a string value.
          • hostSubnet specifies the node subnet prefix that the cluster subnet is split to.
          • For IPv6, only a /64 length is supported for hostSubnet.
  3. Apply your request by running the following command:

    $ oc create --validate=true -f <example_cluster_udn>.yaml

    Where <example_cluster_udn>.yaml is the name of your Layer2 or Layer3 configuration file.

  4. Verify that your request is successful by running the following command:

    $ oc get clusteruserdefinednetwork <cudn_name> -o yaml

    Where <cudn_name> is the name of the cluster-wide user-defined network that you created.

    Example output

    apiVersion: k8s.ovn.org/v1
    kind: ClusterUserDefinedNetwork
    metadata:
      creationTimestamp: "2024-12-05T15:53:00Z"
      finalizers:
      - k8s.ovn.org/user-defined-network-protection
      generation: 1
      name: my-cudn
      resourceVersion: "47985"
      uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634
    spec:
      namespaceSelector:
        matchExpressions:
        - key: custom.network.selector
          operator: In
          values:
          - example-namespace-1
          - example-namespace-2
          - example-namespace-3
      network:
        layer3:
          role: Primary
          subnets:
          - cidr: 10.100.0.0/16
        topology: Layer3
    status:
      conditions:
      - lastTransitionTime: "2024-11-19T16:46:34Z"
        message: 'NetworkAttachmentDefinition has been created in following namespaces:
          [example-namespace-1, example-namespace-2, example-namespace-3]'
        reason: NetworkAttachmentDefinitionReady
        status: "True"
        type: NetworkCreated
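
Because the status condition reports the namespaces in which a NetworkAttachmentDefinition object was generated, you can optionally confirm those objects directly. This check is a suggestion rather than part of the documented procedure, and the namespace name is an example value:

    $ oc get network-attachment-definitions -n example-namespace-1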

4.2.1.4.3. Creating a ClusterUserDefinedNetwork CR for a Localnet topology

A Localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster. This topology type requires the additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.
  • You created and configured the Open vSwitch (OVS) bridge mapping to associate the logical OVN-Kubernetes network with the physical node network through the OVS bridge. For more information, see "Configuration for a localnet switched topology".

Procedure

  1. Create a cluster-wide user-defined network with a Localnet topology:

    1. Create a YAML file, such as cluster-udn-localnet.yaml, to define your request for a Localnet topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: <cudn_name> # 1
      spec:
        namespaceSelector: # 2
          matchLabels: # 3
            "<label_1_key>": "<label_1_value>" # 4
            "<label_2_key>": "<label_2_value>" # 5
        network: # 6
          topology: Localnet # 7
          localnet: # 8
            role: Secondary # 9
            physicalNetworkName: test
            ipam: {lifecycle: Persistent}
            subnets: ["192.168.0.0/16", "2001:dbb::/64"] # 10

      1. Name of your ClusterUserDefinedNetwork (CUDN) CR.
      2. A label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default, openshift-*, or any other system namespaces.
      3. Uses the matchLabels selector type, where terms are evaluated with an AND relationship.
      4, 5. In this example, the CUDN CR is deployed to namespaces that contain both <label_1_key>=<label_1_value> and <label_2_key>=<label_2_value> labels.
      6. Describes the network configuration.
      7. Specifying a Localnet topology type creates one logical switch that is directly bridged to one provider network.
      8. This field specifies the localnet topology.
      9. Specifies the role for the network configuration. Secondary is the only role specification supported for the localnet topology.
      10. For Localnet topology types the following specifies config details for the subnet field:
        • The subnets field is optional.
        • The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.
        • The subnets field accepts one or two items. For two items, they must be of a different IP family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.
        • Localnet subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address".
  2. Apply your request by running the following command:

    $ oc create --validate=true -f <example_cluster_udn>.yaml

    where:

    <example_cluster_udn>.yaml
    Is the name of your Localnet configuration file.
  3. Verify that your request is successful by running the following command:

    $ oc get clusteruserdefinednetwork <cudn_name> -o yaml

    where:

    <cudn_name>
    Is the name of the cluster-wide user-defined network that you created.

Example 4.1. Example output

apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  creationTimestamp: "2025-05-28T19:30:38Z"
  finalizers:
  - k8s.ovn.org/user-defined-network-protection
  generation: 1
  name: cudn-test
  resourceVersion: "140936"
  uid: 7ff185fa-d852-4196-858a-8903b58f6890
spec:
  namespaceSelector:
    matchLabels:
      "1": "1"
      "2": "2"
  network:
    localnet:
      ipam:
        lifecycle: Persistent
      physicalNetworkName: test
      role: Secondary
      subnets:
      - 192.168.0.0/16
      - 2001:dbb::/64
    topology: Localnet
status:
  conditions:
  - lastTransitionTime: "2025-05-28T19:30:38Z"
    message: 'NetworkAttachmentDefinition has been created in following namespaces:
      [test1, test2]'
    reason: NetworkAttachmentDefinitionCreated
    status: "True"
    type: NetworkCreated
4.2.1.4.4. Creating a ClusterUserDefinedNetwork CR by using the web console

You can create a ClusterUserDefinedNetwork custom resource (CR) with a Layer2 topology in the OpenShift Container Platform web console.

Note

Currently, creation of a ClusterUserDefinedNetwork CR with a Layer3 topology is not supported when using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions.
  • You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.

Procedure

  1. From the Administrator perspective, click Networking → UserDefinedNetworks.
  2. Click ClusterUserDefinedNetwork.
  3. In the Name field, specify a name for the cluster-scoped UDN.
  4. Specify a value in the Subnet field.
  5. In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.
  6. Click Create. The cluster-scoped UDN serves as the default primary network for pods located in namespaces that contain the labels that you specified in step 5.

4.2.1.5. About the UserDefinedNetwork CR

The UserDefinedNetwork (UDN) custom resource (CR) provides advanced network segmentation and isolation for users and administrators.

The following diagram shows four cluster namespaces, where each namespace has a single assigned user-defined network (UDN), and each UDN has an assigned custom subnet for its pod IP allocations. OVN-Kubernetes handles any overlapping UDN subnets. Without using the Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply network policy within a UDN. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN per namespace, and one or more namespaces to a UDN.

Figure 4.4. Namespace isolation using a UserDefinedNetwork CR

The namespace isolation concept in a user-defined network (UDN)
4.2.1.5.1. Best practices for UserDefinedNetwork CRs

Before setting up a UserDefinedNetwork custom resource (CR), you should consider the following information:

  • openshift-* namespaces should not be used to set up a UserDefinedNetwork CR.
  • UserDefinedNetwork CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
  • For primary networks, the namespace used for the UserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label:

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network.
    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR is created that matches the namespace, a status error is reported and the network is not created.
    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network.
    • If the namespace has the label, and a primary UserDefinedNetwork CR does not exist, a pod in the namespace is not created until the UserDefinedNetwork CR is created.
  • Two masquerade IP addresses are required for each user-defined network. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks. For an example of changing the masquerade subnet, see the sketch that follows this list.

    Important
    • For OpenShift Container Platform 4.17 and later, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnet. These ranges should be avoided by users. For updated clusters, there is no change to the default masquerade subnet.
    • Changing the cluster’s masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UserDefinedNetwork CR has been set up can disrupt the network connectivity and cause configuration issues.
  • Ensure that tenants use the UserDefinedNetwork resource rather than the NetworkAttachmentDefinition (NAD) CR. Using the NAD CR between tenants can create security risks.
  • When creating network segmentation, you should only use the NetworkAttachmentDefinition CR if user-defined network segmentation cannot be completed using the UserDefinedNetwork CR.
  • The cluster subnet and services CIDR for a UserDefinedNetwork CR cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses 100.64.0.0/16 as the default join subnet for the network. You must not use that value to configure a UserDefinedNetwork CR's joinSubnets field. If the default address values are used anywhere in the network for the cluster, you must override the default values by setting the joinSubnets field. For more information, see "Additional configuration details for user-defined networks".
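
As noted earlier in this list, the masquerade subnet must be sized before any user-defined network is configured for a project. The following command is a hedged sketch of how the masquerade subnet can be changed through the Network operator configuration; the 169.254.0.0/17 value is only an example, and you should verify the field path against the Network operator API for your cluster version before using it:

    $ oc patch networks.operator.openshift.io cluster --type=merge \
      -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipv4":{"internalMasqueradeSubnet":"169.254.0.0/17"}}}}}}'
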
4.2.1.5.2. Creating a UserDefinedNetwork CR by using the CLI

The following procedure creates a UserDefinedNetwork CR that is namespace scoped. Based upon your use case, create your request by using either the my-layer-two-udn.yaml example for a Layer2 topology type or the my-layer-three-udn.yaml example for a Layer3 topology type.

Prerequisites

  • You have logged in with cluster-admin privileges, or you have view and edit role-based access control (RBAC).

Procedure

  1. Optional: For a UserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <udn_namespace_name>
      labels:
        k8s.ovn.org/primary-user-defined-network: ""
    EOF
  2. Create a request for either a Layer2 or Layer3 topology type user-defined network:

    1. Create a YAML file, such as my-layer-two-udn.yaml, to define your request for a Layer2 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: UserDefinedNetwork
      metadata:
        name: udn-1 # 1
        namespace: <some_custom_namespace>
      spec:
        topology: Layer2 # 2
        layer2: # 3
          role: Primary # 4
          subnets:
            - "10.0.0.0/24"
            - "2001:db8::/60" # 5

      1. Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).
      2. The topology field describes the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.
      3. This field specifies the topology configuration. It can be layer2 or layer3.
      4. Specifies a Primary or Secondary role.
      5. For Layer2 topology types the following specifies config details for the subnet field:
        • The subnets field is optional.
        • The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.
        • The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.
        • Layer2 subnets can be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing.
        • The Layer2 subnets field is mandatory when the ipamLifecycle field is specified.
    2. Create a YAML file, such as my-layer-three-udn.yaml, to define your request for a Layer3 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: UserDefinedNetwork
      metadata:
        name: udn-2-primary # 1
        namespace: <some_custom_namespace>
      spec:
        topology: Layer3 # 2
        layer3: # 3
          role: Primary # 4
          subnets: # 5
            - cidr: 10.150.0.0/16
              hostSubnet: 24
            - cidr: 2001:db8::/60
              hostSubnet: 64
      # ...

      1. Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).
      2. The topology field describes the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
      3. This field specifies the topology configuration. Valid values are layer2 or layer3.
      4. Specifies a Primary or Secondary role.
      5. For Layer3 topology types the following specifies config details for the subnet field:
        • The subnets field is mandatory.
        • The type for the subnets field is cidr and hostSubnet:
          • cidr is equivalent to the clusterNetwork configuration settings of a cluster. The IP addresses in the CIDR are distributed to pods in the user-defined network. This parameter accepts a string value.
          • hostSubnet defines the per-node subnet prefix.
          • For IPv6, only a /64 length is supported for hostSubnet.
  3. Apply your request by running the following command:

    $ oc apply -f <my_layer_two_udn>.yaml

    Where <my_layer_two_udn>.yaml is the name of your Layer2 or Layer3 configuration file.

  4. Verify that your request is successful by running the following command:

    $ oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml

    Where <some_custom_namespace> is the namespace that you created for your user-defined network.

    Example output

    apiVersion: k8s.ovn.org/v1
    kind: UserDefinedNetwork
    metadata:
      creationTimestamp: "2024-08-28T17:18:47Z"
      finalizers:
      - k8s.ovn.org/user-defined-network-protection
      generation: 1
      name: udn-1
      namespace: some-custom-namespace
      resourceVersion: "53313"
      uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c
    spec:
      layer2:
        role: Primary
        subnets:
        - 10.0.0.0/24
        - 2001:db8::/60
      topology: Layer2
    status:
      conditions:
      - lastTransitionTime: "2024-08-28T17:18:47Z"
        message: NetworkAttachmentDefinition has been created
        reason: NetworkAttachmentDefinitionReady
        status: "True"
        type: NetworkCreated

4.2.1.5.3. Creating a UserDefinedNetwork CR by using the web console

You can create a UserDefinedNetwork custom resource (CR) with a Layer2 topology and Primary role by using the OpenShift Container Platform web console.

Note

Currently, creation of a UserDefinedNetwork CR with a Layer3 topology or a Secondary role is not supported when using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions.
  • You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.

Procedure

  1. From the Administrator perspective, click Networking → UserDefinedNetworks.
  2. Click Create UserDefinedNetwork.
  3. From the Project name list, select the namespace that you previously created.
  4. Specify a value in the Subnet field.
  5. Click Create. The user-defined network serves as the default primary network for pods that you create in this namespace.

4.2.1.6. Additional configuration details for user-defined networks

The following table explains additional configurations for ClusterUserDefinedNetwork and UserDefinedNetwork custom resources (CRs) that are optional. It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology.

  Optional configurations for user-defined networks

  CUDN field: spec.network.<topology>.joinSubnets
  UDN field: spec.<topology>.joinSubnets
  Type: object
  Description: When omitted, the platform sets default values for the joinSubnets field of 100.65.0.0/16 for IPv4 and fd99::/64 for IPv6. If the default address values are used anywhere in the cluster's network, you must override them by setting the joinSubnets field. If you choose to set this field, ensure it does not conflict with other subnets in the cluster, such as the cluster subnet, the default network cluster subnet, and the masquerade subnet.
  The joinSubnets field configures the routing between different segments within a user-defined network. Dual-stack clusters can set 2 subnets, one for each IP family; otherwise, only 1 subnet is allowed. This field is only allowed for the Primary network.

  CUDN field: spec.network.<topology>.excludeSubnets
  UDN field: spec.<topology>.excludeSubnets
  Type: string
  Description: Specifies a list of CIDRs to be removed from the specified CIDRs in the subnets field. The CIDRs in this list must be in range of at least one subnet specified in subnets. When omitted, no IP addresses are excluded, and all IP addresses specified in the subnets field are subject to assignment. You must use standard CIDR notation. For example, 10.128.0.0/16. This field must be omitted if the subnets field is not set or if the ipam.mode field is set to Disabled. You can only set 25 values for the excludeSubnets field.
  When deploying a secondary network with Localnet topology, the IP ranges used in your physical network must be explicitly listed in the excludeSubnets field to prevent IP duplication in your subnet.

  CUDN field: spec.network.<topology>.ipam.lifecycle
  UDN field: spec.<topology>.ipam.lifecycle
  Type: object
  Description: The spec.ipam.lifecycle field configures the IP address management system (IPAM). You might use this field for virtual workloads to ensure persistent IP addresses. The only allowed value is Persistent, which ensures that your virtual workloads have persistent IP addresses across reboots and migration. These are assigned by the container network interface (CNI) and used by OVN-Kubernetes to program pod IP addresses. You must not change this for pod annotations.
  Setting a value of Persistent is only supported when the ipam.mode parameter is set to Enabled.

  CUDN field: spec.network.<topology>.ipam.mode
  UDN field: spec.<topology>.ipam.mode
  Type: object
  Description: The mode parameter controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available:
  Enabled: When enabled, OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. When set to Enabled, the subnets field must be defined. Enabled is the default configuration.
  Disabled: When disabled, OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. Disabled is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, for example, network policy and services, no longer function. Additionally, IP port security is also disabled for interfaces attached to this network. The subnets field must be empty when spec.ipam.mode is set to Disabled.

  CUDN field: spec.network.<topology>.mtu
  UDN field: spec.<topology>.mtu
  Type: integer
  Description: The maximum transmission unit (MTU). The default value is 1400. The minimum value for IPv4 is 576, and for IPv6 it is 1280.

  CUDN field: spec.network.localnet.vlan
  UDN field: N/A
  Type: object
  Description: This field is optional and configures virtual local area network (VLAN) tagging, which allows you to segment the physical network into multiple independent broadcast domains.

  CUDN field: spec.network.localnet.vlan.mode
  UDN field: N/A
  Type: object
  Description: The only acceptable value is Access. A value of Access specifies that the network interface belongs to a single VLAN and all traffic is labelled with an ID that is configured in the spec.network.localnet.vlan.access.id field. The id field specifies the VLAN ID (VID) for access ports. Values must be an integer between 1 and 4094.

  CUDN field: spec.network.localnet.physicalNetworkName
  UDN field: N/A
  Type: string
  Description: Specifies the name for a physical network interface. The value you specify must match the network-name parameter that you provided in your Open vSwitch (OVS) bridge mapping.

where:

<topology>
Can be either layer2 or layer3 for the UserDefinedNetwork CR. For the ClusterUserDefinedNetwork CR the topology can also be Localnet.
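
To illustrate how the localnet VLAN fields described above fit together, the following fragment shows the vlan object under the network configuration of a ClusterUserDefinedNetwork CR; the VLAN ID of 20 is an example value only:

      network:
        topology: Localnet
        localnet:
          role: Secondary
          physicalNetworkName: test
          vlan:
            mode: Access       # Access is the only supported mode
            access:
              id: 20           # example VLAN ID (VID); must be an integer between 1 and 4094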

4.2.1.7. User-defined network status condition types

The following tables explain the status condition types returned for ClusterUserDefinedNetwork and UserDefinedNetwork CRs when describing the resource. These conditions can be used to troubleshoot your deployment.

Table 4.4. NetworkCreated condition types (ClusterUserDefinedNetwork and UserDefinedNetwork CRs)

| Condition type | Status | Reason                              | Message                                                                                                                                  |
| -------------- | ------ | ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| NetworkCreated | True   | NetworkAttachmentDefinitionCreated  | 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]'   |
| NetworkCreated | False  | SyncError                           | failed to generate NetworkAttachmentDefinition                                                                                            |
| NetworkCreated | False  | SyncError                           | failed to update NetworkAttachmentDefinition                                                                                              |
| NetworkCreated | False  | SyncError                           | primary network already exist in namespace "<namespace_name>": "<primary_network_name>"                                                   |
| NetworkCreated | False  | SyncError                           | failed to create NetworkAttachmentDefinition: create NAD error                                                                            |
| NetworkCreated | False  | SyncError                           | foreign NetworkAttachmentDefinition with the desired name already exist                                                                   |
| NetworkCreated | False  | SyncError                           | failed to add finalizer to UserDefinedNetwork                                                                                             |
| NetworkCreated | False  | NetworkAttachmentDefinitionDeleted  | NetworkAttachmentDefinition is being deleted: [<namespace>/<nad_name>]                                                                    |
Table 4.5. NetworkAllocationSucceeded condition types (UserDefinedNetwork CRs)

| Condition type             | Status | Reason                     | Message                                                                                          |
| -------------------------- | ------ | -------------------------- | -------------------------------------------------------------------------------------------------- |
| NetworkAllocationSucceeded | True   | NetworkAllocationSucceeded | Network allocation succeeded for all synced nodes.                                                  |
| NetworkAllocationSucceeded | False  | InternalError              | Network allocation failed for at least one node: [<node_name>], check UDN events for more info.     |

Table 4.6. Invalid mtu scenarios for the ClusterUserDefinedNetwork CR

Condition type: invalid mtu. One of the following messages is returned when the mtu field is set incorrectly:

| Reason | Message | Resolution |
| ------ | ------- | ---------- |
| The mtu field is set higher than 65536. | spec.network.localnet.mtu in body should be less than 65536. | You must set the mtu field lower than 65536. |
| The mtu field is set lower than 576. | spec.network.localnet.mtu in body should be greater than or equal to 576. | You must set the mtu field greater than or equal to 576. |
| The mtu field must be at least 1280 when using an IPv6 subnet. | MTU should be greater than or equal to 1280 when an IPv6 subnet is used | You must set the mtu field greater than or equal to 1280 when an IPv6 subnet is defined on your user-defined network configuration. |

Table 4.7. Invalid physicalNetworkName scenarios for the ClusterUserDefinedNetwork CR

Condition type: invalid PhysicalNetworkName. One of the following messages is returned when the physicalNetworkName field is set incorrectly:

| Reason | Message | Resolution |
| ------ | ------- | ---------- |
| The name of the physical network is not set. | spec.network.localnet.physicalNetworkName: Required value | You must set the physicalNetworkName field. |
| The name of the physical network does not meet minimum length requirements. | spec.network.localnet.physicalNetworkName in body should be at least 1 chars long | You must set a physical network name that is at least one character in length. |
| The name of the physical network exceeds the maximum character limit of 253. | spec.network.localnet.physicalNetworkName: Too long: may not be more than 253 bytes | You must set a physical network name that does not exceed 253 characters in length. |
| The name of the physical network must not contain , or :. | physicalNetworkName cannot contain "," or ":" characters. | You must remove the , or : from the physical network name. |

Table 4.8. Invalid role scenarios for the ClusterUserDefinedNetwork CR

Condition type: role unset or role is primary. One of the following messages is returned when the spec.network.localnet.role field is set incorrectly:

| Reason | Message | Resolution |
| ------ | ------- | ---------- |
| The role field must be set for your localnet topology. | spec.network.localnet.role: Required value | You must set the role field. |
| Primary is not a supported value for the Localnet topology. | spec.network.localnet.role: Unsupported value: "Primary": supported values: "Secondary" | You must set the role field for your Localnet topology to Secondary, the only accepted value. |

Table 4.9. Invalid subnets and ipam scenarios types for the ClusterUserDefinedNetwork CR
Condition typeReason, Message, Resolution

LocalnetInvalidSubnets

One of the following messages is returned when either the spec.network.localnet.subnets or spec.network.localnet.ipam is set incorrect:

Reason

Message

Resolution

The optional fields, subnets and ipam.mode, have to be set together.

Subnets is required with ipam.mode is Enabled or unset, and forbidden otherwise

You must set the subnets field unless the spec.network.localnet.ipam.mode is explicitly disabled.

The spec.network.localnet.subnets must have an acceptable value when using this optional field.

The ClusterUserDefinedNetwork "localnet-empty-subnets-fail" is invalid: spec.network.localnet.subnets: Invalid value: 0: spec.network.localnet.subnets in body should have at least 1 items

You must set an acceptable value for spec.network.localnet.subnets. Acceptable values are IPv4 and IPv6 Classless Inter-Domain Routing (CIDR) ranges that do not overlap with any CIDR ranges used by OpenShift Container Platform.

The subnet field must be set when using the optional spec.network.localnet.excludeSubnets field.

excludeSubnets must be unset when subnets is unset

You must set the spec.network.localnet.subnets field when using the spec.network.localnet.excludeSubnet field.

The excludeSubnets must be a value within the subnets field.

excludeSubnets must be subnetworks of the networks specified in the subnets field

You must set the value for the excludeSubnets field to be within the subnets field. For example, a subnets value of 192.168.100.0/24 and an excludeSubnets value of 192.168.200.1/32 is invalid.

The CIDR range is invalid.

The ClusterUserDefinedNetwork "localnet-subnets-invalid-ipv4-cidr-fail" is invalid: spec.network.localnet.subnets[0]: Invalid value: "string": CIDR is invalid

You must set an acceptable CIDR range for spec.network.localnet.subnets field. Acceptable values are IPv4 and IPv6 CIDR ranges which are not in use or reserved by OpenShift Container Platform.

You must set the subnets field when the ipam.mode is Enabled or when the IPAM mode is unset because the default value is Enabled.

Subnets is required with ipam.mode is Enabled or unset, and forbidden otherwise.

You must set the spec.network.localnet.subnets field unless the spec.network.localnet.ipam.mode is explicitly disabled.

Setting two CIDR ranges for the spec.network.localnet.subnets field requires that one be IPv4 and the other be IPv6.

Invalid value… When 2 CIDRs are set, they must be from different IP families.

You must change one of your CIDR ranges to a different IP family.

The spec.network.localnet.ipam.mode is Disabled but the spec.network.localnet.lifecycle has a value of Persistent.

lifecycle Persistent is only supported when ipam.mode is Enabled

You must set the ipam.mode to Enabled when the optional field lifecycle has a value of Persistent.

Table 4.10. Invalid vlan scenarios for the ClusterUserDefinedNetwork CR
Condition type | Reason, Message, Resolution

invalid vlan or invalid mode

One of the following messages is returned when the spec.network.localnet.vlan field is set incorrectly:

Reason

Message

Resolution

The spec.network.localnet.vlan.mode field must be set to a supported value.

spec.network.localnet.vlan.mode: Unsupported value: "Disabled": supported values: "Access"

You must set the spec.network.localnet.vlan.mode field to Access mode.

The spec.network.localnet.vlan.access field must be set when spec.network.localnet.vlan.mode is set to Access mode.

vlan access config is required when vlan mode is 'Access', and forbidden otherwise.

You must set the spec.network.localnet.vlan.access field when using Access mode.

The spec.network.localnet.vlan.access.id value must be set when using Access mode.

spec.network.localnet.vlan.access.id: Required value

You must set a value for spec.network.localnet.vlan.access.id.

Acceptable values for access.id are greater than or equal to 1.

spec.network.localnet.vlan.access.id in body should be greater than or equal to 1

You must set a value of 1 or greater for the access.id field.

Acceptable values for access.id are less than or equal to 4094.

spec.network.localnet.vlan.access.id in body should be less than or equal to 4094

You must set a value of 4094 or less for the access.id field.
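
The following sketch shows a ClusterUserDefinedNetwork object that satisfies the validation rules described in the preceding tables. The metadata name, the namespaceSelector, and the specific CIDR and VLAN values are illustrative assumptions; only the spec.network.localnet fields discussed in these tables are significant here.

Example ClusterUserDefinedNetwork object with a valid localnet configuration

apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-localnet
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: ns1
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: localnet1
      subnets:
      - 192.168.100.0/24
      excludeSubnets:
      - 192.168.100.1/32
      vlan:
        mode: Access
        access:
          id: 200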

4.2.1.8. Opening default network ports on user-defined network pods

By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the OpenShift Container Platform image registry, cannot initiate connections to UDN pods.

To allow default network pods to connect to a user-defined network pod, you can use the k8s.ovn.org/open-default-ports annotation. This annotation opens specific ports on the user-defined network pod for access from the default network.

The following pod specification allows incoming TCP connections on port 80 and UDP traffic on port 53 from the default network:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/open-default-ports: |
      - protocol: tcp
        port: 80
      - protocol: udp
        port: 53
# ...
Note

Open ports are accessible on the pod’s default network IP, not its UDN network IP.
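
As an informal check, and assuming the client image provides curl, you can exercise an opened port from a pod on the default network by targeting the UDN pod's default network IP address. Both values in the following command are placeholders:

$ oc exec -it <default_network_pod> -- curl http://<udn_pod_default_network_ip>:80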

4.2.2. Creating primary networks using a NetworkAttachmentDefinition

The following sections explain how to create and manage primary networks using the NetworkAttachmentDefinition (NAD) resource.

4.2.2.1. Approaches to managing a primary network

You can manage the life cycle of a primary network created by NAD with one of the following two approaches:

  • By modifying the Cluster Network Operator (CNO) configuration. With this method, the CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for a primary network that uses a DHCP-assigned IP address.
  • By applying a YAML manifest. With this method, you can manage the primary network directly by creating a NetworkAttachmentDefinition object. This approach allows you to invoke multiple CNI plugins to attach primary network interfaces to a pod.

The two approaches are mutually exclusive, and you can use only one approach to manage a primary network at a time. For either approach, the primary network is managed by a Container Network Interface (CNI) plugin that you configure.

Note

When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:

$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>

4.2.2.2. Creating a primary network attachment with the Cluster Network Operator

The Cluster Network Operator (CNO) manages additional network definitions. When you specify a primary network to create, the CNO creates the NetworkAttachmentDefinition CRD automatically.

Important

Do not edit the NetworkAttachmentDefinition CRDs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your primary network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Optional: Create the namespace for the primary networks:

    $ oc create namespace <namespace_name>
  2. To edit the CNO configuration, enter the following command:

    $ oc edit networks.operator.openshift.io cluster
  3. Modify the CR by adding the configuration for the primary network that you are creating, as in the following example CR.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      # ...
      additionalNetworks:
      - name: tertiary-net
        namespace: namespace2
        type: Raw
        rawCNIConfig: |-
          {
            "cniVersion": "0.3.1",
            "name": "tertiary-net",
            "type": "ipvlan",
            "master": "eth1",
            "mode": "l2",
            "ipam": {
              "type": "static",
              "addresses": [
                {
                  "address": "192.168.1.23/24"
                }
              ]
            }
          }
  4. Save your changes and quit the text editor to commit your changes.

Verification

  • Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. There might be a delay before the CNO creates the CRD.

    $ oc get network-attachment-definitions -n <namespace>

    where:

    <namespace>
    Specifies the namespace for the network attachment that you added to the CNO configuration.

    Example output

    NAME                 AGE
    test-network-1       14m

4.2.2.2.1. Configuration for a primary network attachment

A primary network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group.

The configuration for the API is described in the following table:

Table 4.11. NetworkAttachmentDefinition API fields
Field | Type | Description

metadata.name

string

The name for the primary network.

metadata.namespace

string

The namespace that the object is associated with.

spec.config

string

The CNI plugin configuration in JSON format.

4.2.2.3. Creating a primary network attachment by applying a YAML manifest

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in as a user with cluster-admin privileges.
  • You are working in the namespace where the NAD is to be deployed.

Procedure

  1. Create a YAML file with your primary network configuration, such as in the following example:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: next-net
    spec:
      config: |-
        {
          "cniVersion": "0.3.1",
          "name": "work-network",
          "namespace": "namespace2", 
    1
    
          "type": "host-device",
          "device": "eth1",
          "ipam": {
            "type": "dhcp"
          }
        }
    1
    Optional: You can specify a namespace to which the NAD is applied. If you are working in the namespace where the NAD is to be deployed, this setting is not necessary.
  2. To create the primary network, enter the following command:

    $ oc apply -f <file>.yaml

    where:

    <file>
    Specifies the name of the file that contains the YAML manifest.
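
Verification

  • Optional: Confirm that the NetworkAttachmentDefinition object was created by running the following command. There might be a delay before the object appears. The name in the output corresponds to the metadata.name value in your manifest, next-net in the preceding example.

    $ oc get network-attachment-definitions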

4.3. Secondary networks

4.3.1. Creating secondary networks on OVN-Kubernetes

As a cluster administrator, you can configure a secondary network for your cluster using the NetworkAttachmentDefinition (NAD) resource.

Note

Support for user-defined networks as a secondary network will be added in a future version of OpenShift Container Platform.

4.3.1.1. Configuration for an OVN-Kubernetes secondary network

The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD).

Note

Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CRD.

You can configure an OVN-Kubernetes secondary network in layer 2, layer 3, or localnet topologies. For more information about features supported on these topologies, see "UserDefinedNetwork and NetworkAttachmentDefinition support matrix".

The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.

Note

Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported.

4.3.1.1.1. Supported platforms for OVN-Kubernetes secondary network

You can use an OVN-Kubernetes secondary network with the following supported platforms:

  • Bare metal
  • IBM Power®
  • IBM Z®
  • IBM® LinuxONE
  • VMware vSphere
  • Red Hat OpenStack Platform (RHOSP)
4.3.1.1.2. OVN-Kubernetes network plugin JSON configuration table

The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin:

Table 4.12. OVN-Kubernetes network plugin JSON configuration table
Field | Type | Description

cniVersion

string

The CNI specification version. The required value is 0.3.1.

name

string

The name of the network. These networks are not namespaced. For example, a network named l2-network can be referenced by NetworkAttachmentDefinition custom resources (CRs) that exist in different namespaces. This configuration allows pods that use the NetworkAttachmentDefinition CR in different namespaces to communicate over the same secondary network. However, the NetworkAttachmentDefinition CRs must share the same network-specific parameters, such as topology, subnets, mtu, excludeSubnets, and vlanID. The vlanID parameter applies only when the topology field is set to localnet.

type

string

The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay.

topology

string

The topological configuration for the network. Must be one of layer2 or localnet.

subnets

string

The subnet to use for the network across the cluster.

For "topology":"layer2" deployments, IPv6 (2001:DBB::/64) and dual-stack (192.168.100.0/24,2001:DBB::/64) subnets are supported.

When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing.

mtu

string

The maximum transmission unit (MTU). The default value, 1300, is automatically set by the kernel.

netAttachDefName

string

The metadata namespace and name of the network attachment definition CRD where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition CRD in namespace ns1 named l2-network, this should be set to ns1/l2-network.

excludeSubnets

string

A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods.

vlanID

integer

If topology is set to localnet, the specified VLAN tag is assigned to traffic from this secondary network. The default is to not assign a VLAN tag.

physicalNetworkName

string

If topology is set to localnet, you can reuse the same physical network mapping with multiple network overlays. Specifies the name of the physical network to which the OVN overlay connects. When omitted, the default value is the name of the localnet network. To isolate the different networks, ensure that a different VLAN tag is used when sharing the same physical network between overlays.
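
The JSON configuration described in this table is embedded in the spec.config field of a NetworkAttachmentDefinition object. The following sketch shows a minimal layer 2 example; the ns1 namespace and the l2-network name are illustrative, and the netAttachDefName value must match the metadata namespace and name of the object that contains the configuration.

Example NetworkAttachmentDefinition with an OVN-Kubernetes layer 2 configuration

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns1/l2-network"
    }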

4.3.1.1.3. Compatibility with multi-network policy

The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details:

Table 4.13. Supported multi-network policy selectors based on subnets CNI configuration
subnets field specified | Allowed multi-network policy selectors

Yes

  • podSelector and namespaceSelector
  • ipBlock

No

  • ipBlock

You can use the k8s.v1.cni.cncf.io/policy-for annotation on a MultiNetworkPolicy object to point to a NetworkAttachmentDefinition (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies. The following example multi-network policy is valid only if the subnets field is defined in the secondary network CNI configuration for the secondary network named blue2:

Example multi-network policy that uses a pod selector

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: blue2 
1

spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
1
The k8s.v1.cni.cncf.io/policy-for annotation identifies the NetworkAttachmentDefinition CR, named blue2 in this example, that defines the network to which the policy applies.

The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes secondary network:

Example multi-network policy that uses an IP block selector

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name:  ingress-ipblock
  annotations:
    k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.200.0.0/30

4.3.1.1.4. Configuration for a localnet switched topology

The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network.

You must map a secondary network to the OVN bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS).

You can create a NodeNetworkConfigurationPolicy (NNCP) object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: ''. With this declarative approach, the NMState Operator applies secondary network configuration to all nodes specified by the node selector automatically and transparently.

When attaching a secondary network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. Consider the following approaches:

  • If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly.
  • If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides for traffic isolation from your primary cluster network.

The localnet1 network is mapped to the br-ex bridge in the following example:

Example mapping for sharing a bridge

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping 
1

spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' 
2

  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1 
3

        bridge: br-ex 
4

        state: present 
5

1
The name for the configuration object.
2
A node selector that specifies the nodes to apply the node network configuration policy to.
3
The name of the secondary network from which traffic is forwarded to the OVS bridge. This name must match the value of the physicalNetworkName field, or the value of the spec.config.name field when physicalNetworkName is omitted, in the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes secondary network.
4
The name of the OVS bridge on the node. This value is required only if you specify state: present.
5
The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present.

The following JSON example configures a localnet secondary network that is named localnet1:

{
  "cniVersion": "0.3.1",
  "name": "ns1-localnet-network",
  "type": "ovn-k8s-cni-overlay",
  "topology":"localnet",
  "physicalNetworkName": "localnet1",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "mtu": 1500,
  "netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "10.100.200.0/29"
}

In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as a secondary network.

Example mapping for nodes with multiple interfaces

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-multiple-networks 
1

spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' 
2

  desiredState:
    interfaces:
    - name: ovs-br1 
3

      description: |-
        A dedicated OVS bridge with eth1 as a port
        allowing all VLANs and untagged traffic
      type: ovs-bridge
      state: up
      bridge:
        allow-extra-patch-ports: true
        options:
          stp: false
          mcast-snooping-enable: true 
4

        port:
        - name: eth1 
5

    ovn:
      bridge-mappings:
      - localnet: localnet2 
6

        bridge: ovs-br1 
7

        state: present 
8

1
Specifies the name of the configuration object.
2
Specifies a node selector that identifies the nodes to which the node network configuration policy applies.
3
Specifies a new OVS bridge that operates separately from the default bridge used by OVN-Kubernetes for cluster traffic.
4
Specifies whether to enable multicast snooping. When enabled, multicast snooping prevents network devices from flooding multicast traffic to all network members. By default, an OVS bridge does not enable multicast snooping. The default value is false.
5
Specifies the network device on the host system to associate with the new OVS bridge.
6
Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the physicalNetworkName field, or the value of the spec.config.name field when physicalNetworkName is omitted, in the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes secondary network.
7
Specifies the name of the OVS bridge on the node. The value is required only when state: present is set.
8
Specifies the state of the mapping. Valid values are present to add the bridge or absent to remove the bridge. The default value is present.

The following JSON example configures a localnet secondary network that is named localnet2:

{
  "cniVersion": "0.3.1",
  "name": "ns1-localnet-network",
  "type": "ovn-k8s-cni-overlay",
  "topology":"localnet",
  "physicalNetworkName": "localnet2",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "mtu": 1500,
  "netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "10.100.200.0/29"
}
4.3.1.1.4.1. Configuration for a layer 2 switched topology

The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.

Note

Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.

The following JSON example configures a switched secondary network:

{
  "cniVersion": "0.3.1",
  "name": "l2-network",
  "type": "ovn-k8s-cni-overlay",
  "topology":"layer2",
  "subnets": "10.100.200.0/24",
  "mtu": 1300,
  "netAttachDefName": "ns1/l2-network",
  "excludeSubnets": "10.100.200.0/29"
}
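
Because layer 2 topologies also support dual-stack deployments, a variant of the preceding configuration can specify an IPv4 and an IPv6 range together. The following sketch reuses the dual-stack subnets value shown in the JSON configuration table; the l2-network-dual name is illustrative only.

Example dual-stack layer 2 configuration

{
  "cniVersion": "0.3.1",
  "name": "l2-network-dual",
  "type": "ovn-k8s-cni-overlay",
  "topology":"layer2",
  "subnets": "192.168.100.0/24,2001:DBB::/64",
  "mtu": 1300,
  "netAttachDefName": "ns1/l2-network-dual"
}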
4.3.1.1.5. Configuring pods for secondary networks

You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation.

The following example provisions a pod with a secondary attachment that uses the layer 2 attachment configuration presented in this guide.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: l2-network
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
4.3.1.1.6. Configuring pods with a static IP address

The following example provisions a pod with a static IP address.

Note
  • You can specify the IP address for the secondary network attachment of a pod only when the secondary network attachment, a namespace-scoped object, uses a layer 2 or localnet topology.
  • Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "l2-network", 
1

        "mac": "02:03:04:05:06:07", 
2

        "interface": "myiface1", 
3

        "ips": [
          "192.0.2.20/24"
          ] 
4

      }
    ]'
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
1
The name of the network. This value must be unique across all NetworkAttachmentDefinition CRDs.
2
The MAC address to be assigned for the interface.
3
The name of the network interface to be created for the pod.
4
The IP addresses to be assigned to the network interface.

4.3.2. Creating secondary networks with other CNI plugins

The specific configuration fields for secondary networks are described in the following sections.

4.3.2.1. Configuration for a bridge secondary network

The following object describes the configuration parameters for the Bridge CNI plugin:

Table 4.14. Bridge CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: bridge.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

bridge

string

Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0.

ipMasq

boolean

Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge’s IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false.

isGateway

boolean

Optional: Set to true to assign an IP address to the bridge. The default value is false.

isDefaultGateway

boolean

Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically.

forceAddress

boolean

Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false.

hairpinMode

boolean

Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false.

promiscMode

boolean

Optional: Set to true to enable promiscuous mode on the bridge. The default value is false.

vlan

string

Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.

preserveDefaultVlan

string

Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true.

vlanTrunk

list

Optional: Assign a VLAN trunk tag. The default value is none.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

enabledad

boolean

Optional: Enables duplicate address detection for the container side veth. The default value is false.

macspoofchk

boolean

Optional: Enables MAC spoof checking, limiting the traffic originating from the container to the MAC address of the interface. The default value is false.

Note

The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.

Note

To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

$  bridge vlan add vid VLAN_ID dev DEV
4.3.2.1.1. Bridge CNI plugin configuration example

The following example configures a secondary network named bridge-net:

{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
    }
}

4.3.2.2. Configuration for a host device secondary network

Note

Specify your network device by setting only one of the following parameters: device, hwaddr, kernelpath, or pciBusID.

The following object describes the configuration parameters for the host-device CNI plugin:

Table 4.15. Host device CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: host-device.

device

string

Optional: The name of the device, such as eth0.

hwaddr

string

Optional: The device hardware MAC address.

kernelpath

string

Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6.

pciBusID

string

Optional: The PCI address of the network device, such as 0000:00:1f.6.

4.3.2.2.1. host-device configuration example

The following example configures a secondary network named hostdev-net:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1"
}

4.3.2.3. Configuration for a VLAN secondary network

The following object describes the configuration parameters for the VLAN (vlan) CNI plugin:

Table 4.16. VLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: vlan.

master

string

The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.

vlanId

integer

Set the ID of the vlan.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

dns

object

Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers.

linkInContainer

boolean

Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface.

Important

A NetworkAttachmentDefinition custom resource definition (CRD) with a vlan configuration can be used only on a single pod in a node because the CNI plugin cannot create multiple vlan subinterfaces with the same vlanId on the same master interface.

4.3.2.3.1. VLAN configuration example

The following example demonstrates a vlan configuration with a secondary network that is named vlan-net:

{
  "name": "vlan-net",
  "cniVersion": "0.3.1",
  "type": "vlan",
  "master": "eth0",
  "mtu": 1500,
  "vlanId": 5,
  "linkInContainer": false,
  "ipam": {
      "type": "host-local",
      "subnet": "10.1.1.0/24"
  },
  "dns": {
      "nameservers": [ "10.1.1.1", "8.8.8.8" ]
  }
}

4.3.2.4. Configuration for an IPVLAN secondary network

The following object describes the configuration parameters for the IPVLAN (ipvlan) CNI plugin:

Table 4.17. IPVLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: ipvlan.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained.

mode

string

Optional: The operating mode for the virtual network. The value must be l2, l3, or l3s. The default value is l2.

master

string

Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

linkInContainer

boolean

Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface.

Important
  • The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container is not able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).
  • A single master interface cannot simultaneously be configured to use both macvlan and ipvlan.
  • For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the previous result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the previous result is used to configure the ipvlan interface.
4.3.2.4.1. IPVLAN CNI plugin configuration example

The following example configures a secondary network named ipvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
       {
         "address": "192.168.10.10/24"
       }
    ]
  }
}

4.3.2.5. Configuration for a MACVLAN secondary network

The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin:

Table 4.18. MACVLAN CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: macvlan.

ipam

object

The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.

mode

string

Optional: Configures traffic visibility on the virtual network. Must be either bridge, passthru, private, or vepa. If a value is not provided, the default value is bridge.

master

string

Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

linkInContainer

boolean

Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface.

Note

If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.

4.3.2.5.1. MACVLAN CNI plugin configuration example

The following example configures a secondary network named macvlan-net:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
    }
}

4.3.2.6. Configuration for a TAP secondary network

The following object describes the configuration parameters for the TAP CNI plugin:

Table 4.19. TAP CNI plugin JSON configuration object
Field | Type | Description

cniVersion

string

The CNI specification version. The 0.3.1 value is required.

name

string

The value for the name parameter you provided previously for the CNO configuration.

type

string

The name of the CNI plugin to configure: tap.

mac

string

Optional: Request the specified MAC address for the interface.

mtu

integer

Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.

selinuxcontext

string

Optional: The SELinux context to associate with the tap device.

Note

The value system_u:system_r:container_t:s0 is required for OpenShift Container Platform.

multiQueue

boolean

Optional: Set to true to enable multi-queue.

owner

integer

Optional: The user owning the tap device.

group

integer

Optional: The group owning the tap device.

bridge

string

Optional: Set the tap device as a port of an already existing bridge.

4.3.2.6.1. Tap configuration example

The following example configures a secondary network named mynet:

{
 "name": "mynet",
 "cniVersion": "0.3.1",
 "type": "tap",
 "mac": "00:11:22:33:44:55",
 "mtu": 1500,
 "selinuxcontext": "system_u:system_r:container_t:s0",
 "multiQueue": true,
 "owner": 0,
 "group": 0
 "bridge": "br1"
}
4.3.2.6.2. Setting SELinux boolean for the TAP CNI plugin

To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO).

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a new YAML file, such as setsebool-container-use-devices.yaml, with the following details:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-setsebool
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: true
            name: setsebool.service
            contents: |
              [Unit]
              Description=Set SELinux boolean for the TAP CNI plugin
              Before=kubelet.service
    
              [Service]
              Type=oneshot
              ExecStart=/usr/sbin/setsebool container_use_devices=on
              RemainAfterExit=true
    
              [Install]
              WantedBy=multi-user.target graphical.target
  2. Create the new MachineConfig object by running the following command:

    $ oc apply -f setsebool-container-use-devices.yaml
    Note

    Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied.

  3. Verify the change is applied by running the following command:

    $ oc get machineconfigpools

    Expected output

    NAME        CONFIG                                                UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master      rendered-master-e5e0c8e8be9194e7c5a882e047379cfa      True      False      False      3              3                   3                     0                      7d2h
    worker      rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2      True      False      False      3              3                   3                     0                      7d

    Note

    All nodes should be in the updated and ready state.

4.3.2.7. Configuring routes using the route-override plugin on a secondary network

The following object describes the configuration parameters for the route-override CNI plugin:

Table 4.20. Route override CNI plugin JSON configuration object
Field | Type | Description

type

string

The name of the CNI plugin to configure: route-override.

flushroutes

boolean

Optional: Set to true to flush any existing routes.

flushgateway

boolean

Optional: Set to true to flush the default route, that is, the gateway route.

delroutes

object

Optional: Specify the list of routes to delete from the container namespace.

addroutes

object

Optional: Specify the list of routes to add to the container namespace. Each route is a dictionary with dst and optional gw fields. If gw is omitted, the plugin uses the default gateway value.

skipcheck

boolean

Optional: Set this to true to skip the check command. By default, CNI plugins verify the network setup during the container lifecycle. When modifying routes dynamically with route-override, skipping this check ensures the final configuration reflects the updated routes.

4.3.2.7.1. Route-override plugin configuration example

The route-override CNI plugin is designed to be used when chained with a parent CNI plugin. It does not operate independently, but relies on the parent CNI plugin to first create the network interface and assign IP addresses before it can modify the routing rules.

The following example configures a secondary network named mymacvlan. The parent CNI creates a network interface attached to eth1 and assigns an IP address in the 192.168.1.0/24 range using host-local IPAM. The route-override CNI is then chained to the parent CNI and modifies the routing rules by flushing existing routes, deleting the route to 192.168.0.0/24, and adding a new route for 192.168.0.0/24 with a custom gateway.

{
    "cniVersion": "0.3.0",
    "name": "mymacvlan",
    "plugins": [
        {
            "type": "macvlan",         
1

            "master": "eth1",
            "mode": "bridge",
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24"
            }
        },
        {
            "type": "route-override",    
2

            "flushroutes": true,
            "delroutes": [
                {
                    "dst": "192.168.0.0/24"
                }
            ],
            "addroutes": [
                {
                    "dst": "192.168.0.0/24",
                    "gw": "10.1.254.254"
                }
            ]
        }
    ]
}
1
The parent CNI creates a network interface attached to eth1.
2
The chained route-override CNI modifies the routing rules.

4.3.3. Attaching a pod to a secondary network

As a cluster user you can attach a pod to a secondary network.

4.3.3.1. Adding a pod to a secondary network

You can add a pod to a secondary network. The pod continues to send normal cluster-related network traffic over the default network.

A secondary network is attached to a pod when the pod is created. However, if a pod already exists, you cannot attach a secondary network to it.

The pod must be in the same namespace as the secondary network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster.

Procedure

  1. Add an annotation to the Pod object. Only one of the following annotation formats can be used:

    1. To attach a secondary network without any customization, add an annotation with the following format. Replace <network> with the name of the secondary network to associate with the pod:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
      1
      1
      To specify more than one secondary network, separate each network with a comma. Do not include whitespace around the commas. If you specify the same secondary network multiple times, that pod will have multiple network interfaces attached to that network.
    2. To attach a secondary network with customizations, add an annotation with the following format:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: |-
            [
              {
                "name": "<network>", 
      1
      
                "namespace": "<namespace>", 
      2
      
                "default-route": ["<default-route>"] 
      3
      
              }
            ]
      1
      Specify the name of the secondary network defined by a NetworkAttachmentDefinition object.
      2
      Specify the namespace where the NetworkAttachmentDefinition object is defined.
      3
      Optional: Specify an override for the default route, such as 192.168.17.1.
  2. To create the pod, enter the following command. Replace <name> with the name of the pod.

    $ oc create -f <name>.yaml
  3. Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod.

    $ oc get pod <name> -o yaml

    In the following example, the example-pod pod is attached to the net1 secondary network:

    $ oc get pod example-pod -o yaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-bridge
        k8s.v1.cni.cncf.io/network-status: |- 
    1
    
          [{
              "name": "ovn-kubernetes",
              "interface": "eth0",
              "ips": [
                  "10.128.2.14"
              ],
              "default": true,
              "dns": {}
          },{
              "name": "macvlan-bridge",
              "interface": "net1",
              "ips": [
                  "20.2.2.100"
              ],
              "mac": "22:2f:60:a5:f8:00",
              "dns": {}
          }]
      name: example-pod
      namespace: default
    spec:
      ...
    status:
      ...
    1
    The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of a secondary network attached to the pod. The annotation value is stored as a plain text value.
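
If you want to view only the network status annotation rather than the full pod manifest, one option is a JSONPath query. The example-pod name comes from the preceding output:

$ oc get pod example-pod -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'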
4.3.3.1.1. Specifying pod-specific addressing and routing options

When attaching a pod to a secondary network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations.

Prerequisites

  • The pod must be in the same namespace as the secondary network.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster.

Procedure

To add a pod to a secondary network while specifying addressing and/or routing options, complete the following steps:

  1. Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit.

    $ oc edit pod <name>
  2. In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks parameter accepts a JSON string of a list of objects that reference NetworkAttachmentDefinition custom resource (CR) names and that specify additional properties.

    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 
    1
    1
    Replace <network> with a JSON object as shown in the following examples. The single quotes are required.
  3. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
        {
          "name": "net1"
        },
        {
          "name": "net2", 
    1
    
          "default-route": ["192.0.2.1"] 
    2
    
        }]'
    spec:
      containers:
      - name: example-pod
        command: ["/bin/bash", "-c", "sleep 2000000000000"]
        image: centos/tools
    1
    The name key is the name of the secondary network to associate with the pod.
    2
    The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active.

The default route will cause any traffic that is not specified in other routes to be routed to the gateway.

Important

Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface.

To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod.

$ oc exec -it <pod_name> -- ip route
Note

You may also reference the pod’s k8s.v1.cni.cncf.io/network-status to see which secondary network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects.

To set a static IP address or MAC address for a pod, you can use JSON-formatted annotations. This requires that you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO.

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 
1

namespace: <namespace> 
2

rawCNIConfig: '{ 
3

  ...
}'
type: Raw

1
Specify a name for the secondary network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plugin configuration in JSON format, which is based on the following template.

The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin:

macvlan CNI plugin JSON configuration object using static IP and MAC address

{
  "cniVersion": "0.3.1",
  "name": "<name>", 
1

  "plugins": [{ 
2

      "type": "macvlan",
      "capabilities": { "ips": true }, 
3

      "master": "eth0", 
4

      "mode": "bridge",
      "ipam": {
        "type": "static"
      }
    }, {
      "capabilities": { "mac": true }, 
5

      "type": "tuning"
    }]
}

1
Specifies the name for the secondary network attachment to create. The name must be unique within the specified namespace.
2
Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.
3
Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.
4
Specifies the interface that the macvlan plugin uses.
5
Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.

The preceding network attachment can be referenced in a JSON-formatted annotation, along with keys that specify which static IP and MAC address are assigned to a given pod.

Edit the pod with:

$ oc edit pod <name>

Example pod annotation that uses a static IP and MAC address

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>", 
1

        "ips": [ "192.0.2.205/24" ], 
2

        "mac": "CA:FE:C0:FF:EE:00" 
3

      }
    ]'

1
Use the <name> as provided when creating the rawCNIConfig above.
2
Provide an IP address including the subnet mask.
3
Provide the MAC address.
Note

Static IP addresses and MAC addresses do not have to be used at the same time. You can use them individually or together.

To verify the IP address and MAC properties of a pod with secondary networks, use the oc command to execute the ip command within a pod.

$ oc exec -it <pod_name> -- ip a

4.3.4. Configuring multi-network policy

Administrators can use the MultiNetworkPolicy API to create multiple network policies that manage traffic for pods attached to secondary networks. For example, you can create policies that allow or deny traffic based on specific ports, IP addresses or ranges, or labels.

Multi-network policies can be used to manage traffic on secondary networks in the cluster. These policies cannot manage the default cluster network or the primary network of user-defined networks.

As a cluster administrator, you can configure a multi-network policy for any of the following network types:

  • Single-Root I/O Virtualization (SR-IOV)
  • MAC Virtual Local Area Network (MacVLAN)
  • IP Virtual Local Area Network (IPVLAN)
  • Bond Container Network Interface (CNI) over SR-IOV
  • OVN-Kubernetes secondary networks
Note

Support for configuring multi-network policies for SR-IOV secondary networks is only supported with kernel network interface controllers (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications.

4.3.4.1. Differences between multi-network policy and network policy

Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences:

  • You must use the MultiNetworkPolicy API:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
  • You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command, where <name> is the name of a multi-network policy. An example follows this list.
  • You can use the k8s.v1.cni.cncf.io/policy-for annotation on a MultiNetworkPolicy object to point to a NetworkAttachmentDefinition (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies.

    Example multi-network policy that includes the k8s.v1.cni.cncf.io/policy-for annotation

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>

    where:

    <namespace_name>
    Specifies the namespace name.
    <network_name>
    Specifies the name of a network attachment definition.
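
For example, to list the multi-network policies in the current namespace and inspect one of them by name, you might run commands such as the following. The allow-same-namespace name is a placeholder taken from the examples later in this section:

$ oc get multi-networkpolicy

$ oc get multi-networkpolicy allow-same-namespace -o yaml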

4.3.4.2. Enabling multi-network policy for the cluster

As a cluster administrator, you can enable multi-network policy support on your cluster.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  1. Create the multinetwork-enable-patch.yaml file with the following YAML:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      useMultiNetworkPolicy: true
  2. Configure the cluster to enable multi-network policy:

    $ oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml

    Example output

    network.operator.openshift.io/cluster patched

4.3.4.3. Supporting multi-network policies in IPv6 networks

The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link.

The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the useMultiNetworkPolicy parameter is set to true.

To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of rules in every pod affected by a multi-network policy:

Multi-network policy custom rules

kind: ConfigMap
apiVersion: v1
metadata:
  name: multi-networkpolicy-custom-rules
  namespace: openshift-multus
data:

  custom-v6-rules.txt: |
    # accept NDP
    -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 
1

    -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 
2

    # accept RA/RS
    -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 
3

    -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 
4

1
This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes.
2
This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender.
3
This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information.
4
This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts.
Note

You cannot edit these predefined rules.

These rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues.

4.3.4.4. Working with multi-network policy

As a cluster administrator, you can create, edit, view, and delete multi-network policies.

4.3.4.4.1. Prerequisites
  • You have enabled multi-network policy support for your cluster.
4.3.4.4.2. Creating a multi-network policy using the CLI

To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy rule:

    1. Create a <policy_name>.yaml file:

      $ touch <policy_name>.yaml

      where:

      <policy_name>
      Specifies the multi-network policy file name.
    2. Define a multi-network policy in the file that you just created, such as in the following examples:

      Deny ingress from all pods in all namespaces

      This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: deny-by-default
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
      spec:
        podSelector: {}
        policyTypes:
        - Ingress
        ingress: []

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Allow ingress from all pods in the same namespace

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: allow-same-namespace
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
      spec:
        podSelector:
        ingress:
        - from:
          - podSelector: {}

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Allow ingress traffic to one pod from a particular namespace

      This policy allows traffic to pods with the label pod=pod-a from pods running in the namespace-y namespace.

      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: allow-traffic-pod
        annotations:
          k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
      spec:
        podSelector:
         matchLabels:
            pod: pod-a
        policyTypes:
        - Ingress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                 kubernetes.io/metadata.name: namespace-y

      where:

      <network_name>
      Specifies the name of a network attachment definition.

      Restrict traffic to a service

      When applied, this policy ensures that every pod with both the app=bookstore and role=api labels can be accessed only by pods with the label app=bookstore. In this example, the application could be a REST API server, marked with the labels app=bookstore and role=api.

      This example addresses the following use cases:

      • Restricting the traffic to a service to only the other microservices that need to use it.
      • Restricting the connections to a database to only permit the application using it.

        apiVersion: k8s.cni.cncf.io/v1beta1
        kind: MultiNetworkPolicy
        metadata:
          name: api-allow
          annotations:
            k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
        spec:
          podSelector:
            matchLabels:
              app: bookstore
              role: api
          ingress:
          - from:
              - podSelector:
                  matchLabels:
                    app: bookstore

        where:

        <network_name>
        Specifies the name of a network attachment definition.
  2. To create the multi-network policy object, enter the following command:

    $ oc apply -f <policy_name>.yaml -n <namespace>

    where:

    <policy_name>
    Specifies the multi-network policy file name.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created

Note

If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

4.3.4.4.3. Editing a multi-network policy

You can edit a multi-network policy in a namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  1. Optional: To list the multi-network policy objects in a namespace, enter the following command:

    $ oc get multi-networkpolicy -n <namespace>

    where:

    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  2. Edit the multi-network policy object.

    • If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command.

      $ oc apply -n <namespace> -f <policy_file>.yaml

      where:

      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
      <policy_file>
      Specifies the name of the file containing the network policy.
    • If you need to update the multi-network policy object directly, enter the following command:

      $ oc edit multi-networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the network policy.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  3. Confirm that the multi-network policy object is updated.

    $ oc describe multi-networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the multi-network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Note

If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

4.3.4.4.4. Viewing multi-network policies using the CLI

You can examine the multi-network policies in a namespace.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  • List multi-network policies in a namespace:

    • To view multi-network policy objects defined in a namespace, enter the following command:

      $ oc get multi-networkpolicy
    • Optional: To examine a specific multi-network policy, enter the following command:

      $ oc describe multi-networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the multi-network policy to inspect.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Note

If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

4.3.4.4.5. Deleting a multi-network policy using the CLI

You can delete a multi-network policy in a namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace where the multi-network policy exists.

Procedure

  • To delete a multi-network policy object, enter the following command:

    $ oc delete multi-networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the multi-network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted

Note

If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

4.3.4.4.6. Creating a default deny all multi-network policy

This policy blocks all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies and traffic between host-networked pods. This procedure enforces a strong deny policy by applying a deny-by-default policy in the my-project namespace.

Warning

If you do not configure a NetworkPolicy custom resource (CR) that allows traffic, the following policy might cause communication problems across your cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: deny-by-default
      namespace: my-project 1
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2
    spec:
      podSelector: {} 3
      policyTypes: 4
      - Ingress 5
      ingress: [] 6
    1
    Specifies the namespace in which to deploy the policy. For example, the my-project namespace.
    2
    Specifies the namespace name followed by the network attachment definition name.
    3
    If this field is empty, the configuration matches all the pods. Therefore, the policy applies to all pods in the my-project namespace.
    4
    Specifies a list of rule types that the NetworkPolicy relates to.
    5
    Specifies Ingress only policyTypes.
    6
    Specifies ingress rules. If not specified, all incoming traffic is dropped to all pods.
  2. Apply the policy by entering the following command:

    $ oc apply -f deny-by-default.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created

4.3.4.4.7. Creating a multi-network policy to allow traffic from external clients

With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows external services to access the pod, either directly from the public internet or by using a load balancer. Traffic is allowed only to a pod with the label app=web.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-external
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
    spec:
      policyTypes:
      - Ingress
      podSelector:
        matchLabels:
          app: web
      ingress:
        - {}
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-external.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created

    This policy allows traffic from all resources, including external traffic as illustrated in the following diagram:

Allow traffic from external clients
4.3.4.4.8. Creating a multi-network policy allowing traffic to an application from all namespaces
Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-all-namespaces
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector: {} 2
    1
    Applies the policy only to app=web pods in the default namespace.
    2
    Selects all pods in all namespaces.
    Note

    By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to.

  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-all-namespaces.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to deploy an alpine image in the secondary namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
  3. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

4.3.4.4.9. Creating a multi-network policy allowing traffic to an application from a namespace
Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:

  • Restrict traffic to a production database only to namespaces where production workloads are deployed.
  • Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with cluster-admin privileges.
  • You are working in the namespace that the multi-network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in namespaces that have the label purpose=production. Save the YAML in the web-allow-prod.yaml file:

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: web-allow-prod
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              purpose: production 2
    1
    Applies the policy only to app=web pods in the default namespace.
    2
    Restricts traffic to only pods in namespaces that have the label purpose=production.
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-prod.yaml

    Example output

    multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to create the prod namespace:

    $ oc create namespace prod
  3. Run the following command to label the prod namespace:

    $ oc label namespace/prod purpose=production
  4. Run the following command to create the dev namespace:

    $ oc create namespace dev
  5. Run the following command to label the dev namespace:

    $ oc label namespace/dev purpose=testing
  6. Run the following command to deploy an alpine image in the dev namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
  7. Run the following command in the shell and observe that the request is blocked:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    wget: download timed out

  8. Run the following command to deploy an alpine image in the prod namespace and start a shell:

    $ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
  9. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

4.3.5. Removing a pod from a secondary network

As a cluster user you can remove a pod from a secondary network.

4.3.5.1. Removing a pod from a secondary network

You can remove a pod from a secondary network only by deleting the pod.

Prerequisites

  • A secondary network is attached to the pod.
  • Install the OpenShift CLI (oc).
  • Log in to the cluster.

Procedure

  • To delete the pod, enter the following command:

    $ oc delete pod <name> -n <namespace>
    • <name> is the name of the pod.
    • <namespace> is the namespace that contains the pod.

4.3.6. Editing a secondary network

As a cluster administrator you can modify the configuration for an existing secondary network.

4.3.6.1. Modifying a secondary network attachment definition

As a cluster administrator, you can make changes to an existing secondary network. Any existing pods attached to the secondary network will not be updated.

Prerequisites

  • You have configured a secondary network for your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To edit a secondary network for your cluster, complete the following steps:

  1. Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor:

    $ oc edit networks.operator.openshift.io cluster
  2. In the additionalNetworks collection, update the secondary network with your changes.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the secondary network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes.

    $ oc get network-attachment-definitions <network-name> -o yaml

    For example, the following console output displays a NetworkAttachmentDefinition object that is named net1:

    $ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
    { "cniVersion": "0.3.1", "type": "macvlan",
    "master": "ens5",
    "mode": "bridge",
    "ipam":       {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} }
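
For reference, an updated entry in the additionalNetworks collection, as described in step 2, might look similar to the following sketch. The macvlan master interface and the static IPAM values are illustrative only and must match your environment:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: net1
    namespace: default
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens5",
      "mode": "bridge",
      "ipam": {
        "type": "static",
        "addresses": [
          {"address": "10.128.2.100/23", "gateway": "10.128.2.1"}
        ]
      }
    }'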

4.3.7. Configuring IP address assignment on secondary networks

The following sections give instructions and information for how to configure IP address assignments for secondary networks.

4.3.7.1. Configuration of IP address assignment for a network attachment

For secondary networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment.

The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components:

  • CNI Plugin: Responsible for integrating with the Kubernetes networking stack to request and release IP addresses.
  • DHCP IPAM CNI Daemon: A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself.

For networks requiring type: dhcp in their IPAM configuration, ensure the following:

  • A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer’s existing network infrastructure.
  • The DHCP server is appropriately configured to serve IP addresses to the nodes.

In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server.

Note

Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations.

A DHCP lease must be periodically renewed throughout the container’s lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the secondary network setup.
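
For orientation, the ipam objects described in the following sections are embedded in the CNI configuration of a network attachment. For example, a NetworkAttachmentDefinition that follows the Whereabouts recommendation might resemble the following sketch; the attachment name, master interface, and address range are illustrative only:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts-example
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.0.2.0/27"
    }
  }'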

4.3.7.1.1. Static IP address assignment configuration

The following table describes the configuration for static IP address assignment:

Table 4.21. ipam static configuration object
FieldTypeDescription

type

string

The IPAM address type. The value static is required.

addresses

array

An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.

routes

array

An array of objects specifying routes to configure inside the pod.

dns

object

Optional: An object specifying the DNS configuration.

The addresses array requires objects with the following fields:

Table 4.22. ipam.addresses[] array
FieldTypeDescription

address

string

An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24, then the secondary network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0.

gateway

string

The default gateway to route egress network traffic to.

Table 4.23. ipam.routes[] array
FieldTypeDescription

dst

string

The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route.

gw

string

The gateway where network traffic is routed.

Table 4.24. ipam.dns object
FieldTypeDescription

nameservers

array

An array of one or more IP addresses to send DNS queries to.

domain

string

The default domain to append to a hostname. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.

search

array

An array of domain names to append to an unqualified hostname, such as example-host, during a DNS lookup query.

Static IP address assignment configuration example

{
  "ipam": {
    "type": "static",
      "addresses": [
        {
          "address": "191.168.1.7/24"
        }
      ]
  }
}
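
The following, slightly fuller sketch also sets the routes and dns objects that the preceding tables describe. All addresses and server values are illustrative only:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.1.7/24",
        "gateway": "192.168.1.1"
      }
    ],
    "routes": [
      {"dst": "0.0.0.0/0", "gw": "192.168.1.1"}
    ],
    "dns": {
      "nameservers": ["192.168.1.53"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}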

4.3.7.1.2. Dynamic IP address (DHCP) assignment configuration

A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

Important

For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }
  # ...

The following table describes the configuration parameters for dynamic IP address assignment with DHCP.

Table 4.25. ipam DHCP configuration object
FieldTypeDescription

type

string

The IPAM address type. The value dhcp is required.

The following JSON example shows the configuration for dynamic IP address assignment with DHCP.

Dynamic IP address (DHCP) assignment configuration example

{
  "ipam": {
    "type": "dhcp"
  }
}

4.3.7.1.3. Dynamic IP address assignment configuration with Whereabouts

The Whereabouts CNI plugin dynamically assigns IP addresses to a secondary network without requiring a DHCP server.

The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments.

4.3.7.1.3.1. Dynamic IP address configuration parameters

The following table describes the configuration objects for dynamic IP address assignment with Whereabouts:

Table 4.26. ipam whereabouts configuration parameters
FieldTypeDescription

type

string

The IPAM address type. The value whereabouts is required.

range

string

An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses.

exclude

array

Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned.

network_name

string

Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments.

4.3.7.1.3.2. Dynamic IP address assignment configuration with Whereabouts that excludes IP address ranges

The following example shows a dynamic address assignment configuration in a NAD file that uses Whereabouts:

Whereabouts dynamic IP address assignment that excludes specific IP address ranges

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/27",
    "exclude": [
       "192.0.2.192/30",
       "192.0.2.196/32"
    ]
  }
}

4.3.7.1.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges

The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks.

NetworkAttachmentDefinition 1

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/29",
    "network_name": "example_net_common" 1
  }
}

1
Optional. If set, must match the network_name of NetworkAttachmentDefinition 2.

NetworkAttachmentDefinition 2

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/24",
    "network_name": "example_net_common" 1
  }
}

1
Optional. If set, must match the network_name of NetworkAttachmentDefinition 1.
4.3.7.1.4. Creating a whereabouts-reconciler daemon set

The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down.

Note

You can also use a NetworkAttachmentDefinition custom resource definition (CRD) for dynamic IP address assignment.

The whereabouts-reconciler daemon set is automatically created when you configure a secondary network through the Cluster Network Operator. It is not automatically created when you configure a secondary network from a YAML manifest.

To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file.

Use the following procedure to deploy the whereabouts-reconciler daemon set.

Procedure

  1. Edit the Network.operator.openshift.io custom resource (CR) by running the following command:

    $ oc edit network.operator.openshift.io cluster
  2. Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR):

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    # ...
    spec:
      additionalNetworks:
      - name: whereabouts-shim
        namespace: default
        rawCNIConfig: |-
          {
           "name": "whereabouts-shim",
           "cniVersion": "0.3.1",
           "type": "bridge",
           "ipam": {
             "type": "whereabouts"
           }
          }
        type: Raw
    # ...
  3. Save the file and exit the text editor.
  4. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command:

    $ oc get all -n openshift-multus | grep whereabouts-reconciler

    Example output

    pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s
    pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s
    pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s
    pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s
    pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s
    pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s
    daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s

4.3.7.1.5. Configuring the Whereabouts IP reconciler schedule

The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up stranded IP allocations that could otherwise exhaust the available IP addresses and prevent new pods from being assigned one.

Use this procedure to change the frequency at which the IP reconciler runs.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running.

Procedure

  1. Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler:

    $ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *"

    This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements.

    Note

    The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported.

  2. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command:

    $ oc get all -n openshift-multus | grep whereabouts-reconciler

    Example output

    pod/whereabouts-reconciler-2p7hw                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-76jk7                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-94zw6                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-mfh68                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-pgshz                   1/1     Running   0             4m14s
    pod/whereabouts-reconciler-xn5xz                   1/1     Running   0             4m14s
    daemonset.apps/whereabouts-reconciler          6         6         6       6            6           kubernetes.io/os=linux   4m16s

  3. Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval:

    $ oc -n openshift-multus logs whereabouts-reconciler-2p7hw

    Example output

    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME
    2024-02-02T16:33:54Z [verbose] using expression: */15 * * * *
    2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". New cron expression: */15 * * * *
    2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * *
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE
    2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE
    2024-02-02T16:45:00Z [verbose] starting reconciler run
    2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data
    2024-02-02T16:45:00Z [debug] listing IP pools
    2024-02-02T16:45:00Z [debug] no IP addresses to cleanup
    2024-02-02T16:45:00Z [verbose] reconciler success
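
If the whereabouts-config ConfigMap already exists and you want to change the schedule later, one option is to patch it in place rather than recreate it. The hourly cron expression shown here is only an example:

$ oc patch configmap whereabouts-config -n openshift-multus --type merge -p '{"data":{"reconciler_cron_expression":"0 * * * *"}}'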

4.3.7.1.6. Fast IPAM configuration for the Whereabouts IPAM CNI plugin

Whereabouts is an IP Address Management (IPAM) Container Network Interface (CNI) plugin that assigns IP addresses at a cluster-wide level. Whereabouts does not require a Dynamic Host Configuration Protocol (DHCP) server.

A typical Whereabouts workflow is as follows:

  1. Whereabouts takes an address range in classless inter-domain routing (CIDR) notation, such as 192.168.2.0/24, and assigns IP addresses within that range, such as 192.168.2.1 to 192.168.2.254.
  2. Whereabouts assigns the lowest available IP address in the range to a pod and tracks that address in a data store for the lifetime of the pod.
  3. When the pod is removed, Whereabouts frees the address from the pod so that the address is available for assignment.

To improve the performance of Whereabouts, especially if nodes in your cluster run a high amount of pods, you can enable the Fast IPAM feature.

Important

Fast IPAM is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Fast IPAM feature uses nodeslicepools, which are managed by the Whereabouts Controller, to optimize IP allocation for nodes.

Prerequisites

  • You added the whereabouts-shim configuration to the Network.operator.openshift.io custom resource (CR), so that the Cluster Network Operator (CNO) can deploy the Whereabouts Controller. See "Creating a Whereabouts reconciler daemon set".
  • For the Fast IPAM feature to work, ensure that the NetworkAttachmentDefinition (NAD) and the pod exist in the same openshift-multus namespace.

Procedure

  1. Confirm that the Whereabouts Controller is running by entering the following command.

    $ oc get pods -n openshift-multus | grep controller

    Example output

    multus-admission-controller-d89bc96f-gbf7s   2/2     Running   0              6h3m
    ...

    Important

    If the Whereabouts Controller is not running, the Fast IPAM does not work.

  2. Create a NAD file for your cluster and add the Fast IPAM details to the file:

    Example NAD file with a Fast IPAM configuration

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: wb-ipam
      namespace: openshift-multus 1
    spec:
      config: {
        "cniVersion": "0.3.0",
        "name": "wb-ipam-cni-name", 2
        "type": "bridge",
        "bridge": "cni0",
        "ipam": {
          "type": "whereabouts", 3
          "range": "10.5.0.0/20", 4
          "node_slice_size": "/24" 5
        }
      }
    # ...

    1
    The namespace where CNO deploys the NAD.
    2
    The name of the Whereabouts IPAM CNI plugin.
    3
    The type of IPAM CNI plugin: whereabouts.
    4
    The IP address range for the IP pool that the Whereabouts IPAM CNI plugin uses for allocating IP addresses to pods.
    5
    Sets the slice size of IP addresses available to each node.
  3. Add the Whereabouts IPAM CNI plugin annotation details to the YAML file for the pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: <pod_name> 1
      annotations:
        k8s.v1.cni.cncf.io/networks: openshift-multus/wb-ipam 2
    spec:
      containers:
      - name: samplepod 3
        command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"] 4
        image: alpine
    # ...
    1
    The name of the pod.
    2
    The annotation that references the NAD, which exists in the openshift-multus namespace and uses the Whereabouts IPAM CNI plugin.
    3
    The name of the container for the pod.
    4
    Defines the entry point for the container and keeps the container running.
  4. Apply the NAD file configuration to your cluster by running the following command:

    $ oc create -f <NAD_file_name>.yaml

Verification

  1. Show the IP address details of the pod by entering the following command:

    $ oc describe pod <pod_name>

    Example output

    ...
    IP:     192.168.2.0
    IPs:
      IP:   192.168.2.0
    Containers:
      samplepod:
        Container ID:   docker://<image_name>
        Image:          <app_name>:v1
        Image ID:
    ...

  2. Access the pod and confirm its interfaces by entering the following command:

    $ oc exec <pod_name> -- ip a

    Example output

    ...
    3: net1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
        link/ether 82:01:98:e5:0c:b7 brd ff:ff:ff:ff:ff:ff
        inet 192.168.2.0/24 brd 10.10.0.255 scope global net1 
    1
    
           valid_lft forever preferred_lft forever
        inet6 fe80::8001:98ff:fee5:cb7/64 scope link
           valid_lft forever preferred_lft forever
    ...

    1
    The pod is attached to the 192.168.2.0 IP address on the net1 interface, as expected.
  3. Check that the node selector pool exists in the openshift-multus namespace by entering the following command:

    $ oc get nodeslicepool -n openshift-multus

    Example output

    NAME            AGE
    nodeslicepool   32m

4.3.7.1.7. Creating a configuration for assignment of dual-stack IP addresses dynamically

Dual-stack IP address assignment can be configured with the ipRanges parameter for:

  • IPv4 addresses
  • IPv6 addresses
  • multiple IP address assignment

Procedure

  1. Set type to whereabouts.
  2. Use ipRanges to allocate IP addresses as shown in the following example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
      - name: whereabouts-shim
        namespace: default
        type: Raw
        rawCNIConfig: |-
          {
           "name": "whereabouts-dual-stack",
           "cniVersion": "0.3.1",
           "type": "bridge",
           "ipam": {
             "type": "whereabouts",
             "ipRanges": [
                      {"range": "192.168.10.0/24"},
                      {"range": "2001:db8::/64"}
                  ]
           }
          }
  3. Attach the network to a pod. For more information, see "Adding a pod to a secondary network".
  4. Verify that all IP addresses are assigned.
  5. Run the following command to verify that the IP addresses are assigned to the pod network interfaces:

    $ oc exec -it mypod -- ip a

4.3.8. Configuring the master interface in the container network namespace

The following section provides instructions and information for how to create and manage a MAC-VLAN, IP-VLAN, and VLAN subinterface based on a master interface.

4.3.8.1. About configuring the master interface in the container network namespace

You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface that exists in a container namespace. You can also create a master interface as part of the pod network configuration in a separate network attachment definition CRD.

To use a container namespace master interface, you must specify true for the linkInContainer parameter that exists in the subinterface configuration of the NetworkAttachmentDefinition CRD.
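
As an illustration, a VLAN subinterface definition that treats a container-namespace interface named net1 as its master might set the parameter as in the following sketch. The attachment name, VLAN ID, and address range are examples only; complete procedures follow in the next sections:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-200-example
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "vlan-200-example",
    "type": "vlan",
    "master": "net1",
    "vlanId": 200,
    "linkInContainer": true,
    "ipam": {"type": "whereabouts", "ipRanges": [{"range": "192.0.2.0/27"}]}
  }'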

4.3.8.1.1. Creating multiple VLANs on SR-IOV VFs

An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces.

The following example shows how to configure the setup illustrated in this diagram.

Figure 4.5. Creating VLANs


Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the SR-IOV Network Operator.

Procedure

  1. Create a dedicated container namespace where you want to deploy your pod by using the following command:

    $ oc new-project test-namespace
  2. Create an SR-IOV node policy:

    1. Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file:

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
       name: sriovnic
       namespace: openshift-sriov-network-operator
      spec:
       deviceType: netdevice
       isRdma: false
       needVhostNet: true
       nicSelector:
         vendor: "15b3" 1
         deviceID: "101b" 2
         rootDevices: ["00:05.0"]
       numVfs: 10
       priority: 99
       resourceName: sriovnic
       nodeSelector:
          feature.node.kubernetes.io/network-sriov.capable: "true"
      Note

      The SR-IOV network node policy configuration example, with the setting deviceType: netdevice, is tailored specifically for Mellanox Network Interface Cards (NICs).

      1
      The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC.
      2
      The device hexadecimal code of the SR-IOV network device.
    2. Apply the YAML by running the following command:

      $ oc apply -f sriov-node-network-policy.yaml
      Note

      Applying this configuration might take some time because the node might require a reboot.

  3. Create an SR-IOV network:

    1. Create the SriovNetwork custom resource (CR) for the secondary SR-IOV network attachment as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml:

      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetwork
      metadata:
       name: sriov-network
       namespace: openshift-sriov-network-operator
      spec:
       networkNamespace: test-namespace
       resourceName: sriovnic
       spoofChk: "off"
       trust: "on"
    2. Apply the YAML by running the following command:

      $ oc apply -f sriov-network-attachment.yaml
  4. Create the VLAN secondary network:

    1. Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: vlan-100
        namespace: test-namespace
      spec:
        config: |
          {
            "cniVersion": "0.4.0",
            "name": "vlan-100",
            "plugins": [
              {
                "type": "vlan",
                "master": "ext0", 1
                "mtu": 1500,
                "vlanId": 100,
                "linkInContainer": true, 2
                "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]}
              }
            ]
          }
      1
      The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation.
      2
      The linkInContainer parameter must be specified.
    2. Apply the YAML file by running the following command:

      $ oc apply -f vlan100-additional-network-configuration.yaml
  5. Create a pod definition by using the earlier specified networks:

    1. Using the following YAML example, create a file named pod-a.yaml file:

      Note

      The manifest below includes two resources:

      • Namespace with security labels
      • Pod definition with appropriate network annotation
      apiVersion: v1
      kind: Namespace
      metadata:
        name: test-namespace
        labels:
          pod-security.kubernetes.io/enforce: privileged
          pod-security.kubernetes.io/audit: privileged
          pod-security.kubernetes.io/warn: privileged
          security.openshift.io/scc.podSecurityLabelSync: "false"
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-pod
        namespace: test-namespace
        annotations:
          k8s.v1.cni.cncf.io/networks: '[
            {
              "name": "sriov-network",
              "namespace": "test-namespace",
              "interface": "ext0" 1
            },
            {
              "name": "vlan-100",
              "namespace": "test-namespace",
              "interface": "ext0.100"
            }
          ]'
      spec:
        securityContext:
          runAsNonRoot: true
        containers:
          - name: nginx-container
            image: nginxinc/nginx-unprivileged:latest
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop: ["ALL"]
              seccompProfile:
                type: "RuntimeDefault"
            ports:
              - containerPort: 80
      1
      The name to be used as the master for the VLAN interface.
    2. Apply the YAML file by running the following command:

      $ oc apply -f pod-a.yaml
  6. Get detailed information about the nginx-pod within the test-namespace by running the following command:

    $ oc describe pods nginx-pod -n test-namespace

    Example output

    Name:         nginx-pod
    Namespace:    test-namespace
    Priority:     0
    Node:         worker-1/10.46.186.105
    Start Time:   Mon, 14 Aug 2023 16:23:13 -0400
    Labels:       <none>
    Annotations:  k8s.ovn.org/pod-networks:
                    {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0...
                  k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "ovn-kubernetes",
                        "interface": "eth0",
                        "ips": [
                            "10.131.0.26"
                        ],
                        "mac": "0a:58:0a:83:00:1a",
                        "default": true,
                        "dns": {}
                    },{
                        "name": "test-namespace/sriov-network",
                        "interface": "ext0",
                        "mac": "6e:a7:5e:3f:49:1b",
                        "dns": {},
                        "device-info": {
                            "type": "pci",
                            "version": "1.0.0",
                            "pci": {
                                "pci-address": "0000:d8:00.2"
                            }
                        }
                    },{
                        "name": "test-namespace/vlan-100",
                        "interface": "ext0.100",
                        "ips": [
                            "1.1.1.1"
                        ],
                        "mac": "6e:a7:5e:3f:49:1b",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks:
                    [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i...
                  openshift.io/scc: privileged
    Status:       Running
    IP:           10.131.0.26
    IPs:
      IP:  10.131.0.26

4.3.8.1.2. Creating a subinterface based on a bridge master interface in a container namespace

You can create a subinterface based on a bridge master interface that exists in a container namespace. You can also apply the same approach to other types of interfaces.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges.

Procedure

  1. Create a dedicated container namespace where you want to deploy your pod by entering the following command:

    $ oc new-project test-namespace
  2. Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
    spec:
      config: '{
        "cniVersion": "0.4.0",
        "name": "bridge-network",
        "type": "bridge",
        "bridge": "br-001",
        "isGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.0.0.0/24",
          "routes": [{"dst": "0.0.0.0/0"}]
        }
      }'
  3. Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster:

    $ oc apply -f bridge-nad.yaml
  4. Verify that you successfully created a NetworkAttachmentDefinition CRD by entering the following command:

    $ oc get network-attachment-definitions

    Example output

    NAME             AGE
    bridge-network   15s

  5. Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN secondary network configuration:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ipvlan-net
      namespace: test-namespace
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "ipvlan-net",
        "type": "ipvlan",
        "master": "net1", 1
        "mode": "l3",
        "linkInContainer": true, 2
        "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]}
      }'
    1
    Specifies the ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation.
    2
    Specifies that the master interface is in the container network namespace.
  6. Apply the YAML file by running the following command:

    $ oc apply -f ipvlan-additional-network-configuration.yaml
  7. Verify that the NetworkAttachmentDefinition CRD has been created successfully by running the following command:

    $ oc get network-attachment-definitions

    Example output

    NAME             AGE
    bridge-network   87s
    ipvlan-net       9s

  8. Using the following YAML example, create a file named pod-a.yaml for the pod definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-a
      namespace: test-namespace
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
          {
            "name": "bridge-network",
            "interface": "net1" 1
          },
          {
            "name": "ipvlan-net",
            "interface": "net2"
          }
        ]'
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: test-pod
        image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]
    1
    Specifies the name to be used as the master for the IPVLAN interface.
  9. Apply the YAML file by running the following command:

    $ oc apply -f pod-a.yaml
  10. Verify that the pod is running by using the following command:

    $ oc get pod -n test-namespace

    Example output

    NAME    READY   STATUS    RESTARTS   AGE
    pod-a   1/1     Running   0          2m36s

  11. Show network interface information about the pod-a resource within the test-namespace by running the following command:

    $ oc exec -n test-namespace pod-a -- ip a

    Example output

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
        link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::488b:91ff:fe84:a94b/64 scope link
           valid_lft forever preferred_lft forever
    4: net1@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.0.0.2/24 brd 10.0.0.255 scope global net1
           valid_lft forever preferred_lft forever
        inet6 fe80::bcda:bdff:fe7e:f437/64 scope link
           valid_lft forever preferred_lft forever
    5: net2@net1: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
        link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.1/24 brd 10.0.0.255 scope global net2
           valid_lft forever preferred_lft forever
        inet6 fe80::beda:bd00:17e:f437/64 scope link
           valid_lft forever preferred_lft forever

    This output shows that the network interface net2 is associated with the interface net1, which serves as its master.
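
    To cross-check the attachments from the API side, you can inspect the network status annotation that Multus writes to the pod. The following command is a minimal sketch; depending on the cluster version, the annotation key might be k8s.v1.cni.cncf.io/network-status or the older k8s.v1.cni.cncf.io/networks-status:

    $ oc get pod pod-a -n test-namespace -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'

    The annotation lists each attached network together with its interface name and any assigned IP addresses.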

4.3.9. Removing a secondary network

As a cluster administrator, you can remove a secondary network attachment.

4.3.9.1. Removing a secondary network attachment definition

As a cluster administrator, you can remove a secondary network from your OpenShift Container Platform cluster. The secondary network is not removed from any pods it is attached to.
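
Before you remove a secondary network, you might want to identify any pods that still reference it. The following command is a hedged sketch that scans pod network annotations across all namespaces; replace <name_of_NAD> with the name of the network attachment definition:

    $ oc get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/networks}{"\n"}{end}' | grep <name_of_NAD>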

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To remove a secondary network from your cluster, complete the following steps:

  1. Edit the Cluster Network Operator (CNO) in your default text editor by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by deleting the entry for the secondary network that you want to remove from the additionalNetworks collection, as in the following example.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: [] 1
    1
    If you are removing the configuration mapping for the only secondary network attachment definition in the additionalNetworks collection, you must specify an empty collection.
  3. To remove a network attachment definition from your cluster, enter the following command:

    $ oc delete net-attach-def <name_of_NAD> 1
    1
    Replace <name_of_NAD> with the name of your network attachment definition.
  4. Save your changes and quit the text editor to commit them.
  5. Optional: Confirm that the secondary network CR was deleted by running the following command:

    $ oc get network-attachment-definition --all-namespaces
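
    You can also confirm that the Cluster Network Operator configuration no longer lists the network. The following query is an illustrative example that prints the additionalNetworks collection; an empty result, or a list that omits the removed network, indicates that the change was applied:

    $ oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.additionalNetworks}'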

4.4. Virtual routing and forwarding

4.4.1. About virtual routing and forwarding

Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by cloud-native network functions (CNFs) and provides increased visibility into the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways.

Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table that is associated with the VRF device. An important feature of VRF is that it affects only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher-priority IP rules, such as policy-based routing, to take precedence over the VRF device rules that direct specific traffic.
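
Conceptually, the CNI VRF plugin performs operations similar to the following iproute2 commands. This is an illustrative sketch only, not the exact implementation of the plugin, and the interface and table names are assumptions:

    # Create a VRF device that is bound to routing table 1001.
    ip link add vrf-1 type vrf table 1001
    ip link set dev vrf-1 up
    # Attach an interface to the VRF so that its routes move to table 1001.
    ip link set dev net1 master vrf-1
    # List the routes that belong to the VRF.
    ip route show table 1001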

4.4.1.1. Benefits of secondary networks for pods for telecommunications operators

In telecommunications use cases, each CNF can potentially be connected to multiple networks that share the same address space. These secondary networks can potentially conflict with the cluster’s main network CIDR. Using the CNI VRF plugin, network functions can connect to the infrastructure of different customers by using the same IP address, keeping the customers isolated from one another. IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by a CNF and increases the visibility of the network topologies of secondary networks.
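
To illustrate overlapping address space, the following sketch defines two NetworkAttachmentDefinition objects that chain the macvlan and vrf plugins. The namespaces, interface names, and IP addresses are example values only; both attachments use the same IP address but remain isolated because each VRF uses its own routing table:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: customer-a-net
      namespace: customer-a
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "customer-a-net",
        "plugins": [
          { "type": "macvlan", "master": "eth1",
            "ipam": { "type": "static", "addresses": [ { "address": "192.168.100.10/24" } ] } },
          { "type": "vrf", "vrfname": "vrf-customer-a", "table": 1001 }
        ]
      }'
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: customer-b-net
      namespace: customer-b
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "customer-b-net",
        "plugins": [
          { "type": "macvlan", "master": "eth2",
            "ipam": { "type": "static", "addresses": [ { "address": "192.168.100.10/24" } ] } },
          { "type": "vrf", "vrfname": "vrf-customer-b", "table": 1002 }
        ]
      }'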

4.5. Assigning a secondary network to a VRF

As a cluster administrator, you can configure a secondary network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify.

Using a secondary network with a VRF instance has the following advantages:

Workload isolation
Isolate workload traffic by configuring a VRF instance for the secondary network.
Improved security
Enable improved security through isolated network paths in the VRF domain.
Multi-tenancy support
Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant.
Note

Applications that use VRFs must bind to a specific device. Commonly, this is done by setting the SO_BINDTODEVICE option on a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1. To use the SO_BINDTODEVICE option, the application must have the CAP_NET_RAW capability.

Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface.
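
For reference, the following pod snippet is a minimal sketch of a securityContext that grants the NET_RAW capability required for SO_BINDTODEVICE. The pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: vrf-bound-app
    spec:
      containers:
      - name: app
        image: registry.example.com/vrf-app:latest
        securityContext:
          capabilities:
            add: ["NET_RAW"]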

4.5.1. Creating a secondary network attachment with the CNI VRF plugin

The Cluster Network Operator (CNO) manages secondary network definitions. When you specify a secondary network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically.

Note

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your secondary network.

To create a secondary network attachment with the CNI VRF plugin, perform the following procedure.

Prerequisites

  • Install the OpenShift Container Platform CLI (oc).
  • Log in to the OpenShift cluster as a user with cluster-admin privileges.

Procedure

  1. Create the Network custom resource (CR) for the secondary network attachment and insert the rawCNIConfig configuration for the secondary network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
        - name: test-network-1
          namespace: additional-network-1
          type: Raw
          rawCNIConfig: '{
            "cniVersion": "0.3.1",
            "name": "macvlan-vrf",
            "plugins": [ 1
            {
              "type": "macvlan",
              "master": "eth1",
              "ipam": {
                  "type": "static",
                  "addresses": [
                  {
                      "address": "191.168.1.23/24"
                  }
                  ]
              }
            },
            {
              "type": "vrf", 2
              "vrfname": "vrf-1", 3
              "table": 1001 4
            }]
          }'
    1
    plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration.
    2
    type must be set to vrf.
    3
    vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created.
    4
    Optional. table is the routing table ID to associate with the VRF. If you do not specify a value, the CNI plugin assigns a free routing table ID to the VRF.
    Note

    VRF functions correctly only when the resource is of type netdevice.

  2. Create the Network resource:

    $ oc create -f additional-network-attachment.yaml
  3. Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                       AGE
    test-network-1             14m

    Note

    There might be a delay before the CNO creates the CR.
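
    If the CR has not appeared yet, one option is to watch for it to be created, for example:

    $ oc get network-attachment-definitions -n <namespace> -w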

Verification

  1. Create a pod and assign it to the secondary network with the VRF instance:

    1. Create a YAML file that defines the Pod resource:

      Example pod-additional-net.yaml file

      apiVersion: v1
      kind: Pod
      metadata:
       name: pod-additional-net
       annotations:
         k8s.v1.cni.cncf.io/networks: '[
             {
                     "name": "test-network-1" 1
             }
       ]'
      spec:
       containers:
       - name: example-pod-1
         command: ["/bin/bash", "-c", "sleep 9000000"]
         image: centos:8

      1
      Specify the name of the secondary network with the VRF instance.
    2. Create the Pod resource by running the following command:

      $ oc create -f pod-additional-net.yaml

      Example output

      pod/pod-additional-net created

  2. Verify that the pod network attachment is connected to the VRF secondary network. Start a remote session with the pod and run the following command:

    $ ip vrf show

    Example output

    Name              Table
    -----------------------
    vrf-1             1001

  3. Confirm that the VRF interface is the controller for the secondary interface by running the following command (a further routing-table check follows this procedure):

    $ ip link

    Example output

    5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-1 state UP mode
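
    As a further routing-table check, you can list the routes in the table that is associated with the VRF from within the same remote session. The table ID 1001 matches the value in the example configuration:

    $ ip route show table 1001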
