Chapter 3. Secondary networks
3.1. Creating secondary networks on OVN-Kubernetes
As a cluster administrator, you can configure a secondary network for your cluster using the NetworkAttachmentDefinition (NAD) resource.
Support for user-defined networks as a secondary network will be added in a future version of OpenShift Container Platform.
3.1.1. Configuration for an OVN-Kubernetes secondary network
The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD).
Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent on the nodes processes the associated network-attachment-definition CRD.
You can configure an OVN-Kubernetes secondary network in layer 2, layer 3, or localnet topologies. For more information about features supported on these topologies, see "UserDefinedNetwork and NetworkAttachmentDefinition support matrix".
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.
Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported.
3.1.1.1. Supported platforms for OVN-Kubernetes secondary network
You can use an OVN-Kubernetes secondary network with the following supported platforms:
- Bare metal
- IBM Power®
- IBM Z®
- IBM® LinuxONE
- VMware vSphere
- Red Hat OpenStack Platform (RHOSP)
3.1.1.2. OVN-Kubernetes network plugin JSON configuration table
The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The required value is `0.3.1`. |
| `name` | `string` | The name of the network. These networks are not namespaced. For example, a network named `l2-network` can be referenced by `NetworkAttachmentDefinition` CRDs that exist in different namespaces. This ensures that pods using a `NetworkAttachmentDefinition` with the same name in their own namespace communicate over the same secondary network. |
| `type` | `string` | The name of the CNI plugin to configure. This value must be set to `ovn-k8s-cni-overlay`. |
| `topology` | `string` | The topological configuration for the network. Must be one of `layer2`, `layer3`, or `localnet`. |
| `subnets` | `string` | The subnet to use for the network across the cluster. For `"topology":"layer2"` deployments, IPv6 (`2001:DBB::/64`) and dual-stack (`192.168.100.0/24,2001:DBB::/64`) subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. |
| `mtu` | `string` | The maximum transmission unit (MTU). If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference between the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as Geneve (Generic Network Virtualization Encapsulation), and the byte capacity of any enabled features, such as IPsec. |
| `netAttachDefName` | `string` | The metadata `namespace` and `name` of the `NetworkAttachmentDefinition` CRD where this configuration is included. For example, if this configuration is defined in a `NetworkAttachmentDefinition` in namespace `ns1` named `l2-network`, set this value to `ns1/l2-network`. |
| `excludeSubnets` | `string` | A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. |
| `vlanID` | `integer` | Optional: If `topology` is set to `localnet`, the VLAN tag assigned to traffic for this network. |

3.1.1.3. Compatibility with multi-network policy
The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details:
| `subnets` field specified | Allowed multi-network policy selectors |
|---|---|
| Yes | `podSelector`, `namespaceSelector`, and `ipBlock` |
| No | `ipBlock` only |
You can use the k8s.v1.cni.cncf.io/policy-for annotation on a MultiNetworkPolicy object to point to a NetworkAttachmentDefinition (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies. The following example multi-network policy is valid only if the subnets field is defined in the secondary network CNI configuration for the secondary network named blue2:
Example multi-network policy that uses a pod selector
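A minimal sketch of such a policy, assuming the secondary network is named `blue2` as in the preceding paragraph:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: blue2
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```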
The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes secondary network:
Example multi-network policy that uses an IP block selector
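A sketch of an `ipBlock`-based policy; the policy name, the `default/flatl2net` NAD reference, the pod label, and the CIDR are illustrative placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: ingress-ipblock
  annotations:
    k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.200.0.0/30
```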
3.1.1.4. Configuration for a localnet switched topology
The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network.
You must map the secondary network to an OVS bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, with a bridge created with Open vSwitch (OVS).
You can create a NodeNetworkConfigurationPolicy (NNCP) object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: ''. With this declarative approach, the NMState Operator applies secondary network configuration to all nodes specified by the node selector automatically and transparently.
When attaching a secondary network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. Consider the following approaches:
- If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the `br-ex` bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly.
- If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides traffic isolation from your primary cluster network.
The localnet1 network is mapped to the br-ex bridge in the following example:
Example mapping for sharing a bridge
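A sketch of such an NNCP; the numbered comments correspond to the callouts below:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping # 1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' # 2
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1 # 3
        bridge: br-ex # 4
        state: present # 5
```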
1. The name for the configuration object.
2. A node selector that specifies the nodes to apply the node network configuration policy to.
3. The name for the secondary network from which traffic is forwarded to the OVS bridge. This secondary network must match the name of the `spec.config.name` field of the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network.
4. The name of the OVS bridge on the node. This value is required only if you specify `state: present`.
5. The state for the mapping. Must be either `present` to add the bridge or `absent` to remove the bridge. The default value is `present`.

The following JSON example configures a localnet secondary network that is named `localnet1`. Note that the value for the `mtu` parameter must match the MTU value that was set for the secondary network interface that is mapped to the `br-ex` bridge interface.
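A minimal sketch of the NAD configuration; the `subnets`, `vlanID`, `mtu`, and `ns1` namespace values are placeholders to adapt to your environment:

```json
{
  "cniVersion": "0.3.1",
  "name": "localnet1",
  "type": "ovn-k8s-cni-overlay",
  "topology": "localnet",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "netAttachDefName": "ns1/localnet1",
  "mtu": 1500
}
```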
In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as a secondary network.
Example mapping for nodes with multiple interfaces
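A sketch of the NNCP for a dedicated bridge; the numbered comments correspond to the callouts below, and `eth1` is a placeholder for your spare interface:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-multiple-networks # 1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' # 2
  desiredState:
    interfaces:
    - name: ovs-br1 # 3
      description: A dedicated OVS bridge with eth1 as a port
      type: ovs-bridge
      state: up
      bridge:
        options:
          mcast-snooping-enable: true # 4
        port:
        - name: eth1 # 5
    ovn:
      bridge-mappings:
      - localnet: localnet2 # 6
        bridge: ovs-br1 # 7
        state: present # 8
```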
1. Specifies the name of the configuration object.
2. Specifies a node selector that identifies the nodes to which the node network configuration policy applies.
3. Specifies a new OVS bridge that operates separately from the default bridge used by OVN-Kubernetes for cluster traffic.
4. Specifies whether to enable multicast snooping. When enabled, multicast snooping prevents network devices from flooding multicast traffic to all network members. By default, an OVS bridge does not enable multicast snooping. The default value is `false`.
5. Specifies the network device on the host system to associate with the new OVS bridge.
6. Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the `spec.config.name` field in the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network.
7. Specifies the name of the OVS bridge on the node. The value is required only when `state: present` is set.
8. Specifies the state of the mapping. Valid values are `present` to add the bridge or `absent` to remove the bridge. The default value is `present`.

The following JSON example configures a localnet secondary network that is named `localnet2`. Note that the value for the `mtu` parameter must match the MTU value that was set for the `eth1` secondary network interface.
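A minimal sketch of the NAD configuration; the `subnets`, `mtu`, and `ns1` namespace values are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "localnet2",
  "type": "ovn-k8s-cni-overlay",
  "topology": "localnet",
  "subnets": "10.100.200.0/24",
  "netAttachDefName": "ns1/localnet2",
  "mtu": 1500
}
```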
3.1.1.4.1. Configuration for a layer 2 switched topology
The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.
Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.
The following JSON example configures a switched secondary network:
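A sketch of a layer 2 configuration; the network name, subnets, MTU, and namespace are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "l2-network",
  "type": "ovn-k8s-cni-overlay",
  "topology": "layer2",
  "subnets": "10.100.200.0/24",
  "mtu": 1300,
  "netAttachDefName": "ns1/l2-network",
  "excludeSubnets": "10.100.200.0/29"
}
```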
3.1.1.5. Configuring pods for secondary networks
You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation.
The following example provisions a pod with two secondary attachments, one for each of the attachment configurations presented in this guide.
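A sketch of such a pod, assuming the `l2-network` and `localnet2` attachments from the preceding sections; the pod name, namespace, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tinypod
  namespace: ns1
  annotations:
    k8s.v1.cni.cncf.io/networks: l2-network,localnet2
spec:
  containers:
  - name: agnhost-container
    image: registry.k8s.io/e2e-test-images/agnhost:2.36
    command: ["/agnhost", "pause"]
```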
3.1.1.6. Configuring pods with a static IP address
The following example provisions a pod with a static IP address.
- You can specify the IP address for the secondary network attachment of a pod only when the secondary network attachment, a namespace-scoped object, uses a layer 2 or localnet topology.
- Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets.
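A sketch of a pod annotation requesting a static IP and MAC address; the address, MAC, interface name, and network name are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tinypod
  namespace: ns1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
        {
          "name": "l2-network",
          "mac": "02:03:04:05:06:07",
          "interface": "myiface1",
          "ips": ["192.0.2.20/24"]
        }
      ]'
spec:
  containers:
  - name: agnhost-container
    image: registry.k8s.io/e2e-test-images/agnhost:2.36
    command: ["/agnhost", "pause"]
```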
3.2. Creating secondary networks with other CNI plugins
The specific configuration fields for secondary networks are described in the following sections.
3.2.1. Configuration for a bridge secondary network
The following object describes the configuration parameters for the Bridge CNI plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `bridge`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `bridge` | `string` | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is `cni0`. |
| `ipMasq` | `boolean` | Optional: Set to `true` to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is `false`. |
| `isGateway` | `boolean` | Optional: Set to `true` to assign an IP address to the bridge. The default value is `false`. |
| `isDefaultGateway` | `boolean` | Optional: Set to `true` to configure the bridge as the default gateway for the virtual network. If `isDefaultGateway` is set to `true`, then `isGateway` is also set to `true` automatically. The default value is `false`. |
| `forceAddress` | `boolean` | Optional: Set to `true` to allow assignment of a previously assigned IP address to the virtual bridge. When set to `false`, if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is `false`. |
| `hairpinMode` | `boolean` | Optional: Set to `true` to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is `false`. |
| `promiscMode` | `boolean` | Optional: Set to `true` to enable promiscuous mode on the bridge. The default value is `false`. |
| `vlan` | `string` | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. |
| `preserveDefaultVlan` | `string` | Optional: Indicates whether the default vlan must be preserved on the `veth` end connected to the bridge. The default value is `true`. |
| `vlanTrunk` | `list` | Optional: Assign a VLAN trunk tag. The default value is `none`. |
| `mtu` | `string` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `enabledad` | `boolean` | Optional: Enables duplicate address detection for the container side `veth`. The default value is `false`. |
| `macspoofchk` | `boolean` | Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is `false`. |
The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.
To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:
```
$ bridge vlan add vid VLAN_ID dev DEV
```
3.2.1.1. Bridge CNI plugin configuration example
The following example configures a secondary network named bridge-net:
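A sketch of the configuration; the `vlan` tag and IPAM choice are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
  }
}
```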
3.2.2. Configuration for a Bond CNI secondary network
The Bond Container Network Interface (Bond CNI) enables the aggregation of multiple network interfaces into a single logical "bonded" interface within a container, enhancing network redundancy and fault tolerance. Only SR-IOV Virtual Functions (VFs) are supported for bonding with this plugin.
The following table describes the configuration parameters for the Bond CNI plugin:
| Field | Type | Description |
|---|---|---|
| `name` | `string` | Specifies the name given to this CNI network attachment definition. This name is used to identify and reference the interface within the container. |
| `cniVersion` | `string` | The CNI specification version. |
| `type` | `string` | Specifies the name of the CNI plugin to configure: `bond`. |
| `miimon` | `integer` | Specifies the address resolution protocol (ARP) link monitoring frequency in milliseconds. This parameter defines how often the bond interface sends ARP requests to check the availability of its aggregated interfaces. |
| `mtu` | `integer` | Optional: Specifies the maximum transmission unit (MTU) of the bond. The default is 1500. |
| `failOverMac` | `integer` | Optional: Specifies the `failOverMac` policy for the bond. |
| `mode` | `string` | Specifies the bonding policy. |
| `linksInContainer` | `boolean` | Optional: Specifies whether the network interfaces intended for bonding are expected to be created and available directly within the container's network namespace when the bond starts. If set to `true`, the plugin looks for the interfaces inside the container's network namespace rather than on the host. |
| `links` | `array` | Specifies the interfaces to be bonded. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
3.2.2.1. Bond CNI plugin configuration example
The following example configures a secondary network named bond-net1:
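A sketch of the configuration; the bonded link names `net1` and `net2`, the subnet, and the address range are placeholders for the SR-IOV VF attachments in your pod:

```json
{
  "type": "bond",
  "cniVersion": "0.3.1",
  "name": "bond-net1",
  "mode": "active-backup",
  "failOverMac": 1,
  "linksInContainer": true,
  "miimon": "100",
  "mtu": 1500,
  "links": [
    { "name": "net1" },
    { "name": "net2" }
  ],
  "ipam": {
    "type": "host-local",
    "subnet": "1.1.1.0/24",
    "rangeStart": "1.1.1.20",
    "rangeEnd": "1.1.1.50"
  }
}
```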
3.2.3. Configuration for a host device secondary network

Specify your network device by setting only one of the following parameters: `device`, `hwaddr`, `kernelpath`, or `pciBusID`.
The following object describes the configuration parameters for the host-device CNI plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `host-device`. |
| `device` | `string` | Optional: The name of the device, such as `eth0`. |
| `hwaddr` | `string` | Optional: The device hardware MAC address. |
| `kernelpath` | `string` | Optional: The Linux kernel device path, such as `/sys/devices/pci0000:00/0000:00:1f.6`. |
| `pciBusID` | `string` | Optional: The PCI address of the network device, such as `0000:00:1f.6`. |
3.2.3.1. host-device configuration example
The following example configures a secondary network named hostdev-net:
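A sketch of the configuration; `eth1` is a placeholder for the host device to move into the pod:

```json
{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1"
}
```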
3.2.4. Configuration for a VLAN secondary network

The following object describes the configuration parameters for the VLAN (vlan) CNI plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `vlan`. |
| `master` | `string` | The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default network route is used. |
| `vlanId` | `integer` | Set the ID of the `vlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `dns` | `object` | Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface. |
A NetworkAttachmentDefinition custom resource definition (CRD) with a vlan configuration can be used only on a single pod in a node because the CNI plugin cannot create multiple vlan subinterfaces with the same vlanId on the same master interface.
3.2.4.1. VLAN configuration example
The following example demonstrates a vlan configuration with a secondary network that is named vlan-net:
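A sketch of the configuration; the master interface, VLAN ID, subnet, and nameservers are placeholders:

```json
{
  "name": "vlan-net",
  "cniVersion": "0.3.1",
  "type": "vlan",
  "master": "eth0",
  "mtu": 1500,
  "vlanId": 5,
  "linkInContainer": false,
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.1.0/24"
  },
  "dns": {
    "nameservers": ["10.1.1.1", "8.8.8.8"]
  }
}
```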
3.2.5. Configuration for an IPVLAN secondary network

The following object describes the configuration parameters for the IPVLAN (ipvlan) CNI plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `ipvlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. |
| `mode` | `string` | Optional: The operating mode for the virtual network. The value must be `l2`, `l3`, or `l3s`. The default value is `l2`. |
| `master` | `string` | Optional: The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default network route is used. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface. |
- The `ipvlan` object does not allow virtual interfaces to communicate with the `master` interface. Therefore the container is not able to reach the host by using the `ipvlan` interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).
- A single `master` interface cannot simultaneously be configured to use both `macvlan` and `ipvlan`.
- For IP allocation schemes that cannot be interface agnostic, the `ipvlan` plugin can be chained with an earlier plugin that handles this logic. If the `master` is omitted, then the previous result must contain a single interface name for the `ipvlan` plugin to enslave. If `ipam` is omitted, then the previous result is used to configure the `ipvlan` interface.
3.2.5.1. IPVLAN CNI plugin configuration example
The following example configures a secondary network named ipvlan-net:
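A sketch of the configuration; the master interface, mode, and static address are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
      { "address": "192.168.10.10/24" }
    ]
  }
}
```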
3.2.6. Configuration for a MACVLAN secondary network
The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `macvlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `mode` | `string` | Optional: Configures traffic visibility on the virtual network. Must be either `bridge`, `passthru`, `private`, or `vepa`. If a value is not provided, the default value is `bridge`. |
| `master` | `string` | Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. |
| `mtu` | `integer` | Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface. |
If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.
3.2.6.1. MACVLAN CNI plugin configuration example
The following example configures a secondary network named macvlan-net:
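A sketch of the configuration; the master interface and IPAM choice are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "linkInContainer": false,
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
  }
}
```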
3.2.7. Configuration for a TAP secondary network
The following object describes the configuration parameters for the TAP CNI plugin:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required. |
| `name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration. |
| `type` | `string` | The name of the CNI plugin to configure: `tap`. |
| `mac` | `string` | Optional: Request the specified MAC address for the interface. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `selinuxcontext` | `string` | Optional: The SELinux context to associate with the tap device. Note: The value `system_u:system_r:container_t:s0` is required for OpenShift Container Platform. |
| `multiQueue` | `boolean` | Optional: Set to `true` to enable multi-queue. |
| `owner` | `integer` | Optional: The user owning the tap device. |
| `group` | `integer` | Optional: The group owning the tap device. |
| `bridge` | `string` | Optional: Set the tap device as a port of an already existing bridge. |
3.2.7.1. Tap configuration example
The following example configures a secondary network named mynet:
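A sketch of the configuration; the MAC address, owner and group IDs, and bridge name are placeholders:

```json
{
  "name": "mynet",
  "cniVersion": "0.3.1",
  "type": "tap",
  "mac": "00:11:22:33:44:55",
  "mtu": 1500,
  "selinuxcontext": "system_u:system_r:container_t:s0",
  "multiQueue": true,
  "owner": 0,
  "group": 0,
  "bridge": "br1"
}
```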
3.2.7.2. Setting SELinux boolean for the TAP CNI plugin
To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO).
Prerequisites
- You have installed the OpenShift CLI (`oc`).
Procedure
1. Create a new YAML file, such as `setsebool-container-use-devices.yaml`, with the following details:
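A sketch of the `MachineConfig`, assuming a systemd unit that runs `setsebool` on worker nodes; adapt the role label and object name to your cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-setsebool
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: setsebool.service
        enabled: true
        contents: |
          [Unit]
          Description=Set SELinux boolean for the TAP CNI plugin
          Before=kubelet.service

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/setsebool container_use_devices=on
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target
```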
2. Create the new `MachineConfig` object by running the following command:

```
$ oc apply -f setsebool-container-use-devices.yaml
```

Note: Applying any changes to the `MachineConfig` object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied.

3. Verify the change is applied by running the following command:

```
$ oc get machineconfigpools
```

Expected output:

```
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-e5e0c8e8be9194e7c5a882e047379cfa   True      False      False      3              3                   3                     0                      7d2h
worker   rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2   True      False      False      3              3                   3                     0                      7d
```

Note: All nodes should be in the updated and ready state.
3.2.8. Configuring routes using the route-override plugin on a secondary network
The following object describes the configuration parameters for the route-override CNI plugin:
| Field | Type | Description |
|---|---|---|
| `type` | `string` | The name of the CNI plugin to configure: `route-override`. |
| `flushroutes` | `boolean` | Optional: Set to `true` to flush any existing routes. |
| `flushgateway` | `boolean` | Optional: Set to `true` to flush the default route, which is the gateway route. |
| `delroutes` | `object` | Optional: Specify the list of routes to delete from the container namespace. |
| `addroutes` | `object` | Optional: Specify the list of routes to add to the container namespace. Each route is a dictionary with `dst` and optional `gw` fields. If the `gw` field is omitted, the default value of the gateway is used. |
| `skipcheck` | `boolean` | Optional: Set this to `true` to skip the `CHECK` operation. The default value is `false`. |
3.2.8.1. Route-override plugin configuration example

The route-override CNI is designed to be used when chained with a parent CNI. It does not operate independently; it relies on the parent CNI to first create the network interface and assign IP addresses before it can modify the routing rules.
The following example configures a secondary network named mymacvlan. The parent CNI creates a network interface attached to eth1 and assigns an IP address in the 192.168.1.0/24 range using host-local IPAM. The route-override CNI is then chained to the parent CNI and modifies the routing rules by flushing existing routes, deleting the route to 192.168.0.0/24, and adding a new route for 192.168.0.0/24 with a custom gateway.
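A sketch of the chained configuration; the gateway address `10.1.254.254` is a placeholder:

```json
{
  "cniVersion": "0.3.0",
  "name": "mymacvlan",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth1",
      "mode": "private",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    },
    {
      "type": "route-override",
      "flushroutes": true,
      "delroutes": [
        { "dst": "192.168.0.0/24" }
      ],
      "addroutes": [
        { "dst": "192.168.0.0/24", "gw": "10.1.254.254" }
      ]
    }
  ]
}
```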
3.3. Attaching a pod to a secondary network

As a cluster user, you can attach a pod to a secondary network.
3.3.1. Adding a pod to a secondary network
You can add a pod to a secondary network. The pod continues to send normal cluster-related network traffic over the default network.
When a pod is created, a secondary network is attached to it. However, if a pod already exists, you cannot attach a secondary network to it.
The pod must be in the same namespace as the secondary network.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster.
Procedure
1. Add an annotation to the `Pod` object. Only one of the following annotation formats can be used:

- To attach a secondary network without any customization, add an annotation with the following format. Replace `<network>` with the name of the secondary network to associate with the pod:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: <network>[,<network>,...]
```

To specify more than one secondary network, separate each network with a comma. Do not include whitespace around the commas. If you specify the same secondary network multiple times, that pod will have multiple network interfaces attached to that network.

- To attach a secondary network with customizations, add an annotation with the following format:
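A sketch of the customized annotation format; `<network>`, `<namespace>`, and `<default-route>` are placeholders for the NAD name, its namespace, and an optional gateway:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "<network>",
          "namespace": "<namespace>",
          "default-route": ["<default-route>"]
        }
      ]
```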
2. To create the pod, enter the following command. Replace `<name>` with the name of the pod.

```
$ oc create -f <name>.yaml
```

3. Optional: To confirm that the annotation exists in the `Pod` CR, enter the following command, replacing `<name>` with the name of the pod.

```
$ oc get pod <name> -o yaml
```

In the following example, the `example-pod` pod is attached to the `net1` secondary network:
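A trimmed sketch of the output; the IP and MAC values are illustrative, and the `# 1` comment corresponds to the callout below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: net1
    k8s.v1.cni.cncf.io/network-status: |- # 1
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": ["10.128.2.14"],
          "default": true,
          "dns": {}
      },{
          "name": "net1",
          "interface": "net1",
          "ips": ["20.2.2.100"],
          "mac": "22:2f:60:a5:f8:00",
          "dns": {}
      }]
spec:
# ...
```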
1. The `k8s.v1.cni.cncf.io/network-status` parameter is a JSON array of objects. Each object describes the status of a secondary network attached to the pod. The annotation value is stored as a plain text value.
3.3.1.1. Specifying pod-specific addressing and routing options
When attaching a pod to a secondary network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations.
Prerequisites
- The pod must be in the same namespace as the secondary network.
- Install the OpenShift CLI (`oc`).
- You must log in to the cluster.
Procedure
To add a pod to a secondary network while specifying addressing and/or routing options, complete the following steps:
1. Edit the `Pod` resource definition. If you are editing an existing `Pod` resource, run the following command to edit its definition in the default editor. Replace `<name>` with the name of the `Pod` resource to edit.

```
$ oc edit pod <name>
```

2. In the `Pod` resource definition, add the `k8s.v1.cni.cncf.io/networks` parameter to the pod `metadata` mapping. The `k8s.v1.cni.cncf.io/networks` parameter accepts a JSON string of a list of objects that reference the name of `NetworkAttachmentDefinition` custom resource (CR) names in addition to specifying additional properties.

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]'
# ...
```

where:

`<network>`: Replace with a JSON object as shown in the following examples. The single quotes are required.
In the following example, the annotation specifies which network attachment has the default route, using the `default-route` parameter.
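A sketch of such a pod; the network names `net1` and `net2` and the gateway `192.0.2.1` are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "net1"
      },
      {
        "name": "net2",
        "default-route": ["192.0.2.1"]
      }
    ]'
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools
```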
where:

`name`: The `name` key is the name of the secondary network to associate with the pod.

`default-route`: The `default-route` key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one `default-route` key is specified, the pod fails to become active.
The default route will cause any traffic that is not specified in other routes to be routed to the gateway.
Important: Setting the default route to an interface other than the default network interface for OpenShift Container Platform might cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface.
To verify the routing properties of a pod, you can use the `oc` command to execute the `ip` command within a pod:

```
$ oc exec -it <pod_name> -- ip route
```

Note: You can also reference the pod's `k8s.v1.cni.cncf.io/network-status` annotation to see which secondary network has been assigned the default route, by the presence of the `default-route` key in the JSON-formatted list of objects.

To set a static IP address or MAC address for a pod, you can use the JSON formatted annotations. This requires you to create networks that specifically allow for this functionality. This can be specified in a `rawCNIConfig` for the CNO.
Edit the CNO CR by running the following command:
```
$ oc edit networks.operator.openshift.io cluster
```

The following YAML describes the configuration parameters for the CNO:
Cluster Network Operator YAML configuration
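A sketch of the relevant portion of the CNO CR; the `<name>` and `<namespace>` values and the raw CNI JSON are template placeholders:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: <name>
    namespace: <namespace>
    rawCNIConfig: '{
      ...
    }'
    type: Raw
```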
where:
`name`: Specify a name for the secondary network attachment that you are creating. The name must be unique within the specified `namespace`.

`namespace`: Specify the namespace to create the network attachment in. If you do not specify a value, then the `default` namespace is used.

`rawCNIConfig`: Specify the CNI plugin configuration in JSON format, which is based on the following template.
The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin:
macvlan CNI plugin JSON configuration object using static IP and MAC address
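A sketch of the chained macvlan and tuning configuration; the `<name>` value and the `eth0` master interface are placeholders:

```json
{
  "cniVersion": "0.3.1",
  "name": "<name>",
  "plugins": [
    {
      "type": "macvlan",
      "capabilities": { "ips": true },
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "static"
      }
    },
    {
      "capabilities": { "mac": true },
      "type": "tuning"
    }
  ]
}
```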
where:
`name`: Specifies the name for the secondary network attachment to create. The name must be unique within the specified `namespace`.

`plugins`: Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.

`ips`: Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.

`master`: Specifies the interface that the macvlan plugin uses.

`mac`: Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.
The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod.
Edit the pod by running the following command:

```
$ oc edit pod <name>
```

macvlan CNI plugin JSON configuration object using static IP and MAC address
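A sketch of the pod annotation; the `<name>` value, the IP address, and the MAC address are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>",
        "ips": [ "192.0.2.205/24" ],
        "mac": "CA:FE:C0:FF:EE:00"
      }
    ]'
```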
Note: Static IP addresses and MAC addresses do not have to be used at the same time; you can use them individually or together.
To verify the IP address and MAC properties of a pod with secondary networks, use the `oc` command to execute the `ip` command within a pod:

```
$ oc exec -it <pod_name> -- ip a
```
3.4. Configuring multi-network policy

Administrators can use the MultiNetworkPolicy API to create multiple network policies that manage traffic for pods attached to secondary networks. For example, you can create policies that allow or deny traffic based on specific ports, IP addresses or ranges, or labels.
Multi-network policies can be used to manage traffic on secondary networks in the cluster. These policies cannot manage the default cluster network or primary network of user-defined networks.
As a cluster administrator, you can configure a multi-network policy for any of the following network types:
- Single-Root I/O Virtualization (SR-IOV)
- MAC Virtual Local Area Network (MacVLAN)
- IP Virtual Local Area Network (IPVLAN)
- Bond Container Network Interface (CNI) over SR-IOV
- OVN-Kubernetes secondary networks
Configuring multi-network policies for SR-IOV secondary networks is supported only with kernel network interface controllers (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications.
3.4.1. Differences between multi-network policy and network policy
Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences:
- You must use the `MultiNetworkPolicy` API:

  ```yaml
  apiVersion: k8s.cni.cncf.io/v1beta1
  kind: MultiNetworkPolicy
  ```

- You must use the `multi-networkpolicy` resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the `oc get multi-networkpolicy <name>` command, where `<name>` is the name of a multi-network policy.
- You can use the `k8s.v1.cni.cncf.io/policy-for` annotation on a `MultiNetworkPolicy` object to point to a `NetworkAttachmentDefinition` (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies.

  Example multi-network policy that includes the `k8s.v1.cni.cncf.io/policy-for` annotation

  ```yaml
  apiVersion: k8s.cni.cncf.io/v1beta1
  kind: MultiNetworkPolicy
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
  ```

  where:

  `<namespace_name>`: Specifies the namespace name.

  `<network_name>`: Specifies the name of a network attachment definition.
3.4.2. Enabling multi-network policy for the cluster
As a cluster administrator, you can enable multi-network policy support on your cluster.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster with a user with `cluster-admin` privileges.
Procedure
1. Create the `multinetwork-enable-patch.yaml` file with the following YAML:
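A minimal patch that sets the relevant field:

```yaml
spec:
  useMultiNetworkPolicy: true
```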
2. Configure the cluster to enable multi-network policy by running the following command. Successful output states that the network object was patched.

```
$ oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml
```
3.4.3. Supporting multi-network policies in IPv6 networks
The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link.
The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the useMultiNetworkPolicy parameter is set to true.
To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of rules in every pod affected by a multi-network policy:
Multi-network policy custom rules
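A sketch of the rule set as deployed through the multi-network policy custom rules ConfigMap; the exact resource layout can differ by version, and the `# N` comments correspond to the callouts below:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: multi-networkpolicy-custom-rules
  namespace: openshift-multus
data:
  custom-v6-rules.txt: |
    # accept NDP
    -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT # 1
    -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT # 2
    # accept RA/RS
    -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT # 3
    -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT # 4
```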
1. This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes.
2. This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender.
3. This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information.
4. This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts.
You cannot edit these predefined rules.
These rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues.
3.4.4. Working with multi-network policy
As a cluster administrator, you can create, edit, view, and delete multi-network policies.
3.4.4.1. Prerequisites
- You have enabled multi-network policy support for your cluster.
3.4.4.2. Creating a multi-network policy using the CLI
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
1. Create a policy rule:

a. Create a `<policy_name>.yaml` file:

```
$ touch <policy_name>.yaml
```

where:

`<policy_name>`: Specifies the multi-network policy file name.

b. Define a multi-network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces
This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.
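A sketch of such a policy, with `<network_name>` as the NAD reference:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
```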
where:

`<network_name>`: Specifies the name of a network attachment definition.
Allow ingress from all pods in the same namespace
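A sketch of such a policy, with `<network_name>` as the NAD reference:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```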
where:

`<network_name>`: Specifies the name of a network attachment definition.
Allow ingress traffic to one pod from a particular namespace
This policy allows traffic to pods that have the `pod-a` label from pods running in `namespace-y`.
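A sketch of such a policy, assuming the `pod: pod-a` label and a namespace selected by its `kubernetes.io/metadata.name` label:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-traffic-pod
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
    matchLabels:
      pod: pod-a
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: namespace-y
```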
where:

`<network_name>`: Specifies the name of a network attachment definition.
Restrict traffic to a service
This policy, when applied, ensures that every pod with both labels `app=bookstore` and `role=api` can be accessed only by pods with the label `app=bookstore`. In this example, the application could be a REST API server, marked with the labels `app=bookstore` and `role=api`.

This example addresses the following use cases:
- Restricting the traffic to a service to only the other microservices that need to use it.
- Restricting the connections to a database to only permit the application using it.
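A sketch of such a policy, as described after this list:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: api-allow
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
```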
where:

`<network_name>`: Specifies the name of a network attachment definition.
2. To create the multi-network policy object, enter the following command:

```
$ oc apply -f <policy_name>.yaml -n <namespace>
```

where:

`<policy_name>`: Specifies the multi-network policy file name.

`<namespace>`: Optional: Specifies the namespace if you defined the object in a different namespace than the current namespace.

Successful output lists the name of the policy object and the `created` status.
If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
3.4.4.3. Editing a multi-network policy
You can edit a multi-network policy in a namespace.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
1. Optional: To list the multi-network policy objects in a namespace, enter the following command:

```
$ oc get multi-networkpolicy -n <namespace>
```

where:

`<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
2. Edit the multi-network policy object.

- If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command:

```
$ oc apply -n <namespace> -f <policy_file>.yaml
```

where:

`<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

`<policy_file>`: Specifies the name of the file containing the network policy.
- If you need to update the multi-network policy object directly, enter the following command:

```
$ oc edit multi-networkpolicy <policy_name> -n <namespace>
```

where:

`<policy_name>`: Specifies the name of the network policy.

`<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
3. Confirm that the multi-network policy object is updated:

```
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
```

where:

`<policy_name>`: Specifies the name of the multi-network policy.

`<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.
3.4.4.4. Viewing multi-network policies using the CLI
You can examine the multi-network policies in a namespace.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
List multi-network policies in a namespace:
- To view multi-network policy objects defined in a namespace, enter the following command:

```
$ oc get multi-networkpolicy
```

- Optional: To examine a specific multi-network policy, enter the following command:

```
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
```

where:

`<policy_name>`: Specifies the name of the multi-network policy to inspect.

`<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
3.4.4.5. Deleting a multi-network policy using the CLI
You can delete a multi-network policy in a namespace.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
To delete a multi-network policy object, enter the following command:

```
$ oc delete multi-networkpolicy <policy_name> -n <namespace>
```

where:

`<policy_name>`: Specifies the name of the multi-network policy.

`<namespace>`: Optional: Specifies the namespace if you defined the object in a different namespace than the current namespace.

Successful output lists the name of the policy object and the `deleted` status.
If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.
3.4.4.6. Creating a default deny all multi-network policy
This policy blocks all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies and traffic between host-networked pods. This procedure enforces a strong deny policy by applying a deny-by-default policy in the my-project namespace.
Without configuring a NetworkPolicy custom resource (CR) that allows traffic communication, the following policy might cause communication problems across your cluster.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
1. Create the following YAML that defines a `deny-by-default` policy to deny ingress from all pods in all namespaces. Save the YAML in the `deny-by-default.yaml` file:
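A sketch of the policy; the `# N` comments correspond to the callouts below, and `my-project` and `<network_name>` are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  namespace: my-project # 1
  annotations:
    k8s.v1.cni.cncf.io/policy-for: my-project/<network_name> # 2
spec:
  podSelector: {} # 3
  policyTypes: # 4
  - Ingress # 5
  ingress: [] # 6
```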
1. Specifies the namespace in which to deploy the policy. For example, the `my-project` namespace.
2. Specifies the name of the namespace project followed by the network attachment definition name.
3. If this field is empty, the configuration matches all the pods. Therefore, the policy applies to all pods in the `my-project` namespace.
4. Specifies a list of rule types that the `NetworkPolicy` relates to.
5. Specifies `Ingress` only `policyTypes`.
6. Specifies `ingress` rules. If not specified, all incoming traffic is dropped to all pods.
2. Apply the policy by entering the following command:

```
$ oc apply -f deny-by-default.yaml
```

Successful output lists the name of the policy object and the `created` status.
3.4.4.7. Creating a multi-network policy to allow traffic from external clients

With the deny-by-default policy in place, you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows external access to the pod, either directly from the public internet or by using a load balancer. Traffic is allowed only to a pod with the label app=web.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
1. Create a policy that allows traffic from the public internet directly or by using a load balancer to access the pod. Save the YAML in the `web-allow-external.yaml` file:
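A sketch of the policy; the `default` namespace and `<network_name>` are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-external
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}
```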
2. Apply the policy by entering the following command:

```
$ oc apply -f web-allow-external.yaml
```

Successful output lists the name of the policy object and the `created` status. This policy allows traffic from all resources, including external traffic.
3.4.4.8. Creating a multi-network policy allowing traffic to an application from all namespaces
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the `web-allow-all-namespaces.yaml` file:
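A sketch of the policy; the `default` namespace and `<network_name>` are placeholders, and the empty `namespaceSelector` matches all namespaces:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-all-namespaces
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
```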
Note: By default, if you do not specify a `namespaceSelector` parameter in the policy object, no namespaces get selected. This means the policy allows traffic only from the namespace where the network policy is deployed.

2. Apply the policy by entering the following command:

```
$ oc apply -f web-allow-all-namespaces.yaml
```

Successful output lists the name of the policy object and the `created` status.
Verification
1. Start a web service in the `default` namespace by entering the following command:

```
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
```

2. Run the following command to deploy an `alpine` image in the `secondary` namespace and to start a shell:

```
$ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
```

3. Run the following command in the shell and observe that the service allows the request:

```
# wget -qO- --timeout=2 http://web.default
```
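The expected output is the HTML of the nginx welcome page, for example:

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>
```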
3.4.4.9. Creating a multi-network policy allowing traffic to an application from a namespace
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:
- Restrict traffic to a production database only to namespaces that have production workloads deployed.
- Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.
Prerequisites
- Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network plugin, with `mode: NetworkPolicy` set.
- You installed the OpenShift CLI (`oc`).
- You logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
1. Create a policy that allows traffic from all pods in a particular namespace with the label `purpose=production`. Save the YAML in the `web-allow-prod.yaml` file:
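A sketch of the policy; the `default` namespace and `<network_name>` are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-prod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
```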
2. Apply the policy by entering the following command:

```
$ oc apply -f web-allow-prod.yaml
```

Successful output lists the name of the policy object and the `created` status.
Verification
1. Start a web service in the `default` namespace by entering the following command:

```
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
```

2. Run the following command to create the `prod` namespace:

```
$ oc create namespace prod
```

3. Run the following command to label the `prod` namespace:

```
$ oc label namespace/prod purpose=production
```

4. Run the following command to create the `dev` namespace:

```
$ oc create namespace dev
```

5. Run the following command to label the `dev` namespace:

```
$ oc label namespace/dev purpose=testing
```

6. Run the following command to deploy an `alpine` image in the `dev` namespace and to start a shell:

```
$ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
```

7. Run the following command in the shell and observe the reason for the blocked request. For example, expected output states `wget: download timed out`.

```
# wget -qO- --timeout=2 http://web.default
```

8. Run the following command to deploy an `alpine` image in the `prod` namespace and start a shell:

```
$ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
```

9. Run the following command in the shell and observe that the request is allowed. The expected output is the HTML of the nginx welcome page, as shown in the preceding section.

```
# wget -qO- --timeout=2 http://web.default
```
3.5. Removing a pod from a secondary network

As a cluster user, you can remove a pod from a secondary network.
3.5.1. Removing a pod from a secondary network
You can remove a pod from a secondary network only by deleting the pod.
Prerequisites
- A secondary network is attached to the pod.
- Install the OpenShift CLI (`oc`).
- Log in to the cluster.
Procedure
To delete the pod, enter the following command:
```
$ oc delete pod <name> -n <namespace>
```

- `<name>` is the name of the pod.
- `<namespace>` is the namespace that contains the pod.
3.6. Editing a secondary network

As a cluster administrator, you can modify the configuration for an existing secondary network.
3.6.1. Modifying a secondary network attachment definition
As a cluster administrator, you can make changes to an existing secondary network. Any existing pods attached to the secondary network will not be updated.
Prerequisites
- You have configured a secondary network for your cluster.
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
To edit a secondary network for your cluster, complete the following steps:
1. Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor:

```
$ oc edit networks.operator.openshift.io cluster
```

2. In the `additionalNetworks` collection, update the secondary network with your changes.
3. Save your changes and quit the text editor to commit your changes.
4. Optional: Confirm that the CNO updated the `NetworkAttachmentDefinition` object by running the following command. Replace `<network-name>` with the name of the secondary network to display. There might be a delay before the CNO updates the `NetworkAttachmentDefinition` object to reflect your changes.

```
$ oc get network-attachment-definitions <network-name> -o yaml
```

For example, the following console output displays a `NetworkAttachmentDefinition` object that is named `net1`:

```
$ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
{ "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} }
```
3.7. Configuring IP address assignment on secondary networks
The following sections explain how to configure IP address assignment for secondary networks.
3.7.1. Configuration of IP address assignment for a network attachment
For secondary networks, you can assign IP addresses by using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment.
The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components:
- CNI Plugin: Responsible for integrating with the Kubernetes networking stack to request and release IP addresses.
- DHCP IPAM CNI Daemon: A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself.
For networks requiring type: dhcp in their IPAM configuration, ensure the following:
- A DHCP server is available and running in the environment.
- The DHCP server is external to the cluster and is part of your existing network infrastructure.
- The DHCP server is appropriately configured to serve IP addresses to the nodes.
In cases where a DHCP server is unavailable in the environment, consider using the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server.
Use the Whereabouts CNI plugin when no external DHCP server exists or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations.
A separate daemon, the DHCP IPAM CNI daemon, ensures the periodic renewal of a DHCP lease throughout the lifetime of a container. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the secondary network setup.
3.7.1.1. Static IP address assignment configuration
The following table describes the configuration for static IP address assignment:
| Field | Type | Description |
|---|---|---|
| type | string | The IPAM address type. The value static is required. |
| addresses | array | An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. |
| routes | array | An array of objects specifying routes to configure inside the pod. |
| dns | object | Optional: An array of objects specifying the DNS configuration. |
The addresses array requires objects with the following fields:
| Field | Type | Description |
|---|---|---|
| address | string | An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24, the secondary network interface is assigned the IP address 10.10.21.10 with a 255.255.255.0 netmask. |
| gateway | string | The default gateway to route egress network traffic to. |
The routes array requires objects with the following fields:

| Field | Type | Description |
|---|---|---|
| dst | string | The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. |
| gw | string | The gateway that routes network traffic. |
The dns object requires the following fields:

| Field | Type | Description |
|---|---|---|
| nameservers | array | An array of one or more IP addresses where DNS queries get sent. |
| domain | string | The default domain to append to a hostname. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com. |
| search | array | An array of domain names to append to an unqualified hostname, such as example-host, during a DNS lookup query. |
Static IP address assignment configuration example
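The following is a minimal sketch of a static IPAM configuration that is consistent with the fields in the preceding tables; the address, route, and DNS values are illustrative assumptions, not values taken from this document:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "10.10.21.10/24",
        "gateway": "10.10.21.1"
      }
    ],
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "10.10.21.1"
      }
    ],
    "dns": {
      "nameservers": ["10.10.21.1"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}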
3.7.1.2. Dynamic IP address (DHCP) assignment configuration
A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.
For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment.
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:
Example shim network attachment definition
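A shim attachment of this kind might look like the following sketch. The attachment name, namespace, and bridge CNI type are assumptions rather than values from this document; callout 1 marks the DHCP setting that the note below describes:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp" 1
        }
      }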
1 Specifies dynamic IP address (DHCP) assignment for the cluster.
The following table describes the configuration parameters for dynamic IP address assignment with DHCP.
| Field | Type | Description |
|---|---|---|
| type | string | The IPAM address type. The value dhcp is required. |
The following JSON example describes the configuration for dynamic IP address assignment with DHCP.
Dynamic IP address (DHCP) assignment configuration example
{
  "ipam": {
    "type": "dhcp"
  }
}
3.7.1.3. Dynamic IP address assignment configuration with Whereabouts
The Whereabouts CNI plugin allows the dynamic assignment of an IP address to a secondary network without the use of a DHCP server.
The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments.
3.7.1.3.1. Dynamic IP address configuration objects
The following table describes the configuration objects for dynamic IP address assignment with Whereabouts:
| Field | Type | Description |
|---|---|---|
| type | string | The IPAM address type. The value whereabouts is required. |
| range | string | An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. |
| exclude | array | Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. |
| network_name | string | Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. |
3.7.1.3.2. Dynamic IP address assignment configuration that uses Whereabouts
The following example shows a dynamic address assignment configuration that uses Whereabouts:
Whereabouts dynamic IP address assignment
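The following sketch shows an ipam stanza of this kind; the address range and exclusions are illustrative:

{
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/27",
    "exclude": [
      "192.0.2.192/30",
      "192.0.2.196/32"
    ]
  }
}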
3.7.1.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges
The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks.
NetworkAttachmentDefinition 1
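A sketch of the first definition follows; the object names, CIDR, and network_name value are assumptions chosen for illustration. Callout 1 marks the network_name field:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-net-1
  namespace: tenant-1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "whereabouts-net-1",
      "type": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/29",
        "network_name": "shared-net" 1
      }
    }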
1 Optional: If set, must match the network_name of NetworkAttachmentDefinition 2.
NetworkAttachmentDefinition 2
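A sketch of the second definition follows, again with illustrative names and an overlapping CIDR; when the network_name values match, both definitions draw from the same allocation pool:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-net-2
  namespace: tenant-2
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "whereabouts-net-2",
      "type": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/24",
        "network_name": "shared-net" 1
      }
    }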
1 Optional: If set, must match the network_name of NetworkAttachmentDefinition 1.
3.7.1.4. Creating a whereabouts-reconciler daemon set
The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down.
You can also use a NetworkAttachmentDefinition custom resource definition (CRD) for dynamic IP address assignment.
The whereabouts-reconciler daemon set is automatically created when you configure a secondary network through the Cluster Network Operator. It is not automatically created when you configure a secondary network from a YAML manifest.
To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file.
Use the following procedure to deploy the whereabouts-reconciler daemon set.
Procedure
Edit the Network.operator.openshift.io custom resource (CR) by running the following command:

$ oc edit network.operator.openshift.io cluster

Include the additionalNetworks section shown in the following example YAML extract within the spec definition of the custom resource (CR):
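A sketch of such an extract follows; the bridge CNI type and the address range are illustrative assumptions:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: whereabouts-shim
    namespace: default
    rawCNIConfig: |-
      {
        "name": "whereabouts-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "whereabouts",
          "range": "192.0.2.0/24"
        }
      }
    type: Raw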
- Save the file and exit the text editor.

Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command:

$ oc get all -n openshift-multus | grep whereabouts-reconciler

Example output

pod/whereabouts-reconciler-jnp6g    1/1    Running    0    6s
pod/whereabouts-reconciler-k76gg    1/1    Running    0    6s
daemonset.apps/whereabouts-reconciler    6    6    6    6    6    kubernetes.io/os=linux    6s
3.7.1.5. Configuring the Whereabouts IP reconciler schedule
The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might otherwise exhaust the available IP addresses and prevent new pods from getting an IP allocated to them.
Use this procedure to change the frequency at which the IP reconciler runs.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running.
Procedure
Run the following command to create a
ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler:

$ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *"

This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements.
Note: The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported.

Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command:

$ oc get all -n openshift-multus | grep whereabouts-reconciler

Example output

pod/whereabouts-reconciler-2p7hw    1/1    Running    0    4m14s
pod/whereabouts-reconciler-76jk7    1/1    Running    0    4m14s
daemonset.apps/whereabouts-reconciler    6    6    6    6    6    kubernetes.io/os=linux    4m16s

Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval:

$ oc -n openshift-multus logs whereabouts-reconciler-2p7hw
3.7.1.6. Creating a configuration for assignment of dual-stack IP addresses dynamically
Dual-stack IP address assignment can be configured with the ipRanges parameter for:
- IPv4 addresses
- IPv6 addresses
- multiple IP address assignment
Procedure
- Set type to whereabouts.
- Use ipRanges to allocate IP addresses, as shown in the sketch that follows this list.
- Attach the network to a pod. For more information, see "Adding a pod to a secondary network".
- Verify that all IP addresses are assigned.
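The following sketch shows an ipam stanza that allocates one IPv4 and one IPv6 address for each pod; the CIDR values are illustrative:

{
  "ipam": {
    "type": "whereabouts",
    "ipRanges": [
      {
        "range": "192.168.10.0/24"
      },
      {
        "range": "2001:db8::/64"
      }
    ]
  }
}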
Run the following command to verify that the IP addresses are assigned:

$ oc exec -it mypod -- ip a
3.8. Configuring the master interface in the container network namespace
The following section explains how to create and manage MAC-VLAN, IP-VLAN, and VLAN subinterfaces based on a master interface.
3.8.1. About configuring the master interface in the container network namespace
You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface that exists in a container namespace. You can also create a master interface as part of the pod network configuration in a separate network attachment definition CRD.
To use a container namespace master interface, you must specify true for the linkInContainer parameter that exists in the subinterface configuration of the NetworkAttachmentDefinition CRD.
3.8.1.1. Creating multiple VLANs on SR-IOV VFs
An example use case for this feature is creating multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces.
The following example shows how to configure the setup illustrated in this diagram.
Figure 3.1. Creating VLANs
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the SR-IOV Network Operator.
Procedure
Create a dedicated container namespace where you want to deploy your pod by using the following command:
$ oc new-project test-namespace

Create an SR-IOV node policy:
Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file:
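Such a policy might look like the following sketch; the resource name, NIC selector values, and VF count are illustrative assumptions for a Mellanox NIC, as the note below indicates:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriovnic
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  nicSelector:
    vendor: "15b3"        # Mellanox vendor ID; illustrative
    deviceID: "101b"      # illustrative device ID
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 10
  priority: 99
  resourceName: sriovnic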
Note: The SR-IOV network node policy configuration example, with the setting deviceType: netdevice, is tailored specifically for Mellanox Network Interface Cards (NICs).

Apply the YAML by running the following command:
$ oc apply -f sriov-node-network-policy.yaml

Note: Applying this might take some time because the node requires a reboot.
Create an SR-IOV network:
Create the SriovNetwork custom resource (CR) for the additional secondary SR-IOV network attachment, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml:
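Such a CR might look like the following sketch; the network name and options are illustrative, and IPAM is left empty here because the VLAN subinterface defined later handles addressing:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-network
  namespace: openshift-sriov-network-operator
spec:
  ipam: '{}'
  networkNamespace: test-namespace
  resourceName: sriovnic
  spoofChk: "off"
  trust: "on"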
Apply the YAML by running the following command:

$ oc apply -f sriov-network-attachment.yaml
Create the VLAN secondary network:
Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml:
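The definition might look like the following sketch; the interface names, VLAN ID, and IPAM range are illustrative. The key setting is linkInContainer: true, which tells the VLAN plugin that its master interface lives in the pod network namespace rather than on the host:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100
  namespace: test-namespace
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "vlan-100",
      "plugins": [
        {
          "type": "vlan",
          "master": "ext0",
          "mtu": 1500,
          "vlanId": 100,
          "linkInContainer": true,
          "ipam": {
            "type": "whereabouts",
            "range": "192.0.2.0/24"
          }
        }
      ]
    }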
Apply the YAML file by running the following command:

$ oc apply -f vlan100-additional-network-configuration.yaml
Create a pod definition by using the earlier specified networks:
Using the following YAML example, create a file named pod-a.yaml:

Note: The manifest includes two resources:
- Namespace with security labels
- Pod definition with appropriate network annotation
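A manifest of this shape follows as a sketch; the pod-security labels, image, and interface names are illustrative assumptions. The interface name requested for the SR-IOV attachment (callout 1) must match the master name that the VLAN network attachment definition references:

apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: test-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "sriov-network",
        "namespace": "test-namespace",
        "interface": "ext0" 1
      },
      {
        "name": "vlan-100",
        "namespace": "test-namespace",
        "interface": "ext0.100"
      }
    ]'
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx-container
    image: nginxinc/nginx-unprivileged:latest   # illustrative image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
    ports:
    - containerPort: 8080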
1 The name to be used as the master for the VLAN interface.
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml
Get detailed information about the nginx-pod within the test-namespace by running the following command:

$ oc describe pods nginx-pod -n test-namespace
3.8.1.2. Creating a subinterface based on a bridge master interface in a container namespace
You can create a subinterface based on a bridge master interface that exists in a container namespace. You can apply the same approach to other types of interfaces.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges.
Procedure
Create a dedicated container namespace where you want to deploy your pod by entering the following command:
$ oc new-project test-namespace

Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml:
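The definition might look like the following sketch; the bridge name and address range are illustrative:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
  namespace: test-namespace
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "bridge-network",
      "type": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.10.0.0/24"
      }
    }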
Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster:

$ oc apply -f bridge-nad.yaml
Verify that you successfully created a NetworkAttachmentDefinition CRD by entering the following command. The expected output shows the name of the NAD CRD and the creation age in minutes.

$ oc get network-attachment-definitions

Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN secondary network configuration:
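The configuration might look like the following sketch; the names and address range are illustrative. The master value net1 refers to the bridge interface created inside the pod, and linkInContainer: true tells the plugin to look for that master in the container network namespace:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-net
  namespace: test-namespace
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ipvlan-net",
      "type": "ipvlan",
      "master": "net1",
      "mode": "l2",
      "linkInContainer": true,
      "ipam": {
        "type": "whereabouts",
        "range": "10.100.10.0/24"
      }
    }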
Apply the YAML file by running the following command:

$ oc apply -f ipvlan-additional-network-configuration.yaml
Verify that the NetworkAttachmentDefinition CRD has been created successfully by running the following command. The expected output shows the name of the NAD CRD and the creation age in minutes.

$ oc get network-attachment-definitions

Using the following YAML example, create a file named pod-a.yaml for the pod definition:
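The definition might look like the following sketch; the image and security settings are illustrative assumptions. Callout 1 marks the interface name that the IPVLAN definition uses as its master:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: test-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "bridge-network",
        "interface": "net1" 1
      },
      {
        "name": "ipvlan-net",
        "interface": "net2"
      }
    ]'
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test-pod
    image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative image
    command: ["sleep", "infinity"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]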
1 Specifies the name to be used as the master for the IPVLAN interface.
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml

Verify that the pod is running by using the following command:
$ oc get pod -n test-namespace

Example output

NAME    READY   STATUS    RESTARTS   AGE
pod-a   1/1     Running   0          2m36s

Show network interface information about the pod-a resource within the test-namespace by running the following command:

$ oc exec -n test-namespace pod-a -- ip a

The command output shows that the network interface net2 is associated with the physical interface net1.
3.9. Removing a secondary network
As a cluster administrator, you can remove a secondary network attachment.
3.9.1. Removing a secondary network attachment definition
As a cluster administrator, you can remove a secondary network from your OpenShift Container Platform cluster. The secondary network is not removed from any pods it is attached to.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
To remove a secondary network from your cluster, complete the following steps:
Edit the Cluster Network Operator (CNO) CR in your default text editor by running the following command:
$ oc edit networks.operator.openshift.io cluster

Modify the CR by removing the configuration that the CNO created from the additionalNetworks collection for the secondary network that you want to remove.
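A sketch of the resulting CR follows, showing the case where the removed definition was the only entry in the collection; callout 1 marks the empty collection:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks: [] 1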
1 If you are removing the configuration mapping for the only secondary network attachment definition in the additionalNetworks collection, you must specify an empty collection.
To remove a network attachment definition from your cluster, enter the following command:
$ oc delete net-attach-def <name_of_NAD>

Replace <name_of_NAD> with the name of your network attachment definition.
- Save your changes and quit the text editor to commit your changes.
Optional: Confirm that the secondary network CR was deleted by running the following command:
$ oc get network-attachment-definition --all-namespaces