Multiple networks
Configuring and managing multiple network interfaces and virtual routing in OpenShift Container Platform
Chapter 1. Understanding multiple networks
OpenShift Container Platform administrators and users can use user-defined networks (UDNs) or `NetworkAttachmentDefinition` (NAD) custom resources to create multiple networks for pods.
1.1. Multiple networks with the OVN-K CNI
By default, OVN-Kubernetes serves as the Container Network Interface (CNI) of an OpenShift Container Platform cluster. This network interface is what administrators use to create default networks.
Both user-defined networks and Network Attachment Definitions can serve as the following network types:
- Primary networks: Act as the primary network for the pod. By default, all traffic passes through the primary network unless you have configured a pod route to send traffic through other networks.
- Secondary networks: Act as secondary, non-default networks for a pod. Secondary networks offer separate interfaces dedicated to specific traffic types or purposes. Only pod traffic that you explicitly configure to use a secondary network routes through its interface.
The following diagram shows a cluster that has an existing default network infrastructure that uses a physical network interface, `eth0`.
Figure 1.1. Diagram showing namespaces with multiple secondary UDNs
During cluster installation, OpenShift Container Platform administrators can configure secondary pod networks by leveraging the Multus CNI plugin. With Multus, you can use CNI plugins such as ipvlan or macvlan, together with Network Attachment Definitions, to serve as secondary networks for pods.
User-defined networks are only supported when OVN-Kubernetes is used as the CNI. UDNs are not supported for use with other CNIs.
You can define a secondary network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one secondary network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. For more information, see the links in Additional resources:
- For a complete list of supported CNI plugins, see "Secondary networks in OpenShift Container Platform".
- For information about user-defined networks, see "About user-defined networks (UDNs)".
- For information about Network Attachment Definitions, see "Creating primary networks by using a NetworkAttachmentDefinition".
1.2. UserDefinedNetwork and NetworkAttachmentDefinition support matrix
You can use user-defined networks and network attachment definitions to define and configure customized networks for your needs.

By creating `UserDefinedNetwork` or `NetworkAttachmentDefinition` custom resources (CRs), users can:

- Create customizable network configurations
- Define their own network topologies
- Ensure network isolation
- Manage IP addressing for workloads
- Configure advanced network features

By creating a `ClusterUserDefinedNetwork` CR, cluster administrators can apply the same capabilities to networks that span multiple namespaces at the cluster level.

User-defined networks and network attachment definitions can serve as both the primary and secondary network interface, and each supports the `layer2` and `layer3` topology types.

As of OpenShift Container Platform 4.19, the `Localnet` topology is available for use with both the `ClusterUserDefinedNetwork` and `NetworkAttachmentDefinition` CRs.

The following tables highlight the supported features of the `UserDefinedNetwork`, `NetworkAttachmentDefinition`, and `ClusterUserDefinedNetwork` CRs.
| Network feature | Layer2 topology | Layer3 topology |
|---|---|---|
| east-west traffic | ✓ | ✓ |
| north-south traffic | ✓ | ✓ |
| Persistent IPs | ✓ | X |
| Services | ✓ | ✓ |
| Routes | X | X |
| `EgressIP` | ✓ | ✓ |
| Multicast | X | ✓ |
| `NetworkPolicy` resource | ✓ | ✓ |
| `MultiNetworkPolicy` resource | X | X |
where:

- Multicast: Must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information, see "About multicast".
- `NetworkPolicy` resource: When creating a `ClusterUserDefinedNetwork` CR with a primary network type, network policies must be created after the `UserDefinedNetwork` CR.
| Network feature | Layer2 topology | Layer3 topology | Localnet topology |
|---|---|---|---|
| east-west traffic | ✓ | ✓ | ✓ (`ClusterUserDefinedNetwork` CR only) |
| north-south traffic | X | X | ✓ (`ClusterUserDefinedNetwork` CR only) |
| Persistent IPs | ✓ | X | ✓ (`ClusterUserDefinedNetwork` CR only) |
| Services | X | X | X |
| Routes | X | X | X |
| `EgressIP` | X | X | X |
| Multicast | X | X | X |
| `NetworkPolicy` resource | X | X | X |
| `MultiNetworkPolicy` resource | ✓ | ✓ | ✓ (`ClusterUserDefinedNetwork` CR only) |
The Localnet topology is unavailable for use with the `UserDefinedNetwork` CR. The following table lists the supported features of the `NetworkAttachmentDefinition` CR:
| Network feature | Layer2 topology | Layer3 topology | Localnet topology |
|---|---|---|---|
| east-west traffic | ✓ | ✓ | ✓ |
| north-south traffic | ✓ | ✓ | ✓ |
| Persistent IPs | ✓ | X | ✓ |
| Services | ✓ | ✓ | X |
| Routes | X | X | X |
| `EgressIP` | ✓ | ✓ | X |
| Multicast | X | ✓ | X |
| `MultiNetworkPolicy` resource | X | X | ✓ |
| `NetworkPolicy` resource | ✓ | ✓ | X |
where:

- Multicast: Must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information, see "About multicast".
- `NetworkPolicy` resource: When creating a `ClusterUserDefinedNetwork` CR with a primary network type, network policies must be created after the `UserDefinedNetwork` CR.
Chapter 2. Use cases for a secondary network
You can use a secondary network in situations where you require network isolation, including data plane and control plane separation.
Isolating network traffic is useful for the following performance and security reasons:
Performance
Traffic management: You can send traffic on two different planes to manage how much traffic flows along each plane.
Security
Network isolation: You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an `eth0` interface that is attached to the cluster-wide default network. You can view the interfaces for a pod by using the `oc exec -it <pod_name> -- ip a` command. Secondary network interfaces that you add to a pod are named `net1`, `net2`, and so on.
To attach secondary network interfaces to a pod, you must create configurations that define how the interfaces are attached. Use either a `UserDefinedNetwork` or a `NetworkAttachmentDefinition` custom resource to specify each interface.
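For illustration, attaching a secondary network to a pod can be sketched as follows. This is a minimal example, not the only supported form; the pod name, namespace, image, and the network name `bridge-net` are hypothetical, and `bridge-net` must already exist as a network definition in the pod's namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
  namespace: example-ns      # hypothetical namespace
  annotations:
    # Multus attaches the named network as the next available
    # secondary interface on the pod, for example net1.
    k8s.v1.cni.cncf.io/networks: bridge-net
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
```

After the pod starts, running `oc exec -it example-pod -- ip a` shows the default `eth0` interface plus the added `net1` interface.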
2.1. Secondary networks in OpenShift Container Platform
OpenShift Container Platform provides the following CNI plugins for creating secondary networks in your cluster:
- bridge: Configure a bridge-based secondary network to allow pods on the same host to communicate with each other and the host.
- bond-cni: Aggregate multiple network interfaces into a single logical bonded interface.
- host-device: Allow pods access to a physical Ethernet network device on the host system.
- ipvlan: Allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based secondary network. Unlike a macvlan-based secondary network, each pod shares the same MAC address as the parent physical network interface.
- VLAN: Allow VLAN-based network isolation and connectivity for pods.
- macvlan: Allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based secondary network is provided a unique MAC address.
- TAP: Create a TAP device inside the container namespace. A TAP device enables user space programs to send and receive network packets.
- SR-IOV: Allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system.
- route-override: Allow pods to override and set routes.
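As a sketch of one of these plugins, the following `NetworkAttachmentDefinition` configures a bridge-based secondary network. The resource name, namespace, bridge name `br0`, and the static address are assumptions chosen for illustration, not values from this document:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-net           # hypothetical name
  namespace: example-ns      # hypothetical namespace
spec:
  # The config field holds a standard CNI plugin configuration as JSON.
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "bridge-net",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "192.0.2.10/24" }
        ]
      }
    }
```

Pods in `example-ns` that reference `bridge-net` in their `k8s.v1.cni.cncf.io/networks` annotation receive a secondary interface connected to the `br0` bridge.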
Chapter 3. Primary networks
3.1. About user-defined networks
User-defined networks (UDNs) extend OVN-Kubernetes to enable custom layer 2 and layer 3 network segments with default isolation, providing enhanced network flexibility, security, and segmentation capabilities for multi-tenant deployments and custom network architectures.
3.1.1. Overview of user-defined networks
To secure and improve network segmentation and isolation, cluster administrators can use the `ClusterUserDefinedNetwork` CR to create primary or secondary networks that span namespaces at the cluster level, and users can use the `UserDefinedNetwork` CR to create networks that are scoped to a single namespace.
Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin for OpenShift Container Platform only supported a layer 3 topology on the primary or main network. Due to Kubernetes design principles, all pods are attached to the main network, all pods communicate with each other by their IP addresses, and inter-pod traffic is restricted according to network policy.
While the Kubernetes design is useful for simple deployments, this layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments.
UDN improves the flexibility and segmentation capabilities of the default layer 3 topology for a Kubernetes pod network by enabling custom layer 2 and layer 3 network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.
The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, and how to create and verify these CRs.
3.1.2. Benefits of a user-defined network
User-defined networks enable tenant isolation by providing each namespace with its own isolated primary network, reducing cross-tenant traffic risks and simplifying network management by eliminating the need for complex network policies.
User-defined networks offer the following benefits:
Enhanced network isolation for security
- Tenant isolation: Namespaces can have their own isolated primary network, similar to how tenants are isolated in Red Hat OpenStack Platform (RHOSP). This improves security by reducing the risk of cross-tenant traffic.
Network flexibility
- Layer 2 and layer 3 support: Cluster administrators can configure primary networks as layer 2 or layer 3 network types.
Simplified network management
- Reduced network configuration complexity: With user-defined networks, the need for complex network policies is eliminated because isolation can be achieved by grouping workloads in different networks.
Advanced capabilities
- Consistent and selectable IP addressing: Users can specify and reuse IP subnets across different namespaces and clusters, providing a consistent networking environment.
- Support for multiple networks: The user-defined networking feature allows administrators to connect multiple namespaces to a single network, or to create distinct networks for different sets of namespaces.
- Virtual machine reachability over CUDN: When you attach virtual machines (VMs) to a layer 2 `ClusterUserDefinedNetwork` with BGP route advertisements enabled, you can publish VM routes to the provider network and import routes back, avoiding per-node static routes while improving VM ingress and egress reachability.
Simplification of application migration from Red Hat OpenStack Platform (RHOSP)
- Network parity: With user-defined networking, the migration of applications from OpenStack to OpenShift Container Platform is simplified by providing similar network isolation and configuration options.
Developers and administrators can create a user-defined network that is namespace scoped by using the `UserDefinedNetwork` custom resource. An overview of the process is as follows:

- An administrator creates a namespace for a user-defined network with the `k8s.ovn.org/primary-user-defined-network` label.
- The `UserDefinedNetwork` CR is created by either the cluster administrator or the user.
- The user creates pods in the namespace.
3.1.3. Limitations of a user-defined network
To deploy user-defined networks (UDNs) successfully, you must consider their limitations, including DNS resolution behavior, restricted access to default network services such as the image registry, network policy constraints between isolated networks, and the requirement to create namespaces and networks before pods.
Consider the following limitations before implementing a UDN.
DNS limitations:
- DNS lookups for pods resolve to the pod’s IP address on the cluster default network. Even if a pod is part of a user-defined network, DNS lookups will not resolve to the pod’s IP address on that user-defined network. However, DNS lookups for services and external entities will function as expected.
- When a pod is assigned to a primary UDN, it can access the Kubernetes API (KAPI) and DNS services on the cluster’s default network.
- Initial network assignment: You must create the namespace and network before creating pods. Assigning a namespace with pods to a new network or creating a UDN in an existing namespace will not be accepted by OVN-Kubernetes.
- Health check limitations: Kubelet health checks are performed by the cluster default network, which does not confirm the network connectivity of the primary interface on the pod. Consequently, scenarios where a pod appears healthy by the default network, but has broken connectivity on the primary interface, are possible with user-defined networks.
- Network policy limitations: Network policies that enable traffic between namespaces connected to different user-defined primary networks are not effective. These traffic policies do not take effect because there is no connectivity between these isolated networks.
- Creation and modification limitation: The `ClusterUserDefinedNetwork` and `UserDefinedNetwork` CRs cannot be modified after being created.
- Default network service access: A user-defined network pod is isolated from the default network, which means that most default network services are inaccessible. For example, a user-defined network pod cannot currently access the OpenShift Container Platform image registry. Because of this limitation, source-to-image builds do not work in a user-defined network namespace. Additionally, other functions do not work, including functions to create applications based on the source code in a Git repository, such as the `oc new-app` command, and functions to create applications from an OpenShift Container Platform template that use source-to-image builds. This limitation might also affect other `openshift-*.svc` services.
- Connectivity limitation: NodePort services on user-defined networks are not guaranteed isolation. For example, NodePort traffic from a pod to a service on the same node is not accessible, whereas traffic from a pod on a different node succeeds.
- Unclear error message for IP address exhaustion: When the subnet of a user-defined network runs out of available IP addresses, new pods fail to start. When this occurs, the following error is returned: `Failed to create pod sandbox`. This error message does not clearly specify that IP depletion is the cause. To confirm the issue, you can check the Events page in the pod's namespace on the OpenShift Container Platform web console, where an explicit message about subnet exhaustion is reported.
- Layer2 egress IP limitations (`UserDefinedNetwork` CRs only):
  - Egress IP does not work without a default gateway.
  - Egress IP does not work on Google Cloud.
  - Egress IP does not work with multiple gateways and instead forwards all traffic to a single gateway.
3.1.4. Layer 2 and layer 3 topologies
A layer 2 topology creates a distributed virtual switch across cluster nodes; this network topology provides smooth live migration of virtual machines (VMs) within the same subnet. A layer 3 topology creates unique segments per node with routing between them; this network topology effectively manages large broadcast domains.
In a flat layer 2 topology, virtual machines and pods connect to the virtual switch so that all these components can communicate with each other within the same subnet. This topology is useful for the live migration of VMs across nodes in the cluster. The following diagram shows a flat layer 2 topology with two nodes that use the virtual switch for live migration purposes:
Figure 3.1. A flat layer 2 topology that uses a virtual switch for component communication
If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When you do not specify a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, where the topology might cause a broadcast storm that can degrade network performance.
To access more configurable options for your network, you can integrate a layer 2 topology with a user-defined network (UDN). The following diagram shows two nodes that use a UDN with a layer 2 topology that includes pods that exist on each node. Each node includes two interfaces:
- A node interface, which connects networking components to the node.
- An Open vSwitch (OVS) bridge, such as `br-ex`, which creates a layer 2 OVN switch so that pods can communicate with each other and share resources.

An external switch connects these two interfaces, while the gateway or router handles routing traffic between the external switch and the layer 2 OVN switch. VMs and pods in a node can use the UDN to communicate with each other. The layer 2 OVN switch handles node traffic over a UDN so that live migration of a VM from one node to another is possible.
Figure 3.2. A user-defined network (UDN) that uses a layer 2 topology
A layer 3 topology creates a unique layer 2 segment for each node in a cluster. The layer 3 routing mechanism interconnects these segments so that virtual machines and pods that are hosted on different nodes can communicate with each other. A layer 3 topology can effectively manage large broadcast domains by assigning each domain to a specific node, so that broadcast traffic has a reduced scope. To configure a layer 3 topology, you must configure the `cidr` and `hostSubnet` parameters.
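As an illustrative fragment, the subnet split described above can be expressed in the `network` stanza of a `ClusterUserDefinedNetwork` CR; the address values here are examples, not prescribed values:

```yaml
# Fragment of a Layer3 spec (assumed example values):
network:
  topology: Layer3
  layer3:
    role: Primary
    subnets:
      - cidr: 10.100.0.0/16   # cluster-wide subnet
        hostSubnet: 24        # each node receives a /24 carved from the cidr
```

With these values, each node's layer 2 segment is assigned one /24 from the 10.100.0.0/16 range.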
3.1.5. About the ClusterUserDefinedNetwork CR
The `ClusterUserDefinedNetwork` (CUDN) custom resource (CR) enables cluster administrators to create networks that span multiple namespaces at the cluster level.

The following diagram demonstrates how a cluster administrator can use the CUDN CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, `udn-1` and `udn-2`. Each network uses the `spec.namespaceSelector.matchLabels` field to select its namespaces: `udn-1` connects `namespace-1` and `namespace-2`, and `udn-2` connects `namespace-3` and `namespace-4`.
Figure 3.3. Tenant isolation using a ClusterUserDefinedNetwork CR
3.1.5.1. Best practices for ClusterUserDefinedNetwork CRs
To create and deploy a successful instance of the `ClusterUserDefinedNetwork` CR, consider the following best practices.

The following details provide administrators with best practices for designing a CUDN CR:

- A `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
- `ClusterUserDefinedNetwork` CRs should not select the `default` namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
- `ClusterUserDefinedNetwork` CRs should not select `openshift-*` namespaces.

OpenShift Container Platform administrators should be aware that all namespaces of a cluster are selected when one of the following conditions are met:

- The `matchLabels` selector is left empty.
- The `matchExpressions` selector is left empty.
- The `namespaceSelector` is initialized, but does not specify either `matchExpressions` or `matchLabels`. For example: `namespaceSelector: {}`.
For primary networks, the namespace used for the `ClusterUserDefinedNetwork` CR must include the `k8s.ovn.org/primary-user-defined-network` label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the `k8s.ovn.org/primary-user-defined-network` namespace label:

- If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network.
- If the namespace is missing the label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, an error is reported and the network is not created.
- If the namespace is missing the label and a primary `ClusterUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network.
- If the namespace has the label and a primary `ClusterUserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `ClusterUserDefinedNetwork` CR is created.
When using the `ClusterUserDefinedNetwork` CR to create a `localnet` topology, the following are best practices for administrators:

- You must make sure that the `spec.network.localnet.physicalNetworkName` parameter matches the network name that you configured in the Open vSwitch (OVS) bridge mapping when you create your CUDN CR. This ensures that you are bridging to the intended segment of your physical network. If you intend to deploy multiple CUDN CRs using the same bridge mapping, you must ensure that the same `physicalNetworkName` parameter is used.
- Avoid overlapping subnets between your physical network and your other network interfaces. Overlapping network subnets can cause routing conflicts and network instability. To prevent conflicts when using the `spec.network.localnet.subnets` parameter, you might use the `spec.network.localnet.excludeSubnets` parameter.
- When you configure a Virtual Local Area Network (VLAN), you must ensure that both your underlying physical infrastructure (switches, routers, and so on) and your nodes are properly configured to accept VLAN IDs (VIDs). This means that you configure the physical network interface, for example `eth1`, as an access port for the VLAN, for example VID `20`, that you are connecting to through the physical switch. In addition, you must verify that an OVS bridge mapping exists on your nodes to ensure that the physical interface, for example `eth1`, is properly connected with OVN-Kubernetes.
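As a sketch of the bridge mapping that these best practices refer to, the following `NodeNetworkConfigurationPolicy` maps the physical network name `test` to the `br-ex` bridge. This assumes that the Kubernetes NMState Operator is installed; the policy name and node selector are hypothetical:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: localnet-mapping     # hypothetical name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: test       # must match spec.network.localnet.physicalNetworkName
        bridge: br-ex        # default OVS bridge on OpenShift nodes
        state: present
```

A `ClusterUserDefinedNetwork` CR that sets `physicalNetworkName: test` then bridges to this segment of the physical network.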
3.1.5.2. Creating a ClusterUserDefinedNetwork CR by using the CLI
To implement cluster-wide network segmentation and isolation across multiple namespaces, supporting either a layer 2 or layer 3 topology in OpenShift Container Platform, create a `ClusterUserDefinedNetwork` CR.

Based upon your use case, create your request by using either the `cluster-layer-two-udn.yaml` file for a `Layer2` topology or the `cluster-layer-three-udn.yaml` file for a `Layer3` topology.

- The `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
- OpenShift Virtualization only supports the `Layer2` and `Localnet` topologies.

Prerequisites

- You have logged in as a user with `cluster-admin` privileges.
Procedure
Optional: For a `ClusterUserDefinedNetwork` CR that uses a primary network, create a namespace with the `k8s.ovn.org/primary-user-defined-network` label by entering the following command:

```terminal
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: <cudn_namespace_name>
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
EOF
```

Create a cluster-wide user-defined network for either a `Layer2` or `Layer3` topology type:

Create a YAML file, such as `cluster-layer-two-udn.yaml`, to define your request for a `Layer2` topology as in the following example:

```yaml
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name>
spec:
  namespaceSelector:
    matchLabels:
      "<label_1_key>": "<label_1_value>"
      "<label_2_key>": "<label_2_value>"
  network:
    topology: Layer2
    layer2:
      role: Primary
      subnets:
        - "2001:db8::/64"
        - "10.100.0.0/16"
```

where:
- `name`: Specifies the name of your `ClusterUserDefinedNetwork` CR.
- `namespaceSelector`: Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces.
- `matchLabels`: Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. In this example, the CUDN CR is deployed to namespaces that contain both `<label_1_key>=<label_1_value>` and `<label_2_key>=<label_2_value>` labels.
- `network`: Describes the network configuration.
- `topology`: Describes the network topology; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes.
- `role`: Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in 4.19.
- `subnets`: For `Layer2` topology types, the following specifies configuration details for the `subnets` field:
  - The `subnets` field is optional.
  - The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6.
  - The `subnets` field accepts one or two items. For two items, they must be of a different IP family, for example, subnets values of `10.100.0.0/16` and `2001:db8::/64`.
  - The `subnets` field can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address".
Create a YAML file, such as `cluster-layer-three-udn.yaml`, to define your request for a `Layer3` topology as in the following example:

```yaml
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name>
spec:
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values: ["<example_namespace_one>", "<example_namespace_two>"]
  network:
    topology: Layer3
    layer3:
      role: Primary
      subnets:
        - cidr: 10.100.0.0/16
          hostSubnet: 24
```

where:

- `name`: Specifies the name of your `ClusterUserDefinedNetwork` CR.
- `namespaceSelector`: Specifies a label query over the set of namespaces that the CUDN CR applies to. Must not point to `default` or `openshift-*` namespaces. Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship.
- `key`: Specifies the label key to match. The `operator` field takes an operator value; valid values include `In`, `NotIn`, `Exists`, and `DoesNotExist`. Because the `matchExpressions` type is used, namespaces matching either `<example_namespace_one>` or `<example_namespace_two>` are provisioned.
- `network`: Describes the network configuration.
- `topology`: The `topology` field describes the network topology; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
- `role`: Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in 4.19.
- `subnets`: For `Layer3` topology types, the following specifies configuration details for the `subnets` field:
  - The `subnets` field is mandatory.
  - The type for the `subnets` field is `cidr` and `hostSubnet`, where:
    - `cidr` is the cluster subnet and accepts a string value.
    - `hostSubnet` specifies the node subnet prefix that the cluster subnet is split into.
    - For IPv6, only a prefix length of `/64` is supported for `hostSubnet`.
Apply your request by running the following command:

```terminal
$ oc create --validate=true -f <example_cluster_udn>.yaml
```

where `<example_cluster_udn>.yaml` is the name of your `Layer2` or `Layer3` configuration file.

Verify that your request is successful by running the following command:

```terminal
$ oc get clusteruserdefinednetwork <cudn_name> -o yaml
```

where `<cudn_name>` is the name of your cluster-wide user-defined network.
Example output

```yaml
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  creationTimestamp: "2024-12-05T15:53:00Z"
  finalizers:
  - k8s.ovn.org/user-defined-network-protection
  generation: 1
  name: my-cudn
  resourceVersion: "47985"
  uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634
spec:
  namespaceSelector:
    matchExpressions:
    - key: custom.network.selector
      operator: In
      values:
      - example-namespace-1
      - example-namespace-2
      - example-namespace-3
  network:
    layer3:
      role: Primary
      subnets:
      - cidr: 10.100.0.0/16
    topology: Layer3
status:
  conditions:
  - lastTransitionTime: "2024-11-19T16:46:34Z"
    message: 'NetworkAttachmentDefinition has been created in following namespaces:
      [example-namespace-1, example-namespace-2, example-namespace-3]'
    reason: NetworkAttachmentDefinitionReady
    status: "True"
    type: NetworkCreated
```
3.1.5.3. Creating a ClusterUserDefinedNetwork CR for a Localnet topology
You deploy a `Localnet` topology by creating a `ClusterUserDefinedNetwork` CR by using the CLI.
Prerequisites
- You are logged in as a user with `cluster-admin` privileges.
- You created and configured the Open vSwitch (OVS) bridge mapping to associate the logical OVN-Kubernetes network with the physical node network through the OVS bridge. For more information, see "Configuration for a localnet switched topology".
Procedure
Create a cluster-wide user-defined network with a Localnet topology:
Create a YAML file, such as cluster-udn-localnet.yaml, to define your request for a Localnet topology as in the following example:
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name>
spec:
  namespaceSelector:
    matchLabels:
      "<label_1_key>": "<label_1_value>"
      "<label_2_key>": "<label_2_value>"
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: test
      ipam: {lifecycle: Persistent}
      subnets: ["192.168.0.0/16", "2001:dbb::/64"]
where:
name-
Specifies the name of your ClusterUserDefinedNetwork CR.
namespaceSelector-
Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces.
matchLabels-
Uses the matchLabels selector type, where terms are evaluated with an AND relationship. In this example, the CUDN CR is deployed to namespaces that contain both <label_1_key>=<label_1_value> and <label_2_key>=<label_2_value> labels.
network-
Describes the network configuration.
topology-
Specifying a Localnet topology type creates one logical switch that is directly bridged to one provider network.
role-
Specifies the role for the network configuration. Secondary is the only role specification supported for the localnet topology.
subnets-
For Localnet topology types, the following specifies configuration details for the subnets field:
- The subnets field is optional.
- The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.
- The subnets field accepts one or two items. For two items, they must be of a different IP family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.
- subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address".
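The one-or-two-items rule above (two subnets entries must come from different IP families) can be checked offline. The following sketch uses only the Python standard library and mirrors the dual-stack values from the example CR; it is purely illustrative and is not part of the OVN-Kubernetes API:

```python
import ipaddress

# Dual-stack subnets from the example ClusterUserDefinedNetwork CR.
subnets = ["192.168.0.0/16", "2001:dbb::/64"]

# ip_network().version reports 4 for IPv4 and 6 for IPv6.
families = [ipaddress.ip_network(s).version for s in subnets]

# The field accepts at most two items, and two items must be
# different IP families (one IPv4, one IPv6).
assert len(subnets) <= 2
if len(subnets) == 2:
    assert families[0] != families[1], "two subnets must be IPv4 + IPv6"

print(families)
```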
Apply your request by running the following command:
$ oc create --validate=true -f <example_cluster_udn>.yaml
where:
<example_cluster_udn>.yaml-
Is the name of your Localnet configuration file.
Verify that your request is successful by running the following command:
$ oc get clusteruserdefinednetwork <cudn_name> -o yaml
where:
<cudn_name>-
Is the name of your cluster-wide user-defined network.
Example 3.1. Example output
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
creationTimestamp: "2025-05-28T19:30:38Z"
finalizers:
- k8s.ovn.org/user-defined-network-protection
generation: 1
name: cudn-test
resourceVersion: "140936"
uid: 7ff185fa-d852-4196-858a-8903b58f6890
spec:
namespaceSelector:
matchLabels:
"1": "1"
"2": "2"
network:
localnet:
ipam:
lifecycle: Persistent
physicalNetworkName: test
role: Secondary
subnets:
- 192.168.0.0/16
- 2001:dbb::/64
topology: Localnet
status:
conditions:
- lastTransitionTime: "2025-05-28T19:30:38Z"
message: 'NetworkAttachmentDefinition has been created in following namespaces:
[test1, test2]'
reason: NetworkAttachmentDefinitionCreated
status: "True"
type: NetworkCreated
3.1.5.4. Creating a ClusterUserDefinedNetwork CR by using the web console
To implement isolated network segments with layer 2 connectivity in OpenShift Container Platform, create a ClusterUserDefinedNetwork CR by using the web console.
Currently, creation of a ClusterUserDefinedNetwork CR with a Layer3 topology is not supported in the web console.
Prerequisites
- You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions.
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- Click ClusterUserDefinedNetwork.
- In the Name field, specify a name for the cluster-scoped UDN.
- Specify a value in the Subnet field.
- In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.
- Click Create. The cluster-scoped UDN serves as the default primary network for pods located in namespaces that contain the labels that you specified in step 5.
3.1.6. About the UserDefinedNetwork CR
To create advanced network segmentation and isolation, users and administrators create UserDefinedNetwork CRs.
The following diagram shows four cluster namespaces, where each namespace has a single assigned user-defined network (UDN), and each UDN has an assigned custom subnet for its pod IP allocations. OVN-Kubernetes handles any overlapping UDN subnets. Without using a Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply network policy within a UDN. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN for each namespace, and one or more namespaces to a UDN.
Figure 3.4. Namespace isolation using a UserDefinedNetwork CR
3.1.6.1. Best practices for UserDefinedNetwork CRs
To deploy a successful instance of the UserDefinedNetwork CR, follow the recommended practices in this section.
The following details provide best practices for designing a UDN CR:
- Do not use openshift-* namespaces to set up a UserDefinedNetwork CR.
- UserDefinedNetwork CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
UserDefinedNetwork For primary networks, the namespace used for the
CR must include theUserDefinedNetworklabel. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with thek8s.ovn.org/primary-user-defined-networknamespace label:k8s.ovn.org/primary-user-defined-network-
If the namespace is missing the label and a pod is created, the pod attaches itself to the default network.
k8s.ovn.org/primary-user-defined-network -
If the namespace is missing the label and a primary
k8s.ovn.org/primary-user-defined-networkCR is created that matches the namespace, a status error is reported and the network is not created.UserDefinedNetwork -
If the namespace is missing the label and a primary
k8s.ovn.org/primary-user-defined-networkCR already exists, a pod in the namespace is created and attached to the default network.UserDefinedNetwork -
If the namespace has the label, and a primary CR does not exist, a pod in the namespace is not created until the
UserDefinedNetworkCR is created.UserDefinedNetwork
- Two masquerade IP addresses are required for user-defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks.
Important
- For OpenShift Container Platform 4.17 and later, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnets. Users should avoid these ranges. For updated clusters, there is no change to the default masquerade subnet.
- Changing the cluster’s masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UserDefinedNetwork CR has been set up can disrupt the network connectivity and cause configuration issues.
- Ensure tenants are using the UserDefinedNetwork resource and not the NetworkAttachmentDefinition (NAD) CR. This can create security risks between tenants.
- When creating network segmentation, you should only use the NetworkAttachmentDefinition CR if user-defined network segmentation cannot be completed using the UserDefinedNetwork CR.
- The cluster subnet and services CIDR for a UserDefinedNetwork CR cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses 100.64.0.0/16 as the default join subnet for the network. You must not use that value to configure a UserDefinedNetwork CR’s joinSubnets field. If the default address values are used anywhere in the network for the cluster, you must override the default values by setting the joinSubnets field. For more information, see "Additional configuration details for user-defined networks".
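Before creating a primary UserDefinedNetwork CR, you can confirm that the target namespace carries the required label. This is a CLI sketch against a live cluster; the namespace name my-udn-namespace is a placeholder:

```shell
# Show all labels on the namespace; the output should include
# k8s.ovn.org/primary-user-defined-network.
oc get namespace my-udn-namespace --show-labels

# Alternatively, print the labels map; if the primary UDN label is
# missing, a primary UserDefinedNetwork CR for this namespace will
# report a status error instead of creating the network.
oc get namespace my-udn-namespace -o jsonpath='{.metadata.labels}'
```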
3.1.6.2. Creating a UserDefinedNetwork CR by using the CLI
Create a UserDefinedNetwork CR by using the OpenShift CLI (oc).
The following procedure creates a UserDefinedNetwork CR with either a Layer2 topology, defined in a my-layer-two-udn.yaml file, or a Layer3 topology, defined in a my-layer-three-udn.yaml file.
Prerequisites
As a cluster administrator, you have created a namespace.
- During namespace creation, ensure you also applied the k8s.ovn.org/primary-user-defined-network label to the namespace.
- After you create the namespace, a user that has view and edit role-based access control (RBAC) permissions can create a UserDefinedNetwork CR in the namespace.
Procedure
Optional: For a UserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: <udn_namespace_name>
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
EOF
Create a user-defined network for either a Layer2 or Layer3 topology type:
Create a YAML file, such as my-layer-two-udn.yaml, to define your request for a Layer2 topology as in the following example:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-1
  namespace: <some_custom_namespace>
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
      - "10.0.0.0/24"
      - "2001:db8::/60"
where:
name-
Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).
topology-
Specifies the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.
role-
Specifies a Primary or Secondary role.
subnets-
For Layer2 topology types, the following specifies configuration details for the subnets field:
- The subnets field is optional.
- The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.
- The subnets field accepts one or two items. For two items, they must be of a different IP family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.
- subnets can be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing.
- The subnets field is mandatory when the ipamLifecycle field is specified.
Create a YAML file, such as my-layer-three-udn.yaml, to define your request for a Layer3 topology as in the following example:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-2-primary
  namespace: <some_custom_namespace>
spec:
  topology: Layer3
  layer3:
    role: Primary
    subnets:
      - cidr: 10.150.0.0/16
        hostSubnet: 24
      - cidr: 2001:db8::/60
        hostSubnet: 64
# ...
where:
name-
Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).
topology-
Specifies the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.
role-
Specifies a Primary or Secondary role.
subnets-
For Layer3 topology types, the following specifies configuration details for the subnets field:
- The subnets field is mandatory.
- The type for the subnets field is cidr and hostSubnet:
- cidr is equivalent to the clusterNetwork configuration settings of a cluster. The IP addresses in the CIDR are distributed to pods in the user-defined network. This parameter accepts a string value.
- hostSubnet defines the per-node subnet prefix.
- For IPv6, only a /64 length is supported for hostSubnet.
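The relationship between cidr and hostSubnet can be sanity-checked offline. The following sketch uses only the Python standard library and mirrors the values from the Layer3 example (a 10.150.0.0/16 cluster subnet split into per-node /24 subnets); it is an illustration, not OVN-Kubernetes code:

```python
import ipaddress

# Layer3 example values: the cluster subnet and the per-node prefix length.
cidr = ipaddress.ip_network("10.150.0.0/16")
host_subnet = 24

# Each node receives one /24 carved out of the /16 cluster subnet,
# so the cluster can address 2**(24-16) = 256 nodes from this CIDR.
node_subnets = list(cidr.subnets(new_prefix=host_subnet))

print(len(node_subnets))   # number of per-node subnets available
print(node_subnets[0])     # first per-node subnet

# IPv6: only a /64 hostSubnet length is supported.
cidr6 = ipaddress.ip_network("2001:db8::/60")
node_subnets6 = list(cidr6.subnets(new_prefix=64))
print(len(node_subnets6))
```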
Apply your request by running the following command:
$ oc apply -f <my_layer_two_udn>.yaml
where <my_layer_two_udn>.yaml is the name of your Layer2 or Layer3 configuration file.
Verify that your request is successful by running the following command:
$ oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml
where <some_custom_namespace> is the namespace you created for your user-defined network.
Example output
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  creationTimestamp: "2024-08-28T17:18:47Z"
  finalizers:
  - k8s.ovn.org/user-defined-network-protection
  generation: 1
  name: udn-1
  namespace: some-custom-namespace
  resourceVersion: "53313"
  uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c
spec:
  layer2:
    role: Primary
    subnets:
    - 10.0.0.0/24
    - 2001:db8::/60
  topology: Layer2
status:
  conditions:
  - lastTransitionTime: "2024-08-28T17:18:47Z"
    message: NetworkAttachmentDefinition has been created
    reason: NetworkAttachmentDefinitionReady
    status: "True"
    type: NetworkCreated
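Instead of inspecting the full YAML, you can block until the network reports ready. This sketch assumes the NetworkCreated condition type shown in the example output:

```shell
# Wait up to 60 seconds for the UDN to report the NetworkCreated condition.
oc wait --for=condition=NetworkCreated \
  userdefinednetwork udn-1 -n <some_custom_namespace> --timeout=60s
```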
3.1.6.3. Creating a UserDefinedNetwork CR by using the web console
To implement isolated network segments with layer 2 connectivity in OpenShift Container Platform, create a UserDefinedNetwork CR by using the web console.
Currently, creation of a UserDefinedNetwork CR with a Layer3 topology or a Secondary role is not supported in the web console.
Prerequisites
As a cluster administrator, you have created a namespace.
- During namespace creation, ensure you also applied the k8s.ovn.org/primary-user-defined-network label to the namespace.
- After you create the namespace, a user that has view and edit role-based access control (RBAC) permissions can create a UserDefinedNetwork CR in the namespace.
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- Click Create UserDefinedNetwork.
- From the Project name list, select the namespace that you previously created.
- Specify a value in the Subnet field.
- Click Create. The user-defined network serves as the default primary network for pods that you create in this namespace.
3.1.7. Additional configuration details for user-defined networks
Configure optional advanced settings for ClusterUserDefinedNetwork and UserDefinedNetwork CRs by using the fields in the following table.
It is not recommended to set these fields without an explicit need and an understanding of OVN-Kubernetes network topology.
- Optional configurations for user-defined networks
| CUDN field | UDN field | Type | Description |
|
|
| object | When omitted, the platform sets default values for the
The
|
|
|
| string | Specifies a list of CIDRs to be removed from the specified CIDRs in the
When deploying a secondary network with
|
|
|
| object | The
Setting a value of Persistent is only supported when
|
|
|
| object | The
|
|
|
| integer | The maximum transmission units (MTU). The default value is
|
|
| N/A | object | This field is optional and configures the virtual local area network (VLAN) tagging and allows you to segment the physical network into multiple independent broadcast domains. |
|
| N/A | object | Acceptable values are
|
|
| N/A | string | Specifies the name for a physical network interface. The value you specify must match the
|
where:
<topology>-
Can be either layer2 or layer3 for the UserDefinedNetwork CR. For the ClusterUserDefinedNetwork CR, the topology can also be Localnet.
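To show where these optional fields sit in a CR, the following sketch is a Layer2 UserDefinedNetwork that overrides the join subnet and MTU. The field placement follows the k8s.ovn.org/v1 API, and the specific values (100.65.0.0/16 and 1400) are illustrative assumptions, not required defaults:

```yaml
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-advanced
  namespace: <some_custom_namespace>
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
      - "10.0.0.0/24"
    # Override the default join subnet so it does not collide with
    # addresses already used elsewhere in the cluster network.
    joinSubnets:
      - "100.65.0.0/16"
    # Explicit maximum transmission unit for this network.
    mtu: 1400
```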
3.1.8. User-defined network status condition types
To troubleshoot your network deployment in OpenShift Container Platform, evaluate the status condition types returned for ClusterUserDefinedNetwork and UserDefinedNetwork CRs.
| Condition type | Status | Reason and Message | |
|---|---|---|---|
|
|
| When
| |
| Reason | Message | ||
|
| 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]' | ||
|
|
| When
| |
| Reason | Message | ||
|
|
| ||
|
|
| ||
|
|
| ||
|
|
| ||
|
|
| ||
|
|
| ||
|
|
| ||
| Condition type | Status | Reason and Message | |
|---|---|---|---|
|
|
| When
| |
| Reason | Message | ||
|
|
| ||
|
|
| When
| |
| Reason | Message | ||
|
|
| ||
| Condition type | Reason, Message, Resolution | ||
|---|---|---|---|
|
| One of the following messages is returned when the
| ||
| Reason | Message | Resolution | |
| The
|
| You must set the
| |
| The
|
| You must set the
| |
| The
|
| You must set the
| |
| Condition type | Reason, Message, Resolution | ||
|---|---|---|---|
|
| One of the following messages is returned when the
| ||
| Reason | Message | Resolution | |
| The name of the physical network is not set. |
| You must set the
| |
| The name of the physical network does not meet minimum length requirements. |
| You must set the physical network name to be at least one character in length. | |
| The name of the physical network exceeds the maximum character limit of 253. |
| You must set the physical network name to not exceed 253 characters in length. | |
| The name of the physical network must not contain
|
| You must remove the
| |
| Condition type | Reason, Message, Resolution | ||
|---|---|---|---|
|
| One of the following messages is returned when the
| ||
| Reason | Message | Resolution | |
| The
|
| You must set the
| |
|
|
| You must set the
| |
| Condition type | Reason, Message, Resolution | ||
|---|---|---|---|
|
| One of the following messages is returned when either the
| ||
| Reason | Message | Resolution | |
| The optional fields,
|
| You must set the
| |
| The
|
| You must set an acceptable value for
| |
| The
|
| You must set the
| |
| The
|
| You must set the value for the
| |
| The CIDR range is invalid. |
| You must set an acceptable CIDR range for
| |
| You must set the
|
| You must set the
| |
| Setting two CIDR ranges for
|
| You must change one of your CIDR ranges to a different IP family. | |
| The
|
| You must set the
| |
| Condition type | Reason, Message, Resolution | ||
|---|---|---|---|
|
| One of the following messages is returned when the
| ||
| Reason | Message | Resolution | |
| The
|
| You must set the
| |
| The
|
| You must set
| |
| The
|
| You must set a value for
| |
| Acceptable values for
|
| You must set a value of 1 or greater for
| |
| Acceptable values for
|
| You must set a value of 4094 or less for
| |
3.1.9. Opening default network ports on user-defined network pods
To allow default network pods to connect to a user-defined network pod, you can use the k8s.ovn.org/open-default-ports annotation.
By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the OpenShift Container Platform image registry, cannot initiate connections to UDN pods.
The following pod specification allows incoming TCP connections on port 80 and UDP connections on port 53 from the default network:
apiVersion: v1
kind: Pod
metadata:
annotations:
k8s.ovn.org/open-default-ports: |
- protocol: tcp
port: 80
- protocol: udp
port: 53
# ...
Open ports are accessible on the pod’s default network IP, not its UDN network IP.
3.2. Creating primary networks using a NetworkAttachmentDefinition
Use the NetworkAttachmentDefinition (NAD) resource to create primary networks for your cluster.
3.2.1. Approaches to managing a primary network
You can manage the life cycle of a primary network created by a NAD CR through the Cluster Network Operator (CNO) or a YAML manifest. Using the CNO provides automated management of the network resource, while applying a YAML manifest allows for direct control over the network configuration.
- Modifying the Cluster Network Operator (CNO) configuration
-
With this method, the CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that DHCP is available for a primary network that uses a DHCP-assigned IP address.
-
With this method, you can manage the primary network directly by creating a NetworkAttachmentDefinition object. This approach allows for the invocation of multiple CNI plugins to attach primary network interfaces in a pod.
Each approach is mutually exclusive and you can only use one approach for managing a primary network at a time. For either approach, the primary network is managed by a Container Network Interface (CNI) plugin that you configure.
When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:
$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
3.2.2. Creating a primary network attachment with the Cluster Network Operator
When you specify a primary network to create by using the Cluster Network Operator (CNO), the CNO creates the NetworkAttachmentDefinition CRD automatically.
Do not edit the NetworkAttachmentDefinition CRDs that the CNO manages. Doing so might disrupt network traffic on your primary network.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Optional: Create the namespace for the primary networks:
$ oc create namespace <namespace_name>
To edit the CNO configuration, enter the following command:
$ oc edit networks.operator.openshift.io cluster
Modify the CR that you are creating by adding the configuration for the primary network that you are creating, as in the following example CR.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks:
  - name: tertiary-net
    namespace: namespace2
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "tertiary-net",
        "type": "ipvlan",
        "master": "eth1",
        "mode": "l2",
        "ipam": {
          "type": "static",
          "addresses": [
            {
              "address": "192.168.1.23/24"
            }
          ]
        }
      }
- Save your changes and quit the text editor to commit your changes.
Verification
Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. A delay might exist before the CNO creates the CRD. The expected output shows the name of the NAD CRD and the creation age in minutes.
$ oc get network-attachment-definitions -n <namespace>
where:
<namespace>-
Specifies the namespace for the network attachment that you added to the CNO configuration.
3.2.3. Configuration for a primary network attachment
You configure a primary network by using the NetworkAttachmentDefinition API that is in the k8s.cni.cncf.io API group.
The configuration for the API is described in the following table:
| Field | Type | Description |
|---|---|---|
|
|
| The name for the primary network. |
|
|
| The namespace that the object is associated with. |
|
|
| The CNI plugin configuration in JSON format. |
3.2.4. Creating a primary network attachment by applying a YAML manifest
Create a primary network attachment by directly applying a NetworkAttachmentDefinition YAML manifest.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You are working in the namespace where the NAD is to be deployed.
Procedure
Create a YAML file with your primary network configuration, such as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: next-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "work-network",
      "namespace": "namespace2",
      "type": "host-device",
      "device": "eth1",
      "ipam": {
        "type": "dhcp"
      }
    }
Optional: You can specify a namespace to which the NAD is applied. If you are working in the namespace where the NAD is to be deployed, the namespace specification is not necessary.
To create the primary network, enter the following command:
$ oc apply -f <file>.yaml
where:
<file>-
Specifies the name of the file containing the YAML manifest.
Chapter 4. Secondary networks
4.1. Creating secondary networks on OVN-Kubernetes
As a cluster administrator, you can configure a secondary network for your cluster by using the NetworkAttachmentDefinition (NAD) resource.
4.1.1. Configuration for an OVN-Kubernetes secondary network
The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition CR.
Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CR.
You can configure an OVN-Kubernetes secondary network in layer 2, layer 3, or localnet topologies. For more information about features supported on these topologies, see "UserDefinedNetwork and NetworkAttachmentDefinition support matrix".
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.
Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRs with different configurations that reference the same network is unsupported.
4.1.1.1. Supported platforms for OVN-Kubernetes secondary network
You can use an OVN-Kubernetes secondary network with the following supported platforms:
- Bare metal
- IBM Power®
- IBM Z®
- IBM® LinuxONE
- VMware vSphere
- Red Hat OpenStack Platform (RHOSP)
4.1.1.2. OVN-Kubernetes network plugin JSON configuration table
The OVN-Kubernetes network plugin JSON configuration object describes the configuration parameters for the OVN-Kubernetes CNI network plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
|
|
| The CNI specification version. The required value is 0.3.1.
|
|
|
| The name of the network. These networks are not namespaced. For example, a network named
|
|
|
| The name of the CNI plugin to configure. This value must be set to
|
|
|
| The topological configuration for the network. Must be one of layer2, layer3, or localnet.
|
|
|
| The subnet to use for the network across the cluster. For
When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. |
|
|
| The maximum transmission unit (MTU). If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation), and byte capacity of any enabled features, such as IPsec. |
|
|
| The metadata
|
|
|
| A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. |
|
|
| If topology is set to
|
|
|
| If topology is set to
|
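The subnets and excludeSubnets parameters interact: an excluded CIDR must sit inside the assignable pool to have any effect. The following standalone sketch uses only the Python standard library and mirrors the layer 2 example values used later in this section (10.100.200.0/24 with 10.100.200.0/29 excluded); it is an illustration, not OVN-Kubernetes code:

```python
import ipaddress

subnet = ipaddress.ip_network("10.100.200.0/24")
excluded = ipaddress.ip_network("10.100.200.0/29")

# An excludeSubnets entry must fall inside the subnets pool.
assert excluded.subnet_of(subnet)

# Remaining assignable ranges after carving out the excluded block.
remaining = list(subnet.address_exclude(excluded))
remaining_addresses = sum(n.num_addresses for n in remaining)

print(remaining)            # CIDRs left in the pool
print(remaining_addresses)  # 256 total minus the 8 excluded addresses
```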
4.1.1.3. Compatibility with multi-network policy
When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field.
The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with OVN-Kubernetes secondary networks.
Refer to the following table that details the supported multi-network policy selectors based on whether the subnets field is specified:
subnets field specified | Allowed multi-network policy selectors |
|---|---|
| Yes |
|
| No |
|
You can use the k8s.v1.cni.cncf.io/policy-for annotation on a MultiNetworkPolicy object to specify the NetworkAttachmentDefinition that the policy applies to. In the following example, the NetworkAttachmentDefinition defines the subnets field and is named blue2:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
name: allow-same-namespace
annotations:
k8s.v1.cni.cncf.io/policy-for: blue2
spec:
podSelector:
ingress:
- from:
- podSelector: {}
The following example uses the ipBlock selector:
Example multi-network policy that uses an IP block selector
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
name: ingress-ipblock
annotations:
k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
podSelector:
matchLabels:
name: access-control
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 10.200.0.0/30
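MultiNetworkPolicy mirrors the Kubernetes NetworkPolicy API, so egress rules follow the same shape as the ingress examples above. The following sketch restricts egress from the selected pods to a single CIDR; the policy name and CIDR are illustrative assumptions:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: egress-ipblock
  annotations:
    # Apply the policy to the secondary network defined by this NAD.
    k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.200.0.0/30
```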
4.1.1.4. Configuration for a localnet switched topology
The switched localnet topology interconnects the workloads, created as Network Attachment Definitions (NADs), through a cluster-wide logical switch to a physical network.
You must map a secondary network to the OVS bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS).
You can create a NodeNetworkConfigurationPolicy object, which belongs to the nmstate.io/v1 API group, to declaratively create the bridge mapping. In the following examples, the nodeSelector field uses the node-role.kubernetes.io/worker: '' label to apply the configuration to all worker nodes.
When attaching a secondary network, you can either use the existing br-ex bridge or create a new bridge:
- If your nodes include only a single network interface, you must use the existing br-ex bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network stops working correctly.
br-ex - If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides for traffic isolation from your primary cluster network.
You cannot make configuration changes to the br-ex bridge directly; declare bridge mappings by using a NodeNetworkConfigurationPolicy object instead.
The following example creates a bridge mapping that associates the localnet1 network with the existing br-ex bridge:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: mapping
spec:
nodeSelector:
node-role.kubernetes.io/worker: ''
desiredState:
ovn:
bridge-mappings:
- localnet: localnet1
bridge: br-ex
state: present
+ where:
metadata.name-
The name of the configuration object.
spec.nodeSelector.node-role.kubernetes.io/worker-
A node selector that specifies the nodes to apply the node network configuration policy to.
spec.desiredState.ovn.bridge-mappings.localnet-
The name for the secondary network. This value must match the physicalNetworkName value in the NetworkAttachmentDefinition CR that defines the secondary network configuration.
spec.desiredState.ovn.bridge-mappings.bridge-
The name of the OVS bridge on the node. This value is required only when you specify state: present.
spec.desiredState.ovn.bridge-mappings.state-
The state for the mapping. Acceptable values are present, to add the bridge mapping, or absent, to remove it. The default value is present.
+ The following JSON example configures a localnet secondary network that is named localnet1. Set the mtu value so that it does not exceed the MTU of the underlying br-ex bridge interface.
{
"cniVersion": "0.3.1",
"name": "localnet1",
"type": "ovn-k8s-cni-overlay",
"topology":"localnet",
"physicalNetworkName": "localnet1",
"subnets": "202.10.130.112/28",
"vlanID": 33,
"mtu": 1500,
"netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "202.10.130.112/30"
}
In the following multiple interfaces example, the localnet2 network is mapped to the new ovs-br1 bridge:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: ovs-br1-multiple-networks
spec:
nodeSelector:
node-role.kubernetes.io/worker: ''
desiredState:
interfaces:
- name: ovs-br1
description: |-
A dedicated OVS bridge with eth1 as a port
allowing all VLANs and untagged traffic
type: ovs-bridge
state: up
bridge:
allow-extra-patch-ports: true
options:
stp: false
mcast-snooping-enable: true
port:
- name: eth1
ovn:
bridge-mappings:
- localnet: localnet2
bridge: ovs-br1
state: present
+ where:
metadata.name-
The name of the configuration object.
node-role.kubernetes.io/worker-
A node selector that specifies the nodes to apply the node network configuration policy to.
desiredState.interfaces.name-
The name of the new OVS bridge, which operates separately from the bridge that OVN-Kubernetes uses for the default cluster network.
options.mcast-snooping-enable-
Enables multicast snooping on the bridge. The default value is false.
bridge.port.name-
The network interface on the node that is attached as a port to the new OVS bridge.
ovn.bridge-mappings.localnet-
The name for the secondary network. This value must match the physicalNetworkName value in the NetworkAttachmentDefinition CR that defines the secondary network configuration.
ovn.bridge-mappings.bridge-
The name of the OVS bridge on the node. This value is required only when you specify state: present.
ovn.bridge-mappings.state-
The state for the mapping. Acceptable values are present, to add the bridge mapping, or absent, to remove it. The default value is present.
+ The following JSON example configures a localnet secondary network that is named localnet2. Set the mtu value so that it does not exceed the MTU of the underlying eth1 network interface.
{
"cniVersion": "0.3.1",
"name": "localnet2",
"type": "ovn-k8s-cni-overlay",
"topology":"localnet",
"physicalNetworkName": "localnet2",
"subnets": "202.10.130.112/28",
"vlanID": 33,
"mtu": 1500,
"netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "202.10.130.112/30"
}
4.1.1.4.1. Configuration for a layer 2 switched topology
The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.
Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.
The following JSON example configures a switched secondary network:
{
"cniVersion": "0.3.1",
"name": "l2-network",
"type": "ovn-k8s-cni-overlay",
"topology":"layer2",
"subnets": "10.100.200.0/24",
"mtu": 1300,
"netAttachDefName": "ns1/l2-network",
"excludeSubnets": "10.100.200.0/29"
}
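The JSON configuration is embedded in the `spec.config` field of a `NetworkAttachmentDefinition` object. A minimal sketch, using the `ns1` namespace and `l2-network` name implied by the `netAttachDefName` value in the example above:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns1/l2-network",
      "excludeSubnets": "10.100.200.0/29"
    }
```

Note that the `metadata.name` and `metadata.namespace` values must match the `netAttachDefName` value inside the JSON configuration.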
4.1.1.5. Configuring pods for secondary networks
You must specify the secondary network attachments through the `k8s.v1.cni.cncf.io/networks` annotation.
The following example provisions a pod with a secondary attachment for the layer 2 attachment configuration presented in this guide:
apiVersion: v1
kind: Pod
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: l2-network
name: tinypod
namespace: ns1
spec:
containers:
- args:
- pause
image: k8s.gcr.io/e2e-test-images/agnhost:2.36
imagePullPolicy: IfNotPresent
name: agnhost-container
4.1.1.6. Configuring pods with a static IP address
You can configure pods with a static IP address. The example in the procedure provisions a pod with a static IP address.
- You can specify the IP address for the secondary network attachment of a pod only when the secondary network attachment, a namespace-scoped object, uses a layer 2 or localnet topology.
- Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets.
apiVersion: v1
kind: Pod
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: '[
{
"name": "l2-network",
"mac": "02:03:04:05:06:07",
"interface": "myiface1",
"ips": [
"192.0.2.20/24"
]
}
]'
name: tinypod
namespace: ns1
spec:
containers:
- args:
- pause
image: k8s.gcr.io/e2e-test-images/agnhost:2.36
imagePullPolicy: IfNotPresent
name: agnhost-container
where:
`k8s.v1.cni.cncf.io/networks.name` - The name of the network. This value must be unique across all `NetworkAttachmentDefinition` CRDs.
`k8s.v1.cni.cncf.io/networks.mac` - The MAC address to be assigned for the interface.
`k8s.v1.cni.cncf.io/networks.interface` - The name of the network interface to be created for the pod.
`k8s.v1.cni.cncf.io/networks.ips` - The IP addresses to be assigned to the network interface.
4.2. Creating secondary networks with other CNI plugins
The specific configuration fields for secondary networks are described in the following sections.
4.2.1. Configuration for a bridge secondary network
The bridge CNI plugin JSON configuration object describes the configuration parameters for the Bridge CNI plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `bridge`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `bridge` | `string` | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, the bridge interface gets created. The default value is `cni0`. |
| `ipMasq` | `boolean` | Optional: Set to `true` to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge IP address. If the bridge does not have an IP address, this setting has no effect. The default value is `false`. |
| `isGateway` | `boolean` | Optional: Set to `true` to assign an IP address to the bridge. The default value is `false`. |
| `isDefaultGateway` | `boolean` | Optional: Set to `true` to configure the bridge as the default gateway for the virtual network. When set to `true`, `isGateway` is also set to `true`. The default value is `false`. |
| `forceAddress` | `boolean` | Optional: Set to `true` to allow the assignment of a new IP address to the bridge if a previous address was assigned. The default value is `false`. |
| `hairpinMode` | `boolean` | Optional: Set to `true` to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is `false`. |
| `promiscMode` | `boolean` | Optional: Set to `true` to enable promiscuous mode on the bridge. The default value is `false`. |
| `vlan` | `integer` | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. |
| `preserveDefaultVlan` | `boolean` | Optional: Indicates whether the default VLAN must be preserved on the `veth` end connected to the bridge. The default value is `true`. |
| `vlanTrunk` | `list` | Optional: Assign a VLAN trunk tag. The default value is `none`. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `enabledad` | `boolean` | Optional: Enables duplicate address detection for the container side `veth`. The default value is `false`. |
| `macspoofchk` | `boolean` | Optional: Enables MAC spoof check, limiting the traffic originating from the container to the MAC address of the interface. The default value is `false`. |
The VLAN parameter configures the VLAN tag on the host end of the `veth` interface and also enables the `vlan_filtering` feature on the bridge.
To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:
$ bridge vlan add vid VLAN_ID dev DEV
4.2.1.1. Bridge CNI plugin configuration example
The following example configures a secondary network named `bridge-net`:
{
"cniVersion": "0.3.1",
"name": "bridge-net",
"type": "bridge",
"isGateway": true,
"vlan": 2,
"ipam": {
"type": "dhcp"
}
}
4.2.2. Configuration for a Bond CNI secondary network
The Bond Container Network Interface (Bond CNI) enables the aggregation of multiple network interfaces into a single logical bonded interface within a container, which enhances network redundancy and fault tolerance. Only SR-IOV Virtual Functions (VFs) are supported for bonding with this plugin.
The following table describes the configuration parameters for the Bond CNI plugin:
| Field | Type | Description |
|---|---|---|
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `type` | `string` | Specifies the name of the CNI plugin to configure: `bond`. |
| `miimon` | `string` | Specifies the address resolution protocol (ARP) link monitoring frequency in milliseconds. This parameter defines how often the bond interface sends ARP requests to check the availability of its aggregated interfaces. |
| `mtu` | `integer` | Optional: Specifies the maximum transmission unit (MTU) of the bond. The default is `1500`. |
| `failOverMac` | `integer` | Optional: Specifies the `failOverMac` policy, which controls how MAC addresses are handled during failover. |
| `mode` | `string` | Specifies the bonding policy. |
| `linksInContainer` | `boolean` | Optional: Specifies whether the network interfaces intended for bonding are expected to be created and available directly within the network namespace of the container when the bond starts. If `true`, the plugin looks for the interfaces inside the container network namespace rather than on the host. |
| `links` | `array` | Specifies the interfaces to be bonded. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
4.2.2.1. Bond CNI plugin configuration example
The following example configures a secondary network named `bond-net1`:
{
"type": "bond",
"cniVersion": "0.3.1",
"name": "bond-net1",
"mode": "active-backup",
"failOverMac": 1,
"linksInContainer": true,
"miimon": "100",
"mtu": 1500,
"links": [
{"name": "net1"},
{"name": "net2"}
],
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"routes": [{
"dst": "0.0.0.0/0"
}],
"gateway": "10.56.217.1"
}
}
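Because the example sets `linksInContainer` to `true`, the `net1` and `net2` interfaces must already exist in the container network namespace before the bond starts. In practice, this means the pod references the network attachments that create those interfaces, typically SR-IOV networks, before the bond network. A sketch using hypothetical SR-IOV network names `sriov-net1` and `sriov-net2`:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {"name": "sriov-net1", "interface": "net1"},
      {"name": "sriov-net2", "interface": "net2"},
      {"name": "bond-net1", "interface": "bond0"}
    ]'
```

The `interface` values match the names listed in the `links` field of the bond configuration.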
4.2.3. Configuration for a host device secondary network
The host device CNI plugin JSON configuration object describes the configuration parameters for the host-device CNI plugin.
Specify your network device by setting only one of the following parameters: `device`, `hwaddr`, `kernelpath`, or `pciBusID`.
The following table details the configuration parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `host-device`. |
| `device` | `string` | Optional: The name of the device, such as `eth0`. |
| `hwaddr` | `string` | Optional: The device hardware MAC address. |
| `kernelpath` | `string` | Optional: The Linux kernel device path, such as `/sys/devices/pci0000:00/0000:00:1f.6`. |
| `pciBusID` | `string` | Optional: The PCI address of the network device, such as `0000:00:1f.6`. |
4.2.3.1. host-device configuration example
The following example configures a secondary network named `hostdev-net`:
{
"cniVersion": "0.3.1",
"name": "hostdev-net",
"type": "host-device",
"device": "eth1"
}
4.2.4. Configuration for a dummy device additional network
The dummy CNI plugin functions like a loopback device. The plugin is a virtual interface, and you can use the plugin to route packets to a designated IP address. Unlike a loopback device, the IP address is arbitrary and is not restricted to the `127.0.0.0/8` range.
The dummy device CNI plugin JSON configuration object describes the configuration parameters for the dummy CNI plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin that you want to configure. The required value is `dummy`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages the IP address assignment for the attachment definition. |
4.2.4.1. dummy configuration example
The following example configures an additional network named `dummy-net`:
{
"cniVersion": "0.3.1",
"name": "dummy-net",
"type": "dummy",
"ipam": {
"type": "host-local",
"subnet": "10.1.1.0/24"
}
}
4.2.5. Configuration for a VLAN secondary network
The VLAN CNI plugin JSON configuration object describes the configuration parameters for the VLAN (`vlan`) CNI plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `vlan`. |
| `master` | `string` | The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default network route is used. |
| `vlanId` | `integer` | Set the ID of the `vlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `dns` | `object` | Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace interface. The default value is `false`. |
A `NetworkAttachmentDefinition` custom resource with a `vlan` configuration can be used on only a single pod in a node, because the CNI plugin cannot create multiple `vlan` subinterfaces with the same `vlanId` on the same `master` interface.
4.2.5.1. VLAN configuration example
The following example demonstrates a `vlan` configuration named `vlan-net`:
{
"name": "vlan-net",
"cniVersion": "0.3.1",
"type": "vlan",
"master": "eth0",
"mtu": 1500,
"vlanId": 5,
"linkInContainer": false,
"ipam": {
"type": "host-local",
"subnet": "10.1.1.0/24"
},
"dns": {
"nameservers": [ "10.1.1.1", "8.8.8.8" ]
}
}
- `ipam.type.host-local`: Allocates IPv4 and IPv6 IP addresses from a specified set of address ranges. The IPAM plugin stores the IP addresses locally on the host filesystem so that the addresses remain unique to the host.
4.2.6. Configuration for an IPVLAN secondary network
The IPVLAN CNI plugin JSON configuration object describes the configuration parameters for the IPVLAN (`ipvlan`) CNI plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `ipvlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. |
| `mode` | `string` | Optional: The operating mode for the virtual network. The value must be `l2`, `l3`, or `l3s`. The default value is `l2`. |
| `master` | `string` | Optional: The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default network route is used. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace interface. The default value is `false`. |
- The `ipvlan` object does not allow virtual interfaces to communicate with the `master` interface. Therefore, the container is not able to reach the host by using the `ipvlan` interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol (PTP).
- A single `master` interface cannot simultaneously be configured to use both `macvlan` and `ipvlan`.
- For IP allocation schemes that cannot be interface agnostic, the `ipvlan` plugin can be chained with an earlier plugin that handles this logic. If the `master` is omitted, then the previous result must contain a single interface name for the `ipvlan` plugin to enslave. If `ipam` is omitted, then the previous result is used to configure the `ipvlan` interface.
4.2.6.1. IPVLAN CNI plugin configuration example
The following example configures a secondary network named `ipvlan-net`:
{
"cniVersion": "0.3.1",
"name": "ipvlan-net",
"type": "ipvlan",
"master": "eth1",
"linkInContainer": false,
"mode": "l3",
"ipam": {
"type": "static",
"addresses": [
{
"address": "192.168.10.10/24"
}
]
}
}
4.2.7. Configuration for a MACVLAN secondary network
The MACVLAN CNI plugin JSON configuration object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin. The following table describes these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `macvlan`. |
| `ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| `mode` | `string` | Optional: Configures traffic visibility on the virtual network. Must be either `bridge`, `passthru`, `private`, or `vepa`. If a value is not provided, the default value is `bridge`. |
| `master` | `string` | Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. |
| `mtu` | `integer` | Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace interface. The default value is `false`. |
If you specify the `master` key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.
4.2.7.1. MACVLAN CNI plugin configuration example
The following example configures a secondary network named `macvlan-net`:
{
"cniVersion": "0.3.1",
"name": "macvlan-net",
"type": "macvlan",
"master": "eth1",
"linkInContainer": false,
"mode": "bridge",
"ipam": {
"type": "dhcp"
}
}
4.2.8. Configuration for a TAP secondary network
The TAP CNI plugin JSON configuration object describes the configuration parameters for the TAP CNI plugin. The following table describes these parameters:
| Field | Type | Description |
|---|---|---|
| `cniVersion` | `string` | The CNI specification version. A minimum version of `0.3.1` is required. |
| `name` | `string` | The mandatory, unique identifier assigned to this CNI network attachment definition. It is used by the container runtime to select the correct network configuration and serves as the key for persistent resource state management, such as IP address allocations. |
| `type` | `string` | The name of the CNI plugin to configure: `tap`. |
| `mac` | `string` | Optional: Request the specified MAC address for the interface. |
| `mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| `selinuxcontext` | `string` | Optional: The SELinux context to associate with the tap device. Note: The value `system_u:system_r:container_t:s0` is required for OpenShift Container Platform. |
| `multiQueue` | `boolean` | Optional: Set to `true` to enable multi-queue mode. |
| `owner` | `integer` | Optional: The user owning the tap device. |
| `group` | `integer` | Optional: The group owning the tap device. |
| `bridge` | `string` | Optional: Set the tap device as a port of an already existing bridge. |
4.2.8.1. Tap configuration example
The following example configures a secondary network named `mynet`:
{
"name": "mynet",
"cniVersion": "0.3.1",
"type": "tap",
"mac": "00:11:22:33:44:55",
"mtu": 1500,
"selinuxcontext": "system_u:system_r:container_t:s0",
"multiQueue": true,
"owner": 0,
"group": 0
"bridge": "br1"
}
4.2.9. Setting SELinux boolean for the TAP CNI plugin
To create the tap device with the `container_t` SELinux context, enable the `container_use_devices` boolean on the host by using a machine config.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
Procedure
Create a new YAML file with the following details:
Example `setsebool-container-use-devices.yaml`:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-setsebool
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: setsebool.service
        contents: |
          [Unit]
          Description=Set SELinux boolean for the TAP CNI plugin
          Before=kubelet.service

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/setsebool container_use_devices=on
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target graphical.target

Create the new `MachineConfig` object by running the following command:

$ oc apply -f setsebool-container-use-devices.yaml

Note: Applying any changes to the `MachineConfig` object causes all affected nodes to gracefully reboot after the change is applied. The MCO might take some time to apply the update.
Verification
Verify that the change is applied by running the following command:
$ oc get machineconfigpools

Example output:

NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-e5e0c8e8be9194e7c5a882e047379cfa   True      False      False      3              3                   3                     0                      7d2h
worker   rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2   True      False      False      3              3                   3                     0                      7d

Note: All nodes should be in the `Updated` and `Ready` state.
4.2.10. Configuring routes using the route-override plugin on a secondary network
The route-override CNI plugin JSON configuration object describes the configuration parameters for the `route-override` CNI plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| `type` | `string` | The name of the CNI plugin to configure: `route-override`. |
| `flushroutes` | `boolean` | Optional: Set to `true` to flush any existing routes in the container namespace. |
| `flushgateway` | `boolean` | Optional: Set to `true` to flush the default route, that is, the gateway route, in the container namespace. |
| `delroutes` | `array` | Optional: Specify the list of routes to delete from the container namespace. |
| `addroutes` | `array` | Optional: Specify the list of routes to add to the container namespace. Each route is a dictionary with `dst` and, optionally, `gw` fields. If `gw` is omitted, the default value for the gateway is used. |
| `skipcheck` | `boolean` | Optional: Set this to `true` to skip the `CHECK` command of the plugin. The default value is `false`. |
4.2.10.1. Route-override plugin configuration example
The `route-override` plugin is designed to be used in a chained configuration with another CNI plugin, such as macvlan.

The following example configures a secondary network named `mymacvlan`. The macvlan parent plugin attaches the network to the `eth1` interface and assigns an IP address from the `192.168.1.0/24` range by using the `host-local` IPAM plugin. The chained `route-override` plugin flushes existing routes, deletes the route to `192.168.0.0/24`, and then adds a new route to `192.168.0.0/24` with a custom gateway.
{
"cniVersion": "0.3.0",
"name": "mymacvlan",
"plugins": [
{
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24"
}
},
{
"type": "route-override",
"flushroutes": true,
"delroutes": [
{
"dst": "192.168.0.0/24"
}
],
"addroutes": [
{
"dst": "192.168.0.0/24",
"gw": "10.1.254.254"
}
]
}
]
}
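Chained configurations like this one are plain JSON, so you can sanity-check them outside the cluster before embedding them in a network attachment. The following Python sketch is a generic helper, not an OpenShift tool: it parses the example above and lists the plugin chain, raising an error if any chained plugin is missing its `type` field.

```python
import json

def plugin_chain(raw: str) -> list:
    """Parse a chained CNI config and return the ordered plugin types."""
    cfg = json.loads(raw)
    plugins = cfg["plugins"]  # chained configs carry a "plugins" array
    for plugin in plugins:
        if "type" not in plugin:
            raise ValueError("every chained plugin needs a 'type' field")
    return [plugin["type"] for plugin in plugins]

raw = """
{
  "cniVersion": "0.3.0",
  "name": "mymacvlan",
  "plugins": [
    {"type": "macvlan", "master": "eth1", "mode": "bridge",
     "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"}},
    {"type": "route-override", "flushroutes": true,
     "delroutes": [{"dst": "192.168.0.0/24"}],
     "addroutes": [{"dst": "192.168.0.0/24", "gw": "10.1.254.254"}]}
  ]
}
"""

print(plugin_chain(raw))  # ['macvlan', 'route-override']
```

This catches malformed JSON, such as a missing comma, before the configuration ever reaches the cluster.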
where:
"type": "macvlan"-
The parent CNI creates a network interface attached to
eth1. "type": "route-override"-
The chained
route-overrideCNI modifies the routing rules.
4.3. Attaching a pod to a secondary network
To enable a pod to use additional network interfaces beyond the primary cluster network in OpenShift Container Platform, you can attach the pod to a secondary network. Secondary networks provide additional connectivity options for your workloads.
4.3.1. Adding a pod to a secondary network
To enable a pod to use additional network interfaces in OpenShift Container Platform, you can attach the pod to a secondary network. The pod continues to send normal cluster-related network traffic over the default network.
You can attach a secondary network to a pod only when the pod is created. You cannot attach a secondary network to an existing pod.
The pod must be in the same namespace as the secondary network.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster.
Procedure
Add an annotation to the `Pod` object. Only one of the following annotation formats can be used:

To attach a secondary network without any customization, add an annotation with the following format:

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: <network>[,<network>,...]

where:
`k8s.v1.cni.cncf.io/networks` - Specifies the name of the secondary network to associate with the pod. To specify more than one secondary network, separate each network with a comma. Do not include whitespace between entries. If you specify the same secondary network multiple times, that pod will have multiple network interfaces attached to that network.
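For example, to attach a pod to two secondary networks with the hypothetical names `net1` and `net2`, the annotation would look like this:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: net1,net2
```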
To attach a secondary network with customizations, add an annotation with the following format:
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "<network>",
          "namespace": "<namespace>",
          "default-route": ["<default_route>"]
        }
      ]

where:
`<network>` - Specifies the name of the secondary network defined by a `NetworkAttachmentDefinition` object.
`<namespace>` - Specifies the namespace where the `NetworkAttachmentDefinition` object is defined.
`<default_route>` - Optional parameter. Specifies an override for the default route, such as `192.168.17.1`.
Create the pod by entering the following command.
$ oc create -f <name>.yaml

Replace `<name>` with the name of the pod.

Optional: Confirm that the annotation exists in the `Pod` CR by entering the following command. Replace `<name>` with the name of the pod.

$ oc get pod <name> -o yaml

In the following example, the `example-pod` pod is attached to the `net1` secondary network:

$ oc get pod example-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-bridge
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.128.2.14"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "macvlan-bridge",
          "interface": "net1",
          "ips": [
              "20.2.2.100"
          ],
          "mac": "22:2f:60:a5:f8:00",
          "dns": {}
      }]
  name: example-pod
  namespace: default
spec:
  ...
status:
  ...

where:
`k8s.v1.cni.cncf.io/network-status` - Specifies a JSON array of objects. Each object describes the status of a secondary network attached to the pod. The annotation value is stored as a plain text value.
4.3.1.1. Specifying pod-specific addressing and routing options
To set static IP addresses, MAC addresses, and default routes for a pod in OpenShift Container Platform, you can configure pod-specific addressing and routing options using JSON-formatted annotations. With these annotations, you can customize network behavior for individual pods on secondary networks.
Prerequisites
- The pod must be in the same namespace as the secondary network.
- Install the OpenShift CLI (`oc`).
- You must log in to the cluster.
Procedure
Edit the `Pod` resource definition. If you are editing an existing `Pod` resource, run the following command to edit its definition in the default editor. Replace `<name>` with the name of the `Pod` resource to edit.

$ oc edit pod <name>

In the `Pod` resource definition, add the `k8s.v1.cni.cncf.io/networks` parameter to the pod `metadata` mapping. The `k8s.v1.cni.cncf.io/networks` parameter accepts a JSON string of a list of objects that reference the names of `NetworkAttachmentDefinition` custom resources (CRs) in addition to specifying additional properties.

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]'
# ...

where:
`<network>` - Replace with a JSON object as shown in the following examples. The single quotes are required.
In the following example, the annotation specifies which network attachment has the default route, by using the `default-route` parameter.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "net1"
      },
      {
        "name": "net2",
        "default-route": ["192.0.2.1"]
      }]'
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools

where:
`net1`, `net2` - Specifies the name of the `NetworkAttachmentDefinition` resource that defines the secondary network to associate with the pod.
`192.0.2.1` - Specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one `default-route` key is specified, the pod fails to become active.
The default route will cause any traffic that is not specified in other routes to be routed to the gateway.
Important: Setting the default route to an interface other than the default network interface for OpenShift Container Platform might cause traffic that is anticipated for pod-to-pod communication to be routed over another interface.
To verify the routing properties of a pod, you can use the `oc` command to execute the `ip` command within a pod.

$ oc exec -it <pod_name> -- ip route

Note: You can also reference the pod's `k8s.v1.cni.cncf.io/network-status` annotation to see which secondary network has been assigned the default route, by the presence of the `default-route` key in the JSON-formatted list of objects.

To set a static IP address or MAC address for a pod, you can use JSON-formatted annotations. This requires that you create networks that specifically allow for this functionality, which can be specified in a `rawCNIConfig` for the CNO.
Edit the CNO CR by running the following command:
$ oc edit networks.operator.openshift.io cluster

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name>
namespace: <namespace>
rawCNIConfig: '{
  ...
}'
type: Raw

where:
`name` - Specifies a name for the secondary network attachment that you are creating. The name must be unique within the specified `namespace`.
`namespace` - Specifies the namespace to create the network attachment in. If you do not specify a value, then the `default` namespace is used.
`rawCNIConfig` - Specifies the CNI plugin configuration in JSON format, which is based on the following template.
The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin:
macvlan CNI plugin JSON configuration object using static IP and MAC address
{ "cniVersion": "0.3.1", "name": "<name>", "plugins": [{ "type": "macvlan", "capabilities": { "ips": true }, "master": "eth0", "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, "type": "tuning" }] }where:
name-
Specifies the name for the secondary network attachment to create. The name must be unique within the specified
namespace. plugins- Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.
ips- Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.
master- Specifies the interface that the macvlan plugin uses.
mac- Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.
The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod.
Edit the pod by entering the following command:
$ oc edit pod <name>

macvlan CNI plugin JSON configuration object using static IP and MAC address

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>",
        "ips": [ "192.0.2.205/24" ],
        "mac": "CA:FE:C0:FF:EE:00"
      }
    ]'

where:
`metadata.name` - Specifies the name of the secondary network attachment to use. The name must be unique within the specified `namespace`.
`metadata.annotations.k8s.v1.cni.cncf.io/ips` - Specifies an IP address including the subnet mask.
`metadata.annotations.k8s.v1.cni.cncf.io/mac` - Specifies the MAC address.
Note: Static IP addresses and MAC addresses do not have to be used at the same time. You can use them individually, or together.
4.4. Configuring multi-network policy
As an administrator, you can use the `MultiNetworkPolicy` API to define network policies that apply to secondary networks.
Multi-network policies can be used to manage traffic on secondary networks in the cluster. These policies cannot manage the default cluster network or primary network of user-defined networks.
As a cluster administrator, you can configure a multi-network policy for any of the following network types:
- Single-Root I/O Virtualization (SR-IOV)
- MAC Virtual Local Area Network (MacVLAN)
- IP Virtual Local Area Network (IPVLAN)
- Bond Container Network Interface (CNI) over SR-IOV
- OVN-Kubernetes secondary networks
Configuring multi-network policies for SR-IOV secondary networks is only supported with kernel network interface controllers (NICs). SR-IOV multi-network policies are not supported for Data Plane Development Kit (DPDK) applications.
4.4.1. Differences between multi-network policy and network policy
Although the `MultiNetworkPolicy` API resembles the `NetworkPolicy` API, there are several important differences:
- You must use the `MultiNetworkPolicy` API, as demonstrated in the following example configuration:

  apiVersion: k8s.cni.cncf.io/v1beta1
  kind: MultiNetworkPolicy
  # ...

- You must use the `multi-networkpolicy` resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the `oc get multi-networkpolicy <name>` command, where `<name>` is the name of a multi-network policy.
- You can use the `k8s.v1.cni.cncf.io/policy-for` annotation on a `MultiNetworkPolicy` object to point to a `NetworkAttachmentDefinition` (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies. The following example multi-network policy includes the `k8s.v1.cni.cncf.io/policy-for` annotation:

  apiVersion: k8s.cni.cncf.io/v1beta1
  kind: MultiNetworkPolicy
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
  # ...

where:
`<namespace_name>` - Specifies the namespace name.
`<network_name>` - Specifies the name of a network attachment definition.
4.4.2. Enabling multi-network policy for the cluster
As a cluster administrator, you can enable multi-network policy support on your cluster.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster with a user with `cluster-admin` privileges.
Procedure
Create the `multinetwork-enable-patch.yaml` file with the following YAML:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  useMultiNetworkPolicy: true
# ...

Configure the cluster to enable multi-network policy by running the following command. Successful output lists the name of the policy object and the `patched` status.

$ oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml
4.4.3. Supporting multi-network policies in IPv6 networks
The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link.
The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the `useMultiNetworkPolicy` parameter is set to `true`.
To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of custom rules in every pod affected by a multi-network policy:
kind: ConfigMap
apiVersion: v1
metadata:
name: multi-networkpolicy-custom-rules
namespace: openshift-multus
data:
custom-v6-rules.txt: |
# accept NDP
-p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT
-p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT
# accept RA/RS
-p icmpv6 --icmpv6-type router-solicitation -j ACCEPT
-p icmpv6 --icmpv6-type router-advertisement -j ACCEPT
where:
icmpv6-type neighbor-solicitation- This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes.
icmpv6-type neighbor-advertisement- This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender.
icmpv6-type router-solicitation- This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information.
icmpv6-type router-advertisement- This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts.
You cannot edit the predefined rules.
The rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues.
4.4.4. Working with multi-network policy
To manage network traffic isolation and security for pods on secondary networks, you can create, edit, view, and delete multi-network policies. Before you work with multi-network policies, you must enable multi-network policy support for your cluster.
4.4.4.1. Creating a multi-network policy using the CLI
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
Create a policy rule.
Create a <policy_name>.yaml file:
$ touch <policy_name>.yaml
where:
<policy_name>- Specifies the multi-network policy file name.
Define a multi-network policy in the created file. The following example denies ingress traffic from all pods in all namespaces. This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
where:
<namespace_name>- Specifies the namespace name.
<network_name>- Specifies the name of a network attachment definition.
The following example configuration allows ingress traffic from all pods in the same namespace:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
# ...
where:
<network_name>- Specifies the name of a network attachment definition.
The following example allows ingress traffic to one pod from a particular namespace. This policy allows traffic to pods that have the pod-a label from pods running in namespace-y.
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-traffic-pod
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector:
    matchLabels:
      pod: pod-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: namespace-y
# ...
where:
<network_name>- Specifies the name of a network attachment definition.
The following example configuration restricts traffic to a service. When applied, this policy ensures that every pod with both the app=bookstore and role=api labels can only be accessed by pods with the app=bookstore label. In this example, the application could be a REST API server, marked with the labels app=bookstore and role=api.
This example configuration addresses the following use cases:
- Restricting the traffic to a service to only the other microservices that need to use it.
- Restricting the connections to a database to only permit the application using it.
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: api-allow
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
# ...
where:
<network_name>- Specifies the name of a network attachment definition.
To create the multi-network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name>- Specifies the multi-network policy file name.
<namespace>- Optional parameter. If you defined the object in a different namespace than the current namespace, the parameter specifies the namespace.
Successful output lists the name of the policy object and the created status.
Note
If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
4.4.4.2. Editing a multi-network policy
To modify existing policy configurations, you can edit a multi-network policy in a namespace. Edit policies by modifying the policy file and applying it with the oc apply command, or by updating the policy object directly with the oc edit command.
If you log in with cluster-admin privileges, you can edit a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
Optional: To list the multi-network policy objects in a namespace, enter the following command:
$ oc get multi-networkpolicy -n <namespace>
where:
<namespace>- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Edit the multi-network policy object.
If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command.
$ oc apply -n <namespace> -f <policy_file>.yaml
where:
<namespace>- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
<policy_file>- Specifies the name of the file containing the network policy.
If you need to update the multi-network policy object directly, enter the following command:
$ oc edit multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>- Specifies the name of the network policy.
<namespace>- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Confirm that the multi-network policy object is updated.
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>- Specifies the name of the multi-network policy.
<namespace>- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
4.4.4.3. Viewing multi-network policies using the CLI
You can examine the multi-network policies in a namespace.
If you log in with cluster-admin privileges, you can view any multi-network policy in the cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
List multi-network policies in a namespace.
To view multi-network policy objects defined in a namespace, enter the following command:
$ oc get multi-networkpolicy
Optional: To examine a specific multi-network policy, enter the following command:
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>- Specifies the name of the multi-network policy to inspect.
<namespace>- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
4.4.4.4. Deleting a multi-network policy using the CLI
You can delete a multi-network policy in a namespace.
If you log in with cluster-admin privileges, you can delete a multi-network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
To delete a multi-network policy object, enter the following command:
$ oc delete multi-networkpolicy <policy_name> -n <namespace>
Successful output lists the name of the policy object and the deleted status.
where:
<policy_name>- Specifies the name of the multi-network policy.
<namespace>- Optional parameter. If you defined the object in a different namespace than the current namespace, the parameter specifies the namespace.
4.4.4.5. Creating a default deny all multi-network policy
The default deny all multi-network policy blocks all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies and traffic between host-networked pods. This procedure enforces a strong deny policy by applying a deny-by-default policy in the my-project namespace.
Without a NetworkPolicy CR that allows specific traffic, this policy denies all traffic between pods in the namespace.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  namespace: my-project
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
where:
namespace- Specifies the namespace in which to deploy the policy. For example, the my-project namespace.
annotations- Specifies the namespace name followed by the network attachment definition name.
podSelector- If this field is empty, the configuration matches all the pods. Therefore, the policy applies to all pods in the my-project namespace.
policyTypes- Specifies a list of rule types that the NetworkPolicy relates to.
- Ingress- Specifies the Ingress policy type only.
ingress- Specifies ingress rules. If not specified, all incoming traffic is dropped to all pods.
Apply the policy by entering the following command:
$ oc apply -f deny-by-default.yaml
Successful output lists the name of the policy object and the created status.
4.4.4.6. Creating a multi-network policy to allow traffic from external clients
With the deny-by-default policy in place, you can configure a policy that allows traffic from external clients to a pod with the label app=web.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows external services, from the public Internet directly or by using a Load Balancer, to access the pod. Traffic is only allowed to a pod with the label app=web.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-external
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}
Apply the policy by entering the following command:
$ oc apply -f web-allow-external.yaml
Successful output lists the name of the policy object and the created status. This policy allows traffic from all resources, including external traffic.
4.4.4.7. Creating a multi-network policy allowing traffic to an application from all namespaces
You can configure a policy that allows traffic from all pods in all namespaces to a particular application.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-all-namespaces
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector: {}
where:
app- Applies the policy only to app:web pods in the default namespace.
namespaceSelector- Selects all pods in all namespaces.
By default, if you do not specify a namespaceSelector, the policy does not select any namespaces, which means traffic is allowed only from the namespace the policy is deployed to.
Apply the policy by entering the following command:
$ oc apply -f web-allow-all-namespaces.yaml
Successful output lists the name of the policy object and the created status.
Verification
Start a web service in the default namespace by entering the following command:
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
Run the following command to deploy an alpine image in the secondary namespace and to start a shell:
$ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
Run the following command in the shell and observe that the service allows the request:
# wget -qO- --timeout=2 http://web.default
Example output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
4.4.4.8. Creating a multi-network policy allowing traffic to an application from a namespace
You can configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to apply this type of policy for the following reasons:
- Restrict traffic to a production database only to namespaces that have production workloads deployed.
- Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with cluster-admin privileges.
- You are working in the namespace that the multi-network policy applies to.
Do not apply the network.openshift.io/policy-group: ingress label to namespaces. Using this label can result in intermittent network connectivity drops and unintended application of system NetworkPolicies to your workloads.
Procedure
Create a policy that allows traffic from all pods in a particular namespace with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-prod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name>
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
where:
app- Applies the policy only to app:web pods in the default namespace.
purpose- Restricts traffic to only pods in namespaces that have the label purpose=production.
Apply the policy by entering the following command:
$ oc apply -f web-allow-prod.yaml
Successful output lists the name of the policy object and the created status.
Verification
Start a web service in the default namespace by entering the following command:
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
Run the following command to create the prod namespace:
$ oc create namespace prod
Run the following command to label the prod namespace:
$ oc label namespace/prod purpose=production
Run the following command to create the dev namespace:
$ oc create namespace dev
Run the following command to label the dev namespace:
$ oc label namespace/dev purpose=testing
Run the following command to deploy an alpine image in the dev namespace and to start a shell:
$ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
Run the following command in the shell and observe the reason for the blocked request. For example, expected output states wget: download timed out.
# wget -qO- --timeout=2 http://web.default
Run the following command to deploy an alpine image in the prod namespace and start a shell:
$ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
Run the following command in the shell and observe that the request is allowed:
# wget -qO- --timeout=2 http://web.default
Example output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
4.5. Removing a pod from a secondary network
To disconnect a pod from specific network configurations in OpenShift Container Platform, you can remove the pod from a secondary network. Delete the pod to remove its connection to the secondary network.
4.5.1. Removing a pod from a secondary network
To disconnect a pod from specific network configurations in OpenShift Container Platform, you can remove the pod from a secondary network. Delete the pod by using the oc delete pod command.
Prerequisites
- A secondary network is attached to the pod.
- Install the OpenShift CLI (oc).
- Log in to the cluster.
Procedure
Delete the pod by entering the following command:
$ oc delete pod <name> -n <namespace>
where:
<name>- Specifies the name of the pod.
<namespace>- Specifies the namespace that contains the pod.
4.6. Editing a secondary network
To update network settings or change network parameters for a secondary network in OpenShift Container Platform, you can modify the configuration for an existing secondary network. Edit the NetworkAttachmentDefinition custom resource (CR) for the secondary network to apply your changes.
4.6.1. Modifying a NetworkAttachmentDefinition custom resource
To update network settings or change network parameters for a secondary network in OpenShift Container Platform, you can modify the NetworkAttachmentDefinition custom resource (CR) for the network.
Prerequisites
- You have configured a secondary network for your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Edit the Cluster Network Operator (CNO) CR in your default text editor by running the following command:
$ oc edit networks.operator.openshift.io cluster
- In the additionalNetworks collection, update the secondary network with your changes.
- Save your changes and quit the text editor to commit your changes.
Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network_name> with the name of the secondary network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes.
$ oc get network-attachment-definitions <network_name> -o yaml
For example, the following console output displays a NetworkAttachmentDefinition object that is named net1:
$ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
{ "cniVersion": "0.3.1", "type": "macvlan",
"master": "ens5",
"mode": "bridge",
"ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}}
}
4.6.2. Using an OVN-Kubernetes localnet topology to map VLANs to a secondary interface
You can use the OVN-Kubernetes localnet topology in a NetworkAttachmentDefinition (NAD) custom resource to map VLANs to a secondary network interface.
To provide multiple VLANs for cluster workloads in OpenShift Container Platform, define additional VLANs in NetworkAttachmentDefinition CRs.
The example in the procedure demonstrates the following configurations:
- Physical switch ports connect to OpenShift Container Platform nodes by using VLAN trunking. The trunk carries tagged traffic for the VLANs you define in NADs.
- The br-ex bridge acts as the OVS bridge that connects virtual workloads to the physical network.
- Multiple NADs with specific VLAN tags get created by using the localnet topology. This configuration defines specific VLAN IDs for traffic isolation.
- Pods or virtual machines (VMs) attach to the NAD CRs for improved network connectivity.
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in as a user with cluster-admin privileges.
- You installed the NMState Operator.
- You configured the br-ex bridge interface during cluster installation.
Procedure
Create a NetworkAttachmentDefinition CR for each VLAN, such as nad-cvlan100.yaml. OVN-Kubernetes uses the NAD files to tag and untag Ethernet frames for pods or VMs.
Example configuration
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.4.0",
      "name": "localnet-vlan-100",
      "type": "ovn-k8s-cni-overlay",
      "physicalNetworkName": "physnet",
      "topology": "localnet",
      "vlanID": 100,
      "mtu": 1500,
      "netAttachDefName": "default/vlan-100"
    }
# ...
Attach pods or VMs to the VLANs by referencing the NAD in the configuration for the pod or VM:
Example pod configuration
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: vlan-100
# ...
Example VM configuration
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      networks:
      - multus:
          networkName: vlan-100
        name: secondary-vlan
# ...
4.7. Configuring IP address assignment on secondary networks
You can configure IP address assignments for secondary networks so that pods can connect to the secondary networks.
4.7.1. Configuration of IP address assignment for a network attachment
For secondary networks, you can assign IP addresses by using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment.
The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components:
- CNI Plugin: Responsible for integrating with the Kubernetes networking stack to request and release IP addresses.
- DHCP IPAM CNI Daemon: A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself.
For networks that require type: dhcp for dynamic IP address assignment, the following conditions apply:
- A DHCP server is available and running in the environment.
- The DHCP server is external to the cluster and you expect the server to form part of the existing network infrastructure for the customer.
- The DHCP server is appropriately configured to serve IP addresses to the nodes.
In cases where a DHCP server is unavailable in the environment, consider using the Whereabouts IPAM CNI plugin. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server.
Use the Whereabouts CNI plugin when no external DHCP server exists or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations.
Ensure the periodic renewal of a DHCP lease throughout the lifetime of a container by including a separate daemon, the DHCP IPAM CNI Daemon. To deploy the DHCP IPAM CNI daemon, change the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the secondary network setup.
4.7.1.1. Static IP address assignment configuration
The following table describes the configuration for static IP address assignment:
| Field | Type | Description |
|---|---|---|
| type | string | The IPAM address type. The value static is required. |
| addresses | array | An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. |
| routes | array | An array of objects specifying routes to configure inside the pod. |
| dns | object | Optional: An array of objects specifying the DNS configuration. |
The addresses array requires objects with the following fields:
| Field | Type | Description |
|---|---|---|
| address | string | An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24, then the secondary network is assigned the IP address 10.10.21.10 and the netmask is 255.255.255.0. |
| gateway | string | The default gateway to route egress network traffic to. |
The routes array requires objects with the following fields:
| Field | Type | Description |
|---|---|---|
| dst | string | The IP address range in CIDR format, such as 192.168.17.0/24, or 0.0.0.0/0 for the default route. |
| gw | string | The gateway that routes network traffic. |
The dns configuration requires objects with the following fields:
| Field | Type | Description |
|---|---|---|
| nameservers | array | An array of one or more IP addresses where DNS queries get sent. |
| domain | string | The default domain to append to a hostname. For example, if the domain is set to example.com, a DNS lookup query for example-host resolves to example-host.example.com. |
| search | array | An array of domain names to append to an unqualified hostname, such as example-host, during a DNS lookup query. |
Static IP address assignment configuration example
{
"ipam": {
"type": "static",
"addresses": [
{
"address": "191.168.1.7/24"
}
]
}
}
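For orientation, an ipam block like the one above is typically embedded in the config field of a NetworkAttachmentDefinition CR. The following sketch shows one possible placement; the NAD name, namespace, and macvlan settings are illustrative assumptions, not values prescribed by this section:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: static-net        # hypothetical NAD name
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "191.168.1.7/24" }
        ]
      }
    }
```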
4.7.1.2. Dynamic IP address (DHCP) assignment configuration
A pod obtains its original DHCP lease when the pod gets created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.
For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment.
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:
Example shim network attachment definition
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
additionalNetworks:
- name: dhcp-shim
namespace: default
type: Raw
rawCNIConfig: |-
{
"name": "dhcp-shim",
"cniVersion": "0.3.1",
"type": "bridge",
"ipam": {
"type": "dhcp"
}
}
# ...
where:
type- Specifies dynamic IP address assignment for the cluster.
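Because rawCNIConfig is embedded as a literal string, a malformed JSON body is an easy mistake that only surfaces at pod creation time. As an informal pre-check, not part of any OpenShift tooling, the following Python sketch parses the fragment before you apply the manifest:

```python
import json

# The rawCNIConfig body from the shim network attachment above.
raw_cni_config = """
{
  "name": "dhcp-shim",
  "cniVersion": "0.3.1",
  "type": "bridge",
  "ipam": {
    "type": "dhcp"
  }
}
"""

# json.loads raises json.JSONDecodeError on a malformed fragment,
# for example a trailing comma or an unquoted key.
config = json.loads(raw_cni_config)

print(config["ipam"]["type"])  # dhcp
print(config["type"])          # bridge
```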
4.7.1.3. Dynamic IP address assignment configuration with Whereabouts
The Whereabouts CNI plugin enables the dynamic assignment of an IP address to a secondary network without the use of a DHCP server.
The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRs.
4.7.1.3.1. Dynamic IP address configuration parameters
The following table describes the configuration objects for dynamic IP address assignment with Whereabouts:
| Field | Type | Description |
|---|---|---|
| type | string | The IPAM address type. The value whereabouts is required. |
| range | string | An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. |
| exclude | array | Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. |
| network_name | string | Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. |
4.7.1.3.2. Dynamic IP address assignment configuration with Whereabouts that excludes IP address ranges
The following example shows a dynamic address assignment configuration in a NAD file that uses Whereabouts:
Whereabouts dynamic IP address assignment that excludes specific IP address ranges
{
"ipam": {
"type": "whereabouts",
"range": "192.0.2.192/27",
"exclude": [
"192.0.2.192/30",
"192.0.2.196/32"
]
}
}
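As an informal check of what this configuration leaves assignable, the following Python sketch (not part of Whereabouts itself) enumerates the range and removes the excluded blocks. Note that Whereabouts additionally reserves addresses such as the network and broadcast addresses, which this sketch does not model:

```python
import ipaddress

# The range and exclusions from the example above.
pool = ipaddress.ip_network("192.0.2.192/27")
excluded = [ipaddress.ip_network(c) for c in ("192.0.2.192/30", "192.0.2.196/32")]

# Keep every address in the range that falls outside all excluded blocks.
assignable = [ip for ip in pool if not any(ip in block for block in excluded)]

print(len(assignable))  # 27 of the 32 addresses remain
print(assignable[0])    # 192.0.2.197 is the first candidate address
```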
4.7.1.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges
The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multitenant networks.
NetworkAttachmentDefinition 1
{
"ipam": {
"type": "whereabouts",
"range": "192.0.2.192/29",
"network_name": "example_net_common"
}
}
where:
network_name- Optional parameter. If set, it must match the network_name of NetworkAttachmentDefinition 2.
NetworkAttachmentDefinition 2
{
"ipam": {
"type": "whereabouts",
"range": "192.0.2.192/24",
"network_name": "example_net_common"
}
}
where:
network_name- Optional parameter. If set, it must match the network_name of NetworkAttachmentDefinition 1.
4.7.1.4. Creating a whereabouts-reconciler daemon set
The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. The Whereabouts reconciler ensures that each pod gets a unique IP address from the specified IP address range. The Whereabouts reconciler also handles IP address releases when pods are deleted or scaled down.
You can also use a NetworkAttachmentDefinition CR for dynamic IP address assignment.
The whereabouts-reconciler daemon set is automatically created when you configure a secondary network through the Cluster Network Operator. It is not automatically created when you configure a secondary network from a YAML manifest.
To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR).
Procedure
Edit the Network.operator.openshift.io CR by running the following command:
$ oc edit network.operator.openshift.io cluster
Include the additionalNetworks section shown in this example YAML extract within the spec definition of the CR:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
# ...
spec:
  additionalNetworks:
  - name: whereabouts-shim
    namespace: default
    rawCNIConfig: |-
      {
       "name": "whereabouts-shim",
       "cniVersion": "0.3.1",
       "type": "bridge",
       "ipam": {
         "type": "whereabouts"
       }
      }
    type: Raw
# ...
- Save the file and exit the text editor.
Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command:
$ oc get all -n openshift-multus | grep whereabouts-reconciler
Example output
pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s
pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s
daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s
4.7.1.5. Configuring the Whereabouts IP reconciler schedule
The Whereabouts IPAM CNI plugin runs the IP address reconciler daily. This process cleans up any stranded IP address allocations that might otherwise exhaust the available IP addresses and prevent new pods from having an IP address allocated to them.
Use this procedure to change the frequency at which the IP reconciler runs.
Prerequisites

- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running.
Procedure
Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler:

$ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *"

This cron expression indicates that the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements.
Note: The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. Red Hat does not support the sixth asterisk, which is used to denote seconds.

Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command:

$ oc get all -n openshift-multus | grep whereabouts-reconciler

Example output:

pod/whereabouts-reconciler-2p7hw   1/1   Running   0   4m14s
pod/whereabouts-reconciler-76jk7   1/1   Running   0   4m14s
daemonset.apps/whereabouts-reconciler   6   6   6   6   6   kubernetes.io/os=linux   4m16s

Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval:

$ oc -n openshift-multus logs whereabouts-reconciler-2p7hw

Example output:

2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME
2024-02-02T16:33:54Z [verbose] using expression: */15 * * * *
2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". New cron expression: */15 * * * *
2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * *
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE
2024-02-02T16:45:00Z [verbose] starting reconciler run
2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data
2024-02-02T16:45:00Z [debug] listing IP pools
2024-02-02T16:45:00Z [debug] no IP addresses to cleanup
2024-02-02T16:45:00Z [verbose] reconciler success
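The five-field cron constraint described in the note above can be checked before you create the ConfigMap. The following Python sketch is illustrative only (the field-count check and the */N minute arithmetic are not part of Whereabouts itself):

```python
def validate_whereabouts_cron(expr: str) -> list[str]:
    """Accept only five-field cron expressions (minute hour dom month dow).

    Whereabouts rejects the optional six-field (seconds) form, so this
    validator does too.
    """
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(fields)}: {expr!r}")
    return fields


def minutes_between_runs(expr: str) -> int:
    """Return the run interval in minutes for a simple '*/N * * * *' pattern."""
    minute_field = validate_whereabouts_cron(expr)[0]
    if not minute_field.startswith("*/"):
        raise ValueError("only '*/N' minute fields are handled by this sketch")
    return int(minute_field[2:])


# The expression used in the procedure above runs every 15 minutes.
print(minutes_between_runs("*/15 * * * *"))  # → 15
```

A six-field expression such as "0 */4 * * * *" fails the check, matching the restriction stated in the note.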
4.7.1.6. Fast IPAM configuration for the Whereabouts IPAM CNI plugin

Whereabouts is an IP Address Management (IPAM) Container Network Interface (CNI) plugin that assigns IP addresses at a cluster-wide level. Whereabouts does not require a Dynamic Host Configuration Protocol (DHCP) server.

A typical Whereabouts workflow is described as follows:
- Whereabouts takes an address range in classless inter-domain routing (CIDR) notation, such as 192.168.2.0/24, and assigns IP addresses within that range, such as 192.168.2.1 to 192.168.2.254.
- Whereabouts assigns an IP address, the lowest value address in the CIDR range, to a pod and tracks the IP address in a data store for the lifetime of that pod.
- When the pod is removed, Whereabouts frees the address from the pod so that the address is available for assignment.
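The lowest-address allocation and release behavior described above can be modeled in a few lines. This is an illustrative sketch of the allocation policy, not the Whereabouts implementation, which persists allocations in a cluster-wide data store:

```python
import ipaddress


def allocate_lowest_free(cidr: str, allocated: set[str]) -> str:
    """Return the lowest host address in `cidr` not yet allocated.

    Mirrors the behavior described above: addresses are handed out from
    the low end of the range and tracked for the pod's lifetime.
    """
    net = ipaddress.ip_network(cidr)
    for ip in net.hosts():  # hosts() skips the network and broadcast addresses
        if str(ip) not in allocated:
            allocated.add(str(ip))
            return str(ip)
    raise RuntimeError(f"IP range {cidr} is exhausted")


pool: set[str] = set()
first = allocate_lowest_free("192.168.2.0/24", pool)   # "192.168.2.1"
second = allocate_lowest_free("192.168.2.0/24", pool)  # "192.168.2.2"
pool.discard(first)                                    # pod removed: address freed
reused = allocate_lowest_free("192.168.2.0/24", pool)  # "192.168.2.1" again
print(first, second, reused)
```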
To improve the performance of Whereabouts, especially if nodes in your cluster run a large number of pods, you can enable the Fast IPAM feature.
Fast IPAM is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Fast IPAM feature uses nodeslicepools, which the Whereabouts Controller manages, to optimize the allocation of IP addresses for nodes.
Prerequisites

- You added the whereabouts-shim configuration to the Network.operator.openshift.io custom resource (CR), so that the Cluster Network Operator (CNO) can deploy the Whereabouts Controller. See "Creating a whereabouts-reconciler daemon set".
- For the Fast IPAM feature to work, ensure that the NetworkAttachmentDefinition (NAD) and the pod exist in the same openshift-multus namespace.
Procedure
Confirm that the Whereabouts Controller is running by entering the following command:

$ oc get pods -n openshift-multus | grep whereabouts-controller

Example output:

whereabouts-controller-5cbfd6c475-fr7d7   1/1   Running   0   22s

Important: If the Whereabouts Controller is not running, Fast IPAM does not work.
Create a NAD file for your cluster and add the Fast IPAM details to the file as demonstrated in the following example configuration:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: wb-ipam
  namespace: openshift-multus
spec:
  config: '{
    "cniVersion": "0.3.0",
    "name": "wb-ipam-cni-name",
    "type": "bridge",
    "bridge": "cni0",
    "ipam": {
      "type": "whereabouts",
      "range": "10.5.0.0/20",
      "node_slice_size": "/24"
    }
  }'
# ...

where:
namespace - The namespace where the CNO deploys the NAD.
name - The name of the Whereabouts IPAM CNI plugin.
type - The type of IPAM CNI plugin, such as whereabouts.
range - The IP address range for the IP pool that the Whereabouts IPAM CNI plugin uses for allocating IP addresses to pods.
node_slice_size - Sets the slice size of IP addresses available to each node.
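The slicing arithmetic implied by the example above (a /20 pool carved into /24 node slices) can be checked with the standard ipaddress module. This is an illustration of the math only, not Whereabouts code:

```python
import ipaddress

# With range 10.5.0.0/20 and node_slice_size /24, each node receives its
# own /24 slice of the pool and allocates pod IPs only from that slice.
pool = ipaddress.ip_network("10.5.0.0/20")
slices = list(pool.subnets(new_prefix=24))

print(len(slices))                   # → 16 node slices (2 ** (24 - 20))
print(slices[0])                     # → 10.5.0.0/24, the first node's slice
print(slices[0].num_addresses)       # → 256 addresses per slice
```

Sizing the slice too small for the pods-per-node count would exhaust a node's slice even while the overall pool has free addresses, so pick node_slice_size with your per-node pod density in mind.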
Add the Whereabouts IPAM CNI plugin annotation details to the YAML file for the pod:

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: openshift-multus/wb-ipam
spec:
  containers:
  - name: samplecontainer
    command: ["/bin/bash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: registry.redhat.io/ubi9/ubi-minimal
# ...

where:
name - The name of the pod.
k8s.v1.cni.cncf.io/networks - The annotation that references the Whereabouts IPAM CNI plugin name that exists in the openshift-multus namespace.
name (container) - The name of the container for the pod.
command - Defines the entry point for the container and controls the behavior of the container.
Apply the NAD file configuration to your cluster by running the following command:
$ oc create -f <NAD_file_name>.yaml
Verification
Show the IP address details of the pod by entering the following command:

$ oc describe pod <pod_name>

Example output:

...
k8s.v1.cni.cncf.io/network-status:
  [{
    "name": "ovn-kubernetes",
    "interface": "eth0",
    "ips": ["10.128.3.174"],
    "mac": "0a:58:0a:80:03:ae",
    "default": true,
    "dns": {}
  },{
    "name": "openshift-multus/wb-ipam",
    "interface": "net1",
    "ips": ["10.5.0.1"],
    "mac": "1a:04:6f:a4:15:3c",
    "dns": {}
  }]
k8s.v1.cni.cncf.io/networks: openshift-multus/wb-ipam
...

Access the pod and confirm its interfaces by entering the following command:

$ oc exec <pod_name> -- ip a

Example output:

...
3: net1@if439: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:04:6f:a4:15:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.5.0.1/20 brd 10.5.15.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::1804:6fff:fea4:153c/64 scope link
       valid_lft forever preferred_lft forever
...

The pod is attached to the 10.5.0.1 IP address on the net1 interface as expected.

Check that the node slice pool exists in the openshift-multus namespace by entering the following command. The expected output shows the name of the node slice pool, such as wb-ipam-cni-name, and its age, such as 32m:

$ oc get nodeslicepool -n openshift-multus

Example output:

NAME               AGE
wb-ipam-cni-name   32m
4.7.1.7. Creating a configuration for assignment of dual-stack IP addresses dynamically
You can dynamically assign dual-stack IP addresses to a secondary network so that pods can communicate over both IPv4 and IPv6 addresses.
You can configure the following IP address assignment types in the ipRanges parameter:

- IPv4 addresses
- IPv6 addresses
- Multiple IP address assignment
Procedure
- Set type to whereabouts.
- Use ipRanges to allocate IP addresses as shown in the following example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: whereabouts-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "whereabouts-dual-stack",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "whereabouts",
          "ipRanges": [
            {"range": "192.168.10.0/24"},
            {"range": "2001:db8::/64"}
          ]
        }
      }

- Attach the secondary network to a pod. For more information, see "Adding a pod to a secondary network".
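A quick sanity check for a dual-stack ipRanges list is to confirm it contains both an IPv4 and an IPv6 network. The helper below is an illustrative sketch, not part of Whereabouts:

```python
import ipaddress


def classify_ranges(ip_ranges: list[dict]) -> dict[int, list[str]]:
    """Group ipRanges entries by IP version as a dual-stack sanity check."""
    by_version: dict[int, list[str]] = {4: [], 6: []}
    for entry in ip_ranges:
        net = ipaddress.ip_network(entry["range"])  # raises on malformed CIDRs
        by_version[net.version].append(str(net))
    return by_version


# The ipRanges values from the example CR above:
ranges = [{"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"}]
result = classify_ranges(ranges)
# Both lists non-empty → the configuration is dual stack.
print(result)
```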
Verification
Verify that all IP addresses are assigned to the network interfaces within the network namespace of a pod by entering the following command:

$ oc exec -it <pod_name> -- ip a

where:
<pod_name> - The name of the pod.
4.8. Configuring the master interface in the container network namespace

You can create and manage a MAC-VLAN, IP-VLAN, or VLAN subinterface based on a master interface that resides in the container network namespace.
4.8.1. About configuring the master interface in the container network namespace

You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface. The master interface can exist in the container network namespace rather than on the host. To use a container namespace master interface, you must set the linkInContainer parameter to true in the NetworkAttachmentDefinition that contains the subinterface configuration.
4.8.1.1. Creating multiple VLANs on SR-IOV VFs
You can create multiple VLANs based on SR-IOV VFs. For this configuration, create an SR-IOV network and then define the network attachments for the VLAN interfaces.
The following diagram shows the setup process for creating multiple VLANs on SR-IOV VFs.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the SR-IOV Network Operator.
Procedure
Create a dedicated namespace where you want to deploy your pod by running the following command:

$ oc new-project test-namespace

Create an SR-IOV node policy:
Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriovnic
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  needVhostNet: true
  nicSelector:
    vendor: "15b3"
    deviceID: "101b"
    rootDevices: ["00:05.0"]
  numVfs: 10
  priority: 99
  resourceName: sriovnic
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"

where:
vendor - The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC.
deviceID - The device hexadecimal code of the SR-IOV network device.

Note: The SR-IOV network node policy configuration example, with the setting deviceType: netdevice, is tailored specifically for Mellanox Network Interface Cards (NICs).
Apply the YAML configuration by running the following command:
$ oc apply -f sriov-node-network-policy.yaml

Note: Applying the YAML configuration might take time because of a node reboot operation.
Create an SR-IOV network:
Create the SriovNetwork custom resource (CR) for the secondary SR-IOV network attachment as demonstrated in the following example CR. Save the YAML as the sriov-network-attachment.yaml file:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: test-namespace
  resourceName: sriovnic
  spoofChk: "off"
  trust: "on"

Apply the YAML by running the following command:

$ oc apply -f sriov-network-attachment.yaml
Create the VLAN secondary network.
Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100
  namespace: test-namespace
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "vlan-100",
      "plugins": [
        {
          "type": "vlan",
          "master": "ext0",
          "mtu": 1500,
          "vlanId": 100,
          "linkInContainer": true,
          "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]}
        }
      ]
    }

where:
master - The VLAN configuration needs to specify the master name. You can specify the name in the networks annotation of a pod.
linkInContainer - The linkInContainer parameter must be specified.
Apply the YAML file by running the following command:
$ oc apply -f vlan100-additional-network-configuration.yaml
Create a pod definition by using the earlier specified networks.
Using the following YAML configuration example, create a file named pod-a.yaml.

Note: The manifest example includes the following resources:
- Namespace with security labels
- Pod definition with appropriate network annotation

apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: test-namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "sriov-network",
        "namespace": "test-namespace",
        "interface": "ext0"
      },
      {
        "name": "vlan-100",
        "namespace": "test-namespace",
        "interface": "ext0.100"
      }
    ]'
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: nginx-container
    image: nginxinc/nginx-unprivileged:latest
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    ports:
    - containerPort: 80

where:
interface - The name to be used as the master interface for the VLAN interface.
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml
Get detailed information about the nginx-pod within the test-namespace by running the following command:

$ oc describe pods nginx-pod -n test-namespace

Example output:

Name:         nginx-pod
Namespace:    test-namespace
Priority:     0
Node:         worker-1/10.46.186.105
Start Time:   Mon, 14 Aug 2023 16:23:13 -0400
Labels:       <none>
Annotations:  k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0...
              k8s.v1.cni.cncf.io/network-status:
                [{
                  "name": "ovn-kubernetes",
                  "interface": "eth0",
                  "ips": ["10.131.0.26"],
                  "mac": "0a:58:0a:83:00:1a",
                  "default": true,
                  "dns": {}
                },{
                  "name": "test-namespace/sriov-network",
                  "interface": "ext0",
                  "mac": "6e:a7:5e:3f:49:1b",
                  "dns": {},
                  "device-info": {
                    "type": "pci",
                    "version": "1.0.0",
                    "pci": {"pci-address": "0000:d8:00.2"}
                  }
                },{
                  "name": "test-namespace/vlan-100",
                  "interface": "ext0.100",
                  "ips": ["1.1.1.1"],
                  "mac": "6e:a7:5e:3f:49:1b",
                  "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks: [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i...
              openshift.io/scc: privileged
Status:       Running
IP:           10.131.0.26
IPs:
  IP:  10.131.0.26
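The k8s.v1.cni.cncf.io/network-status annotation shown in the describe output is a JSON list, so interface-to-IP mappings can be extracted programmatically. The sample value below is abridged from the example output; the extraction logic is an illustrative sketch:

```python
import json

# Abridged network-status annotation value from the describe output above.
network_status = json.loads("""[
  {"name": "ovn-kubernetes", "interface": "eth0",
   "ips": ["10.131.0.26"], "default": true},
  {"name": "test-namespace/sriov-network", "interface": "ext0"},
  {"name": "test-namespace/vlan-100", "interface": "ext0.100",
   "ips": ["1.1.1.1"]}
]""")

# Map each interface to its assigned IPs; entries without IPAM (like the
# bare SR-IOV VF here) have no "ips" key and map to an empty list.
interfaces = {entry["interface"]: entry.get("ips", []) for entry in network_status}
print(interfaces)
```

In a script you would read the annotation from `oc get pod -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'` instead of a literal string.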
4.8.1.2. Creating a subinterface based on a bridge master interface in a container namespace

You can create a subinterface based on a bridge master interface that exists in the container network namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges.
Procedure
Create a dedicated namespace where you want to deploy your pod by entering the following command:

$ oc new-project test-namespace

Using the following YAML configuration example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "bridge-network",
    "type": "bridge",
    "bridge": "br-001",
    "isGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.0.0.0/24",
      "routes": [{"dst": "0.0.0.0/0"}]
    }
  }'

Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster:

$ oc apply -f bridge-nad.yaml

Verify that you successfully created the NetworkAttachmentDefinition CRD by entering the following command. The expected output shows the name of the NAD and its age:

$ oc get network-attachment-definitions
for the IPVLAN secondary network configuration:ipvlan-additional-network-configuration.yamlapiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "net1", "mode": "l3", "linkInContainer": true, "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]} }'where:
master- Specifies the ethernet interface to associate with the network attachment. The ethernet interface is subsequently configured in the pod networks annotation.
linkInContainer-
Specifies that the
masterinterface exists in the container network namespace.
Apply the YAML file by running the following command:
$ oc apply -f ipvlan-additional-network-configuration.yamlVerify that the
CRD has been created successfully by running the following command. The expected output shows the name of the NAD CRD and the creation age in minutes.NetworkAttachmentDefinition$ oc get network-attachment-definitionsUsing the following YAML configuration example, create a file named
for the pod definition:pod-a.yamlapiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "bridge-network", "interface": "net1"1 }, { "name": "ipvlan-net", "interface": "net2" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]where:
k8s.v1.cni.cncf.io/networks,interface-
Specifies the name to be used as the
masterfor the IPVLAN interface.
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml
Verification
Verify that the pod is running by using the following command:

$ oc get pod -n test-namespace

Example output:

NAME    READY   STATUS    RESTARTS   AGE
pod-a   1/1     Running   0          2m36s

Show network interface information about the pod-a resource within the test-namespace by running the following command:

$ oc exec -n test-namespace pod-a -- ip a

Example output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
    link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::488b:91ff:fe84:a94b/64 scope link
       valid_lft forever preferred_lft forever
4: net1@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 brd 10.0.0.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::bcda:bdff:fe7e:f437/64 scope link
       valid_lft forever preferred_lft forever
5: net2@net1: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global net2
       valid_lft forever preferred_lft forever
    inet6 fe80::beda:bd00:17e:f437/64 scope link
       valid_lft forever preferred_lft forever

This output shows that the network interface net2 is associated with the interface net1.
4.9. Removing an additional network

To clean up unused network configurations or free up network resources in OpenShift Container Platform, you can remove an additional network attachment. Delete the NetworkAttachmentDefinition custom resource to remove the attachment.
4.9.1. Removing a secondary NetworkAttachmentDefinition custom resource

To clean up unused network configurations or free up network resources in OpenShift Container Platform, you can remove a secondary network by deleting its NetworkAttachmentDefinition custom resource.
When a secondary network is removed from the cluster, it is not removed from any pods that it is attached to.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Edit the Cluster Network Operator (CNO) configuration in your default text editor by running the following command:

$ oc edit networks.operator.openshift.io cluster

Modify the custom resource (CR) by removing the configuration that the CNO created from the additionalNetworks collection for the secondary network that you want to remove:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks: []

where:
spec.additionalNetworks - Specifies the secondary network attachment definition that you want to remove from the additionalNetworks collection. If you are removing the configuration mapping for the only secondary network attachment definition in the additionalNetworks collection, you must specify an empty collection.

Save your changes and quit the text editor to commit your changes.

Remove the NetworkAttachmentDefinition CR from your cluster by entering the following command:

$ oc delete net-attach-def <name_of_network_attachment_definition>

Replace <name_of_network_attachment_definition> with the name of the NetworkAttachmentDefinition CR that you want to remove.
Optional: Confirm that the secondary network CR was deleted by running the following command:
$ oc get network-attachment-definition --all-namespaces
4.10. Enabling multi-networking for advanced use cases with CNI plugin chaining
You can use Container Network Interface (CNI) plugin chaining to enable advanced multi-networking use cases for your pods.
4.10.1. About CNI chaining
CNI plugin chaining allows pods to use multiple network interfaces. This enables advanced configurations such as traffic isolation and prioritized routing through granular traffic policies.
By using CNI plugin chaining, different types of traffic can be isolated to meet performance, security, and compliance requirements, providing greater flexibility in network design and traffic management.
Some scenarios where this might be useful include:
- Multi-Network topologies: Enables you to attach pods to multiple networks, each with its own traffic policy, where relevant.
- Traffic isolation: Provides separate networks for management, storage, and application traffic to ensure each has the appropriate security and QoS settings.
- Custom routing rules: Ensures that specific traffic, for example SIP traffic, always uses a designated network interface, while other traffic follows the default network.
- Enhanced network performance: Allows you to prioritize certain traffic types or manage congestion by directing them through dedicated network interfaces.
4.10.2. Configuring plugin chaining with the route-override CNI plugin
Plugin chaining allows you to configure multiple CNI plugins to be applied sequentially to the same network interface, where each plugin in the chain processes the interface in order.
When you define a NetworkAttachmentDefinition resource with a plugins list, each plugin in the list is applied in order to the same interface.

The route-override plugin modifies the routes of an interface and supports the following operations:

- addroutes: Add static routes to direct traffic for specific destination networks through the interface.
- delroutes: Remove specific routes from the interface.
- flushroutes: Remove all routes from the interface.
- flushgateway: Remove the default gateway route from the interface.
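The effect of these four operations on an interface's route table can be modeled with plain data structures. This is an illustrative sketch of the semantics only; the real plugin manipulates kernel routes, and the order in which it combines operations may differ:

```python
def apply_route_override(routes: list[dict], config: dict) -> list[dict]:
    """Model route-override semantics on a list of {'dst', 'gw'} routes."""
    result = list(routes)
    if config.get("flushroutes"):
        result = []                                        # drop every route
    if config.get("flushgateway"):
        result = [r for r in result if r["dst"] != "default"]  # drop default gw
    for r in config.get("delroutes", []):
        result = [x for x in result if x["dst"] != r["dst"]]   # targeted removal
    for r in config.get("addroutes", []):
        result.append(r)                                   # add static routes
    return result


table = [{"dst": "default", "gw": "192.168.100.1"}]
table = apply_route_override(
    table, {"addroutes": [{"dst": "10.0.0.0/8", "gw": "192.168.100.1"}]}
)
print(table)  # default route kept, 10.0.0.0/8 static route appended
```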
The following example demonstrates plugin chaining by configuring a pod with two additional network interfaces, each on a separate VLAN with custom routing:
- eth1 on the 192.168.100.0/24 network (VLAN 100), with a static route directing 10.0.0.0/8 traffic through this interface.
- eth2 on the 192.168.200.0/24 network (VLAN 200), with a static route directing 172.16.0.0/12 traffic through this interface.

Each interface uses a chain of two plugins: macvlan to create the interface and route-override to add the static routes.
Prerequisites
- Install the OpenShift CLI (oc).
- An account with cluster-admin privileges.
Procedure
Create a namespace for the example by running the following command:

$ oc create namespace chain-example

Create the first NetworkAttachmentDefinition (NAD) with a chained plugin configuration.

Create a YAML file, such as management.yaml, to define a NAD that configures a new interface, eth1, on VLAN 100 with the following configuration:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: management-net
  namespace: chain-example
spec:
  config: '{
    "cniVersion": "1.0.0",
    "name": "management-net",
    "plugins": [
      {
        "type": "macvlan",
        "master": "br-ex",
        "vlan": 100,
        "mode": "bridge",
        "ipam": {
          "type": "static",
          "addresses": [
            {
              "address": "192.168.100.10/24",
              "gateway": "192.168.100.1"
            }
          ]
        }
      },
      {
        "type": "route-override",
        "addroutes": [
          {
            "dst": "10.0.0.0/8",
            "gw": "192.168.100.1"
          }
        ]
      }
    ]
  }'
Create the NAD by running the following command:

$ oc apply -f management.yaml

Create the second NAD with a chained plugin configuration.

Create a YAML file, such as sip.yaml, to define a NAD that configures a new interface, eth2, on VLAN 200 with the following configuration:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sip-net
  namespace: chain-example
spec:
  config: '{
    "cniVersion": "1.0.0",
    "name": "sip-net",
    "plugins": [
      {
        "type": "macvlan",
        "master": "br-ex",
        "vlan": 200,
        "mode": "bridge",
        "ipam": {
          "type": "static",
          "addresses": [
            {
              "address": "192.168.200.10/24",
              "gateway": "192.168.200.1"
            }
          ]
        }
      },
      {
        "type": "route-override",
        "addroutes": [
          {
            "dst": "172.16.0.0/12",
            "gw": "192.168.200.1"
          }
        ]
      }
    ]
  }'
Create the NAD by running the following command:

$ oc apply -f sip.yaml

Attach the NetworkAttachmentDefinition resources to a pod by creating a pod definition file, such as pod.yaml, with the following configuration:

apiVersion: v1
kind: Pod
metadata:
  name: chain-test-pod
  namespace: chain-example
  labels:
    app: chain-test
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "management-net",
        "interface": "eth1"
      },
      {
        "name": "sip-net",
        "interface": "eth2"
      }
    ]'
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test-container
    image: registry.access.redhat.com/ubi9/ubi:latest
    command: ["sleep", "infinity"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]

Create the pod by running the following command:

$ oc apply -f pod.yaml

Verify that the pod is running with the following command:

$ oc wait --for=condition=Ready pod/chain-test-pod -n chain-example --timeout=120s

Example output:

pod/chain-test-pod condition met
Verification
Run the following command to list all network interfaces and their assigned IP addresses inside the pod. This verifies that the pod has the additional interfaces configured by plugin chaining:
$ oc exec chain-test-pod -n chain-example -- ip a

Example output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP
    link/ether 0a:58:0a:83:02:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.131.2.25/23 brd 10.131.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe83:219/64 scope link
       valid_lft forever preferred_lft forever
3: eth1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP qlen 1000
    link/ether aa:25:73:ff:a7:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.100.10/24 brd 192.168.100.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a825:73ff:feff:a700/64 scope link
       valid_lft forever preferred_lft forever
4: eth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP qlen 1000
    link/ether aa:a4:6c:4e:e8:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.200.10/24 brd 192.168.200.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a8a4:6cff:fe4e:e897/64 scope link
       valid_lft forever preferred_lft forever

This output shows that the pod has three network interfaces:
- eth0: The default interface, connected to the cluster network.
- eth1: The first additional interface from management-net, with IP 192.168.100.10.
- eth2: The second additional interface from sip-net, with IP 192.168.200.10.
Run the following command to verify that the route-override plugin added the expected static routes:

$ oc exec chain-test-pod -n chain-example -- ip route

Example output:

default via 10.132.0.1 dev eth0
10.0.0.0/8 via 192.168.100.1 dev eth1
10.132.0.0/23 dev eth0 proto kernel scope link src 10.132.1.97
10.132.0.0/14 via 10.132.0.1 dev eth0
100.64.0.0/16 via 10.132.0.1 dev eth0
169.254.0.5 via 10.132.0.1 dev eth0
172.16.0.0/12 via 192.168.200.1 dev eth2
172.30.0.0/16 via 10.132.0.1 dev eth0
192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.10
192.168.200.0/24 dev eth2 proto kernel scope link src 192.168.200.10
plugin in each chain added the expected static routes:route-override-
For , traffic destined for
10.0.0.0/8 via 192.168.100.1 dev eth1is routed through10.0.0.0/8via theeth1gateway. This route was added by themanagement-netplugin in theroute-overridechain.management-net -
For , traffic destined for
172.16.0.0/12 via 192.168.200.1 dev eth2is routed through172.16.0.0/12via theeth2gateway. This route was added by thesip-netplugin in theroute-overridechain.sip-net -
The connected subnet routes (and
192.168.100.0/24) were created by the192.168.200.0/24plugin, while the default route usesmacvlan, the cluster network interface.eth0
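Longest-prefix matching explains why traffic splits across the interfaces this way: a more specific route always wins over the default. The following sketch models the lookup for the static routes in the example (the kernel performs the real lookup; the function below is illustrative):

```python
import ipaddress

# The static and default routes from the example output above.
ROUTES = [
    ("default", "eth0"),
    ("10.0.0.0/8", "eth1"),
    ("172.16.0.0/12", "eth2"),
    ("192.168.100.0/24", "eth1"),
    ("192.168.200.0/24", "eth2"),
]


def egress_interface(ip: str) -> str:
    """Return the interface chosen by longest-prefix match for `ip`."""
    addr = ipaddress.ip_address(ip)
    best_dev, best_len = "eth0", -1  # the default route catches everything else
    for dst, dev in ROUTES:
        if dst == "default":
            continue
        net = ipaddress.ip_network(dst)
        if addr in net and net.prefixlen > best_len:
            best_dev, best_len = dev, net.prefixlen
    return best_dev


print(egress_interface("10.1.2.3"))    # → eth1 (matches 10.0.0.0/8)
print(egress_interface("172.20.0.9"))  # → eth2 (matches 172.16.0.0/12)
print(egress_interface("8.8.8.8"))     # → eth0 (falls through to default)
```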
Chapter 5. Virtual routing and forwarding
5.1. About virtual routing and forwarding
You can use virtual routing and forwarding (VRF) to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways.
VRF reduces the number of permissions needed by a cloud-native network function (CNF) and provides increased visibility of the network topology of secondary networks. VRF devices combined with IP address rules provide the ability to create virtual routing and forwarding domains.
Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher priority IP address rules, such as policy-based routing, to take precedence over the VRF device rules directing specific traffic.
5.1.1. Benefits of secondary networks for pods for telecommunications operators
You can connect network functions to different customers' infrastructure by using the same IP address with the Container Network Interface (CNI) virtual routing and forwarding (VRF) plugin. Using the CNI VRF plugin keeps different customers isolated.
In telecommunications use cases, each CNF can potentially be connected to many different networks sharing the same address space. These secondary networks can potentially conflict with the cluster’s main network CIDR.
With the CNI VRF plugin, IP addresses can overlap with the OpenShift Container Platform IP address space without conflict. The CNI VRF plugin also reduces the number of permissions needed by the CNF and increases the visibility of the network topologies of secondary networks.
Chapter 6. Assigning a secondary network to a VRF
As a cluster administrator, you can configure a secondary network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify.
Using a secondary network with a VRF instance has the following advantages:
- Workload isolation: Isolate workload traffic by configuring a VRF instance for the secondary network.
- Improved security: Enable improved security through isolated network paths in the VRF domain.
- Multi-tenancy support: Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant.
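For orientation, the plugin chaining that enables these advantages can also be expressed directly in a NetworkAttachmentDefinition. The following is an illustrative sketch only: the namespace, interface name, VRF name, and address are placeholder values, not part of the procedure in this chapter.

```yaml
# Illustrative sketch: a NetworkAttachmentDefinition that chains the vrf
# plugin after a macvlan base plugin. All names and addresses below are
# placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-vrf-example
  namespace: example-namespace
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "macvlan-vrf-example",
    "plugins": [
      {
        "type": "macvlan",
        "master": "eth1",
        "ipam": {
          "type": "static",
          "addresses": [ { "address": "10.10.10.10/24" } ]
        }
      },
      {
        "type": "vrf",
        "vrfname": "vrf-blue",
        "table": 1001
      }
    ]
  }'
```

The vrf plugin runs second in the chain, so it moves the interface that macvlan created into the named VRF.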
Applications that use VRFs must bind to a specific device. The common usage is the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to the device that is specified in the passed interface name, for example, eth1. To use SO_BINDTODEVICE, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods; instead, bind applications directly to the VRF interface.
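The SO_BINDTODEVICE pattern described above can be sketched in Python as follows. This is a minimal sketch: the interface name net1 is a placeholder, and binding only succeeds when the process has CAP_NET_RAW, so the sketch reports the error class otherwise.

```python
import socket

# Linux value of SO_BINDTODEVICE; fall back to 25 if this Python build
# does not expose the constant.
SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)

def bind_socket_to_device(ifname: str) -> str:
    """Attempt to bind a UDP socket to a network device.

    Returns "bound" on success, or the exception class name when the
    process lacks CAP_NET_RAW or the device does not exist.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # SO_BINDTODEVICE takes the interface name as bytes.
        sock.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, ifname.encode())
        return "bound"
    except OSError as exc:
        return type(exc).__name__
    finally:
        sock.close()

# "net1" is a placeholder for the VRF-enslaved interface inside the pod.
print(bind_socket_to_device("net1"))
```

Inside a pod with CAP_NET_RAW, packets sent through the bound socket use the routing table of the VRF that controls the interface.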
6.1. Creating a secondary network attachment with the CNI VRF plugin
The Cluster Network Operator (CNO) manages secondary network definitions. When you specify a secondary network in the cluster-scoped Network custom resource (CR), the CNO creates the NetworkAttachmentDefinition CR automatically.

Note: Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
Procedure
Create the Network CR for the additional network attachment and insert the rawCNIConfig configuration for the secondary network, as in the following example. Save the file as additional-network-attachment.yaml.

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: test-network-1
    namespace: additional-network-1
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "191.168.1.23/24"
              }
            ]
          }
        },
        {
          "type": "vrf",
          "vrfname": "vrf-1",
          "table": 1001
        }
      ]
    }'

where:
- plugins: You must specify a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration.
- type: You must set this parameter to vrf.
- vrfname: The name of the VRF that the interface is assigned to. If the VRF does not exist in the pod, the CNI creates it.
- table: Optional parameter. Specify the routing table ID. If you do not specify a table ID, the CNI assigns a free routing table ID to the VRF.

Note: VRF functions correctly only when the resource is of type netdevice.
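Because rawCNIConfig is a JSON document embedded as a string, a quoting mistake surfaces only when the CNO processes the CR. A quick, hypothetical sanity check with any JSON parser can catch such mistakes before you apply the Network CR; the following sketch checks the plugin ordering described above.

```python
import json

# The rawCNIConfig value from the Network CR, reproduced as a string.
raw_cni_config = '''{
  "cniVersion": "0.3.1",
  "name": "macvlan-vrf",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "191.168.1.23/24" } ]
      }
    },
    {
      "type": "vrf",
      "vrfname": "vrf-1",
      "table": 1001
    }
  ]
}'''

# json.loads fails loudly on malformed quoting, which is the point of
# the check.
config = json.loads(raw_cni_config)

# The first plugin must be the secondary network underpinning the VRF;
# the second must be the vrf plugin configuration.
plugins = config["plugins"]
assert plugins[0]["type"] == "macvlan"
assert plugins[1]["type"] == "vrf"
assert plugins[1]["vrfname"] == "vrf-1"
print("rawCNIConfig parses; vrf plugin is second in the chain")
```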
Create the Network resource:

$ oc create -f additional-network-attachment.yaml

Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1. The expected output shows the name of the NetworkAttachmentDefinition CR and the creation age in minutes.

$ oc get network-attachment-definitions -n <namespace>

Note: A delay might exist before the CNO creates the CR.
Verification
Create a pod and assign the pod to the secondary network that includes the VRF plugin configuration.
Create a YAML file that defines the Pod resource, as demonstrated in the following pod-additional-net.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-additional-net
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "test-network-1"
      }
    ]'
spec:
  containers:
  - name: example-pod-1
    command: ["/bin/bash", "-c", "sleep 9000000"]
    image: centos:8

where:
- name: Specify the name of the secondary network that includes the VRF plugin configuration.
Create the Pod resource by running the following command. The expected output confirms that the pod was created.

$ oc create -f pod-additional-net.yaml
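The k8s.v1.cni.cncf.io/networks annotation shown above is itself a JSON array embedded as a string, so generating it programmatically avoids quoting mistakes. A minimal sketch, assuming a hypothetical helper function name:

```python
import json

def networks_annotation(*network_names: str) -> str:
    """Build the k8s.v1.cni.cncf.io/networks annotation value: a JSON
    array with one {"name": ...} object per secondary network."""
    return json.dumps([{"name": name} for name in network_names])

# One entry naming the secondary network that carries the VRF plugin
# configuration.
print(networks_annotation("test-network-1"))
```

The same helper accepts several network names when a pod attaches to more than one secondary network.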
Verify that the pod network attachment connects to the VRF secondary network. Start a remote session with the pod and run the following command. The expected output shows the name of the VRF interface and its unique ID in the routing table.

$ ip vrf show

Confirm that the VRF interface is the controller for the secondary interface by entering the following command:

$ ip link

Example output (truncated):

5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
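In the ip link output, the word after master names the controlling device, which should be the VRF interface. A small sketch that extracts it from a line of output, using the truncated example line above:

```python
import re
from typing import Optional

# Sample line in the format printed by `ip link`, taken from the
# truncated verification output above.
sample = ("5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 "
          "qdisc noqueue master red state UP mode")

def controller_of(ip_link_line: str) -> Optional[str]:
    """Return the device name that follows `master` in an `ip link`
    line, or None if the interface has no controller."""
    match = re.search(r"\bmaster (\S+)", ip_link_line)
    return match.group(1) if match else None

print(controller_of(sample))
```

For the sample line, the controller is the VRF device named red; an interface outside any VRF has no master field, so the function returns None.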
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.