Chapter 13. Multiple networks
13.1. Understanding multiple networks
In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI).
OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.
13.1.1. Usage scenarios for an additional network
You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:
- Performance: You can send traffic on two different planes to manage how much traffic is on each plane.
- Security: You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an `eth0` interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the `oc exec -it <pod_name> -- ip a` command. If you add additional network interfaces that use Multus CNI, they are named `net1`, `net2`, …, `netN`.
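For example, for a pod attached to one additional network, the output of that command might resemble the following sketch. The interface names, addresses, and index numbers here are illustrative only:

```
$ oc exec -it example-pod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
3: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP
    inet 10.128.2.14/23 brd 10.128.3.255 scope global eth0
4: net1@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    inet 192.168.10.10/24 brd 192.168.10.255 scope global net1
```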
To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a `NetworkAttachmentDefinition` custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created.
13.1.2. Additional networks in OpenShift Container Platform
OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster:
- bridge: Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host.
- host-device: Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system.
- ipvlan: Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface.
- macvlan: Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address.
- SR-IOV: Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system.
13.2. Configuring an additional network
As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: bridge, host device, IPVLAN, and MACVLAN.
13.2.1. Approaches to managing an additional network
You can manage the life cycle of an additional network by using one of two approaches. The approaches are mutually exclusive, and you can use only one approach for managing an additional network at a time. With either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure.
For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment.
- Modifying the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the `NetworkAttachmentDefinition` object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address.
- Applying a YAML manifest: You can manage the additional network directly by creating a `NetworkAttachmentDefinition` object. This approach allows for the chaining of CNI plugins.
13.2.2. Configuration for an additional network attachment
An additional network is configured by using the `NetworkAttachmentDefinition` API in the `k8s.cni.cncf.io` API group. The configuration for the API is described in the following table:
Field | Type | Description
---|---|---
`metadata.name` | `string` | The name for the additional network.
`metadata.namespace` | `string` | The namespace that the object is associated with.
`spec.config` | `string` | The CNI plugin configuration in JSON format.
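For illustration only, a minimal `NetworkAttachmentDefinition` object that sets these three fields might look like the following sketch; the name, namespace, and bridge configuration are placeholder values:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-net        # name of the additional network
  namespace: example-ns    # namespace the object is associated with
spec:
  config: |-               # CNI plugin configuration in JSON format
    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "type": "bridge",
      "ipam": { "type": "dhcp" }
    }
```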
13.2.2.1. Configuration of an additional network through the Cluster Network Operator
The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration.
The following YAML describes the configuration parameters for managing an additional network with the CNO:
Cluster Network Operator configuration
```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks: 1
  - name: <name> 2
    namespace: <namespace> 3
    rawCNIConfig: |- 4
      {
        ...
      }
    type: Raw
```
1. An array of one or more additional network configurations.
2. The name for the additional network attachment that you are creating. The name must be unique within the specified `namespace`.
3. The namespace to create the network attachment in. If you do not specify a value, then the `default` namespace is used.
4. A CNI plugin configuration in JSON format.
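As a concrete sketch of these parameters, the following example defines a single bridge-based additional network; the name and namespace values are illustrative:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: work-network          # illustrative name
    namespace: project1         # illustrative namespace
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "work-network",
        "type": "bridge",
        "ipam": { "type": "dhcp" }
      }
```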
13.2.2.2. Configuration of an additional network from a YAML manifest
The configuration for an additional network is specified in a YAML configuration file, such as in the following example:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name> 1
spec:
  config: |- 2
    {
      ...
    }
```

1. The name for the additional network attachment that you are creating.
2. A CNI plugin configuration in JSON format.
13.2.3. Configurations for additional network types
The specific configuration fields for additional networks are described in the following sections.
13.2.3.1. Configuration for a bridge additional network
The following object describes the configuration parameters for the bridge CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `bridge`.
`bridge` | `string` | Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is `cni0`.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
`ipMasq` | `boolean` | Set to `true` to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is `false`.
`isGateway` | `boolean` | Set to `true` to assign an IP address to the bridge. The default value is `false`.
`isDefaultGateway` | `boolean` | Set to `true` to configure the bridge as the default gateway for the virtual network. The default value is `false`. If `isDefaultGateway` is set to `true`, then `isGateway` is also set to `true` automatically.
`forceAddress` | `boolean` | Set to `true` to allow assignment of a previously assigned IP address to the virtual bridge. When set to `false`, if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is `false`.
`hairpinMode` | `boolean` | Set to `true` to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is `false`.
`promiscMode` | `boolean` | Set to `true` to enable promiscuous mode on the bridge. The default value is `false`.
`vlan` | `integer` | Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.
`mtu` | `integer` | Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
13.2.3.1.1. bridge configuration example
The following example configures an additional network named `bridge-net`:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
  }
}
```
13.2.3.2. Configuration for a host device additional network
Specify your network device by setting only one of the following parameters: `device`, `hwaddr`, `kernelpath`, or `pciBusID`.
The following object describes the configuration parameters for the host-device CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `host-device`.
`device` | `string` | Optional: The name of the device, such as `eth0`.
`hwaddr` | `string` | Optional: The device hardware MAC address.
`kernelpath` | `string` | Optional: The Linux kernel device path, such as `/sys/devices/pci0000:00/0000:00:1f.6`.
`pciBusID` | `string` | Optional: The PCI address of the network device, such as `0000:00:1f.6`.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
13.2.3.2.1. host-device configuration example
The following example configures an additional network named `hostdev-net`:

```json
{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1",
  "ipam": {
    "type": "dhcp"
  }
}
```
13.2.3.3. Configuration for an IPVLAN additional network
The following object describes the configuration parameters for the IPVLAN CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `ipvlan`.
`mode` | `string` | The operating mode for the virtual network. The value must be `l2`, `l3`, or `l3s`. The default value is `l2`.
`master` | `string` | The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default network route is used.
`mtu` | `integer` | Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. Do not specify `dhcp`.
13.2.3.3.1. ipvlan configuration example
The following example configures an additional network named `ipvlan-net`:

```json
{
  "cniVersion": "0.3.1",
  "name": "ipvlan-net",
  "type": "ipvlan",
  "master": "eth1",
  "mode": "l3",
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.10.10/24"
      }
    ]
  }
}
```
13.2.3.4. Configuration for a MACVLAN additional network
The following object describes the configuration parameters for the macvlan CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `macvlan`.
`mode` | `string` | Configures traffic visibility on the virtual network. Must be either `bridge`, `passthru`, `private`, or `vepa`. If a value is not provided, the default value is `bridge`.
`master` | `string` | The Ethernet, bonded, or VLAN interface to associate with the virtual interface. If a value is not specified, then the host system's primary Ethernet interface is used.
`mtu` | `integer` | Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
13.2.3.4.1. macvlan configuration example
The following example configures an additional network named `macvlan-net`:

```json
{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
  }
}
```
13.2.4. Configuration of IP address assignment for an additional network
The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins.
You can use the following IP address assignment types:
- Static assignment.
- Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network.
- Dynamic assignment through the Whereabouts IPAM CNI plugin.
13.2.4.1. Static IP address assignment configuration
The following table describes the configuration for static IP address assignment:
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `static` is required.
`addresses` | `array` | An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
`routes` | `array` | An array of objects specifying routes to configure inside the pod.
`dns` | `array` | Optional: An array of objects specifying the DNS configuration.
The `addresses` array requires objects with the following fields:

Field | Type | Description
---|---|---
`address` | `string` | An IP address and network prefix that you specify. For example, if you specify `10.10.21.10/24`, then the additional network is assigned an IP address of `10.10.21.10` and the netmask is `255.255.255.0`.
`gateway` | `string` | The default gateway to route egress network traffic to.
The `routes` array requires objects with the following fields:

Field | Type | Description
---|---|---
`dst` | `string` | The IP address range in CIDR format, such as `192.168.17.0/24` or `0.0.0.0/0` for the default route.
`gw` | `string` | The gateway where network traffic is routed.
The optional `dns` configuration requires objects with the following fields:

Field | Type | Description
---|---|---
`nameservers` | `array` | An array of one or more IP addresses to send DNS queries to.
`domain` | `string` | The default domain to append to a hostname. For example, if the domain is set to `example.com`, a DNS lookup query for `example-host` is rewritten as `example-host.example.com`.
`search` | `array` | An array of domain names to append to an unqualified hostname, such as `example-host`, during a DNS lookup query.
Static IP address assignment configuration example
{ "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } }
13.2.4.2. Dynamic IP address (DHCP) assignment configuration
The following JSON describes the configuration for dynamic IP address assignment with DHCP.
A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:
Example shim network attachment definition
```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }
# ...
```
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `dhcp` is required.
Dynamic IP address (DHCP) assignment configuration example
{ "ipam": { "type": "dhcp" } }
13.2.4.3. Dynamic IP address assignment configuration with Whereabouts
The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.
The following table describes the configuration for dynamic IP address assignment with Whereabouts:
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `whereabouts` is required.
`range` | `string` | An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses.
`exclude` | `array` | Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned.
Dynamic IP address assignment configuration example that uses Whereabouts
{ "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } }
13.2.5. Creating an additional network attachment with the Cluster Network Operator
The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the `NetworkAttachmentDefinition` object automatically.

Do not edit the `NetworkAttachmentDefinition` objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
To edit the CNO configuration, enter the following command:
$ oc edit networks.operator.openshift.io cluster
Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.
```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks:
  - name: tertiary-net
    namespace: project2
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "tertiary-net",
        "type": "ipvlan",
        "master": "eth1",
        "mode": "l2",
        "ipam": {
          "type": "static",
          "addresses": [
            {
              "address": "192.168.1.23/24"
            }
          ]
        }
      }
```
- Save your changes and quit the text editor to commit your changes.
Verification
Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object.
$ oc get network-attachment-definitions -n <namespace>
where:
<namespace>
- Specifies the namespace for the network attachment that you added to the CNO configuration.
Example output
```
NAME           AGE
tertiary-net   14m
```
13.2.6. Creating an additional network attachment by applying a YAML manifest
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
Create a YAML file with your additional network configuration, such as in the following example:
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: next-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "work-network",
      "type": "host-device",
      "device": "eth1",
      "ipam": {
        "type": "dhcp"
      }
    }
```
To create the additional network, enter the following command:
$ oc apply -f <file>.yaml
where:
<file>
- Specifies the name of the file containing the YAML manifest.
13.3. About virtual routing and forwarding
13.3.1. About virtual routing and forwarding
Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by a CNF, and it increases the visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways.
Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it affects only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules, such as policy-based routing, to take precedence over the VRF device rules directing specific traffic.
13.3.1.1. Benefits of secondary networks for pods for telecommunications operators
In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure by using the same IP address, keeping different customers isolated. IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by a CNF and increases the visibility of network topologies of secondary networks.
13.4. Configuring multi-network policy
As a cluster administrator, you can configure network policy for additional networks.
You can specify multi-network policy for only macvlan additional networks. Other types of additional networks, such as ipvlan, are not supported.
13.4.1. Differences between multi-network policy and network policy
Although the `MultiNetworkPolicy` API implements the `NetworkPolicy` API, there are several important differences:

- You must use the `MultiNetworkPolicy` API:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
```

- You must use the `multi-networkpolicy` resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the `oc get multi-networkpolicy <name>` command, where `<name>` is the name of a multi-network policy.

- You must specify an annotation with the name of the network attachment definition that defines the macvlan additional network:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
```

where:

<network_name>
- Specifies the name of a network attachment definition.
13.4.2. Enabling multi-network policy for the cluster
As a cluster administrator, you can enable multi-network policy support on your cluster.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster with a user with `cluster-admin` privileges.
Procedure
Create the `multinetwork-enable-patch.yaml` file with the following YAML:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  useMultiNetworkPolicy: true
```
Configure the cluster to enable multi-network policy:
$ oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml
Example output
network.operator.openshift.io/cluster patched
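Optional: You can confirm that the field is set by querying it directly, for example:

$ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.useMultiNetworkPolicy}'

The expected output is `true`.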
13.4.3. Working with multi-network policy
As a cluster administrator, you can create, edit, view, and delete multi-network policies.
13.4.3.1. Prerequisites
- You have enabled multi-network policy support for your cluster.
13.4.3.2. Creating a multi-network policy
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy.
Prerequisites
- Your cluster uses a cluster network provider that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with `mode: NetworkPolicy` set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace that the multi-network policy applies to.
Procedure
Create a policy rule:
Create a `<policy_name>.yaml` file:

$ touch <policy_name>.yaml

where:

<policy_name>
- Specifies the multi-network policy file name.
Define a multi-network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces
```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
  ingress: []
```

where:
<network_name>
- Specifies the name of a network attachment definition.
Allow ingress from all pods in the same namespace
```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
```

where:
<network_name>
- Specifies the name of a network attachment definition.
To create the multi-network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name>
- Specifies the multi-network policy file name.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created
13.4.3.3. Editing a multi-network policy
You can edit a multi-network policy in a namespace.
Prerequisites
- Your cluster uses a cluster network provider that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with `mode: NetworkPolicy` set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
Optional: To list the multi-network policy objects in a namespace, enter the following command:
$ oc get multi-networkpolicy -n <namespace>
where:
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Edit the multi-network policy object.
If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command.
$ oc apply -n <namespace> -f <policy_file>.yaml
where:
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
<policy_file>
- Specifies the name of the file containing the network policy.
If you need to update the multi-network policy object directly, enter the following command:
$ oc edit multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>
- Specifies the name of the network policy.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Confirm that the multi-network policy object is updated.
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>
- Specifies the name of the multi-network policy.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
13.4.3.4. Viewing multi-network policies
You can examine the multi-network policies in a namespace.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
List multi-network policies in a namespace:
To view multi-network policy objects defined in a namespace, enter the following command:
$ oc get multi-networkpolicy
Optional: To examine a specific multi-network policy, enter the following command:
$ oc describe multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>
- Specifies the name of the multi-network policy to inspect.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
13.4.3.5. Deleting a multi-network policy
You can delete a multi-network policy in a namespace.
Prerequisites
- Your cluster uses a cluster network provider that supports `NetworkPolicy` objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with `mode: NetworkPolicy` set. This mode is the default for OpenShift SDN.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster with a user with `cluster-admin` privileges.
- You are working in the namespace where the multi-network policy exists.
Procedure
To delete a multi-network policy object, enter the following command:
$ oc delete multi-networkpolicy <policy_name> -n <namespace>
where:
<policy_name>
- Specifies the name of the multi-network policy.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted
13.5. Attaching a pod to an additional network
As a cluster user, you can attach a pod to an additional network.
13.5.1. Adding a pod to an additional network
You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network.
When a pod is created, additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it.
The pod must be in the same namespace as the additional network.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster.
Procedure
Add an annotation to the `Pod` object. Only one of the following annotation formats can be used:

To attach an additional network without any customization, add an annotation with the following format. Replace `<network>` with the name of the additional network to associate with the pod:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1
```

1. To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma-separated entries. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network.
To attach an additional network with customizations, add an annotation with the following format:

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "<network>", 1
          "namespace": "<namespace>", 2
          "default-route": ["<default-route>"] 3
        }
      ]
```

1. Specify the name of the additional network defined by a `NetworkAttachmentDefinition` object.
2. Specify the namespace where the `NetworkAttachmentDefinition` object is defined.
3. Optional: Specify an override for the default route, such as `192.168.17.1`.
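For example, the following minimal `Pod` manifest attaches the pod to one additional network by using the simple annotation form. It is a sketch that assumes a `NetworkAttachmentDefinition` named `net1` exists in the pod's namespace; the container image and command follow the example used later in this chapter:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: net1  # assumes a NetworkAttachmentDefinition named net1
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools
```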
To create the pod, enter the following command. Replace `<name>` with the name of the pod.

$ oc create -f <name>.yaml

Optional: To confirm that the annotation exists in the `Pod` CR, enter the following command, replacing `<name>` with the name of the pod.

$ oc get pod <name> -o yaml
In the following example, the `example-pod` pod is attached to the `net1` additional network:

```
$ oc get pod example-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-bridge
    k8s.v1.cni.cncf.io/networks-status: |- 1
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.128.2.14"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "macvlan-bridge",
          "interface": "net1",
          "ips": [
              "20.2.2.100"
          ],
          "mac": "22:2f:60:a5:f8:00",
          "dns": {}
      }]
  name: example-pod
  namespace: default
spec:
  ...
status:
  ...
```
1. The `k8s.v1.cni.cncf.io/networks-status` parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value.
13.5.1.1. Specifying pod-specific addressing and routing options
When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations.
Prerequisites
- The pod must be in the same namespace as the additional network.
- Install the OpenShift CLI (`oc`).
- You must log in to the cluster.
Procedure
To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps:
Edit the `Pod` resource definition. If you are editing an existing `Pod` resource, run the following command to edit its definition in the default editor. Replace `<name>` with the name of the `Pod` resource to edit.

$ oc edit pod <name>
In the `Pod` resource definition, add the `k8s.v1.cni.cncf.io/networks` parameter to the pod `metadata` mapping. The `k8s.v1.cni.cncf.io/networks` parameter accepts a JSON string of a list of objects that reference the names of `NetworkAttachmentDefinition` custom resources (CRs), in addition to specifying additional properties.

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1
```
1. Replace `<network>` with a JSON object as shown in the following examples. The single quotes are required.
In the following example, the annotation specifies which network attachment will have the default route, using the `default-route` parameter.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "net1"
      },
      {
        "name": "net2", 1
        "default-route": ["192.0.2.1"] 2
      }
    ]'
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools
```
1. The `name` key is the name of the additional network to associate with the pod.
2. The `default-route` key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one `default-route` key is specified, this will cause the pod to fail to become active.
The default route will cause any traffic that is not specified in other routes to be routed to the gateway.
Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface.
To verify the routing properties of a pod, you can use the `oc` command to execute the `ip` command within a pod.

$ oc exec -it <pod_name> -- ip route
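For a pod where the `net2` attachment holds the default route, the output might resemble the following sketch; all addresses and interface names are illustrative:

```
default via 192.0.2.1 dev net2
10.128.2.0/23 dev eth0 proto kernel scope link src 10.128.2.14
192.0.2.0/24 dev net2 proto kernel scope link src 192.0.2.10
```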
You may also reference the pod's `k8s.v1.cni.cncf.io/networks-status` annotation to see which additional network has been assigned the default route, by the presence of the `default-route` key in the JSON-formatted list of objects.
To set a static IP address or MAC address for a pod, you can use the JSON formatted annotations. This requires that you create networks that specifically allow for this functionality. This can be specified in a `rawCNIConfig` for the CNO.
Edit the CNO CR by running the following command:
$ oc edit networks.operator.openshift.io cluster
The following YAML describes the configuration parameters for the CNO:
Cluster Network Operator YAML configuration
```yaml
name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw
```
1. Specify a name for the additional network attachment that you are creating. The name must be unique within the specified `namespace`.
2. Specify the namespace to create the network attachment in. If you do not specify a value, then the `default` namespace is used.
3. Specify the CNI plugin configuration in JSON format, which is based on the following template.
The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin:
macvlan CNI plugin JSON configuration object using static IP and MAC address
{ "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] }
1. Specifies the name for the additional network attachment to create. The name must be unique within the specified `namespace`.
2. Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.
3. Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.
4. Specifies the interface that the macvlan plugin uses.
5. Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.
The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod.
Edit the pod with:
$ oc edit pod <name>
Example pod annotation that uses a static IP and MAC address

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>", 1
        "ips": [ "192.0.2.205/24" ], 2
        "mac": "CA:FE:C0:FF:EE:00" 3
      }
    ]'
```

1. Use the `<name>` that you provided when creating the `rawCNIConfig` for the network attachment.
2. Provide an IP address, including the network prefix.
3. Provide the MAC address.
Static IP addresses and MAC addresses do not have to be used at the same time; you can use them individually or together.
To verify the IP address and MAC properties of a pod with additional networks, use the `oc` command to execute the `ip` command within a pod.

$ oc exec -it <pod_name> -- ip a
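For the example above, the portion of the output for the `net1` interface might resemble the following illustrative lines, showing the requested IP and MAC address:

```
3: net1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether ca:fe:c0:ff:ee:00 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.205/24 brd 192.0.2.255 scope global net1
```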
13.6. Removing a pod from an additional network
As a cluster user, you can remove a pod from an additional network.
13.6.1. Removing a pod from an additional network
You can remove a pod from an additional network only by deleting the pod.
Prerequisites
- An additional network is attached to the pod.
- Install the OpenShift CLI (`oc`).
- Log in to the cluster.
Procedure
To delete the pod, enter the following command:
$ oc delete pod <name> -n <namespace>
where:

- <name> is the name of the pod.
- <namespace> is the namespace that contains the pod.
13.7. Editing an additional network
As a cluster administrator, you can modify the configuration for an existing additional network.
13.7.1. Modifying an additional network attachment definition
As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated.
Prerequisites
- You have configured an additional network for your cluster.
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
To edit an additional network for your cluster, complete the following steps:
Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor:
$ oc edit networks.operator.openshift.io cluster
- In the `additionalNetworks` collection, update the additional network with your changes.
- Save your changes and quit the text editor to commit your changes.
Optional: Confirm that the CNO updated the `NetworkAttachmentDefinition` object by running the following command. Replace `<network-name>` with the name of the additional network to display. There might be a delay before the CNO updates the `NetworkAttachmentDefinition` object to reflect your changes.

$ oc get network-attachment-definitions <network-name> -o yaml
For example, the following console output displays a `NetworkAttachmentDefinition` object that is named `net1`:

```
$ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
{ "cniVersion": "0.3.1", "type": "macvlan",
"master": "ens5",
"mode": "bridge",
"ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} }
```
13.8. Removing an additional network
As a cluster administrator, you can remove an additional network attachment.
13.8.1. Removing an additional network attachment definition
As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
To remove an additional network from your cluster, complete the following steps:
Edit the Cluster Network Operator (CNO) in your default text editor by running the following command:
$ oc edit networks.operator.openshift.io cluster
Modify the CR by removing the configuration from the `additionalNetworks` collection for the network attachment definition you are removing.

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks: [] 1
```
1. If you are removing the configuration mapping for the only additional network attachment definition in the `additionalNetworks` collection, you must specify an empty collection.
- Save your changes and quit the text editor to commit your changes.
Optional: Confirm that the additional network CR was deleted by running the following command:
$ oc get network-attachment-definition --all-namespaces
13.9. Assigning a secondary network to a VRF
The CNI VRF plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
13.9.1. Assigning a secondary network to a VRF
As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plugin. The virtual network created by this plugin is associated with a physical interface that you specify.
Applications that use VRFs need to bind to a specific device. The common usage is to use the `SO_BINDTODEVICE` option for a socket. `SO_BINDTODEVICE` binds the socket to a device that is specified in the passed interface name, for example, `eth1`. To use `SO_BINDTODEVICE`, the application must have `CAP_NET_RAW` capabilities.
13.9.1.1. Creating an additional network attachment with the CNI VRF plugin
The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the `NetworkAttachmentDefinition` custom resource (CR) automatically.

Do not edit the `NetworkAttachmentDefinition` CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.
To create an additional network attachment with the CNI VRF plugin, perform the following procedure.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in to the OpenShift cluster as a user with cluster-admin privileges.
Procedure
Create the `Network` custom resource (CR) for the additional network attachment and insert the `rawCNIConfig` configuration for the additional network, as in the following example CR. Save the YAML as the file `additional-network-attachment.yaml`.

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: test-network-1
    namespace: additional-network-1
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [ 1
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "192.168.1.23/24"
              }
            ]
          }
        },
        {
          "type": "vrf", 2
          "vrfname": "example-vrf-name", 3
          "table": 1001 4
        }
      ]
    }'
```
1. `plugins` must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration.
2. `type` must be set to `vrf`.
3. `vrfname` is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created.
4. Optional: `table` is the routing table ID. By default, the `tableid` parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF.
Note: VRF functions correctly only when the resource is of type `netdevice`.

Create the `Network` resource:

$ oc create -f additional-network-attachment.yaml
Confirm that the CNO created the `NetworkAttachmentDefinition` CR by running the following command. Replace `<namespace>` with the namespace that you specified when configuring the network attachment, for example, `additional-network-1`.

$ oc get network-attachment-definitions -n <namespace>
Example output

```
NAME             AGE
test-network-1   14m
```

Note: There might be a delay before the CNO creates the CR.
Verifying that the additional VRF network attachment is successful
To verify that the VRF CNI is correctly configured and the additional network attachment is attached, do the following:
- Create a network that uses the VRF CNI.
- Assign the network to a pod.
Verify that the pod network attachment is connected to the VRF additional network. Remote shell into the pod and run the following command:
$ ip vrf show
Example output

```
Name              Table
-----------------------
red                 10
```
Confirm that the VRF interface is the master of the secondary interface:
$ ip link
Example output
5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
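As an optional extra check, you can list the routes installed in the VRF routing table; the VRF name `red` follows the example output above:

```
$ ip route show vrf red
```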