Chapter 9. Networking
9.1. Networking overview
OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with Red Hat OpenShift Service on AWS networking and its ecosystem.
OpenShift Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
Deploying OpenShift Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible.
Figure 9.1. OpenShift Virtualization networking overview
- Pods and VMs run on the same network infrastructure, which allows you to easily connect your containerized and virtualized workloads.
- You can connect VMs to the default pod network and to any number of secondary networks.
- The default pod network provides connectivity between all its members, service abstraction, IP management, micro-segmentation, and other functionality.
- Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.
- The default pod network is overlay-based, tunneled through the underlying machine network.
- The machine network can be defined over a selected set of network interface controllers (NICs).
- Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks.

Note: Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS, Microsoft Azure Red Hat OpenShift, Google Cloud, or Oracle® Cloud Infrastructure (OCI). Connecting VMs to user-defined networks with the layer2 topology is recommended on public clouds.

- Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 9.1, or they can use the machine network.
9.1.1. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- UserDefinedNetwork (UDN)
- A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces.
- ClusterUserDefinedNetwork (CUDN)
- A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces.
- Node network configuration policy (NNCP)
- A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
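The following is a minimal sketch of an NNCP manifest that creates a Linux bridge on top of an existing NIC; the policy, bridge, and NIC names are illustrative assumptions:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy # illustrative policy name
spec:
  desiredState:
    interfaces:
      - name: br1 # bridge to create on each node
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1 # existing NIC to attach to the bridge (assumption)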
9.1.2. Using the default pod network
- Connecting a virtual machine to the default pod network
- Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
- Exposing a virtual machine as a service
- You can expose a VM within the cluster or outside the cluster by creating a Service object.
9.1.3. Configuring a primary user-defined network
- Connecting a virtual machine to a primary user-defined network
- You can connect a virtual machine (VM) to a user-defined network (UDN) on the primary interface of the VM. The primary UDN replaces the default pod network to connect pods and VMs in selected namespaces. Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer2 network across multiple namespaces. User-defined networks with the layer2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration.
9.1.4. Configuring VM secondary network interfaces
You can connect a virtual machine to a secondary network by using an OVN-Kubernetes Container Network Interface (CNI) plugin. It is not required to specify the primary pod network in the VM specification when connecting to a secondary network interface.
- Connecting a virtual machine to an OVN-Kubernetes secondary network
- You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
  - Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD).
  - Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
 
- Hot plugging secondary network interfaces
- You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the OVN-Kubernetes layer2 topology.
- Configuring and viewing IP addresses
- You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
9.1.5. Integrating with OpenShift Service Mesh
- Connecting a virtual machine to a service mesh
- OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
9.1.6. Managing MAC address pools
- Managing MAC address pools for network interfaces
- The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
9.1.7. Configuring SSH access
- Configuring SSH access to virtual machines
- You can configure SSH access to VMs by using the following methods:
  - You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
  - You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
  - You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
  - You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
 
9.2. Connecting a virtual machine to the default pod network
				You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
			
Traffic passing through network interfaces to the default pod network is interrupted during live migration.
9.2.1. Configuring masquerade mode from the CLI
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses.
Procedure
- Edit the interfaces spec of your virtual machine configuration file:
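The following example shows the relevant part of the manifest; it is a minimal sketch based on the callouts below, and the interface name default is illustrative:

kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} # 1
              ports:
                - port: 80   # 2
      networks:
        - name: default
          pod: {}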
1. Connect using masquerade mode.
2. Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.

Note: Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
- Create the virtual machine:

  $ oc create -f <vm-name>.yaml
9.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
					The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
				
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The Red Hat OpenShift Service on AWS cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
- You have installed the OpenShift CLI (oc).
Procedure
- In a new virtual machine configuration, include an interface with masquerade mode and configure the IPv6 address and default gateway by using cloud-init:
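The following manifest is a sketch reconstructed from the callouts below; the VM name, interface name, and cloud-init volume layout are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} # 1
              ports:
                - port: 80   # 2
      networks:
        - name: default
          pod: {}
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  dhcp4: true
                  addresses: [ fd10:0:2::2/120 ] # 3
                  gateway6: fd10:0:2::1          # 4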
1. Connect using masquerade mode.
2. Allows incoming traffic on port 80 to the virtual machine.
3. The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
4. The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
- Create the virtual machine in the namespace:

  $ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"

9.2.3. About jumbo frames support
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
- libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
- DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value.
9.3. Connecting a virtual machine to a primary user-defined network
You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM’s primary interface by using the Red Hat OpenShift Service on AWS web console or the CLI. The primary user-defined network replaces the default pod network in your specified namespace. Unlike the pod network, you can define the primary UDN per project, where each project can use its specific subnet and topology.
				OpenShift Virtualization supports the namespace-scoped UserDefinedNetwork and the cluster-scoped ClusterUserDefinedNetwork custom resource definitions (CRD).
			
				Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN network across multiple namespaces.
			
					You must add the k8s.ovn.org/primary-user-defined-network label when you create a namespace that is to be used with user-defined networks.
				
With the layer 2 topology, OVN-Kubernetes creates an overlay network between nodes. You can use this overlay network to connect VMs on different nodes without having to configure any additional physical networking infrastructure.
The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT) because persistent IP addresses are preserved across cluster nodes during live migration.
You must consider the following limitations before implementing a primary UDN:
- You cannot use the virtctl ssh command to configure SSH access to a VM.
- You cannot use the oc port-forward command to forward ports to a VM.
- You cannot use headless services to access a VM.
9.3.1. Creating a primary user-defined network by using the web console
					You can use the Red Hat OpenShift Service on AWS web console to create a primary namespace-scoped UserDefinedNetwork or a cluster-scoped ClusterUserDefinedNetwork CRD. The UDN serves as the default primary network for pods and VMs that you create in namespaces associated with the network.
				
9.3.1.1. Creating a namespace for user-defined networks by using the web console
You can create a namespace to be used with primary user-defined networks (UDNs) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- Log in to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Administration → Namespaces.
- Click Create Namespace.
- In the Name field, specify a name for the namespace. The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character.
- In the Labels field, add the k8s.ovn.org/primary-user-defined-network label.
- Optional: If the namespace is to be used with an existing cluster-scoped UDN, add the appropriate labels as defined in the spec.namespaceSelector field in the ClusterUserDefinedNetwork custom resource.
- Optional: Specify a default network policy.
- Click Create to create the namespace.
9.3.1.2. Creating a primary namespace-scoped user-defined network by using the web console
						You can create an isolated primary network in your project namespace by creating a UserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console.
					
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. For more information, see "Creating a namespace for user-defined networks by using the web console".
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- Click Create UserDefinedNetwork.
- From the Project name list, select the namespace that you previously created.
- Specify a value in the Subnet field.
- Click Create. The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace.
9.3.1.3. Creating a primary cluster-scoped user-defined network by using the web console
						You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a ClusterUserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console.
					
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- From the Create list, select ClusterUserDefinedNetwork.
- In the Name field, specify a name for the cluster-scoped UDN.
- Specify a value in the Subnet field.
- In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.
- Click Create. The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in the Project(s) Match Labels field.
9.3.2. Creating a primary user-defined network by using the CLI
					You can create a primary UserDefinedNetwork or ClusterUserDefinedNetwork CRD by using the CLI.
				
9.3.2.1. Creating a namespace for user-defined networks by using the CLI
						You can create a namespace to be used with primary user-defined networks (UDNs) by using the OpenShift CLI (oc).
					
Prerequisites
- You have access to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a Namespace object as a YAML file similar to the following example:
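A minimal sketch; the namespace name is illustrative, and the label key is taken from the callout below with an empty value:

apiVersion: v1
kind: Namespace
metadata:
  name: udn-namespace # illustrative name
  labels:
    k8s.ovn.org/primary-user-defined-network: "" # 1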
1. This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the spec.namespaceSelector field of the ClusterUserDefinedNetwork custom resource.
- Apply the Namespace manifest by running the following command:

  $ oc apply -f <filename>.yaml
9.3.2.2. Creating a primary namespace-scoped user-defined network by using the CLI
You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.
Procedure
- Create a UserDefinedNetwork object to specify the custom network configuration:
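Example UserDefinedNetwork manifest; this is a sketch based on the callouts below, and the resource name, namespace, and subnet are illustrative:

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-l2-network # 1
  namespace: my-namespace # 2
spec:
  topology: Layer2 # 3
  layer2:
    role: Primary # 4
    subnets:
      - "10.0.0.0/24"
    ipam:
      lifecycle: Persistent # 5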
1. Specifies the name of the UserDefinedNetwork custom resource.
2. Specifies the namespace in which the VM is located. The namespace must have the k8s.ovn.org/primary-user-defined-network label. The namespace must not be default, an openshift-* namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO).
3. Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
4. Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
5. Specifies that virtual workloads have consistent IP addresses across reboots and migration. The spec.layer2.subnets field is required when ipam.lifecycle: Persistent is specified.
- Apply the UserDefinedNetwork manifest by running the following command:

  $ oc apply --validate=true -f <filename>.yaml
9.3.2.3. Creating a primary cluster-scoped user-defined network by using the CLI
You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a ClusterUserDefinedNetwork object to specify the custom network configuration:
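Example ClusterUserDefinedNetwork manifest; this is a sketch based on the callouts below, and the resource name, namespace labels, and subnet are illustrative:

apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-l2-network # 1
spec:
  namespaceSelector: # 2
    matchExpressions: # 3
      - key: kubernetes.io/metadata.name
        operator: In # 4
        values: ["red-namespace", "blue-namespace"]
  network:
    topology: Layer2 # 5
    layer2:
      role: Primary # 6
      subnets:
        - "10.100.0.0/16"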
1. Specifies the name of the ClusterUserDefinedNetwork custom resource.
2. Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to default, an openshift-* namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO).
3. Specifies the type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red-namespace or blue-namespace.
4. Specifies the type of operator. Possible values are In, NotIn, and Exists.
5. Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
6. Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
 
- Apply the ClusterUserDefinedNetwork manifest by running the following command:

  $ oc apply --validate=true -f <filename>.yaml
9.3.3. Attaching a virtual machine to the primary user-defined network
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by requesting the pod network attachment and configuring the interface binding.
OpenShift Virtualization supports the following network binding plugins to connect the network interface to the VM:
- Layer 2 bridge
- The Layer 2 bridge binding creates a direct Layer 2 connection between the VM’s virtual interface and the virtual switch of the UDN.
- Passt
- The Plug a Simple Socket Transport (passt) binding provides a user-space networking solution that integrates seamlessly with the pod network, providing better integration with the Red Hat OpenShift Service on AWS networking ecosystem. Passt binding has the following benefits:
  - You can define readiness and liveness HTTP probes to configure VM health checks.
  - You can use Red Hat Advanced Cluster Security to monitor TCP traffic within the cluster with detailed insights.
Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
9.3.3.1. Attaching a virtual machine to the primary user-defined network by using the web console
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the Red Hat OpenShift Service on AWS web console. VMs that are created in a namespace where the primary UDN is configured are automatically attached to the UDN with the Layer 2 bridge network binding plugin.
To attach a VM to the primary UDN by using the Plug a Simple Socket Transport (passt) binding, enable the plugin and configure the VM network interface in the web console.
Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You are logged in to the Red Hat OpenShift Service on AWS web console.
Procedure
- Follow these steps to enable the passt network binding plugin Technology Preview feature:
  - From the Virtualization perspective, click Overview.
  - On the Virtualization page, click the Settings tab.
  - Click Preview features and set Enable Passt binding for primary user-defined networks to on.
- From the Virtualization perspective, click VirtualMachines.
- Select a VM to open the VirtualMachine details page.
- Click the Configuration tab.
- Click Network.
- Click the Options menu on the Network interfaces page and select Edit.
- In the Edit network interface dialog, select the default pod network attachment from the Network list.
- Expand Advanced and then select the Passt binding.
- Click Save.
- If your VM is running, restart it for the changes to take effect.
9.3.3.2. Attaching a virtual machine to the primary user-defined network by using the CLI
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
- Edit the VirtualMachine manifest to add the UDN interface details, as in the following example:
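Example VirtualMachine manifest; this is a sketch based on the callouts below, and the VM, namespace, and interface names are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: my-namespace # 1
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: udn-network # 2
              binding:
                name: l2bridge # 3
      networks:
        - name: udn-network # 4
          pod: {}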
1. The namespace in which the VM is located. This value must match the namespace in which the UDN is defined.
2. The name of the user-defined network interface.
3. The name of the binding plugin that is used to connect the interface to the VM. The possible values are l2bridge and passt. The default value is l2bridge.
4. The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
- Optional: If you are using the Plug a Simple Socket Transport (passt) network binding plugin, set the hco.kubevirt.io/deployPasstNetworkBinding annotation to true in the HyperConverged custom resource (CR) by running the following command:

  $ oc annotate hco kubevirt-hyperconverged -n kubevirt-hyperconverged hco.kubevirt.io/deployPasstNetworkBinding=true --overwrite

  Important: Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

  For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- Apply the VirtualMachine manifest by running the following command:

  $ oc apply -f <filename>.yaml
9.4. Exposing a virtual machine by using a service
				You can expose a virtual machine within the cluster or outside the cluster by creating a Service object.
			
9.4.1. About services
					A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world.
				
- ClusterIP
- Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
- NodePort
- Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
						For Red Hat OpenShift Service on AWS, you must use externalTrafficPolicy: Cluster when configuring a load-balancing service, to minimize the network downtime during live migration.
					
9.4.2. Dual-stack support
					If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.
				
					The spec.ipFamilyPolicy field can be set to one of the following values:
				
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
- This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
					You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:
				
- [IPv4]
- [IPv6]
- [IPv4, IPv6]
- [IPv6, IPv4]
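For example, the following Service spec fragment (a minimal sketch) requests dual-stack addresses with IPv4 listed first:

spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6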
9.4.3. Creating a service by using the CLI
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
- You have installed the OpenShift CLI (oc).
Procedure
- Edit the VirtualMachine manifest to add the label for service creation:
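A sketch of the relevant part of the manifest; the VM name and namespace are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    metadata:
      labels:
        special: key # 1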
1. Add special: key to the spec.template.metadata.labels stanza.

Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
- Create a Service manifest to expose the VM:
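A sketch of a matching Service manifest; the service name, type, and ports are illustrative assumptions, and the selector matches the special: key label added above:

apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  selector:
    special: key
  type: NodePort       # illustrative service type
  ports:
    - protocol: TCP
      port: 80         # port exposed by the service
      targetPort: 9376 # port the VM workload listens on (assumption)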
- Save the Service manifest file.
- Create the service by running the following command:

  $ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
- Query the Service object to verify that it is available:

  $ oc get service -n example-namespace
9.5. Connecting a virtual machine to an OVN-Kubernetes layer 2 secondary network
				You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes.
			
				A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
			
To configure an OVN-Kubernetes layer2 secondary network and attach a VM to that network, perform the following steps:

1. Configure an OVN-Kubernetes layer2 secondary network by creating a network attachment definition (NAD).
2. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
			
9.5.1. Creating an OVN-Kubernetes layer 2 NAD
You can create an OVN-Kubernetes network attachment definition (NAD) for the layer 2 network topology by using the Red Hat OpenShift Service on AWS web console or the CLI.
						Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported.
					
9.5.1.1. Creating a NAD for layer 2 topology by using the CLI
You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a NetworkAttachmentDefinition object:
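The following manifest is a sketch reconstructed from the callouts below; the network name and namespace are illustrative, and the mtu value is shown only as an example of the optional field:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "mtu": 1300,
      "netAttachDefName": "my-namespace/l2-network"
    }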
1. cniVersion: The Container Network Interface (CNI) specification version. The required value is 0.3.1.
2. name: The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces.
3. type: The name of the CNI plugin. The required value is ovn-k8s-cni-overlay.
4. topology: The topological configuration for the network. The required value is layer2.
5. mtu: Optional: The maximum transmission unit (MTU) value. If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation), and the byte capacity of any enabled features, such as IPsec.
6. netAttachDefName: The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.

Note: The previous example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address.
- Apply the manifest by running the following command:

  $ oc apply -f <filename>.yaml
9.5.1.2. Creating a NAD for layer 2 topology by using the web console
You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
Procedure
- Go to Networking → NetworkAttachmentDefinitions in the web console.
- Click Create Network Attachment Definition. The network attachment definition must be in the same namespace as the pod or virtual machine using it.
- Enter a unique Name and optional Description.
- Select OVN Kubernetes L2 overlay network from the Network Type list.
- Click Create.
9.5.2. Attaching a virtual machine to the OVN-Kubernetes layer 2 secondary network
You can attach a virtual machine (VM) to the OVN-Kubernetes layer 2 secondary network interface by using the Red Hat OpenShift Service on AWS web console or the CLI.
9.5.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
- Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example:
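A sketch based on the callouts below; the VM name, interface name, and NAD name are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-server
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: '' # 4
      domain:
        devices:
          interfaces:
            - name: secondary # 1
              bridge: {}
      networks:
        - name: secondary # 2
          multus:
            networkName: l2-network # 3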
1. The name of the OVN-Kubernetes secondary interface.
2. The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
3. The name of the NetworkAttachmentDefinition object.
4. Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: ''.
- Apply the VirtualMachine manifest:

  $ oc apply -f <filename>.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
9.6. Hot plugging secondary network interfaces
You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver.
9.6.1. VirtIO limitations
Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces.
The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation.
If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces.
9.6.2. Hot plugging a secondary network interface by using the CLI
Hot plug a secondary network interface to a virtual machine (VM) while the VM is running.
Prerequisites
- A network attachment definition is configured in the same namespace as your VM.
- The VM to which you want to hot plug the network interface is running.
- You have installed the OpenShift CLI (oc).
Procedure
- Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example:
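Example VM configuration; this is a sketch in which the VM name, interface names, and NAD name are illustrative, with the hot plugged interface shown alongside an existing masquerade interface:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: defaultnetwork
              masquerade: {}
            # new interface to hot plug
            - name: dyniface1
              bridge: {}
      networks:
        - name: defaultnetwork
          pod: {}
        # new network referencing a NAD in the same namespace
        - name: dyniface1
          multus:
            networkName: bridge-network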
- Save your changes and exit the editor.
- For the new configuration to take effect, apply the changes by running the following command. Applying the changes triggers automatic VM live migration and attaches the network interface to the running VM.

  $ oc apply -f <filename>.yaml

  where <filename> specifies the name of your VirtualMachine manifest YAML file.
Verification
- Verify that the VM live migration is successful by using the following command:

  $ oc get VirtualMachineInstanceMigration -w
- Verify that the new interface is added to the VM by checking the status of the virtual machine instance (VMI):

  $ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }"

  The hot plugged interface appears in the VMI status.
9.6.3. Hot unplugging a secondary network interface by using the CLI
You can remove a secondary network interface from a running virtual machine (VM).
Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces.
Prerequisites
- Your VM must be running.
- The VM must be created on a cluster running OpenShift Virtualization 4.14 or later.
- The VM must have a bridge network interface attached.
- You have installed the OpenShift CLI (oc).
Procedure
- Using your preferred text editor, edit the VirtualMachine manifest file and set the interface state to absent. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod.
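Example VM configuration; this sketch carries over the illustrative names from the hot plug example above:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: defaultnetwork
              masquerade: {}
            - name: dyniface1
              bridge: {}
              state: absent # 1
      networks:
        - name: defaultnetwork
          pod: {}
        - name: dyniface1
          multus:
            networkName: bridge-network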
1. Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface.
- Save your changes and exit the editor.
- For the new configuration to take effect, apply the changes by running the following command. Applying the changes triggers automatic VM live migration and removes the interface from the pod.

  $ oc apply -f <filename>.yaml

  where <filename> specifies the name of your VirtualMachine manifest YAML file.
9.7. Connecting a virtual machine to a service mesh
OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
9.7.1. Adding a virtual machine to a service mesh
					To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true. Then expose your VM as a service to view your application in the mesh.
				
To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed the Service Mesh Operator.
Procedure
- Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation:
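Example configuration file; this is a sketch in which the VM name and labels are illustrative and match the Service example that follows:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-istio
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        app: vm-istio
      annotations:
        sidecar.istio.io/inject: "true" # enables automatic sidecar injection
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}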
- Apply the VM configuration:

  $ oc apply -f <vm_name>.yaml

  where <vm_name>.yaml is the name of the virtual machine YAML file.
- Create a Service object to expose your VM to the service mesh:
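A sketch of the Service manifest; the port number is taken from the callout below:

apiVersion: v1
kind: Service
metadata:
  name: vm-istio
spec:
  selector:
    app: vm-istio # 1
  ports:
    - port: 8080
      name: http
      protocol: TCP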
1. The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.template.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
- Create the service:

  $ oc create -f <service_name>.yaml

  where <service_name>.yaml is the name of the service YAML file.
9.8. Configuring a dedicated network for live migration
You can configure a dedicated secondary network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
9.8.1. Configuring a dedicated secondary network for live migration
					To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
				
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
- Create a NetworkAttachmentDefinition manifest according to the following example:
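Example configuration file; this is a sketch reconstructed from the callouts below, and the NAD name, NIC name, IPAM plugin, and IP range are illustrative assumptions; substitute values for your environment:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network # 1
  namespace: openshift-cnv
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-bridge",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.200.5.0/24"
      }
    }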
1. name: Specify the name of the NetworkAttachmentDefinition object.
2. master: Specify the name of the NIC to be used for live migration.
3. ipam.type: Specify the name of the CNI plugin that provides the network for the NAD.
4. ipam.range: Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
- Open the HyperConverged CR in your default editor by running the following command:

  $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
- Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:
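Example HyperConverged manifest; this sketch shows only the relevant stanza, and the other liveMigrationConfig values are shown for context and may differ in your cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    network: my-secondary-network # 1
    completionTimeoutPerGiB: 800
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150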
1. Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
- When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata:

  $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
9.8.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
9.9. Configuring and viewing IP addresses
You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
9.9.1. Configuring IP addresses for virtual machines
You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line.
You can configure a dynamic IP address when you create a VM by using the command line.
The IP address is provisioned with cloud-init.
9.9.1.1. Configuring an IP address when creating a virtual machine by using the CLI
You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
- You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.
Procedure
- Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:
  - To configure a dynamic IP address, specify the interface name and enable DHCP:
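A sketch of the relevant stanza; the interface name and volume name are illustrative:

kind: VirtualMachine
spec:
  template:
    spec:
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1: # 1
                  dhcp4: true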
1. Specify the interface name.
  - To configure a static IP, specify the interface name and the IP address:
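A sketch under the same assumptions, with an illustrative address:

kind: VirtualMachine
spec:
  template:
    spec:
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1:
                  addresses:
                    - 10.10.10.14/24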
 
9.9.2. Viewing IP addresses of virtual machines
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line.
The network information is collected by the QEMU guest agent.
9.9.2.1. Viewing the IP address of a virtual machine by using the web console
You can view the IP address of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Select a VM to open the VirtualMachine details page.
- Click the Details tab to view the IP address.
9.9.2.2. Viewing the IP address of a virtual machine by using the CLI
You can view the IP address of a virtual machine (VM) by using the command line.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
- Obtain the virtual machine instance configuration by running the following command:

  $ oc describe vmi <vmi_name>
9.10. Managing MAC address pools for network interfaces
The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address.
A virtual machine instance created from that VM retains the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
9.10.1. Managing KubeMacPool by using the CLI
You can disable and re-enable KubeMacPool by using the command line.
KubeMacPool is enabled by default.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
- To disable KubeMacPool in two namespaces, run the following command:

  $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore
- To re-enable KubeMacPool in two namespaces, run the following command:

  $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
