Chapter 11. Networking
11.1. Networking overview
OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OpenShift Container Platform networking and its ecosystem.
OpenShift Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
Deploying OpenShift Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible.
Figure 11.1. OpenShift Virtualization networking overview
Pods and VMs run on the same network infrastructure, which allows you to easily connect your containerized and virtualized workloads.
You can connect VMs to the default pod network and to any number of secondary networks.
The default pod network provides connectivity between all its members, service abstraction, IP management, micro segmentation, and other functionality.
Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.
The default pod network is overlay-based, tunneled through the underlying machine network.
The machine network can be defined over a selected set of network interface controllers (NICs).
Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks.
Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS, Azure for OpenShift Container Platform, Google Cloud, or Oracle® Cloud Infrastructure (OCI).
Connecting VMs to user-defined networks with the layer2 topology is recommended on public clouds.
Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 11.1, or they can use the machine network.
11.1.1. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- UserDefinedNetwork (UDN)
- A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces.
- ClusterUserDefinedNetwork (CUDN)
- A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces.
- Node network configuration policy (NNCP)
- A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
11.1.2. Using the default pod network
- Connecting a virtual machine to the default pod network
- Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
- Exposing a virtual machine as a service
- You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OpenShift Container Platform web console or the CLI.
11.1.3. Configuring a primary user-defined network
- Connecting a virtual machine to a primary user-defined network
You can connect a virtual machine (VM) to a user-defined network (UDN) on the primary interface of the VM. The primary UDN replaces the default pod network to connect pods and VMs in selected namespaces.
Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer2 network across multiple namespaces.
User-defined networks with the layer2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration.
11.1.4. Configuring VM secondary network interfaces
You can connect a virtual machine to a secondary network by using the Linux bridge, SR-IOV, or OVN-Kubernetes CNI plugins. You can list multiple secondary networks and interfaces in the VM specification. You do not need to specify the primary pod network in the VM specification when connecting to a secondary network interface.
- Connecting a virtual machine to an OVN-Kubernetes secondary network
You can connect a VM to an OVN-Kubernetes secondary network. OpenShift Virtualization supports the layer2 and localnet topologies for OVN-Kubernetes. The localnet topology is the recommended way of exposing VMs to the underlying physical network, with or without VLAN encapsulation.
- A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
- A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
Choose the appropriate option based on your OVN-Kubernetes network topology:
- Configure an OVN-Kubernetes layer 2 secondary network by creating a network attachment definition (NAD).
- Configure an OVN-Kubernetes localnet secondary network by creating a ClusterUserDefinedNetwork (CUDN) CR.
Choose the appropriate option based on your OVN-Kubernetes network topology:
- Connect the VM to the OVN-Kubernetes layer 2 secondary network by adding the network details to the VM specification.
- Connect the VM to the OVN-Kubernetes localnet secondary network by adding the network details to the VM specification.
- Connecting a virtual machine to an SR-IOV network
You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency.
You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments.
You can connect a VM to an SR-IOV network by performing the following steps:
- Configure an SR-IOV network device by creating a SriovNetworkNodePolicy CRD.
- Configure an SR-IOV network by creating an SriovNetwork object.
- Connect the VM to the SR-IOV network by including the network details in the VM configuration.
- Connecting a virtual machine to a Linux bridge network
Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bonding for your secondary networks. The OVN-Kubernetes localnet topology is the recommended way of connecting a VM to the underlying physical network, but OpenShift Virtualization also supports Linux bridge networks.
Note: You cannot directly attach to the default machine network when using Linux bridge networks.
You can create a Linux bridge network and attach a VM to the network by performing the following steps:
- Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy custom resource definition (CRD).
- Configure a Linux bridge network by creating a NetworkAttachmentDefinition CRD.
- Connect the VM to the Linux bridge network by including the network details in the VM configuration.
- Hot plugging secondary network interfaces
- You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OpenShift Virtualization also supports hot plugging secondary interfaces that use the SR-IOV binding.
- Using DPDK with SR-IOV
- The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks.
- Configuring a dedicated network for live migration
- You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
- Accessing a virtual machine by using the cluster FQDN
- You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN).
- Configuring and viewing IP addresses
- You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent.
11.1.4.1. Comparing Linux bridge CNI and OVN-Kubernetes localnet topology
The following table compares the features that are available when using the Linux bridge CNI with those available when using the localnet topology of the OVN-Kubernetes CNI plugin:
| Feature | Available on Linux bridge CNI | Available on OVN-Kubernetes localnet |
|---|---|---|
| Layer 2 access to the underlay native network | Only on secondary network interface controllers (NICs) | Yes |
| Layer 2 access to underlay VLANs | Yes | Yes |
| Network policies | No | Yes |
| Managed IP pools | No | Yes |
| MAC spoof filtering | Yes | Yes |
11.1.5. Integrating with OpenShift Service Mesh
- Connecting a virtual machine to a service mesh
- OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
11.1.6. Managing MAC address pools
- Managing MAC address pools for network interfaces
- The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
11.1.7. Configuring SSH access
- Configuring SSH access to virtual machines
You can configure SSH access to VMs by using the following methods:
- You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime, or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
- You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
- You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
- You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
11.2. Connecting a virtual machine to the default pod network
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
Traffic passing through network interfaces to the default pod network is interrupted during live migration.
11.2.1. Configuring masquerade mode from the CLI
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses.
Procedure
Edit the interfaces spec of your virtual machine configuration file:
1. Connect using masquerade mode.
2. Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
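A minimal sketch of the relevant stanzas of such a manifest follows. The VM name is an illustrative value, and the numbered comments map to the callouts above:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}   # 1: connect by using masquerade mode
              ports:
                - port: 80     # 2: expose port 80 to incoming traffic
      networks:
        - name: default
          pod: {}              # the interface is backed by the default pod network
# ...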
NotePorts 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
11.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
- You have installed the OpenShift CLI (oc).
Procedure
In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.
1. Connect using masquerade mode.
2. Allows incoming traffic on port 80 to the virtual machine.
3. The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
4. The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
11.2.3. About jumbo frames support
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
- libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
- DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value.
11.3. Connecting a virtual machine to a primary user-defined network
You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM’s primary interface by using the OpenShift Container Platform web console or the CLI. The primary user-defined network replaces the default pod network in your specified namespace. Unlike the pod network, you can define the primary UDN per project, where each project can use its specific subnet and topology.
OpenShift Virtualization supports the namespace-scoped UserDefinedNetwork and the cluster-scoped ClusterUserDefinedNetwork custom resource definitions (CRD).
Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN network across multiple namespaces.
You must add the k8s.ovn.org/primary-user-defined-network label when you create a namespace that is to be used with user-defined networks.
With the layer 2 topology, OVN-Kubernetes creates an overlay network between nodes. You can use this overlay network to connect VMs on different nodes without having to configure any additional physical networking infrastructure.
The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT) because persistent IP addresses are preserved across cluster nodes during live migration.
You must consider the following limitations before implementing a primary UDN:
- You cannot use the virtctl ssh command to configure SSH access to a VM.
- You cannot use the oc port-forward command to forward ports to a VM.
- You cannot use headless services to access a VM.
11.3.1. Creating a primary user-defined network by using the web console
You can use the OpenShift Container Platform web console to create a primary namespace-scoped UserDefinedNetwork or a cluster-scoped ClusterUserDefinedNetwork CRD. The UDN serves as the default primary network for pods and VMs that you create in namespaces associated with the network.
11.3.1.1. Creating a namespace for user-defined networks by using the web console
You can create a namespace to be used with primary user-defined networks (UDNs) by using the OpenShift Container Platform web console.
Prerequisites
- Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Administration → Namespaces.
- Click Create Namespace.
- In the Name field, specify a name for the namespace. The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character.
- In the Labels field, add the k8s.ovn.org/primary-user-defined-network label.
- Optional: If the namespace is to be used with an existing cluster-scoped UDN, add the appropriate labels as defined in the spec.namespaceSelector field in the ClusterUserDefinedNetwork custom resource.
- Optional: Specify a default network policy.
- Click Create to create the namespace.
11.3.1.2. Creating a primary namespace-scoped user-defined network by using the web console
You can create an isolated primary network in your project namespace by creating a UserDefinedNetwork custom resource in the OpenShift Container Platform web console.
Prerequisites
- You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions.
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. For more information, see "Creating a namespace for user-defined networks by using the web console".
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- Click Create UserDefinedNetwork.
- From the Project name list, select the namespace that you previously created.
- Specify a value in the Subnet field.
- Click Create. The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace.
11.3.1.3. Creating a primary cluster-scoped user-defined network by using the web console
You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a ClusterUserDefinedNetwork custom resource in the OpenShift Container Platform web console.
Prerequisites
- You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- From the Create list, select ClusterUserDefinedNetwork.
- In the Name field, specify a name for the cluster-scoped UDN.
- Specify a value in the Subnet field.
- In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.
- Click Create. The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in the Project(s) Match Labels field.
11.3.2. Creating a primary user-defined network by using the CLI
You can create a primary UserDefinedNetwork or ClusterUserDefinedNetwork CRD by using the CLI.
11.3.2.1. Creating a namespace for user-defined networks by using the CLI
You can create a namespace to be used with primary user-defined networks (UDNs) by using the OpenShift CLI (oc).
Prerequisites
- You have access to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
Procedure
Create a Namespace object as a YAML file similar to the following example:
1. This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the spec.namespaceSelector field of the ClusterUserDefinedNetwork custom resource.
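A sketch of such a Namespace manifest; the namespace name is an illustrative value:
apiVersion: v1
kind: Namespace
metadata:
  name: udn-namespace
  labels:
    k8s.ovn.org/primary-user-defined-network: ""   # 1: required UDN label
# ...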
Apply the Namespace manifest by running the following command:
$ oc apply -f <filename>.yaml
11.3.2.2. Creating a primary namespace-scoped user-defined network by using the CLI
You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.
Procedure
Create a UserDefinedNetwork object to specify the custom network configuration.
Example UserDefinedNetwork manifest:
1. Specifies the name of the UserDefinedNetwork custom resource.
2. Specifies the namespace in which the VM is located. The namespace must have the k8s.ovn.org/primary-user-defined-network label. The namespace must not be default, an openshift-* namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO).
3. Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
4. Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
5. Specifies that virtual workloads have consistent IP addresses across reboots and migration. The spec.layer2.subnets field is required when ipam.lifecycle: Persistent is specified.
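A sketch of such a manifest; the resource name, namespace, and subnet are placeholder values, and the numbered comments map to the callouts above:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-l2-network          # 1
  namespace: my-namespace       # 2
spec:
  topology: Layer2              # 3
  layer2:
    role: Primary               # 4
    subnets:
      - "10.100.0.0/16"
    ipam:
      lifecycle: Persistent     # 5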
Apply the UserDefinedNetwork manifest by running the following command:
$ oc apply --validate=true -f <filename>.yaml
11.3.2.3. Creating a primary cluster-scoped user-defined network by using the CLI
You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a ClusterUserDefinedNetwork object to specify the custom network configuration.
Example ClusterUserDefinedNetwork manifest:
1. Specifies the name of the ClusterUserDefinedNetwork custom resource.
2. Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to default, an openshift-* namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO).
3. Specifies the type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red-namespace or blue-namespace.
4. Specifies the type of operator. Possible values are In, NotIn, and Exists.
5. Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
6. Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
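A sketch of such a manifest; the resource name and subnet are placeholder values, and the numbered comments map to the callouts above:
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-l2-network                            # 1
spec:
  namespaceSelector:                               # 2
    matchExpressions:                              # 3
      - key: kubernetes.io/metadata.name
        operator: In                               # 4
        values: ["red-namespace", "blue-namespace"]
  network:
    topology: Layer2                               # 5
    layer2:
      role: Primary                                # 6
      subnets:
        - "10.100.0.0/16"
      ipam:
        lifecycle: Persistent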
Apply the ClusterUserDefinedNetwork manifest by running the following command:
$ oc apply --validate=true -f <filename>.yaml
11.3.3. Attaching a virtual machine to the primary user-defined network
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by requesting the pod network attachment and configuring the interface binding.
OpenShift Virtualization supports the following network binding plugins to connect the network interface to the VM:
- Layer 2 bridge
- The Layer 2 bridge binding creates a direct Layer 2 connection between the VM’s virtual interface and the virtual switch of the UDN.
- Passt
The Plug a Simple Socket Transport (passt) binding provides a user-space networking solution that integrates seamlessly with the pod network, providing better integration with the OpenShift Container Platform networking ecosystem.
Passt binding has the following benefits:
- You can define readiness and liveness HTTP probes to configure VM health checks.
- You can use Red Hat Advanced Cluster Security to monitor TCP traffic within the cluster with detailed insights.
Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
11.3.3.1. Attaching a virtual machine to the primary user-defined network by using the web console
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the OpenShift Container Platform web console. VMs that are created in a namespace where the primary UDN is configured are automatically attached to the UDN with the Layer 2 bridge network binding plugin.
To attach a VM to the primary UDN by using the Plug a Simple Socket Transport (passt) binding, enable the plugin and configure the VM network interface in the web console.
Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You are logged in to the OpenShift Container Platform web console.
Procedure
Follow these steps to enable the passt network binding plugin Technology Preview feature:
- From the Virtualization perspective, click Overview.
- On the Virtualization page, click the Settings tab.
- Click Preview features and set Enable Passt binding for primary user-defined networks to on.
- From the Virtualization perspective, click VirtualMachines.
- Select a VM to open the VirtualMachine details page.
- Click the Configuration tab.
- Click Network.
- Click the Options menu on the Network interfaces page and select Edit.
- In the Edit network interface dialog, select the default pod network attachment from the Network list.
- Expand Advanced and then select the Passt binding.
- Click Save.
- If your VM is running, restart it for the changes to take effect.
11.3.3.2. Attaching a virtual machine to the primary user-defined network by using the CLI
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the UDN interface details, as in the following example.
Example VirtualMachine manifest:
1. The namespace in which the VM is located. This value must match the namespace in which the UDN is defined.
2. The name of the user-defined network interface.
3. The name of the binding plugin that is used to connect the interface to the VM. The possible values are l2bridge and passt. The default value is l2bridge.
4. The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
Optional: If you are using the Plug a Simple Socket Transport (passt) network binding plugin, set the hco.kubevirt.io/deployPasstNetworkBinding annotation to true in the HyperConverged custom resource (CR) by running the following command:
$ oc annotate hco kubevirt-hyperconverged -n kubevirt-hyperconverged hco.kubevirt.io/deployPasstNetworkBinding=true --overwrite
Important: Using the passt binding plugin to attach a VM to the primary UDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Apply the VirtualMachine manifest by running the following command:
$ oc apply -f <filename>.yaml
11.4. Connecting a virtual machine to a secondary localnet user-defined network
You can connect a virtual machine (VM) to an OVN-Kubernetes localnet secondary network by using the CLI. Cluster administrators can use the ClusterUserDefinedNetwork (CUDN) custom resource definition (CRD) to create a shared OVN-Kubernetes network across multiple namespaces.
An OVN-Kubernetes secondary network is compatible with the multi-network policy API which provides the MultiNetworkPolicy custom resource definition (CRD) to control traffic flow to and from VMs.
You must use the ipBlock attribute to define network policy ingress and egress rules for specific CIDR blocks. Using pod or namespace selector policy peers is not supported.
A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
11.4.1. Creating a user-defined-network for localnet topology by using the CLI
You can create a secondary cluster-scoped user-defined-network (CUDN) for the localnet network topology by using the CLI.
Prerequisites
- You are logged in to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
- You installed the Kubernetes NMState Operator.
Procedure
Create a NodeNetworkConfigurationPolicy object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge.
Example NodeNetworkConfigurationPolicy manifest:
1. The name of the configuration object.
2. Specifies the nodes to which the node network configuration policy is applied. The recommended node selector value is node-role.kubernetes.io/worker: ''.
3. The name of the additional network from which traffic is forwarded to the OVS bridge. This attribute must match the value of the spec.network.localnet.physicalNetworkName field of the ClusterUserDefinedNetwork object that defines the OVN-Kubernetes additional network. This example uses the name localnet1.
4. The name of the OVS bridge on the node. This value is required if the state attribute is present or not specified.
5. The state of the mapping. Must be either present to add the mapping or absent to remove the mapping. The default value is present.
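A sketch of such a manifest; the policy name and the br-ex bridge name are sample values, and the numbered comments map to the callouts above:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: localnet1-mapping                  # 1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''     # 2
  desiredState:
    ovn:
      bridge-mappings:
        - localnet: localnet1              # 3
          bridge: br-ex                    # 4
          state: present                   # 5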
Important: OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
Apply the NodeNetworkConfigurationPolicy manifest by running the following command:
$ oc apply -f <filename>.yaml
where:
- <filename>
- Specifies the name of your NodeNetworkConfigurationPolicy manifest YAML file.
Create a ClusterUserDefinedNetwork object to create a localnet secondary network.
Example ClusterUserDefinedNetwork manifest:
1. The name of the ClusterUserDefinedNetwork custom resource.
2. The set of namespaces that the cluster UDN applies to. The namespace selector must not point to the following values: default; an openshift-* namespace; or any global namespaces that are defined by the Cluster Network Operator (CNO).
3. The type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red or blue.
4. The type of operator. Possible values are In, NotIn, and Exists.
5. The topological configuration of the network. A Localnet topology connects the logical network to the physical underlay.
6. Specifies whether the UDN is primary or secondary. The required value is Secondary for topology: Localnet.
7. The name of the OVN-Kubernetes bridge mapping that is configured on the node. This value must match the spec.desiredState.ovn.bridge-mappings.localnet field in the NodeNetworkConfigurationPolicy manifest that you previously created. This ensures that you are bridging to the intended segment of your physical network.
8. Specifies whether IP address management (IPAM) is enabled or disabled. The required value is Disabled. OpenShift Virtualization does not support configuring IPAM for virtual machines.
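A sketch of such a manifest; the resource name is a placeholder, and physicalNetworkName matches the localnet1 mapping that was created in the previous step:
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-localnet                  # 1
spec:
  namespaceSelector:                   # 2
    matchExpressions:                  # 3
      - key: kubernetes.io/metadata.name
        operator: In                   # 4
        values: ["red", "blue"]
  network:
    topology: Localnet                 # 5
    localnet:
      role: Secondary                  # 6
      physicalNetworkName: localnet1   # 7
      ipam:
        mode: Disabled                 # 8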
Apply the ClusterUserDefinedNetwork manifest by running the following command:
$ oc apply -f <filename>.yaml
where:
- <filename>
- Specifies the name of your ClusterUserDefinedNetwork manifest YAML file.
11.4.2. Creating a namespace for secondary user-defined networks by using the CLI
You can create a namespace to be used with an existing secondary cluster-scoped user-defined network (CUDN) by using the CLI.
Prerequisites
- You are logged in to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
Procedure
Create a Namespace object similar to the following example.
Example Namespace manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: red
# ...
Apply the Namespace manifest by running the following command:
$ oc apply -f <filename>.yaml
where:
- <filename>
- Specifies the name of your Namespace manifest YAML file.
11.4.3. Attaching a virtual machine to secondary user-defined networks by using the CLI
You can connect a virtual machine (VM) to multiple secondary cluster-scoped user-defined networks (CUDNs) by configuring the interface binding.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the CUDN interface details, as in the following example.
Example VirtualMachine manifest:
1. The namespace in which the VM is located. This value must match a namespace that is associated with the secondary CUDN.
2. The name of the secondary user-defined network interface.
3. The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
4. The name of the localnet ClusterUserDefinedNetwork object that you previously created.
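A sketch of the relevant parts of such a manifest, assuming bridge binding for the secondary interface; the VM, interface, and network names are placeholder values, and the numbered comments map to the callouts above:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: red                       # 1
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: secondary          # 2
              bridge: {}
      networks:
        - name: secondary              # 3
          multus:
            networkName: cudn-localnet # 4
# ...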
Apply the VirtualMachine manifest by running the following command:
$ oc apply -f <filename>.yaml
where:
- <filename>
- Specifies the name of your VirtualMachine manifest YAML file.
When running OpenShift Virtualization on IBM Z® using an OSA card, be aware that the OSA card only forwards network traffic to devices that are registered with the OSA device. As a result, any traffic destined for unregistered devices is not forwarded.
11.5. Exposing a virtual machine by using a service
You can expose a virtual machine within the cluster or outside the cluster by creating a Service object.
11.5.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world.
- ClusterIP
- Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
ClusterIPis the default service type. - NodePort
- Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.
11.5.2. Dual-stack support
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.
The spec.ipFamilyPolicy field can be set to one of the following values:
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
- This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:
- [IPv4]
- [IPv6]
- [IPv4, IPv6]
- [IPv6, IPv4]
11.5.3. Creating a service by using the CLI
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the label for service creation:
1. Add special: key to the spec.template.metadata.labels stanza.
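A sketch of the relevant part of the manifest; the VM and namespace names are placeholder values:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    metadata:
      labels:
        special: key     # 1
# ...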
Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
- Create a Service manifest to expose the VM:
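A sketch of such a Service manifest, assuming a ClusterIP service that forwards service port 27017 to SSH port 22 on the VM; the service type and ports are illustrative values:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  selector:
    special: key        # must match the label added to the VM template
  type: ClusterIP       # or NodePort / LoadBalancer, depending on how you expose the VM
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 22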
- Save the Service manifest file.
- Create the service by running the following command:
$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:
$ oc get service -n example-namespace
11.6. Accessing a virtual machine by using its internal FQDN
You can access a virtual machine (VM) that is connected to the default internal pod network on a stable fully qualified domain name (FQDN) by using headless services.
A Kubernetes headless service is a form of service that does not allocate a cluster IP address to represent a set of pods. Instead of providing a single virtual IP address for the service, a headless service creates a DNS record for each pod associated with the service. You can expose a VM through its FQDN without having to expose a specific TCP or UDP port.
If you created a VM by using the OpenShift Container Platform web console, you can find its internal FQDN listed in the Network tile on the Overview tab of the VirtualMachine details page. For more information about connecting to the VM, see Connecting to a virtual machine by using its internal FQDN.
11.6.1. Creating a headless service in a project by using the CLI
To create a headless service in a namespace, add the clusterIP: None parameter to the service YAML definition.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Create a Service manifest to expose the VM, such as the following example:
1. The name of the service. This must match the spec.subdomain attribute in the VirtualMachine manifest file.
2. This service selector must match the expose: me label in the VirtualMachine manifest file.
3. Specifies a headless service.
4. The list of ports that are exposed by the service. You must define at least one port. This can be any arbitrary value as it does not affect the headless service.
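A sketch of such a headless Service manifest; the port number is an arbitrary value, as noted in callout 4:
apiVersion: v1
kind: Service
metadata:
  name: mysubdomain     # 1
spec:
  selector:
    expose: me          # 2
  clusterIP: None       # 3
  ports:                # 4
    - protocol: TCP
      port: 1234
      targetPort: 1234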
- Save the Service manifest file.
- Create the service by running the following command:
$ oc create -f headless_service.yaml
11.6.2. Mapping a virtual machine to a headless service by using the CLI
To connect to a virtual machine (VM) from within the cluster by using its internal fully qualified domain name (FQDN), you must first map the VM to a headless service. Set the spec.hostname and spec.subdomain parameters in the VM configuration file.
If a headless service exists with a name that matches the subdomain, a unique DNS A record is created for the VM in the form of <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the service selector label and subdomain by running the following command:
$ oc edit vm <vm_name>
Example VirtualMachine manifest file:
1. The expose: me label must match the spec.selector attribute of the Service manifest that you previously created.
2. If this attribute is not specified, the resulting DNS A record takes the form of <vm.metadata.name>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local.
3. The spec.subdomain attribute must match the metadata.name value of the Service object.
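A sketch of the relevant parts of the manifest, using the myvm and mysubdomain values that appear in the FQDN examples later in this section; the numbered comments map to the callouts above:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    metadata:
      labels:
        expose: me             # 1
    spec:
      hostname: myvm           # 2
      subdomain: mysubdomain   # 3
# ...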
- Save your changes and exit the editor.
- Restart the VM to apply the changes.
11.6.3. Connecting to a virtual machine by using its internal FQDN
You can connect to a virtual machine (VM) by using its internal fully qualified domain name (FQDN).
Prerequisites
- You have installed the virtctl tool.
- You have identified the internal FQDN of the VM from the web console or by mapping the VM to a headless service. The internal FQDN has the format <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local.
Procedure
Connect to the VM console by entering the following command:
$ virtctl console vm-fedora
To connect to the VM by using the requested FQDN, run the following command:
$ ping myvm.mysubdomain.<namespace>.svc.cluster.local
Example output
PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data.
64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms
In the preceding example, the DNS entry for myvm.mysubdomain.default.svc.cluster.local points to 10.244.0.57, which is the cluster IP address that is currently assigned to the VM.
11.7. Connecting a virtual machine to a Linux bridge network
By default, OpenShift Virtualization is installed with a single, internal pod network.
You can create a Linux bridge network and attach a virtual machine (VM) to the network by performing the following steps:
- Create a Linux bridge node network configuration policy (NNCP).
- Create a Linux bridge network attachment definition (NAD) by using the web console or the command line.
- Configure the VM to recognize the NAD by using the web console or the command line.
OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
11.7.1. Creating a Linux bridge NNCP
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.
1. Name of the policy.
2. Name of the interface.
3. Optional: Human-readable description of the interface.
4. The type of interface. This example creates a bridge.
5. The requested state for the interface after creation.
6. Disables IPv4 in this example.
7. Disables STP in this example.
8. The node NIC to which the bridge is attached.
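A sketch of such a manifest, assuming a bridge named br1 attached to the eth1 NIC; both names are sample values, and the numbered comments map to the callouts above:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy                 # 1
spec:
  desiredState:
    interfaces:
      - name: br1                       # 2
        description: Linux bridge with eth1 as a port   # 3
        type: linux-bridge              # 4
        state: up                       # 5
        ipv4:
          enabled: false                # 6
        bridge:
          options:
            stp:
              enabled: false            # 7
          port:
            - name: eth1                # 8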
To create the NNCP manifest for a Linux bridge using OSA with IBM Z®, you must disable VLAN filtering by setting rx-vlan-filter to false in the NodeNetworkConfigurationPolicy manifest.
Alternatively, if you have SSH access to the node, you can disable VLAN filtering by running the following command:
$ sudo ethtool -K <osa-interface-name> rx-vlan-filter off
11.7.2. Creating a Linux bridge NAD
You can create a Linux bridge network attachment definition (NAD) by using the OpenShift Container Platform web console or command line.
11.7.2.1. Creating a Linux bridge NAD by using the web console
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking → NetworkAttachmentDefinitions.
- Click Create Network Attachment Definition.
Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
Note: OSA interfaces on IBM Z® do not support VLAN filtering and VLAN-tagged traffic is dropped. Avoid using VLAN-tagged NADs with OSA interfaces.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
11.7.2.2. Creating a Linux bridge NAD by using the CLI
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line.
The NAD and the VM must be in the same namespace.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Add the VM to the NetworkAttachmentDefinition configuration, as in the following example:
1. The name for the NetworkAttachmentDefinition object.
2. Optional: Annotation key-value pair for node selection for the bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the defined bridge connected.
3. The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
4. The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
5. The name of the Linux bridge configured on the node. The name should match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest.
6. Optional: A flag to enable the MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack.
7. Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
Note: OSA interfaces on IBM Z® do not support VLAN filtering and VLAN-tagged traffic is dropped. Avoid using VLAN-tagged NADs with OSA interfaces.
8. Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
Note: A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Optional: If you want to connect a VM to the native network, configure the Linux bridge NetworkAttachmentDefinition manifest without specifying any VLAN.
Create the network attachment definition:
$ oc create -f network-attachment-definition.yaml
where network-attachment-definition.yaml is the file name of the network attachment definition manifest.
Verification
Verify that the network attachment definition was created by running the following command:
$ oc get network-attachment-definition bridge-network
11.7.2.3. Enabling port isolation for a Linux bridge NAD
You can enable port isolation for a Linux bridge network attachment definition (NAD) so that virtual machines (VMs) or pods that run on the same virtual LAN (VLAN) can operate in isolation from one another. The Linux bridge NAD creates a virtual bridge, or virtual switch, between network interfaces and the physical network.
Isolating ports in this way can provide enhanced security for VM workloads that run on the same node.
Prerequisites
- For VMs, you configured either a static or dynamic IP address for each VM. See "Configuring IP addresses for virtual machines".
- You created a Linux bridge NAD by using either the web console or the command-line interface.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the Linux bridge NAD by setting portIsolation to true. The numbered callouts describe the fields, and a sketch follows the list.
1. The name for the configuration. The name must match the value in the metadata.name of the NAD.
2. The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
3. The name of the Linux bridge that is configured on the node. The name must match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest.
4. Enables or disables port isolation on the virtual bridge. The default value is false. When set to true, each VM or pod is assigned to an isolated port. The virtual bridge prevents traffic from one isolated port from reaching another isolated port.
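A minimal sketch, reusing the hypothetical bridge-network and br1 names from the previous section; the placement of the portIsolation key follows the procedure text and is otherwise an assumption.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "bridge-network",
      "type": "cnv-bridge",
      "bridge": "br1",
      "portIsolation": true
    }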
Apply the configuration:
$ oc apply -f example-vm.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
11.7.3. Configuring a VM network interface
You can configure a virtual machine (VM) network interface by using the OpenShift Container Platform web console or command line.
11.7.3.1. Configuring a VM network interface by using the web console
You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart or live migrate the VM to apply the changes.
Networking fields
| Name | Description |
|---|---|
| Name | Name for the network interface controller. |
| Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
| Network | List of available network attachment definitions. |
| Type | List of available binding methods. Select the binding method suitable for the network interface. |
| MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
11.7.3.2. Configuring a VM network interface by using the CLI
You can configure a virtual machine (VM) network interface for a bridge network by using the command line.
Prerequisites
- You have installed the OpenShift CLI (oc).
- Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect.
Procedure
Add the bridge interface and the network attachment definition to the VM configuration as in the following example:
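A minimal sketch of the relevant stanzas, assuming a NAD named bridge-network; all names are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}          # default pod network interface
            - name: bridge-net        # secondary interface that uses bridge binding
              bridge: {}
      networks:
        - name: default
          pod: {}
        - name: bridge-net
          multus:
            networkName: bridge-network   # name of the NetworkAttachmentDefinition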
Apply the configuration:
$ oc apply -f example-vm.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
When running OpenShift Virtualization on IBM Z® using an OSA card, you must register the MAC address of the device. For more information, see OSA interface traffic forwarding (IBM documentation).
11.8. Connecting a virtual machine to an SR-IOV network
You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the steps described in the following sections.
11.8.1. Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. Reboot only happens in the following cases:
- With Mellanox NICs (mlx5 driver), a node reboot happens every time the number of virtual functions (VFs) increases on a physical function (PF).
- With Intel NICs, a reboot only happens if the kernel parameters do not include intel_iommu=on and iommu=pt.
It might take several minutes for a configuration change to apply.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the SR-IOV Network Operator.
- You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
- You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. The numbered callouts describe the fields, and a sketch follows the list.
1. Specify a name for the CR object.
2. Specify the namespace where the SR-IOV Operator is installed.
3. Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
4. Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
5. Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
6. Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
7. Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
8. The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
9. Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 or 15b3.
10. Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
11. Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
12. The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
13. The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
14. Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
Note: If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.
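A sketch keyed to the callouts above; the hardware values (vendor, device ID, PF name, PCI address) are illustrative assumptions and must be adjusted to your environment.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-node-policy                      # 1
  namespace: openshift-sriov-network-operator  # 2
spec:
  resourceName: sriovnic                       # 3
  nodeSelector:                                # 4
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99                                 # 5
  mtu: 9000                                    # 6
  numVfs: 8                                    # 7
  nicSelector:                                 # 8
    vendor: "8086"                             # 9
    deviceID: "158b"                           # 10
    pfNames: ["ens5f0"]                        # 11
    rootDevices: ["0000:02:00.0"]              # 12
  deviceType: vfio-pci                         # 13
  isRdma: false                                # 14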
- Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
- Create the SriovNetworkNodePolicy object:
$ oc create -f <name>-sriov-node-network.yaml
where <name> specifies the name for this configuration.
After applying the configuration update, all the pods in the sriov-network-operator namespace transition to the Running status.
- To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
11.8.2. Configuring SR-IOV additional network
You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object.
When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. The numbered callouts describe the fields, and a sketch follows the list.
1. Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name.
2. Specify the namespace where the SR-IOV Network Operator is installed.
3. Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network.
4. Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork.
5. Optional: Replace <vlan> with a virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095. The default value is 0.
6. Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off".
Important: You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
7. Optional: Replace <link_state> with the link state of the virtual function (VF). Allowed values are enable, disable, and auto.
8. Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF.
9. Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate.
Note: Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847.
10. Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0.
11. Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off".
Important: You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
12. Optional: Replace <capabilities> with the capabilities to configure for this network.
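A sketch using the placeholder values from the callouts above; omit any optional field that you do not need.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name>                                  # 1
  namespace: openshift-sriov-network-operator   # 2
spec:
  resourceName: <sriov_resource_name>           # 3
  networkNamespace: <target_namespace>          # 4
  vlan: <vlan>                                  # 5
  spoofChk: "<spoof_check>"                     # 6
  linkState: <link_state>                       # 7
  maxTxRate: <max_tx_rate>                      # 8
  minTxRate: <min_tx_rate>                      # 9
  vlanQoS: <vlan_qos>                           # 10
  trust: "<trust_vf>"                           # 11
  capabilities: <capabilities>                  # 12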
- To create the object, enter the following command. Replace <name> with a name for this additional network.
$ oc create -f <name>-sriov-network.yaml
- Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object.
$ oc get net-attach-def -n <namespace>
11.8.3. Connecting a virtual machine to an SR-IOV network by using the CLI
You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Add the SR-IOV network details to the spec.domain.devices.interfaces and spec.networks stanzas of the VM configuration. A sketch of these stanzas is shown after this step.
Apply the virtual machine configuration:
$ oc apply -f <vm_sriov>.yaml
where <vm_sriov>.yaml is the name of the virtual machine YAML file.
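For reference, a minimal sketch of the interfaces and networks stanzas described in this procedure, assuming an SR-IOV NAD named sriov-network; all names are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-sriov
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
            - name: sriov-net            # uses the SR-IOV binding
              sriov: {}
      networks:
        - name: default
          pod: {}
        - name: sriov-net
          multus:
            networkName: sriov-network   # NAD created by the SriovNetwork object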
11.8.4. Connecting a VM to an SR-IOV network by using the web console
You can connect a VM to the SR-IOV network by including the network details in the VM configuration.
Prerequisites
- You must create a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name.
- Select an SR-IOV network attachment definition from the Network list.
- Select SR-IOV from the Type list.
- Optional: Add a network Model or MAC address.
- Click Save.
- Restart or live-migrate the VM to apply the changes.
11.9. Using DPDK with SR-IOV
The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing.
You can configure clusters and virtual machines (VMs) to run DPDK workloads over SR-IOV networks.
11.9.1. Configuring a cluster for DPDK workloads
You can configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance.
Prerequisites
- You have access to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the SR-IOV Network Operator.
- You have installed the Node Tuning Operator.
Procedure
- Map the topology of your compute nodes to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS).
If your OpenShift Container Platform cluster uses separate control plane and compute nodes for high availability:
Label a subset of the compute nodes with a custom role; for example, worker-dpdk:
$ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""
Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object:
Example MachineConfigPool manifest
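A minimal sketch of such a pool; the selector layout follows common MachineConfigPool conventions and should be verified against your cluster.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-dpdk
  labels:
    machineconfiguration.openshift.io/role: worker-dpdk
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - worker-dpdk
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-dpdk: ""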
Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping.
Example PerformanceProfile manifest
Note: The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests.
Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object:
$ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by editing the HyperConverged custom resource (CR):
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
    --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]'
Note: Editing the HyperConverged CR changes a global setting that affects all VMs that are created after the change is applied.
If your DPDK-enabled compute nodes use simultaneous multithreading (SMT), enable the AlignCPUs enabler by editing the HyperConverged CR:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
    --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]'
Note: Enabling AlignCPUs allows OpenShift Virtualization to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation.
Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci:
Example SriovNetworkNodePolicy manifest
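A minimal sketch; only spec.deviceType: vfio-pci is required by this step, and the remaining values (resource name, node selector, PF name, VF count) are illustrative assumptions.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-dpdk
  namespace: openshift-sriov-network-operator
spec:
  resourceName: dpdk_nic
  nodeSelector:
    node-role.kubernetes.io/worker-dpdk: ""
  numVfs: 8
  nicSelector:
    pfNames: ["ens5f0"]
  deviceType: vfio-pci    # required for DPDK virtual functions in OpenShift Virtualization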
11.9.1.1. Removing a custom machine config pool for high-availability clusters
You can delete a custom machine config pool that you previously created for your high-availability cluster.
Prerequisites
- You have access to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have created a custom machine config pool by labeling a subset of the compute nodes with a custom role and creating a MachineConfigPool manifest with that label.
Procedure
Remove the worker-dpdk label from the compute nodes by running the following command:
$ oc label node <node_name> node-role.kubernetes.io/worker-dpdk-
Delete the MachineConfigPool manifest that contains the worker-dpdk label by entering the following command:
$ oc delete mcp worker-dpdk
11.9.2. Configuring a project for DPDK workloads
You can configure the project to run DPDK workloads on SR-IOV hardware.
Prerequisites
- Your cluster is configured to run DPDK workloads.
- You have installed the OpenShift CLI (oc).
Procedure
Create a namespace for your DPDK applications:
$ oc create ns dpdk-ns
Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Example SriovNetwork manifest
- Optional: Run the virtual machine latency checkup to verify that the network is properly configured.
11.9.3. Configuring a virtual machine for DPDK workloads
You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing.
Prerequisites
- Your cluster is configured to run DPDK workloads.
- You have created and configured the project in which the VM will run.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages. The numbered callouts describe the fields, and a sketch follows the list.
Example VirtualMachine manifest
1. This annotation specifies that load balancing is disabled for CPUs that are used by the container.
2. This annotation specifies that the CPU quota is disabled for CPUs that are used by the container.
3. This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container.
4. The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node.
5. The number of cores inside the VM. This must be a value greater than or equal to 1. In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs.
6. The size of the huge pages. The possible values for the x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi.
7. The name of the SR-IOV NetworkAttachmentDefinition object.
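A condensed sketch keyed to the callouts above. The annotation keys, CPU layout, and NAD name are illustrative assumptions and should be adapted to your workload and cluster version.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-dpdk-vm
spec:
  runStrategy: Always
  template:
    metadata:
      annotations:
        cpu-load-balancing.crio.io: disable     # 1
        cpu-quota.crio.io: disable              # 2
        irq-load-balancing.crio.io: disable     # 3
    spec:
      domain:
        cpu:
          sockets: 1                            # 4
          cores: 5                              # 5
          threads: 2
          dedicatedCpuPlacement: true
          isolateEmulatorThread: true
        memory:
          hugepages:
            pageSize: 1Gi                       # 6
          guest: 8Gi
        devices:
          interfaces:
            - name: sriov-net
              sriov: {}
      networks:
        - name: sriov-net
          multus:
            networkName: dpdk-net               # 7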
- Save and exit the editor.
Apply the VirtualMachine manifest:
$ oc apply -f <file_name>.yaml
Configure the guest operating system. The following example shows the configuration steps for a RHEL 9 operating system:
Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified.
$ grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8"
To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands:
$ dnf install -y tuned-profiles-cpu-partitioning
$ echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf
The first two CPUs (0 and 1) are set aside for housekeeping tasks and the rest are isolated for the DPDK application.
$ tuned-adm profile cpu-partitioning
Override the SR-IOV NIC driver by using the driverctl device driver control utility:
$ dnf install -y driverctl
$ driverctl set-override 0000:07:00.0 vfio-pci
- Restart the VM to apply the changes.
11.10. Connecting a virtual machine to an OVN-Kubernetes layer 2 secondary network
You can connect a virtual machine (VM) to an OVN-Kubernetes layer2 secondary network by using the CLI.
A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
An OVN-Kubernetes secondary network is compatible with the multi-network policy API which provides the MultiNetworkPolicy custom resource definition (CRD) to control traffic flow to and from VMs. You must use the ipBlock attribute to define network policy ingress and egress rules for specific CIDR blocks. You cannot use pod or namespace selectors for virtualization workloads.
To configure an OVN-Kubernetes layer2 secondary network and attach a VM to that network, perform the following steps:
11.10.1. Creating an OVN-Kubernetes layer 2 NAD
You can create an OVN-Kubernetes network attachment definition (NAD) for the layer 2 network topology by using the OpenShift Container Platform web console or the CLI.
Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported.
11.10.1.1. Creating a NAD for layer 2 topology by using the CLI
You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a NetworkAttachmentDefinition object. The numbered callouts describe the fields, and a sketch follows the list.
1. The Container Network Interface (CNI) specification version. The required value is 0.3.1.
2. The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces.
3. The name of the CNI plugin. The required value is ovn-k8s-cni-overlay.
4. The topological configuration for the network. The required value is layer2.
5. Optional: The maximum transmission unit (MTU) value. If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation), and the byte capacity of any enabled features, such as IPsec.
6. The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.
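A minimal sketch keyed to the callouts above, using an assumed namespace (my-namespace) and network name (l2-network).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "mtu": 1300,
      "netAttachDefName": "my-namespace/l2-network"
    }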
Note: The previous example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address.
Apply the manifest by running the following command:
$ oc apply -f <filename>.yaml
11.10.1.2. Creating a NAD for layer 2 topology by using the web console
You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
Procedure
- Go to Networking → NetworkAttachmentDefinitions in the web console.
- Click Create Network Attachment Definition. The network attachment definition must be in the same namespace as the pod or virtual machine using it.
- Enter a unique Name and optional Description.
- Select OVN Kubernetes L2 overlay network from the Network Type list.
- Click Create.
11.10.2. Attaching a virtual machine to the OVN-Kubernetes layer 2 secondary network
You can attach a virtual machine (VM) to the OVN-Kubernetes layer 2 secondary network interface by using the OpenShift Container Platform web console or the CLI.
11.10.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example. The numbered callouts describe the fields, and a sketch follows the list.
1. The name of the OVN-Kubernetes secondary interface.
2. The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
3. The name of the NetworkAttachmentDefinition object.
4. Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: ''.
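A minimal sketch keyed to the callouts above, assuming a NAD named l2-network.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-server
spec:
  runStrategy: Always
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ''   # 4
      domain:
        devices:
          interfaces:
            - name: secondary                # 1
              bridge: {}
      networks:
        - name: secondary                    # 2
          multus:
            networkName: l2-network          # 3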
Apply the VirtualMachine manifest:
$ oc apply -f <filename>.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
11.11. Hot plugging secondary network interfaces
You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OpenShift Virtualization also supports hot plugging secondary interfaces that use SR-IOV binding. To hot plug or hot unplug a secondary interface, you must have permission to create and list VirtualMachineInstanceMigration objects.
Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces.
11.11.1. VirtIO limitations
Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces.
The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation.
If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces.
11.11.2. Hot plugging a secondary network interface by using the CLI
Hot plug a secondary network interface to a virtual machine (VM) while the VM is running.
Prerequisites
- A network attachment definition is configured in the same namespace as your VM.
- The VM to which you want to hot plug the network interface is running.
- You have installed the OpenShift CLI (oc).
Procedure
Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example:
Example VM configuration
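A sketch of a running VM manifest with a newly added bridge interface (dyniface1) that references an existing NAD; the interface and NAD names are assumptions.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: defaultnetwork
              masquerade: {}
            - name: dyniface1             # new interface added while the VM is running
              bridge: {}
      networks:
        - name: defaultnetwork
          pod: {}
        - name: dyniface1
          multus:
            networkName: bridge-network   # existing NAD in the VM namespace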
- Save your changes and exit the editor.
For the new configuration to take effect, apply the changes by running the following command. Applying the changes triggers automatic VM live migration and attaches the network interface to the running VM.
$ oc apply -f <filename>.yaml
where:
- <filename>: Specifies the name of your VirtualMachine manifest YAML file.
Verification
Verify that the VM live migration is successful by using the following command:
$ oc get VirtualMachineInstanceMigration -w
Example output
Verify that the new interface is added to the VM by checking the status of the virtual machine instance (VMI):
$ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }"
Example output
1. The hot plugged interface appears in the VMI status.
11.11.3. Hot unplugging a secondary network interface by using the CLI
You can remove a secondary network interface from a running virtual machine (VM).
Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces.
Prerequisites
- Your VM must be running.
- The VM must be created on a cluster running OpenShift Virtualization 4.14 or later.
- The VM must have a bridge network interface attached.
- You have installed the OpenShift CLI (oc).
Procedure
Using your preferred text editor, edit the VirtualMachine manifest file and set the interface state to absent. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod.
Example VM configuration
1. Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface. A sketch follows this callout.
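A sketch of the same hypothetical VM from the hot plug example, with the secondary interface marked absent.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: defaultnetwork
              masquerade: {}
            - name: dyniface1
              bridge: {}
              state: absent      # 1: detaches the interface from the guest
      networks:
        - name: defaultnetwork
          pod: {}
        - name: dyniface1
          multus:
            networkName: bridge-network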
- Save your changes and exit the editor.
For the new configuration to take effect, apply the changes by running the following command. Applying the changes triggers automatic VM live migration and removes the interface from the pod.
$ oc apply -f <filename>.yaml
where:
- <filename>: Specifies the name of your VirtualMachine manifest YAML file.
11.12. Managing the link state of a virtual machine interface
You can manage the link state of a primary or secondary virtual machine (VM) interface by using the OpenShift Container Platform web console or the CLI. By specifying the link state, you can logically connect or disconnect the virtual network interface controller (vNIC) from a network.
OpenShift Virtualization does not support link state management for Single Root I/O Virtualization (SR-IOV) secondary network interfaces and their link states are not reported.
You can specify the desired link state when you first create a VM, by editing the configuration of an existing VM that is stopped or running, or when you hot plug a new network interface to a running VM. If you edit a running VM, you do not need to restart or migrate the VM for the changes to be applied. The current link state of a VM interface is reported in the status.interfaces.linkState field of the VirtualMachineInstance manifest.
11.12.1. Setting the VM interface link state by using the web console
You can set the link state of a primary or secondary virtual machine (VM) network interface by using the web console.
Prerequisites
- You are logged into the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select a VM to view the VirtualMachine details page.
- On the Configuration tab, click Network. A list of network interfaces is displayed.
- Click the Options menu of the interface that you want to edit.
Choose the appropriate option to set the interface link state:
- If the current interface link state is up, select Set link down.
- If the current interface link state is down, select Set link up.
11.12.2. Setting the VM interface link state by using the CLI
You can set the link state of a primary or secondary virtual machine (VM) network interface by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VM configuration to set the interface link state, as in the following example. The numbered callouts describe the fields, and a sketch follows the list.
1. The name of the interface.
2. The state of the interface. The possible values are:
   - up: Represents an active network connection. This is the default if no value is specified.
   - down: Represents a network interface link that is switched off.
   - absent: Represents a network interface that is hot unplugged.
Important: If you have defined readiness or liveness probes to run VM health checks, setting the primary interface's link state to down causes the probes to fail. If a liveness probe fails, the VM is deleted and a new VM is created to restore responsiveness.
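A minimal sketch keyed to the callouts above, switching the link of an assumed interface named default to down.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default      # 1
              masquerade: {}
              state: down        # 2
      networks:
        - name: default
          pod: {}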
- Apply the VirtualMachine manifest:
$ oc apply -f <filename>.yaml
Verification
Verify that the desired link state is set by checking the status.interfaces.linkState field of the VirtualMachineInstance manifest.
$ oc get vmi <vmi-name>
Example output
11.13. Connecting a virtual machine to a service mesh
OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
11.13.1. Adding a virtual machine to a service mesh
To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true. Then expose your VM as a service to view your application in the mesh.
To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed the Service Mesh Operator.
Procedure
Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation:
Example configuration file
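A minimal sketch; the VM name and label are chosen to match the Service example later in this procedure, and disk and volume details are omitted.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-istio
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        app: vm-istio                       # matched by the Service selector below
      annotations:
        sidecar.istio.io/inject: "true"     # enables automatic sidecar injection
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}                # the sidecar proxy requires the default pod network
      networks:
        - name: default
          pod: {}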
Apply the VM configuration:
$ oc apply -f <vm_name>.yaml
where <vm_name>.yaml is the name of the virtual machine YAML file.
Create a Service object to expose your VM to the service mesh.
1. The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the following example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
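A sketch of the Service object described by the callout above.
apiVersion: v1
kind: Service
metadata:
  name: vm-istio
spec:
  selector:
    app: vm-istio      # 1
  ports:
    - port: 8080
      protocol: TCP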
Create the service:
$ oc create -f <service_name>.yaml
where <service_name>.yaml is the name of the service YAML file.
11.14. Configuring a dedicated network for live migration
You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
11.14.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example. The numbered callouts describe the fields, and a sketch follows the list.
Example configuration file
1. Specify the name of the NetworkAttachmentDefinition object.
2. Specify the name of the NIC to be used for live migration.
3. Specify the name of the CNI plugin that provides the network for the NAD.
4. Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
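A sketch keyed to the callouts above. The macvlan plugin, whereabouts IPAM, NIC name, and address range shown here are assumptions; use whatever matches your dedicated migration NICs.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network      # 1
  namespace: openshift-cnv
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-bridge",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.200.5.0/24"
      }
    }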
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
NetworkAttachmentDefinitionobject to thespec.liveMigrationConfigstanza of theHyperConvergedCR:Example
HyperConvergedmanifestCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of the Multus
NetworkAttachmentDefinitionobject to be used for live migrations.
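A sketch of the relevant stanza, assuming the NAD name used in the previous example.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    network: my-secondary-network   # 1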
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
11.14.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the OpenShift Container Platform web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization > Overview in the OpenShift Container Platform web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
11.15. Configuring and viewing IP addresses
You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent.
11.15.1. Configuring IP addresses for virtual machines
You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line.
You can configure a dynamic IP address when you create a VM by using the command line.
The IP address is provisioned with cloud-init.
11.15.1.1. Configuring an IP address when creating a virtual machine by using the CLI
You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
- You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.
Procedure
Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:
To configure a dynamic IP address, specify the interface name and enable DHCP:
1. Specify the interface name.
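A sketch of the networkData stanza for the dynamic case, assuming an interface named eth1.
kind: VirtualMachine
spec:
  template:
    spec:
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1:            # 1: the interface name
                  dhcp4: true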
To configure a static IP, specify the interface name and the IP address:
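A sketch of the static variant, assuming the same interface name and an illustrative address.
kind: VirtualMachine
spec:
  template:
    spec:
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1:
                  addresses:
                    - 10.10.10.14/24   # illustrative static IP address in CIDR notation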
11.15.2. Viewing IP addresses of virtual machines
You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line.
The network information is collected by the QEMU guest agent.
11.15.2.1. Viewing the IP address of a virtual machine by using the web console
You can view the IP address of a virtual machine (VM) by using the OpenShift Container Platform web console.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a VM to open the VirtualMachine details page.
- Click the Details tab to view the IP address.
11.15.2.2. Viewing the IP address of a virtual machine by using the CLI
You can view the IP address of a virtual machine (VM) by using the command line.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Obtain the virtual machine instance configuration by running the following command:
$ oc describe vmi <vmi_name>
Example output
11.16. Accessing a virtual machine by using its external FQDN
You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN).
Accessing a VM from outside the cluster by using its FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
11.16.1. Configuring a DNS server for secondary networks
The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You configured a load balancer for the cluster.
- You logged in to the cluster with cluster-admin permissions.
Procedure
Edit the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Enable the DNS server and monitoring components according to the following example:
1. Enables the DNS server. A sketch of the stanza follows.
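A sketch of the relevant HyperConverged stanza.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    deployKubeSecondaryDNS: true   # 1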
- Save the file and exit the editor.
Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example:
$ oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \
    --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'
Retrieve the external IP address by running the following command:
$ oc get service -n openshift-cnv
Example output
NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
dns-lb   LoadBalancer   172.30.27.5   10.46.41.94   53:31829/TCP   5s
Edit the HyperConverged CR again:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the external IP address that you previously retrieved to the kubeSecondaryDNSNameServerIP field of the HyperConverged CR. For example:
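A sketch that reuses the external IP from the example output above.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    deployKubeSecondaryDNS: true
  kubeSecondaryDNSNameServerIP: "10.46.41.94"   # 1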
- Specify the external IP address exposed by the load balancer service.
- Save the file and exit the editor.
Retrieve the cluster FQDN by running the following command:
$ oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'
Example output
openshift.example.com
Point to the DNS server. To do so, add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example:
vm.<FQDN>. IN NS ns.vm.<FQDN>.
ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP>
11.16.2. Connecting to a VM on a secondary network by using the cluster FQDN
You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You installed the QEMU guest agent on the VM.
- The IP address of the VM is public.
- You configured the DNS server for secondary networks.
You retrieved the fully qualified domain name (FQDN) of the cluster.
To obtain the FQDN, use the oc get command as follows:
$ oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain
Procedure
Retrieve the network interface name from the VM configuration by running the following command:
$ oc get vm -n <namespace> <vm_name> -o yaml
Example output
1. Note the name of the network interface.
Connect to the VM by using the ssh command:
$ ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>
11.17. Managing MAC address pools for network interfaces
The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address.
A virtual machine instance created from that VM retains the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
11.17.1. Managing KubeMacPool by using the CLI
You can disable and re-enable KubeMacPool by using the command line.
KubeMacPool is enabled by default.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To disable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore
To re-enable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-