Chapter 17. Configuring virtual machine network connections


For your virtual machines (VMs) to connect over a network to your host, to other VMs on your host, and to locations on an external network, the VM networking must be configured accordingly. To provide VM networking, the RHEL 9 hypervisor and newly created VMs have a default network configuration, which can also be modified further. For example:

  • You can enable the VMs on your host to be discovered and connected to by locations outside the host, as if the VMs were on the same network as the host.
  • You can partially or completely isolate a VM from inbound network traffic to increase its security and minimize the risk of any problems with the VM impacting the host.

The following sections explain the various types of VM network configuration and provide instructions for setting up selected VM network configurations.

17.1. Understanding virtual networking

The connection of virtual machines (VMs) to other devices and locations on a network has to be facilitated by the host hardware. The following sections explain the mechanisms of VM network connections and describe the default VM network setting.

17.1.1. How virtual networks work

Virtual networking uses the concept of a virtual network switch. A virtual network switch is a software construct that operates on a host machine. VMs connect to the network through the virtual network switch. Based on the configuration of the virtual switch, a VM can use an existing virtual network managed by the hypervisor, or a different network connection method.

The following figure shows a virtual network switch connecting two VMs to the network:

Figure: A virtual network switch connecting two guests to the network.

From the perspective of a guest operating system, a virtual network connection is the same as a physical network connection. Host machines view virtual network switches as network interfaces. When the virtnetworkd service is first installed and started, it creates virbr0, the default network interface for VMs.

To view information about this interface, use the ip utility on the host.

$ ip addr show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0

By default, all VMs on a single host are connected to the same NAT-type virtual network, named default, which uses the virbr0 interface. For details, see Virtual networking default configuration.

For basic outbound-only network access from VMs, no additional network setup is usually needed, because the default network is installed along with the libvirt-daemon-config-network package, and is automatically started when the virtnetworkd service is started.
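To inspect the default network from the host command line, you can use the standard virsh network commands, for example:

    # virsh net-info default
    # virsh net-dumpxml default

The net-dumpxml output shows, among other details, the NAT forward mode, the virbr0 bridge, and the DHCP range that the network serves.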

If a different VM network functionality is needed, you can create additional virtual networks and network interfaces and configure your VMs to use them. In addition to the default NAT, these networks and interfaces can be configured to use routed, bridged, isolated, or open mode. For details, see Types of virtual machine network connections.

17.1.2. Virtual networking default configuration

When the virtnetworkd service is first installed on a virtualization host, it contains an initial virtual network configuration in network address translation (NAT) mode. By default, all VMs on the host are connected to the same libvirt virtual network, named default. VMs on this network can connect to locations both on the host and on the network beyond the host, but with the following limitations:

  • VMs on the network are visible to the host and other VMs on the host, but the network traffic is affected by the firewalls in the guest operating system’s network stack and by the libvirt network filtering rules attached to the guest interface.
  • VMs on the network can connect to locations outside the host but are not visible to them. Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.

The following diagram illustrates the default VM network configuration:

Figure: Overview of the default virtual network configuration.

17.2. Using the web console for managing virtual machine network interfaces

Using the RHEL 9 web console, you can manage the virtual network interfaces for the virtual machines to which the web console is connected. You can:

  • View information about virtual network interfaces and edit them.
  • Add virtual network interfaces and connect them to VMs.
  • Disconnect and remove virtual network interfaces.

17.2.1. Viewing and editing virtual network interface information in the web console

By using the RHEL 9 web console, you can view and modify the virtual network interfaces on a selected virtual machine (VM):

Prerequisites

Procedure

  1. Log in to the RHEL 9 web console.

    For details, see Logging in to the web console.

  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM, as well as options to Add, Delete, Edit, or Unplug network interfaces.

    Image displaying the network interface details of the selected virtual machine.

    The information includes the following:

    • Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment.

      Note

      Generic Ethernet connection is not supported in RHEL 9 and later.

    • Model type - The model of the virtual network interface.
    • MAC Address - The MAC address of the virtual network interface.
    • IP Address - The IP address of the virtual network interface.
    • Source - The source of the network interface. This is dependent on the network type.
    • State - The state of the virtual network interface.
  4. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings dialog opens.

    Image displaying the various options that can be edited for the selected network interface.
  5. Change the interface type, source, model, or MAC address.
  6. Click Save. The network interface is modified.

    Note

    Changes to the virtual network interface settings take effect only after restarting the VM.

    Additionally, the MAC address can be modified only when the VM is shut off.
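If you prefer the command line, you can view equivalent interface information on the host with virsh. In the following example, <example_vm> is a placeholder for the VM name:

    # virsh domiflist <example_vm>
    # virsh domifaddr <example_vm>

The domiflist command lists the interface type, source, model, and MAC address, and domifaddr lists the IP addresses currently used by the guest's interfaces.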

17.2.2. Adding and connecting virtual network interfaces in the web console

By using the RHEL 9 web console, you can create a virtual network interface and connect a virtual machine (VM) to it.

Prerequisites

Procedure

  1. Log in to the RHEL 9 web console.

    For details, see Logging in to the web console.

  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM, as well as options to Add, Edit, or Plug network interfaces.

  4. Click Plug in the row of the virtual network interface you want to connect.

    The selected virtual network interface connects to the VM.

17.2.3. Disconnecting and removing virtual network interfaces in the web console

By using the RHEL 9 web console, you can disconnect the virtual network interfaces connected to a selected virtual machine (VM).

Prerequisites

Procedure

  1. Log in to the RHEL 9 web console.

    For details, see Logging in to the web console.

  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM, as well as options to Add, Delete, Edit, or Unplug network interfaces.

    Image displaying the network interface details of the selected virtual machine.
  4. Click Unplug in the row of the virtual network interface you want to disconnect.

    The selected virtual network interface disconnects from the VM.

17.4. Types of virtual machine network connections

To modify the networking properties and behavior of your VMs, change the type of virtual network or interface the VMs use. The following sections describe the connection types available to VMs in RHEL 9.

17.4.1. Virtual networking with network address translation

By default, virtual network switches operate in network address translation (NAT) mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected VMs to use the host machine’s IP address for communication with any external network. When the virtual network switch is operating in NAT mode, computers external to the host cannot communicate with the VMs inside the host.

Figure: A host with a virtual network switch in NAT mode.
Warning

Virtual network switches use NAT configured by firewall rules. Editing these rules while the switch is running is not recommended, because incorrect rules may result in the switch being unable to communicate.
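For illustration, the following is a minimal sketch of a user-defined NAT-mode network that you could load with virsh net-define and virsh net-start. The network name nat-example, the bridge name virbr1, and the addresses are placeholders, not values required by libvirt:

    <network>
      <name>nat-example</name>
      <!-- NAT mode: guests reach external locations through the host's address -->
      <forward mode='nat'/>
      <bridge name='virbr1' stp='on' delay='0'/>
      <ip address='198.51.100.1' netmask='255.255.255.0'>
        <dhcp>
          <!-- addresses handed out to connected guests -->
          <range start='198.51.100.2' end='198.51.100.254'/>
        </dhcp>
      </ip>
    </network>

VMs attached to such a network receive addresses from the DHCP range and reach external locations through the host, as described above.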

17.4.2. Virtual networking in routed mode

When using routed mode, the virtual switch connects to the physical LAN connected to the host machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, the virtual machines (VMs) are all in a single subnet, separate from the host machine. The VM subnet is routed through a virtual switch, which exists on the host machine. This enables incoming connections, but requires extra routing-table entries for systems on the external network.

Routed mode uses routing based on the IP address:

Figure: A virtual network switch in routed mode.

A common topology that uses routed mode is virtual server hosting (VSH). A VSH provider may have several host machines, each with two physical network connections. One interface is used for management and accounting, the other for the VMs to connect through. Each VM has its own public IP address, but the host machines use private IP addresses so that only internal administrators can manage the VMs.

Figure: Routed mode in a virtual server hosting data center.
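As a sketch, a routed libvirt network definition differs from a NAT-mode one mainly in its forward mode. The name routed-example, the bridge name virbr2, and the addresses below are placeholders:

    <network>
      <name>routed-example</name>
      <!-- route mode: traffic is routed to the physical LAN without NAT -->
      <forward mode='route'/>
      <bridge name='virbr2' stp='on' delay='0'/>
      <ip address='203.0.113.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='203.0.113.2' end='203.0.113.254'/>
        </dhcp>
      </ip>
    </network>

As noted above, systems on the external network additionally need a route to the 203.0.113.0/24 subnet through the host for inbound connections to reach the VMs.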

17.4.3. Virtual networking in bridged mode

In most VM networking modes, VMs automatically create and connect to the virbr0 virtual bridge. In contrast, in bridged mode, the VM connects to an existing Linux bridge on the host. As a result, the VM is directly visible on the physical network. This enables incoming connections, but does not require any extra routing-table entries.

Bridged mode uses connection switching based on the MAC address:

Figure: A virtual machine connected in bridged mode.

In bridged mode, the VM appears within the same subnet as the host machine. All other physical machines on the same physical network can detect the VM and access it.
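For example, assuming a Linux bridge named br0 already exists on the host, you could attach a new or an existing VM to it as follows; br0 and <example_vm> are placeholders:

    # virt-install ... --network bridge=br0
    # virt-xml <example_vm> --edit --network bridge=br0

For an existing VM, the change takes effect the next time the VM is started.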

Bridged network bonding

It is possible to use multiple physical bridge interfaces on the hypervisor by joining them together with a bond. The bond can then be added to a bridge, after which the VMs can be added to the bridge as well. However, the bonding driver has several modes of operation, and not all of these modes work with a bridge where VMs are in use.

The following bonding modes are usable:

  • mode 1 (active-backup)
  • mode 2 (balance-xor)
  • mode 4 (802.3ad)

In contrast, using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that media-independent interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work correctly.

For more information about bonding modes, refer to the Red Hat Knowledgebase.

Common scenarios

The most common use cases for bridged mode include:

  • Deploying VMs in an existing network alongside host machines, making the difference between virtual and physical machines invisible to the end user.
  • Deploying VMs without making any changes to existing physical network configuration settings.
  • Deploying VMs that must be easily accessible to an existing physical network.
  • Placing VMs on a physical network where they must access DHCP services.
  • Connecting VMs to an existing network where virtual LANs (VLANs) are used.
  • A demilitarized zone (DMZ) network. For a DMZ deployment with VMs, Red Hat recommends setting up the DMZ at the physical network router and switches, and connecting the VMs to the physical network by using bridged mode.

17.4.4. Virtual networking in isolated mode

By using isolated mode, virtual machines connected to the virtual switch can communicate with each other and with the host machine, but their traffic will not pass outside of the host machine, and they cannot receive traffic from outside the host machine. Using dnsmasq in this mode is required for basic functionality such as DHCP.

Figure: A virtual network switch in isolated mode.
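As an illustration, an isolated network is a libvirt network definition with no <forward> element at all. The name isolated-example, the bridge name virbr3, and the addresses are placeholders:

    <network>
      <name>isolated-example</name>
      <!-- no <forward> element: guest traffic stays on the host -->
      <bridge name='virbr3' stp='on' delay='0'/>
      <ip address='192.168.123.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.123.2' end='192.168.123.254'/>
        </dhcp>
      </ip>
    </network>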

17.4.5. Virtual networking in open mode

When using open mode for networking, libvirt does not generate any firewall rules for the network. As a result, libvirt does not overwrite firewall rules provided by the host, and the user can therefore manually manage the VM’s firewall rules.
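A minimal sketch of an open-mode network definition looks like a routed one with the forward mode changed; the name open-example, the bridge name virbr4, and the address are placeholders:

    <network>
      <name>open-example</name>
      <!-- open mode: libvirt adds no firewall rules for this network -->
      <forward mode='open'/>
      <bridge name='virbr4' stp='on' delay='0'/>
      <ip address='192.168.124.1' netmask='255.255.255.0'/>
    </network>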

17.4.6. Comparison of virtual machine connection types

The following table provides information about the locations to which selected types of virtual machine (VM) network configurations can connect, and to which they are visible.

Table 17.1. Virtual machine connection types
                 Connection to   Connection to other   Connection to       Visible to
                 the host        VMs on the host       outside locations   outside locations

 Bridged mode    YES             YES                   YES                 YES
 NAT             YES             YES                   YES                 no
 Routed mode     YES             YES                   YES                 YES
 Isolated mode   YES             YES                   no                  no
 Open mode       Depends on the host’s firewall rules

17.5. Booting virtual machines from a PXE server

Virtual machines (VMs) that use Preboot Execution Environment (PXE) can boot and load their configuration from a network. This section describes how to use libvirt to boot VMs from a PXE server on a virtual or bridged network.

Warning

These procedures are provided only as an example. Ensure that you have sufficient backups before proceeding.

17.5.1. Setting up a PXE boot server on a virtual network

This procedure describes how to configure a libvirt virtual network to provide a Preboot Execution Environment (PXE) boot service. This enables virtual machines on your host to boot from a boot image available on the virtual network.

Prerequisites

  • A local PXE server (DHCP and TFTP), such as:

    • libvirt internal server
    • manually configured dhcpd and tftpd
    • dnsmasq
    • Cobbler server
  • PXE boot images, such as PXELINUX configured by Cobbler or manually.

Procedure

  1. Place the PXE boot images and configuration in the /var/lib/tftpboot folder.
  2. Set folder permissions:

    # chmod -R a+r /var/lib/tftpboot
  3. Set folder ownership:

    # chown -R nobody: /var/lib/tftpboot
  4. Update SELinux context:

    # chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot
    # chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot
  5. Shut down the virtual network:

    # virsh net-destroy default
  6. Open the virtual network configuration file in your default editor:

    # virsh net-edit default
  7. Edit the <ip> element to include the appropriate address, network mask, DHCP address range, and boot file, where example-pxelinux is the name of the boot image file.

    <ip address='192.0.2.1' netmask='255.255.255.0'>
       <tftp root='/var/lib/tftpboot'/>
       <dhcp>
          <range start='192.0.2.2' end='192.0.2.254' />
          <bootp file='example-pxelinux'/>
       </dhcp>
    </ip>
  8. Start the virtual network:

    # virsh net-start default

Verification

  • Verify that the default virtual network is active:

    # virsh net-list
    Name             State    Autostart   Persistent
    ---------------------------------------------------
    default          active   no          no
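  • Optionally, confirm that the network definition now contains the TFTP and BOOTP settings you added in the <ip> element:

    # virsh net-dumpxml default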

17.5.2. Booting virtual machines by using PXE and a virtual network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a virtual network, you must enable PXE booting.

Prerequisites

  • A PXE boot server is set up on the virtual network, as described in Setting up a PXE boot server on a virtual network.

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the default virtual network into a new 10 GB qcow2 image file:

    # virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10
    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

        <os>
           <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
           <boot dev='network'/>
           <boot dev='hd'/>
        </os>
      2. Ensure the guest network is configured to use your virtual network:

        <interface type='network'>
           <mac address='52:54:00:66:79:14'/>
           <source network='default'/>
           <target dev='vnet0'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

Verification

  • Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

17.5.3. Booting virtual machines by using PXE and a bridged network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a bridged network, you must enable PXE booting.

Prerequisites

  • Network bridging is enabled.
  • A PXE boot server is available on the bridged network.

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the breth0 bridged network into a new 10 GB qcow2 image file:

    # virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10
    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

        <os>
           <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
           <boot dev='network'/>
           <boot dev='hd'/>
        </os>
      2. Ensure the VM is configured to use your bridged network:

        <interface type='bridge'>
           <mac address='52:54:00:5a:ad:cb'/>
           <source bridge='breth0'/>
           <target dev='vnet0'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

Verification

  • Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

17.6. Configuring bridges on a network bond to connect virtual machines with the network

A network bridge connects VMs to the same network as the host, and also enables VMs on one host to communicate with another host or with VMs on another host. However, the bridge alone does not provide a fail-over mechanism. To handle the failure of a network interface, configure the bridge on top of a network bond. With the active-backup bonding mode, only one port in the bond is active at a time, and no switch configuration is required. If the active port fails, an alternate port becomes active, so the VMs connected through the bridge keep their network connectivity.

17.6.1. Configuring network interfaces on a network bond by using nmcli

To configure a network bond on the command line, use the nmcli utility.

Prerequisites

  • Two or more physical devices are installed on the server, and they are not configured in any NetworkManager connection profile.

Procedure

  1. Create a bond interface:

    # nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"

    This command creates a bond named bond0 that uses the active-backup mode. To pass additional bonding driver options, such as the MII monitoring interval, see the note after this procedure.

  2. Assign the Ethernet interfaces to the bond:

    # nmcli connection add type ethernet slave-type bond con-name bond0-port1 ifname enp7s0 master bond0
    # nmcli connection add type ethernet slave-type bond con-name bond0-port2 ifname enp8s0 master bond0

    These commands create profiles for enp7s0 and enp8s0, and add them to the bond0 connection.

  3. Configure the IPv4 settings:

    • To use DHCP, no action is required.
    • To set a static IPv4 address, network mask, default gateway, and DNS server to the bond0 connection, enter:

      # nmcli connection modify bond0 ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.253 ipv4.dns-search example.com ipv4.method manual
  4. Configure the IPv6 settings:

    • To use stateless address autoconfiguration (SLAAC), no action is required.
    • To set a static IPv6 address, network mask, default gateway, and DNS server to the bond0 connection, enter:

      # nmcli connection modify bond0 ipv6.addresses 2001:db8:1::1/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::fffd ipv6.dns-search example.com ipv6.method manual
  5. Optional: If you want to set any parameters on the bond ports, use the following command:

    # nmcli connection modify bond0-port1 bond-port.<parameter> <value>
  6. Configure Red Hat Enterprise Linux to enable all ports automatically when the bond is enabled:

    # nmcli connection modify bond0 connection.autoconnect-ports 1
  7. Activate the bond:

    # nmcli connection up bond0
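
Note

As mentioned in Virtual networking in bridged mode, media-independent interface (MII) monitoring is recommended for bonds that carry bridged VM traffic. If you want to enable it, include it in the bond options when you create the bond in step 1. The 100 ms polling interval below is only an illustrative value:

    # nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"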

Verification

  1. Temporarily remove the network cable from the host.

    Note that there is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as nmcli, show only the bonding driver’s ability to handle port configuration changes and not actual link failure events.

  2. Display the status of the bond:

    # cat /proc/net/bonding/bond0
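    The output is similar to the following abridged, illustrative example; the exact fields depend on the kernel version and on the bonding options you set:

    Bonding Mode: fault-tolerance (active-backup)
    Currently Active Slave: enp7s0
    MII Status: up
    ...
    Slave Interface: enp7s0
    MII Status: up
    ...
    Slave Interface: enp8s0
    MII Status: up

    If the active port loses its link, the Currently Active Slave entry changes to the other port.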

17.6.2. Configuring a network bridge for network bonds by using nmcli

A network bridge for network bonds combines a bond interface, which aggregates multiple network interfaces, with a bridge that VMs can connect to. As a result, VMs can access the network through the bonded network interfaces. Use the nmcli utility to create and edit the required connection profiles from the command line.

Procedure

  1. Create a bridge interface:

    # nmcli connection add type bridge con-name br0 ifname br0 ipv4.method disabled ipv6.method disabled
  2. Add the bond0 bond to the br0 bridge:

    # nmcli connection modify bond0 master br0
  3. Configure Red Hat Enterprise Linux to enable all ports automatically when the bridge is enabled:

    # nmcli connection modify br0 connection.autoconnect-ports 1
  4. Reactivate the bridge:

    # nmcli connection up br0

Verification

  • Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge:

    # ip link show master br0
    6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
        link/ether 52:54:00:38:a9:4d brd ff:ff:ff:ff:ff:ff
    ...
  • Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device:

    # bridge link show
    6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
    ...

    To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command.

Additional resources

  • nm-settings(5) man page
  • bridge(8) man page

17.6.3. Creating a virtual network in libvirt with an existing bond interface

To enable virtual machines (VMs) to use the br0 bridge with the bond, first add a virtual network to the libvirtd service that uses this bridge.

Prerequisites

  • You installed the libvirt package.
  • You started and enabled the libvirtd service.
  • You configured the br0 device with the bond on Red Hat Enterprise Linux.

Procedure

  1. Create the ~/bond0-bridge.xml file with the following content:

    <network>
    	<name>bond0-bridge</name>
    	<forward mode="bridge" />
    	<bridge name="br0" />
    </network>
  2. Use the ~/bond0-bridge.xml file to create a new virtual network in libvirt:

    # virsh net-define ~/bond0-bridge.xml
  3. Remove the ~/bond0-bridge.xml file:

    # rm ~/bond0-bridge.xml
  4. Start the bond0-bridge virtual network:

    # virsh net-start bond0-bridge
  5. Configure the bond0-bridge virtual network to start automatically when the libvirtd service starts:

    # virsh net-autostart bond0-bridge

Verification

  • Display the list of virtual networks:

    # virsh net-list
    Name              State    Autostart   Persistent
    ----------------------------------------------------
    bond0-bridge      active      yes         yes
    ...

Additional resources

  • virsh(1) man page

17.6.4. Configuring virtual machines to use a bond interface

To configure a VM to use a bridge device with a bond interface on the host, create a new VM that uses the bond0-bridge virtual network or update the settings of existing VMs to use this network.

Perform this procedure on the RHEL hosts.

Prerequisites

  • You configured the bond0-bridge virtual network in libvirtd.

Procedure

  1. To create a new VM and configure it to use the bond0-bridge network, pass the --network network:bond0-bridge option to the virt-install utility when you create the VM:

    # virt-install ... --network network:bond0-bridge
  2. To change the network settings of an existing VM:

    1. Connect the VM’s network interface to the bond0-bridge virtual network:

      # virt-xml <example_vm> --edit --network network=bond0-bridge
    2. Shut down the VM, and start it again:

      # virsh shutdown <example_vm>
      # virsh start <example_vm>

Verification

  • Display the virtual network interfaces of the VM on the host:

    # virsh domiflist <example_vm>
    Interface   Type     Source           Model    MAC
    -------------------------------------------------------------------
    vnet1       bridge   bond0-bridge   virtio   52:54:00:c5:98:1c
  • Display the interfaces attached to the br0 bridge:

    # ip link show master br0
    18: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 2a:53:bd:d5:b3:0a brd ff:ff:ff:ff:ff:ff
    
    19: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:c5:98:1c brd ff:ff:ff:ff:ff:ff
    ...

    Note that the libvirtd service dynamically updates the bridge’s configuration. When you start a VM which uses the bond0-bridge network, the corresponding vnet* device on the host appears as a port of the bridge.

Additional resources

  • virt-install(1) man page
  • virt-xml(1) man page
  • virsh(1) man page
  • arping(8) man page

17.7. Configuring the passt user-space connection

If you require non-privileged access to a virtual network, for example when using a session connection of libvirt, you can configure your virtual machine (VM) to use the passt networking back end.

Prerequisites

  • The passt package has been installed on your system.

    # dnf install passt

Procedure

  1. Open the XML configuration of the VM on which you want to use a passt connection. For example:

    # virsh edit <testguest1>
  2. In the <devices> section, add an <interface type='user'> element that uses passt as its backend type.

    For example, the following configuration sets up a passt connection that uses addresses and routes copied from the host interface associated with the first default route:

    <devices>
      [...]
      <interface type='user'>
        <backend type='passt'/>
      </interface>
    </devices>

    Optionally, when using passt, you can specify multiple <portForward> elements to forward incoming network traffic for the host to this VM interface. You can also customize interface IP addresses. For example:

    <devices>
      [...]
      <interface type='user'>
        <backend type='passt'/>
        <mac address="52:54:00:98:d8:b7"/>
        <source dev='eth0'/>
        <ip family='ipv4' address='192.0.2.1' prefix='24'/>
        <ip family='ipv6' address='::ffff:c000:201'/>
        <portForward proto='tcp'>
          <range start='2022' to='22'/>
        </portForward>
        <portForward proto='udp' address='1.2.3.4'>
           <range start='5000' end='5020' to='6000'/>
           <range start='5010' end='5015' exclude='yes'/>
        </portForward>
        <portForward proto='tcp' address='2001:db8:ac10:fd01::1:10' dev='eth0'>
          <range start='8080'/>
          <range start='4433' to='3444'/>
        </portForward>
      </interface>
    </devices>

    This example configuration sets up a passt connection with the following parameters:

    • The VM copies the network routes for forwarding traffic from the eth0 host interface.
    • The interface MAC is set to 52:54:00:98:d8:b7. If unset, a random one will be generated.
    • The IPv4 address is set to 192.0.2.1/24, and the IPv6 address is set to ::ffff:c000:201.
    • The TCP port 2022 on the host forwards its network traffic to port 22 on the VM.
    • The TCP address 2001:db8:ac10:fd01::1:10 on host interface eth0 and port 8080 forwards its network traffic to port 8080 on the VM. Port 4433 forwards to port 3444 on the VM.
    • The UDP address 1.2.3.4 and ports 5000 - 5009 and 5016 - 5020 on the host forward their network traffic to ports 6000 - 6009 and 6016 - 6020 on the VM.
  3. Save the XML configuration.

Verification

  • Start or restart the VM you configured with passt:

    # virsh reboot <vm-name>
    # virsh start <vm-name>

    If the VM boots successfully, it is now using the passt networking backend.
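  • Optionally, confirm from the host that the interface uses the passt backend by inspecting the VM configuration:

    # virsh dumpxml <vm-name>

    The <interface type='user'> element in the output contains the <backend type='passt'/> element you added.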
