2.4. Logical Networks
2.4.1. Logical Network Tasks
2.4.1.1. Performing Networking Tasks
Click each network name and use the tabs in the details view to perform functions including:
- Attaching or detaching the networks to clusters and hosts
- Removing network interfaces from virtual machines and templates
- Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource.
Do not change networking in a data center or a cluster if any hosts are running, as this risks making the hosts unreachable.
If you plan to use Red Hat Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Virtualization environment stops operating.
This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Virtualization:
- Directory Services
- DNS
- Storage
2.4.1.2. Creating a New Logical Network in a Data Center or Cluster
Create a logical network and define its use in a data center, or in clusters in a data center.
Procedure
- Click Compute → Data Centers or Compute → Clusters.
- Click the data center or cluster name. The Details view opens.
- Click the Logical Networks tab.
Open the New Logical Network window:
- From a data center details view, click New.
- From a cluster details view, click Add Network.
- Enter a Name, Description, and Comment for the logical network.
- Optional: Select the Enable VLAN tagging check box.
- Optional: Clear the VM Network check box.
- Optional: Select the Create on external provider checkbox. This disables the network label and the VM network. See External Providers for details.
- Select the External Provider. The External Provider list does not include external providers that are in read-only mode.
- To create an internal, isolated network, select ovirt-provider-ovn on the External Provider list and leave Connect to physical network cleared.
- Enter a new label or select an existing label for the logical network in the Network Label text field.
- For MTU, either select Default (1500) or select Custom and specify a custom value.
Important: After you create a network on an external provider, you cannot change the network’s MTU settings.
Important: If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. (A verification sketch follows this procedure.) For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414.
- If you selected ovirt-provider-ovn from the External Provider drop-down list, define whether the network should implement Security Groups. See Logical Network General Settings Explained for details.
- From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
- If the Create on external provider checkbox is selected, the Subnet tab is visible. From the Subnet tab, select the Create subnet and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
- From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
- Click OK.
If you entered a label for the logical network, it is automatically added to all host network interfaces with that label.
When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
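If you changed the MTU, a minimal way to confirm that the new value is in effect is to check the link on the host and inside each guest. The bridge name net1 and the guest interface eth0 below are placeholders:
$ ip -o link show net1 | grep -o 'mtu [0-9]*'
$ ip -o link show eth0 | grep -o 'mtu [0-9]*'
Run the first command on the host (against the logical network's bridge) and the second inside the guest after hot unplugging and replugging the vNIC or rebooting the virtual machine.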
2.4.1.3. Editing a Logical Network
A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts on how to synchronize your networks.
When you change the VM Network property of an existing logical network that is used as a display network, no new virtual machines can be started on a host that is already running virtual machines. Only hosts that have no running virtual machines after the change of the VM Network property can start new virtual machines.
Procedure
- Click Compute → Data Centers.
- Click the data center’s name. This opens the details view.
- Click the Logical Networks tab and select a logical network.
- Click Edit.
- Edit the necessary settings.
Note: You can edit the name of a new or existing network, with the exception of the default network, without having to stop the virtual machines.
- Click OK.
Multi-host network configuration automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.
2.4.1.4. Removing a Logical Network
You can remove a logical network from the Manager.
Procedure
- Click Compute → Data Centers.
- Click a data center’s name. This opens the details view.
- Click the Logical Networks tab to list the logical networks in the data center.
- Select a logical network and click Remove.
- Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider. The check box is grayed out if the external provider is in read-only mode.
- Click OK.
The logical network is removed from the Manager and is no longer available.
2.4.1.5. Configuring a Non-Management Logical Network as the Default Route
The default route used by hosts in a cluster is through the management network (ovirtmgmt). The following procedure provides instructions to configure a non-management logical network as the default route.
Prerequisite:
- If you are using the default_route custom property, you must clear the custom property from all attached hosts and then follow this procedure.
Configuring the Default Route Role
- Click Network → Networks.
- Click the name of the non-management logical network to configure as the default route to access its details.
- Click the Clusters tab.
- Click Manage Network. This opens the Manage Network window.
- Select the Default Route checkbox for the appropriate cluster(s).
- Click OK.
When networks are attached to a host, the default route of the host will be set on the network of your choice. It is recommended to configure the default route role before any host is added to your cluster. If your cluster already contains hosts, they may become out-of-sync until you sync your change to them.
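To confirm which network carries a host's default route once your change has been synchronized, you can check the routing table directly on the host. The bridge name and addresses in the example output are placeholders:
$ ip route show default
default via 10.10.10.1 dev storage proto static metric 100
If the default route role was moved correctly, the dev field names the bridge of the chosen non-management network rather than ovirtmgmt.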
Important Limitations with IPv6
- For IPv6, Red Hat Virtualization supports only static addressing.
- If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network.
- If the host and Manager are not on the same subnet, the Manager loses connectivity with the host because the IPv6 gateway has been removed.
- Moving the default route role to a non-management network removes the IPv6 gateway from the network interface and generates an alert: "On cluster clustername the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network."
2.4.1.6. Adding a static route on a host
You can use nmstate to add static routes to hosts. This method requires you to configure the hosts directly, without using Red Hat Virtualization Manager.
Static-routes you add are preserved as long as the related routed bridge, interface, or bond exists and has an IP address. Otherwise, the system removes the static route.
Except for adding or removing a static route on a host, always use the RHV Manager to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate).
Because a static route depends on its interface or bond existing and having an IP address, VM networks behave differently from non-VM networks (see the check after this list):
- VM networks are based on a bridge. Moving the network from one interface or bond to another does not affect the route on a VM network.
- Non-VM networks are based on an interface. Moving the network from one interface or bond to another deletes the route related to the non-VM network.
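A quick, hedged way to see which device a static route is bound to is to query it on the host. The destination and device names below reuse the example values from the procedure that follows and are placeholders for your own network:
$ ip route show 192.168.123.0/24
192.168.123.0/24 via 192.168.178.1 dev net1
For a VM network, the dev field names the bridge (for example net1), so the route survives moving the network between NICs or bonds; for a non-VM network it names the NIC or bond itself (for example eth1), so moving the network deletes the route.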
Prerequisites
This procedure requires nmstate, which is only available if your environment uses:
- Red Hat Virtualization Manager version 4.4
- Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts that are based on Red Hat Enterprise Linux 8
Procedure
- Connect to the host you want to configure.
- On the host, create a static_route.yml file with the following example content:
routes:
  config:
  - destination: 192.168.123.0/24
    next-hop-address: 192.168.178.1
    next-hop-interface: eth1
- Replace the example values shown with real values for your network.
- To route your traffic to a secondary added network, use next-hop-interface to specify an interface or network name.
  - To use a non-virtual machine network, specify an interface such as eth1.
  - To use a virtual machine network, specify a network name that is also the bridge name, such as net1.
Run this command:
$ nmstatectl set static_route.yml
Verification steps
Run the ip route command with the destination parameter value you set in static_route.yml. This should show the desired route. For example:
$ ip route | grep 192.168.123.0
2.4.1.7. Removing a static route on a host
You can use nmstate to remove static routes from hosts. This method requires you to configure the hosts directly, without using Red Hat Virtualization Manager.
Except for adding or removing a static route on a host, always use the RHV Manager to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate).
The custom static-route is preserved so long as its interface/bond exists and has an IP address. Otherwise, it will be removed.
As a result, VM networks behave differently from non-VM networks:
- VM networks are based on a bridge. Moving the network from one interface or bond to another does not affect the route on a VM network.
- Non-VM networks are based on an interface. Moving the network from one interface or bond to another deletes the route related to the non-VM network.
Prerequisites
This procedure requires nmstate, which is only available if your environment uses:
- Red Hat Virtualization Manager version 4.4
- Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts that are based on Red Hat Enterprise Linux 8
Procedure
- Connect to the host you want to reconfigure.
- On the host, edit the static_route.yml file.
- Insert a line state: absent as shown in the following example.
- Add the value of next-hop-interface between the brackets of interfaces: []. The result should look similar to the example shown here:
routes:
  config:
  - destination: 192.168.123.0/24
    next-hop-address: 192.168.178.1
    next-hop-interface: eth1
    state: absent
interfaces: [{"name": eth1}]
Run this command:
$ nmstatectl set static_route.yml
Verification steps
Run the ip route command with the destination parameter value you set in static_route.yml. This should no longer show the route. For example:
$ ip route | grep 192.168.123.0
2.4.1.8. Viewing or Editing the Gateway for a Logical Network
Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.
If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.
Red Hat Virtualization handles multiple gateways automatically whenever an interface goes up or down.
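As a quick check on the host, you can ask the routing table which gateway would be used to reach a destination on the logical network. The addresses and device name below are placeholders; the reply should show the logical network's own gateway and device rather than the management network's default gateway:
$ ip route get 192.0.2.10
192.0.2.10 via 192.0.2.1 dev storage src 192.0.2.5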
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab to list the network interfaces attached to the host, and their configurations.
- Click Setup Host Networks.
- Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.
2.4.1.9. Logical Network General Settings Explained
The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.
Field Name | Description |
---|---|
Name | The name of the logical network. This text field must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Note that while the name of the logical network can be longer than 15 characters and can contain non-ASCII characters, the on-host identifier (vdsm_name) will differ from the name you defined. See Mapping VDSM Names to Logical Network Names for instructions on displaying a mapping of these names. |
Description | The description of the logical network. This text field has a 40-character limit. |
Comment | A field for adding plain text, human-readable comments regarding the logical network. |
Create on external provider | Allows you to create the logical network on an OpenStack Networking instance that has been added to the Manager as an external provider. External Provider - Allows you to select the external provider on which the logical network will be created. |
Enable VLAN tagging | VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled. |
VM Network | Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box. |
Port Isolation | If this is set, virtual machines on the same host are prevented from communicating and seeing each other on this logical network. For this option to work on different hypervisors, the switches need to be configured with PVLAN/Port Isolation on the respective port/VLAN connected to the hypervisors, and not reflect back the frames with any hairpin setting. |
MTU | Choose either Default, which sets the maximum transmission unit (MTU) to the value given in the parenthesis (), or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected. IMPORTANT: If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414. |
Network Label | Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label. |
Security Groups |
Allows you to assign security groups to the ports on this logical network. |
2.4.1.10. Logical Network Cluster Settings Explained
The table below describes the settings for the Cluster tab of the New Logical Network window.
Field Name | Description |
---|---|
Attach/Detach Network to/from Cluster(s) | Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters. Name - the name of the cluster to which the settings will apply. This value cannot be edited. Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster. Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster. |
2.4.1.11. Logical Network vNIC Profiles Settings Explained
The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.
Field Name | Description |
---|---|
vNIC Profiles | Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile. Public - Allows you to specify whether the profile is available to all users. QoS - Allows you to specify a network quality of service (QoS) profile to the vNIC profile. |
2.4.1.12. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
Specify the traffic type for the logical network to optimize the network traffic flow.
Procedure
-
Click
. - Click the cluster’s name. This opens the details view.
- Click the Logical Networks tab.
- Click Manage Networks.
- Select the appropriate check boxes and radio buttons.
- Click .
Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.
2.4.1.13. Explanation of Settings in the Manage Networks Window
The table below describes the settings for the Manage Networks window.
Field | Description/Action |
---|---|
Assign | Assigns the logical network to all hosts in the cluster. |
Required | A Network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational. |
VM Network | A logical network marked "VM Network" carries network traffic relevant to the virtual machine network. |
Display Network | A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller. |
Migration Network | A logical network marked "Migration Network" carries virtual machine and storage migration traffic. If an outage occurs on this network, the management network (ovirtmgmt by default) will be used instead. |
2.4.1.14. Configuring virtual functions on a NIC
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV
Single Root I/O Virtualization (SR-IOV) enables you to use each PCIe endpoint as multiple separate devices by using physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs. Each PF can have many VFs. The number of VFs it can have depends on the specific type of PCIe device.
To configure SR-IOV-capable Network Interface Controllers (NICs), you use the Red Hat Virtualization Manager. There, you can configure the number of VFs on each NIC.
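For reference, you can inspect the PF and VF counts directly on a host. This is a read-only sketch; the interface name ens1f0 is a placeholder, and the Manager remains the supported place to change the number of VFs:
$ cat /sys/class/net/ens1f0/device/sriov_totalvfs   # maximum VFs the device supports
$ cat /sys/class/net/ens1f0/device/sriov_numvfs     # VFs currently configured
$ lspci | grep -i 'virtual function'                # VFs visible as PCI devices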
You can configure a VF like you would configure a standalone NIC, including:
- Assigning one or more logical networks to the VF.
- Creating bonded interfaces with VFs.
- Assigning vNICs to VFs for direct device passthrough.
By default, all virtual networks have access to the virtual functions. You can disable this default and specify which networks have access to a virtual function.
Prerequisite
- To attach a vNIC to a VF, the vNIC profile’s passthrough property must be enabled. For details, see Enabling Passthrough on a vNIC Profile.
Procedure
- Click Compute → Hosts.
- Click the name of an SR-IOV-capable host. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Select an SR-IOV-capable NIC, marked with the SR-IOV icon, and click the pencil icon.
- Optional: To change the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field.
Important: Changing the number of VFs deletes all previous VFs on the network interface before creating the new VFs. This includes any VFs that have virtual machines directly attached.
- Optional: To limit which virtual networks have access to the virtual functions, select Specific networks.
- Select the networks that should have access to the VF, or use Labels to select networks based on their network labels.
- Click OK.
- In the Setup Host Networks window, click OK.
2.4.2. Virtual Network Interface Cards (vNICs)
2.4.2.1. vNIC Profile Overview
A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.
2.4.2.2. Creating or Editing a vNIC Profile
Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.
If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab.
- Click New or Edit.
- Enter the Name and Description of the profile.
- Select the relevant Quality of Service policy from the QoS list.
- Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines. For more information on network filters, see Applying network filtering in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.
- Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS, network filtering, and port mirroring as these are not compatible. For more information on passthrough, see Enabling Passthrough on a vNIC Profile.
- If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
- Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
- Select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
- Click OK.
Apply this profile to users and groups to regulate their network bandwidth. If you edited a vNIC profile, you must either restart the virtual machine, or hot unplug and then hot plug the vNIC if the guest operating system supports vNIC hot plug and hot unplug.
2.4.2.3. Explanation of Settings in the VM Interface Profile Window
Field Name | Description |
---|---|
Network | A drop-down list of the available networks to apply the vNIC profile to. |
Name | The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters. |
Description | The description of the vNIC profile. This field is recommended but not mandatory. |
QoS | A drop-down list of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC. |
Network Filter | A drop-down list of the available network filters to apply to the vNIC profile. Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines. The default filter is vdsm-no-mac-spoofing. Note: Red Hat no longer supports disabling filters by setting the EnableMACAntiSpoofingFilterRules parameter to false using the engine-config tool. Use the No Network Filter option instead. |
Passthrough | A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine. QoS, network filters, and port mirroring are disabled in the vNIC profile if passthrough is enabled. |
Migratable | A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs. |
Failover | A drop-down menu to select available vNIC profiles that act as a failover device. Available only when the Passthrough and Migratable check boxes are checked. |
Port Mirroring | A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default. For further details, see Port Mirroring in the Technical Reference. |
Device Custom Properties | A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively. |
Allow all users to use this Profile | A check box to toggle the availability of the profile to all users in the environment. It is selected by default. |
2.4.2.4. Enabling Passthrough on a vNIC Profile
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV
The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment.
The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS, network filters, and port mirroring cannot be enabled on the same profile.
For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in Red Hat Virtualization, see Hardware Considerations for Implementing SR-IOV.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab to list all vNIC profiles for that logical network.
- Click New.
- Enter the Name and Description of the profile.
- Select the Passthrough check box.
- Optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
- If necessary, select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
- Click OK.
The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new PCI Passthrough vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Editing Host Network Interfaces and Assigning Logical Networks to Hosts, and Adding a New Network Interface in the Virtual Machine Management Guide.
2.4.2.5. Enabling a vNIC profile for SR-IOV migration with failover
Failover allows the selection of a profile that acts as a failover device during virtual machine migration when the VF needs to be detached, preserving virtual machine communication with minimal interruption.
Failover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope.
Prerequisites
- The Passthrough and Migratable check boxes of the profile are selected.
- The failover network is attached to the host.
- To make a vNIC profile that acts as failover editable, you must first remove any failover references to it.
- vNIC profiles that can act as failover are profiles that are not selected as Passthrough and are not connected to an external network.
Procedure
- In the Administration Portal, go to Network → vNIC Profiles, select the vNIC profile, click Edit, and select a Failover vNIC profile from the drop-down list.
- Click OK to save the profile settings.
Attaching two vNIC profiles that reference the same failover vNIC profile to the same virtual machine will fail in libvirt.
2.4.2.6. Removing a vNIC Profile
Remove a vNIC profile to delete it from your virtualized environment.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab to display available vNIC profiles.
- Select one or more profiles and click Remove.
- Click OK.
2.4.2.7. Assigning Security Groups to vNIC Profiles
This feature is only available when ovirt-provider-ovn is added as an external network provider. Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking on the ovirt-provider-ovn. For more information, see Project Security Management in the Red Hat OpenStack Platform Users and Identity Management Guide.
You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.
A security group is identified using the ID of that security group as registered in the Open Virtual Network (OVN) External Network Provider. You can find the IDs of security groups for a given tenant using the OpenStack Networking API; see List Security Groups in the OpenStack API Reference.
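As a hedged example, the standard OpenStack Networking v2.0 call below lists the security groups known to the provider; the id field of each entry is the value to enter in the vNIC profile. The endpoint URL and token are placeholders and depend on how your ovirt-provider-ovn is deployed and authenticated:
$ PROVIDER_URL=https://ovn-provider.example.com:9696
$ TOKEN=<auth token issued by the provider>
$ curl -s -H "X-Auth-Token: $TOKEN" "$PROVIDER_URL/v2.0/security-groups" | python3 -m json.tool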
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab.
- Click New, or select an existing vNIC profile and click Edit.
- From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
- In the text field, enter the ID of the security group to attach to the vNIC profile.
- Click OK.
You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.
2.4.2.8. User Permissions for vNIC Profiles
Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.
User Permissions for vNIC Profiles
- Click Network → vNIC Profiles.
- Click the vNIC profile’s name. This opens the details view.
- Click the Permissions tab to show the current user permissions for the profile.
- Click Add or Remove to change user permissions for the vNIC profile.
- In the Add Permissions to User window, click My Groups to display your user groups. You can use this option to grant permissions to other users in your groups.
You have configured user permissions for a vNIC profile.
2.4.3. External Provider Networks
2.4.3.1. Importing Networks From External Providers
To use networks from an Open Virtual Network (OVN), register the provider with the Manager. See Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Manager so the networks can be used by virtual machines.
Procedure
- Click Network → Networks.
- Click Import.
- From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
- Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
- You can customize the name of the network that you are importing. To customize the name, click the network’s name in the Name column, and change the text.
- From the Data Center drop-down list, select the data center into which the networks will be imported.
- Optional: Clear the Allow All check box to prevent that network from being available to all users.
- Click Import.
The selected networks are imported into the target data center and can be attached to virtual machines. See Adding a New Network Interface in the Virtual Machine Management Guide for more information.
2.4.3.2. Limitations to Using External Provider Networks
The following limitations apply to using logical networks imported from an external provider in a Red Hat Virtualization environment.
- Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
- The same logical network can be imported more than once, but only to different data centers.
- You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network.
- Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
- If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine.
- Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.
2.4.3.3. Configuring Subnets on External Provider Logical Networks
A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses.
While the Red Hat Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager.
If you add Open Virtual Network (OVN) (ovirt-provider-ovn) as an external network provider, multiple subnets can be connected to each other by routers. To manage these routers, you can use the OpenStack Networking API v2.0. Please note, however, that ovirt-provider-ovn has a limitation: Source NAT (enable_snat in the OpenStack API) is not implemented.
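As a hedged sketch, the standard OpenStack Networking v2.0 requests below create a router and attach an existing subnet to it; repeat the attach call for each subnet you want to connect. The endpoint, token, and UUIDs are placeholders:
$ curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"router": {"name": "inter-subnet-router"}}' "$PROVIDER_URL/v2.0/routers"
$ curl -s -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"subnet_id": "SUBNET_UUID"}' "$PROVIDER_URL/v2.0/routers/ROUTER_UUID/add_router_interface"
Because enable_snat is not implemented by ovirt-provider-ovn, such a router only provides routing between the connected subnets.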
2.4.3.4. Adding Subnets to External Provider Logical Networks
Create a subnet on a logical network provided by an external provider.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the Subnets tab.
- Click New.
- Enter a Name and CIDR for the new subnet.
- From the IP Version drop-down list, select either IPv4 or IPv6.
- Click OK.
For IPv6, Red Hat Virtualization supports only static addressing.
2.4.3.5. Removing Subnets from External Provider Logical Networks
Remove a subnet from a logical network provided by an external provider.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the Subnets tab.
- Select a subnet and click Remove.
- Click OK.
2.4.3.6. Assigning Security Groups to Logical Networks and Ports
This feature is only available when Open Virtual Network (OVN) is added as an external network provider (as ovirt-provider-ovn). Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking API v2.0 or Ansible.
A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network. You can also use security groups to filter traffic at the port level.
In Red Hat Virtualization 4.2.7, security groups are disabled by default.
Procedure
- Click Compute → Clusters.
- Click the cluster name. This opens the details view.
- Click the Logical Networks tab.
- Click Add Network and define the properties, ensuring that you select ovirt-provider-ovn from the External Providers drop-down list. For more information, see Creating a new logical network in a data center or cluster.
- Select Enabled from the Security Group drop-down list. For more details see Logical Network General Settings Explained.
- Click OK.
- Create security groups using either OpenStack Networking API v2.0 or Ansible, as shown in the sketch after this procedure.
- Create security group rules using either OpenStack Networking API v2.0 or Ansible.
- Update the ports with the security groups that you defined using either OpenStack Networking API v2.0 or Ansible.
- Optional: Define whether the security feature is enabled at the port level. Currently, this is only possible using the OpenStack Networking API. If the port_security_enabled attribute is not set, it defaults to the value specified in the network to which it belongs.
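The following is a hedged sketch of the security group steps above using the standard OpenStack Networking v2.0 API; Ansible modules are an equivalent alternative. The endpoint, token, group UUID, and rule values are placeholders:
$ curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"security_group": {"name": "web"}}' "$PROVIDER_URL/v2.0/security-groups"
$ curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"security_group_rule": {"security_group_id": "GROUP_UUID", "direction": "ingress", "ethertype": "IPv4", "protocol": "tcp", "port_range_min": 80, "port_range_max": 80}}' "$PROVIDER_URL/v2.0/security-group-rules"
The first request creates the group; the second adds a rule that allows inbound TCP port 80. Use the group id returned by the first call in the second call and when updating ports.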
2.4.4. Hosts and Networking
2.4.4.1. Network Manager Stateful Configuration (nmstate)
Version 4.4 of Red Hat Virtualization (RHV) uses Network Manager Stateful Configuration (nmstate) to configure networking for RHV hosts that are based on RHEL 8. RHV version 4.3 and earlier use interface configuration (ifcfg) network scripts to manage host networking.
To use nmstate, upgrade the Red Hat Virtualization Manager and hosts as described in the RHV Upgrade Guide.
As an administrator, you do not need to install or configure nmstate. It is enabled by default and runs in the background.
Always use RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration.
The change to nmstate is nearly transparent. It only changes how you configure host networking in the following ways:
- After you add a host to a cluster, always use the RHV Manager to modify the host network.
- Modifying the host network without using the Manager can create an unsupported configuration.
- To fix an unsupported configuration, you replace it with a supported one by using the Manager to synchronize the host network. For details, see Synchronizing Host Networks.
- The only situation where you modify host networks outside the Manager is to configure a static route on a host. For more details, see Adding a static route on a host.
The change to nmstate improves how RHV Manager applies configuration changes you make in Cockpit and Anaconda before adding the host to the Manager. This fixes some issues, such as BZ#1680970 Static IPv6 Address is lost on host deploy if NM manages the interface.
If you use dnf or yum to manually update the nmstate package, restart vdsmd and supervdsmd on the host. For example:
# dnf update nmstate
# systemctl restart vdsmd supervdsmd
If you use dnf or yum to manually update the Network Manager package, restart NetworkManager on the host. For example:
# dnf update NetworkManager
# systemctl restart NetworkManager
2.4.4.2. Refreshing Host Capabilities
When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.
Procedure
- Click Compute → Hosts and select a host.
- Click Management → Refresh Capabilities.
The list of network interface cards in the Network Interfaces tab for the selected host is updated. Any new network interface cards can now be used in the Manager.
2.4.4.3. Editing Host Network Interfaces and Assigning Logical Networks to Hosts
You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.
The only way to change the IP address of a host in Red Hat Virtualization is to remove the host and then to add it again.
To change the VLAN settings of a host, see Editing VLAN Settings.
You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.
If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current configuration. This can help to prevent incorrect configuration. Check the following information prior to assigning logical networks:
- Port Description (TLV type 4) and System Name (TLV type 5) help to detect to which ports and on which switch the host’s interfaces are patched.
- Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
Note: If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs.
Configure the logical network:
- Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
From the IPv4 tab, select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.
Note: For IPv6, only static IPv6 addressing is supported. To configure the logical network, select the IPv6 tab and make the following entries:
- Set Boot Protocol to Static.
- For Routing Prefix, enter the length of the prefix using a forward slash and decimals. For example: /48
- IP: The complete IPv6 address of the host network interface. For example: 2001:db8::1:0:0:6
- Gateway: The source router’s IPv6 address. For example: 2001:db8::1:0:0:1
Note: If you change the host’s management network IP address, you must reinstall the host for the new IP address to be configured.
Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network’s gateway instead of the default gateway used by the management network.
Important: Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported.
Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields:
- Weighted Share: Signifies how much of the logical link’s capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
- Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
- Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key=value. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Explanation of bridge_opts Parameters.
forward_delay=1500
group_addr=1:80:c2:0:0:0
group_fwd_mask=0x0
hash_max=512
hello_time=200
max_age=2000
multicast_last_member_count=2
multicast_last_member_interval=100
multicast_membership_interval=26000
multicast_querier=0
multicast_querier_interval=25500
multicast_query_interval=13000
multicast_query_response_interval=1000
multicast_query_use_ifaddr=0
multicast_router=1
multicast_snooping=1
multicast_startup_query_count=2
multicast_startup_query_interval=3125
To configure ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example:
--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half
This field can accept wild cards. For example, to apply the same option to all of this network’s interfaces, use:
--coalesce * rx-usecs 14 sample-interval 3
The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Manager to Use Ethtool for more information. For more information on ethtool properties, see the manual page by typing man ethtool in the command line.
To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key=value. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Manager to Use FCoE for more information.
Note: A separate, dedicated logical network is recommended for use with FCoE.
- To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network’s default route. See Configuring a Default Route for more information.
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Synchronizing host networks.
- Select the Verify connectivity between Host and Engine check box to check network connectivity. This action only works if the host is in maintenance mode.
- Click .
If not all network interface cards for the host are displayed, click Management → Refresh Capabilities to update the list of network interface cards available for that host.
Troubleshooting
In some cases, making multiple concurrent changes to a host network configuration using the Setup Host Networks window or the setupNetwork command fails with an "Operation failed: [Cannot setup Networks]. Another Setup Networks or Host Refresh process in progress on the host. Please try later." error in the event log. This error indicates that some of the changes were not configured on the host. This happens because, to preserve the integrity of the configuration state, only a single setup network command can be processed at a time. Other concurrent configuration commands are queued for up to a default timeout of 20 seconds. To help prevent the above failure from happening, use the engine-config command to increase the timeout period of SetupNetworksWaitTimeoutSeconds beyond 20 seconds. For example:
# engine-config --set SetupNetworksWaitTimeoutSeconds=40
2.4.4.4. Synchronizing Host Networks
The Manager defines a network interface as out-of-sync when the definition of the interface on the host differs from the definitions stored by the Manager.
Out-of-sync networks appear with an Out-of-sync icon in the host’s Network Interfaces tab and with the same icon in the Setup Host Networks window.
When a host’s network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network.
Understanding How a Host Becomes out-of-sync
A host will become out of sync if:
You make configuration changes on the host rather than using the Edit Logical Networks window, for example:
- Changing the VLAN identifier on the physical host.
- Changing the Custom MTU on the physical host.
- You move a host to a different data center with the same network name, but with different values/parameters.
- You change a network’s VM Network property by manually removing the bridge from the host.
If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414.
Preventing Hosts from Becoming Unsynchronized
Following these best practices will prevent your host from becoming unsynchronized:
- Use the Administration Portal to make changes rather than making changes locally on the host.
- Edit VLAN settings according to the instructions in Editing VLAN Settings.
Synchronizing Hosts
Synchronizing a host’s network interface definitions involves using the definitions from the Manager and applying them to the host. If these are not the definitions that you require, after synchronizing your hosts update their definitions from the Administration Portal. You can synchronize a host’s networks on three levels:
- Per logical network
- Per host
- Per cluster
Synchronizing Host Networks on the Logical Network Level
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Hover your cursor over the unsynchronized network and click the pencil icon. This opens the Edit Network window.
- Select the Sync network check box.
- Click OK to save the network change.
- Click OK to close the Setup Host Networks window.
Synchronizing a Host’s Networks on the Host level
- Click the Sync All Networks button in the host’s Network Interfaces tab to synchronize all of the host’s unsynchronized network interfaces.
Synchronizing a Host’s Networks on the Cluster level
- Click the Sync All Networks button in the cluster’s Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster.
You can also synchronize a host’s networks via the REST API. See syncallnetworks in the REST API Guide.
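For example, the syncallnetworks action can be called with curl. The Manager FQDN, credentials, and host ID below are placeholders; see the REST API Guide for the authoritative request format:
$ curl -s -k -X POST -H "Content-Type: application/xml" -H "Accept: application/xml" -u admin@internal:PASSWORD -d '<action/>' "https://manager.example.com/ovirt-engine/api/hosts/HOST_UUID/syncallnetworks"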
2.4.4.5. Editing a Host’s VLAN Settings
To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager.
To keep networking synchronized, do the following:
- Put the host in maintenance mode.
- Manually remove the management network from the host. This will make the host reachable over the new VLAN.
- Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely.
The following warning message appears when the VLAN ID of the management network is changed:
Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?
Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync".
If you change the management network’s VLAN ID, you must reinstall the host to apply the new VLAN ID.
2.4.4.6. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
Multiple VLANs can be added to a single network interface to separate traffic on the one host.
You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
Edit the logical networks:
- Hover your cursor over an assigned logical network and click the pencil icon.
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
Select a Boot Protocol:
- None
- DHCP
- Static
- Provide the IP and Subnet Mask.
- Click OK.
- Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
- Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host in the cluster. After this is done, the network will become operational.
This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
2.4.4.6.1. Copying host networks
To save time, you can copy a source host’s network configuration to a target host in the same cluster.
Copying the network configuration includes:
- Logical networks attached to the host, except the ovirtmgmt management network
- Bonds attached to interfaces
Limitations
- Do not copy network configurations that contain static IP addresses. Doing this sets the boot protocol in the target host to none.
- Copying a configuration to a target host with the same interface names as the source host but different physical network connections produces a wrong configuration.
- The target host must have an equal or greater number of interfaces than the source host. Otherwise, the operation fails.
- Copying QoS, DNS, and custom_properties is not supported.
- Network interface labels are not copied.
Copying host networks replaces ALL network settings on the target host except its attachment to the ovirtmgmt management network.
Prerequisites
- The number of NICs on the target host must be equal or greater than those on the source host. Otherwise, the operation fails.
- The hosts must be in the same cluster.
Procedure
- In the Administration Portal, click Compute → Hosts.
- Select the source host whose configuration you want to copy.
- Click Copy Host Networks. This opens the Copy Host Networks window.
- Use Target Host to select the host that should receive the configuration. The list only shows hosts that are in the same cluster.
- Click OK.
- Verify the network settings of the target host.
Tips
- Selecting multiple hosts disables the Copy Host Networks button and context menu item.
- Instead of using the Copy Host Networks button, you can right-click a host and select Copy Host Networks from the context menu.
- The Copy Host Networks button is also available in any host’s details view.
2.4.4.7. Assigning Additional IPv4 Addresses to a Host Network
A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC’s configuration file is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC.
The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see VDSM and Hooks.
In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses.
Procedure
On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package needs to be installed manually on Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts.
# dnf install vdsm-hook-extra-ipv4-addrs
On the Manager, run the following command to add the key:
# engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'
Restart the ovirt-engine service:
# systemctl restart ovirt-engine.service
- In the Administration Portal, click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab and click Setup Host Networks.
- Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon.
- Select ipv4_addrs from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated.
- Click OK to close the Edit Network window.
- Click OK to close the Setup Host Networks window.
The additional IP addresses will not be displayed in the Manager, but you can run the command ip addr show on the host to confirm that they have been added.
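For example (a sketch only; the network name ovirtmgmt and the addresses in this procedure are illustrative), you can list the addresses on the network’s interface and confirm the key that was set on the Manager:
# ip addr show ovirtmgmt | grep 'inet '
# engine-config -g UserDefinedNetworkCustomProperties
The first command should list the primary address plus every address supplied through the ipv4_addrs custom property; the second prints the custom property definition you added with engine-config -s earlier in this procedure.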
2.4.4.8. Adding Network Labels to Host Network Interfaces
Network labels greatly simplify the administrative workload of assigning logical networks to host network interfaces. Setting a label on a role network (for instance, a migration network or a display network) deploys that network on all hosts in bulk. Such mass additions of networks rely on DHCP; this method was chosen over static addressing because typing in many static IP addresses does not scale.
There are two methods of adding labels to a host network interface:
- Manually, in the Administration Portal
- Automatically, with the LLDP Labeler service
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Click Labels and right-click [New Label]. Select a physical network interface to label.
- Enter a name for the network label in the Label text field.
- Click OK.
You can automate the process of assigning labels to host network interfaces in the configured list of clusters with the LLDP Labeler service, as described in the following section.
2.4.4.8.1. Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
Prerequisites
- The interfaces must be connected to a Juniper switch.
- The Juniper switch must be configured to provide the Port VLAN using LLDP.
Procedure
- Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - username - the username of the Manager administrator. The default is admin@internal.
  - password - the password of the Manager administrator. The default is 123456.
- Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters whose names start with Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
  - api_url - the full URL of the Manager’s API. The default is https://Manager_FQDN/ovirt-engine/api.
  - ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
  - auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
  - auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
- Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer. The default is 1h.
- Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
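If you want to confirm that the hourly schedule is active, you can inspect the systemd timer (a minimal check; this assumes the timer unit is named ovirt-lldp-labeler.timer, matching the service above):
# systemctl list-timers 'ovirt-lldp-labeler*'
The output shows the next and last activation times, which is useful after changing OnUnitActiveSec.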
You have added a network label to a host network interface. Newly created logical networks with the same label are automatically assigned to all host network interfaces with that label. Removing a label from a logical network automatically removes that logical network from all host network interfaces with that label.
2.4.4.9. Changing the FQDN of a Host
Use the following procedure to change the fully qualified domain name of hosts.
Procedure
- Place the host into maintenance mode so the virtual machines are live migrated to another host. See Moving a host to maintenance mode for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
- Click Remove, and click OK to remove the host from the Administration Portal.
Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide.
# hostnamectl set-hostname NEW_FQDN
- Reboot the host.
- Re-register the host with the Manager. See Adding standard hosts to the Manager for more information.
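To confirm that the new name took effect on the host (NEW_FQDN is the placeholder used above), you can run:
# hostnamectl status
# hostname -f
Both commands should report the new fully qualified domain name.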
2.4.4.9.1. IPv6 Networking Support
Red Hat Virtualization supports static IPv6 networking in most contexts.
Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it.
Limitations for IPv6
- Only static IPv6 addressing is supported. Dynamic IPv6 addressing with DHCP or Stateless Address Autoconfiguration is not supported.
- Dual-stack addressing, IPv4 and IPv6, is not supported.
- OVN networking can be used with only IPv4 or IPv6.
- Switching clusters from IPv4 to IPv6 is not supported.
- Only a single gateway per host can be set for IPv6.
- If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. The host and Manager should have the same IPv6 gateway. If the host and Manager are not on the same subnet, the Manager might lose connectivity with the host because the IPv6 gateway was removed.
- Using a glusterfs storage domain with an IPv6-addressed gluster server is not supported.
2.4.4.9.2. Setting Up and Configuring SR-IOV
This topic summarizes the steps for setting up and configuring SR-IOV, with links out to topics that cover each step in detail.
Prerequisites
Set up your hardware in accordance with the Hardware Considerations for Implementing SR-IOV.
Procedure
To set up and configure SR-IOV, complete the following tasks.
Notes
- The number of the 'passthrough' vNICs depends on the number of available virtual functions (VFs) on the host. For example, to run a virtual machine (VM) with three SR-IOV cards (vNICs), the host must have three or more VFs enabled.
- Hotplug and unplug are supported.
- Live migration is supported.
- To migrate a VM, the destination host must also have enough available VFs to receive the VM. During the migration, the VM releases a number of VFs on the source host and occupies the same number of VFs on the destination host.
- On the host, you will see a device, link, or iface like any other interface. That device disappears when it is attached to a VM, and reappears when it is released.
- Avoid attaching a host device directly to a VM when using the SR-IOV feature.
- To use a VF as a trunk port with several VLANs and configure the VLANs within the Guest, see Cannot configure VLAN on SR-IOV VF interfaces inside the Virtual Machine.
Here is an example of what the libvirt XML for the interface would look like:
<interface type='hostdev'>
  <mac address='00:1a:yy:xx:vv:xx'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
  </source>
  <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
Troubleshooting
The following example shows you how to get diagnostic information about the VFs attached to an interface.
# ip -s link show dev enp5s0f0
1: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 86:e2:ba:c2:50:f0 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    30931671   218401   0       0        0        19165434
    TX: bytes  packets  errors  dropped  carrier  collsns
    997136     13661    0       0        0        0
    vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
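You can also check how many virtual functions the physical function supports and how many are currently enabled through sysfs (a minimal sketch; enp5s0f0 is the example interface from the output above):
# cat /sys/class/net/enp5s0f0/device/sriov_totalvfs
# cat /sys/class/net/enp5s0f0/device/sriov_numvfs
sriov_totalvfs reports the maximum number of VFs the device supports, and sriov_numvfs reports how many are currently enabled.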
2.4.4.9.2.1. Additional Resources
2.4.5. Network Bonding
2.4.5.1. Bonding methods
Network bonding combines multiple NICs into a bond device, with the following advantages:
- The transmission speed of bonded NICs is greater than that of a single NIC.
- Network bonding provides fault tolerance, because the bond device will not fail unless all its NICs fail.
Using NICs of the same make and model ensures that they support the same bonding options and modes.
Red Hat Virtualization’s default bonding mode, (Mode 4) Dynamic Link Aggregation, requires a switch that supports 802.3ad.
The logical networks of a bond must be compatible. A bond can support only one non-VLAN logical network. The rest of the logical networks must have unique VLAN IDs.
Bonding must be enabled for the switch ports. Consult the manual provided by your switch vendor for specific instructions.
You can create a network bond device using one of the following methods:
- Manually, in the Administration Portal, for a specific host
- Automatically, using LLDP Labeler, for unbonded NICs of all hosts in a cluster or data center
If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring iSCSI multipathing.
2.4.5.2. Creating a Bond Device in the Administration Portal
You can create a bond device on a specific host in the Administration Portal. The bond device can carry both VLAN-tagged and untagged traffic.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab to list the physical network interfaces attached to the host.
- Click Setup Host Networks.
- Check the switch configuration. If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, hover your cursor over a physical NIC to view the switch port’s aggregation configuration.
Drag and drop a NIC onto another NIC or onto a bond.
Note: Dragging one NIC onto another NIC creates a new bond. Dragging a NIC onto an existing bond adds the NIC to that bond.
If the logical networks are incompatible, the bonding operation is blocked.
Select the Bond Name and Bonding Mode from the drop-down menus. See Bonding Modes for details.
If you select the Custom bonding mode, you can enter bonding options in the text field, as in the following examples:
- If your environment does not report link states with ethtool, you can set ARP monitoring by entering mode=1 arp_interval=1 arp_ip_target=192.168.0.2.
- You can designate a NIC with higher throughput as the primary interface by entering mode=1 primary=eth0.
For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.
- Click OK.
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
- Optionally, you can select Verify connectivity between Host and Engine if the host is in maintenance mode.
- Click OK.
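To verify on the host that the bond came up in the expected mode and that all member NICs joined it (a minimal check; bond0 is an assumed bond name), you can run:
# cat /proc/net/bonding/bond0
# ip -d link show type bond
The first command shows the bonding mode, the link state of each member NIC, and, for 802.3ad, the aggregator information; the second lists all bond devices with their details.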
2.4.5.3. Creating a Bond Device with the LLDP Labeler Service
The LLDP Labeler service enables you to create a bond device automatically with all unbonded NICs, for all the hosts in one or more clusters or in the entire data center. The bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
NICs with incompatible logical networks cannot be bonded.
2.4.5.3.1. Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
Prerequisites
- The interfaces must be connected to a Juniper switch.
- The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP.
Procedure
- Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - username - the username of the Manager administrator. The default is admin@internal.
  - password - the password of the Manager administrator. The default is 123456.
- Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters whose names start with Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
  - api_url - the full URL of the Manager’s API. The default is https://Manager_FQDN/ovirt-engine/api.
  - ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
  - auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
  - auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
- Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer. The default is 1h.
- Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
2.4.5.4. Bonding Modes
The packet dispersal algorithm is determined by the bonding mode. (See the Linux Ethernet Bonding Driver HOWTO for details.) Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
Red Hat Virtualization supports the following bonding modes, because they can be used in virtual machine (bridged) networks:
(Mode 1) Active-Backup
- One NIC is active. If the active NIC fails, one of the backup NICs replaces it as the only active NIC in the bond. The MAC address of this bond is visible only on the network adapter port. This prevents MAC address confusion that might occur if the MAC address of the bond were to change, reflecting the MAC address of the new active NIC.
(Mode 2) Load Balance (balance-xor)
- The NIC that transmits packets is selected by performing an XOR operation on the source and destination MAC addresses, modulo the total number of NICs. This algorithm ensures that the same NIC is selected for each destination MAC address.
(Mode 3) Broadcast
- Packets are transmitted to all NICs.
(Mode 4) Dynamic Link Aggregation (802.3ad) (Default)
- The NICs are aggregated into groups that share the same speed and duplex settings. All the NICs in the active aggregation group are used.
Note: (Mode 4) Dynamic Link Aggregation (802.3ad) requires a switch that supports 802.3ad. The bonded NICs must have the same aggregator IDs. Otherwise, the Manager displays a warning exclamation mark icon on the bond in the Network Interfaces tab and the ad_partner_mac value of the bond is reported as 00:00:00:00:00:00. You can check the aggregator IDs by entering the following command:
# cat /proc/net/bonding/bond0
The following bonding modes are incompatible with virtual machine logical networks and therefore only non-VM logical networks can be attached to bonds using these modes:
(Mode 0) Round-Robin
- The NICs transmit packets in sequential order. Packets are transmitted in a loop that begins with the first available NIC in the bond and ends with the last available NIC in the bond. Subsequent loops start with the first available NIC.
(Mode 5) Balance-TLB, also called Transmit Load-Balance
- Outgoing traffic is distributed, based on the load, over all the NICs in the bond. Incoming traffic is received by the active NIC. If the NIC receiving incoming traffic fails, another NIC is assigned.
(Mode 6) Balance-ALB, also called Adaptive Load-Balance
- (Mode 5) Balance-TLB is combined with receive load-balancing for IPv4 traffic. ARP negotiation is used for balancing the receive load.
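If you need to check which mode an existing bond is using directly on a host (a small sketch; bond0 is an assumed bond name), the bonding driver exposes it through sysfs:
# cat /sys/class/net/bond0/bonding/mode
The output contains the mode name and number, for example 802.3ad 4 for the default (Mode 4) Dynamic Link Aggregation.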