6.5. Hosts and Networking

6.5.1. Refreshing Host Capabilities

When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.

Procedure 6.19. To Refresh Host Capabilities

  1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.
  2. Click the Refresh Capabilities button.
The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.

6.5.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.

Warning

The only way to change the IP address of a host in Red Hat Virtualization is to remove the host and then to add it again.
To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager.
To keep networking synchronized, put the host in maintenance mode and manually remove the management network from the host. This makes the host reachable over the new VLAN. Add the host back to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely.
The following warning message appears when the VLAN ID of the management network is changed:
Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?
Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync".

Important

You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Procedure 6.20. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

  1. Click the Hosts resource tab, and select the desired host.
  2. Click the Network Interfaces tab in the details pane.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
  4. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
    Alternatively, right-click the logical network and select a network interface from the drop-down menu.
  5. Configure the logical network:
    1. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
    2. From the IPv4 tab, select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.

      Note

      Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network's gateway instead of the default gateway used by the management network.

      Note

      The IPv6 tab should not be used as it is currently not supported.
    3. Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields:
      • Weighted Share: Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
      • Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
      • Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
      For more information on configuring host network quality of service, see Section 3.3, “Host Network Quality of Service”.
    4. To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key=value. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section B.1, “Explanation of bridge_opts Parameters”.
      forward_delay=1500
      gc_timer=3765
      group_addr=1:80:c2:0:0:0
      group_fwd_mask=0x0
      hash_elasticity=4
      hash_max=512
      hello_time=200
      hello_timer=70
      max_age=2000
      multicast_last_member_count=2
      multicast_last_member_interval=100
      multicast_membership_interval=26000
      multicast_querier=0
      multicast_querier_interval=25500
      multicast_query_interval=13000
      multicast_query_response_interval=1000
      multicast_query_use_ifaddr=0
      multicast_router=1
      multicast_snooping=1
      multicast_startup_query_count=2
      multicast_startup_query_interval=3125
    5. To configure ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example:
      --coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half
      This field can accept wildcards. For example, to apply the same option to all of this network's interfaces, use:
      --coalesce * rx-usecs 14 sample-interval 3
      The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See Section B.2, “How to Set Up Red Hat Virtualization Manager to Use Ethtool” for more information, and see the sketch that follows this procedure for an example of the commands involved. For more information on ethtool properties, see the manual page by running man ethtool on the command line.
    6. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key=value. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool (see the sketch that follows this procedure). See Section B.3, “How to Set Up Red Hat Virtualization Manager to Use FCoE” for more information.

      Note

      A separate, dedicated logical network is recommended for use with FCoE.
    7. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. A logical network cannot be edited or moved to another interface until it is synchronized.

      Note

      Networks are not considered synchronized if they have one of the following conditions:
      • The VM Network is different from the physical host network.
      • The VLAN identifier is different from the physical host network.
      • A Custom MTU is set on the logical network, and is different from the physical host network.
  6. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  8. Click OK.

Note

If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.
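
The ethtool_opts and fcoe custom properties used when configuring ethtool and FCoE in the procedure above must first be exposed on the Manager with the engine configuration tool. The following is a minimal sketch modeled on the ipv4_addrs example in Section 6.5.4, “Assigning Additional IPv4 Addresses to a Host Network”; the exact key syntax shown here is an assumption, so confirm it against Section B.2 and Section B.3 before running the commands.

    # engine-config -s 'UserDefinedNetworkCustomProperties=ethtool_opts=.*'
    # systemctl restart ovirt-engine.service

Note that engine-config -s replaces the current value of UserDefinedNetworkCustomProperties, so if you need more than one custom property (for example, both ethtool_opts and fcoe), define them together in a single command rather than setting them one at a time.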

6.5.3. Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Multiple VLANs can be added to a single network interface to separate the traffic on a single host.

Important

You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 6.21. Adding Multiple VLANs to a Network Interface using Logical Networks

  1. Click the Hosts resource tab, and select a host in the results list that is associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol from:
    • None
    • DHCP
    • Static
      If you selected Static, provide the IP and Subnet Mask.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box.
  8. Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network becomes operational.
You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
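
To confirm on the host that the VLAN devices were created as expected, you can inspect the links set up by VDSM. This is a minimal check, assuming (for illustration only) VLAN IDs 10 and 20 tagged onto a physical interface named eth0; VLAN devices conventionally take the <interface>.<VLAN ID> form, but confirm the names on your own hosts.

    # ip -d link show eth0.10
    # ip -d link show eth0.20

The detailed output (-d) includes a vlan line reporting the 802.1Q ID carried by each device.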

6.5.4. Assigning Additional IPv4 Addresses to a Host Network

A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC's configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-eth01) is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC.
The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see Appendix A, VDSM and Hooks.
In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses.

Procedure 6.22. Assigning Additional IPv4 Addresses to a Host Network

  1. On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package is available by default on Red Hat Virtualization Hosts but needs to be installed on Red Hat Enterprise Linux hosts.
    # yum install vdsm-hook-extra-ipv4-addrs
  2. On the Manager, run the following command to add the key:
    # engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'
  3. Restart the ovirt-engine service:
    # systemctl restart ovirt-engine.service
  4. In the Administration Portal, click the Hosts resource tab, and select the host for which additional IP addresses must be configured.
  5. Click the Network Interfaces tab in the details pane and click the Setup Host Networks button to open the Setup Host Networks window.
  6. Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon to open the Edit Management Network window.
  7. Select ipv4_addrs from the Custom Properties drop-down list and add the additional IP address and prefix (for example, 5.5.5.5/24). Multiple IP addresses must be comma-separated.
  8. Click OK.
  9. Select the Save network configuration check box.
  10. Click OK.
The additional IP addresses will not be displayed in the Manager, but you can run the command ip addr show on the host to confirm that they have been added.
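
For example, a minimal check on the host, assuming the additional address 5.5.5.5/24 from the procedure above was added to the ovirtmgmt network:

    # ip addr show ovirtmgmt

The additional address should appear as an extra inet entry on the device.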

6.5.5. Adding Network Labels to Host Network Interfaces

Network labels greatly simplify the administrative workload associated with assigning logical networks to host network interfaces.

Note

Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP rather than static addressing, because manually entering many static IP addresses does not scale.

Procedure 6.23. Adding Network Labels to Host Network Interfaces

  1. Click the Hosts resource tab, and select a host in the results list that is associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Click Labels, and right-click [New Label]. Select a physical network interface to label.
  5. Enter a name for the network label in the Label text field.
  6. Click OK.
You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.

6.5.6. Bonds

6.5.6.1. Bonding Logic in Red Hat Virtualization

The Red Hat Virtualization Manager Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.
Two factors that affect bonding logic are:
  • Are either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks?
Table 6.7. Bonding Scenarios and Their Results

NIC + NIC
The Create New Bond window is displayed, and you can configure a new bond device.
If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

NIC + Bond
The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
If the devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

Bond + Bond
If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

6.5.6.2. Bonds

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.
The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important

Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.
Bonding Modes
Red Hat Virtualization uses Mode 4 by default, but supports the following common bonding modes:
Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses modulo network interface card slave count. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 4 (IEEE 802.3ad policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load balancing policy)
Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.

6.5.6.3. Creating a Bond Device Using the Administration Portal

You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two. A bond can carry both VLAN tagged and non-VLAN traffic.

Procedure 6.24. Creating a Bond Device using the Administration Portal

  1. Click the Hosts resource tab, and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, the bond operation fails and suggests how to correct the compatibility issue.
  5. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  6. Click OK to create the bond and close the Create New Bond window.
  7. Assign a logical network to the newly created bond device.
  8. Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
  9. Click OK to accept the changes and close the Setup Host Networks window.
Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.
Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.
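
Once the bond is up, you can inspect its state on the host through the Linux bonding driver. This is a minimal check, assuming the bond was given the name bond0 in the Create New Bond window:

    # cat /proc/net/bonding/bond0

The output reports the bonding mode (for example, IEEE 802.3ad Dynamic link aggregation for Mode 4), the MII status of each slave interface, and, for Mode 4, the LACP aggregator details, which is useful when verifying the switch-side configuration described above.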

6.5.6.4. Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 6.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:
mode=4 xmit_hash_policy=layer2+3

Example 6.2. ARP Monitoring

ARP monitoring is useful for systems that cannot or do not report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 6.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 primary=eth0
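
The same /proc interface shown earlier can confirm that custom options such as those in the examples above took effect. A minimal sketch, again assuming a bond named bond0 (field names can vary slightly between kernel versions):

    # grep -E 'Bonding Mode|Transmit Hash Policy|ARP Polling Interval|Primary Slave' /proc/net/bonding/bond0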

6.5.7. Changing the FQDN of a Host

Use the following procedure to change the fully qualified domain name of hosts.

Procedure 6.25. Updating the FQDN of a Host

  1. Place the host into maintenance mode so the virtual machines are live migrated to another host. See Section 7.5.7, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide.
    # hostnamectl set-hostname NEW_FQDN
  4. Reboot the host.
  5. Re-register the host with the Manager. See Section 7.5.1, “Adding a Host to the Red Hat Virtualization Manager” for more information.
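
After the reboot and before re-registering the host, you can confirm that the new name is in place. This is a minimal check, assuming a systemd-based host:

    # hostnamectl status
    # hostname -f

Both should report the new fully qualified domain name before you add the host back to the Manager.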