5.5. Hosts and Networking

5.5.1. Refreshing Host Capabilities

When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.

Procedure 5.19. To Refresh Host Capabilities

  1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.
  2. Click the Refresh Capabilities button.
The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.
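If you want to confirm that the host operating system itself detects the new card before refreshing capabilities, you can check from a root shell on the host; if the card is not visible to the kernel, the Manager cannot display it either. A quick check (the interface name eth2 is a hypothetical example):
# ip link show
# ethtool -i eth2
The first command lists all network devices known to the kernel; the second reports the driver and firmware version of a specific card.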

5.5.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.

Important

You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Procedure 5.20. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

  1. Click the Hosts resource tab, and select the desired host.
  2. Click the Network Interfaces tab in the details pane.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
  4. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
    Alternatively, right-click the logical network and select a network interface from the drop-down menu.
  5. Configure the logical network:
    1. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
    2. Select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.

      Note

      Each logical network can have a gateway defined separately from the management network gateway. This ensures that traffic arriving on the logical network is forwarded using the logical network's gateway instead of the default gateway used by the management network.
    3. To override the default host network quality of service, select Override QoS and enter the desired values in the following fields:
      • Weighted Share: Signifies how much of the logical link's capacity should be allocated to a specific network, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100. For example, on a 10 Gbps link carrying two saturated networks with Weighted Shares of 25 and 75, the networks receive roughly 2.5 Gbps and 7.5 Gbps respectively.
      • Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
      • Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
      For more information on configuring host network quality of service, see Section 2.3, “Host Network Quality of Service”.
    4. To configure a network bridge, click the Custom Properties drop-down menu and select bridge_opts. Enter a valid key and value with the following syntax: [key]=[value]. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section B.1, “Explanation of bridge_opts Parameters”. A sketch for verifying these values on the host follows this procedure.
      forward_delay=1500 
      gc_timer=3765 
      group_addr=1:80:c2:0:0:0 
      group_fwd_mask=0x0 
      hash_elasticity=4 
      hash_max=512
      hello_time=200 
      hello_timer=70 
      max_age=2000 
      multicast_last_member_count=2 
      multicast_last_member_interval=100 
      multicast_membership_interval=26000 
      multicast_querier=0 
      multicast_querier_interval=25500 
      multicast_query_interval=13000 
      multicast_query_response_interval=1000 
      multicast_query_use_ifaddr=0 
      multicast_router=1 
      multicast_snooping=1 
      multicast_startup_query_count=2 
      multicast_startup_query_interval=3125
    5. To configure ethtool properties, click the Custom Properties drop-down menu and select ethtool_opts. Enter a valid key and value with the following syntax: [key]=[value]. Separate multiple entries with a whitespace character. The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See Section B.2, “How to Set Up Red Hat Enterprise Virtualization Manager to Use Ethtool” for more information. See the Red Hat Enterprise Linux 6 Deployment Guide or the ethtool manual page for more information on ethtool properties.
    6. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties drop-down menu and select fcoe. Enter a valid key and value with the following syntax: [key]=[value]. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See Section B.3, “How to Set Up Red Hat Enterprise Virtualization Manager to Use FCoE” for more information.

      Note

      A separate, dedicated logical network is recommended for use with FCoE.
    7. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. A logical network cannot be edited or moved to another interface until it is synchronized.

      Note

      Networks are considered out of sync if any of the following conditions apply:
      • The VM Network is different from the physical host network.
      • The VLAN identifier is different from the physical host network.
      • A Custom MTU is set on the logical network, and is different from the physical host network.
  6. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  8. Click OK.

Note

If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.
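After completing this procedure, you can verify on the host that the bridge_opts and ethtool_opts values referenced in steps 4 and 5 were applied. A minimal sketch, assuming a logical network (and therefore a bridge) named ovirtmgmt on physical interface eth0, and the example bridge_opts values shown above; bridge parameters are exposed through the standard Linux sysfs entries under /sys/class/net/:
# cat /sys/class/net/ovirtmgmt/bridge/forward_delay
1500
# cat /sys/class/net/ovirtmgmt/bridge/hello_time
200
# ethtool -k eth0
The sysfs reads should echo the configured bridge_opts values, and ethtool -k lists the current offload settings that ethtool_opts can affect.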

5.5.3. Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Multiple VLANs can be added to a single network interface to separate traffic on a single host.

Important

You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 5.21. Adding Multiple VLANs to a Network Interface Using Logical Networks

  1. Click the Hosts resource tab, and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol from None, DHCP, or Static. If you selected Static, provide the IP and Subnet Mask.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  8. Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network becomes operational.
You have added multiple VLAN-tagged logical networks to a single interface. Repeat this process as needed, selecting and editing the same network interface on each host to add logical networks with different VLAN tags to that interface.
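To confirm the result on the host itself, you can inspect the VLAN devices created on top of the physical interface. A quick check, assuming two logical networks with VLAN tags 100 and 200 were attached to eth0 (hypothetical interface name and tags):
# cat /proc/net/vlan/config
# ip -d link show eth0.100
The first command lists all VLAN devices and their tags; the second shows the details of a single VLAN device, including its parent interface.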

5.5.4. Adding Network Labels to Host Network Interfaces

Using network labels greatly simplifies the administrative workload associated with assigning logical networks to host network interfaces.

Procedure 5.22. Adding Network Labels to Host Network Interfaces

  1. Click the Hosts resource tab, and select the desired host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Click Labels, and right-click [New Label]. Select a physical network interface to label.
  5. Enter a name for the network label in the Label text field.
  6. Click OK.
You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.
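Labels can also be attached to host network interfaces through the REST API rather than the Administration Portal. The following is an illustrative sketch only: it assumes a labels sub-collection on the host NIC resource as exposed by this version's REST API, and uses placeholder values for the Manager address, credentials, certificate, and identifiers. Consult the REST API Guide for the exact resource layout:
# curl -X POST -H "Content-Type: application/xml" -H "Accept: application/xml" \
    -u admin@internal:password --cacert ca.crt \
    -d "<label id=\"mylabel\"/>" \
    https://manager.example.com/api/hosts/HOST_ID/nics/NIC_ID/labels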

5.5.5. Bonds

5.5.5.1. Bonding Logic in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.
Two factors that affect bonding logic are:
  • Is either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks?
Table 5.7. Bonding Scenarios and Their Results
NIC + NIC
  The Create New Bond window is displayed, and you can configure a new bond device.
  If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming the new bond.
NIC + Bond
  The NIC is added to the bond device. The logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
  If the devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming the new bond.
Bond + Bond
  If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure the new bond.
  If the bond devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming the new bond.

5.5.5.2. Bonds

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.
The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important

Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.
Bonding Modes
Red Hat Enterprise Virtualization uses Mode 4 by default, but supports the following common bonding modes:
Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the number of network interface cards in the bond. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 4 (IEEE 802.3ad policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
Mode 5 (adaptive transmit load balancing policy)
Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges; it is therefore not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic, without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges; it is therefore not compatible with virtual machine logical networks.
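You can confirm which mode an existing bond is running on a host by reading the bonding driver's status file in procfs. A quick check, assuming a bond device named bond0:
# grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
The same file also reports the state, speed, and link failure count of each slave interface.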

5.5.5.3. Creating a Bond Device Using the Administration Portal

You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two. A bond can also carry both VLAN tagged and non-VLAN traffic.

Procedure 5.23. Creating a Bond Device Using the Administration Portal

  1. Click the Hosts resource tab, and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, the bond operation fails, and a message suggests how to correct the compatibility issue.
  5. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  6. Click OK to create the bond and close the Create New Bond window.
  7. Assign a logical network to the newly created bond device.
  8. Optionally, select the Verify connectivity between Host and Engine and Save network configuration check boxes.
  9. Click OK to accept the changes and close the Setup Host Networks window.
Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.
Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.
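As an illustration only, on a Cisco IOS switch the two host-facing ports might be aggregated for a Mode 4 (LACP) bond as follows; the interface range and channel-group number are hypothetical, and other vendors use different syntax:
switch(config)# interface range GigabitEthernet0/1 - 2
switch(config-if-range)# channel-group 5 mode active
For a Mode 1 (active-backup) bond, by contrast, no link aggregation is configured on the switch; the ports remain ordinary access or trunk ports.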

5.5.5.4. Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 5.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:
mode=4 xmit_hash_policy=layer2+3

Example 5.2. ARP Monitoring

The ARP monitor is useful for systems that cannot or do not report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 5.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 primary=eth0

5.5.6. Changing the FQDN of a Host

Use the following procedure to change the fully qualified domain name of hypervisor hosts.

Procedure 5.24. Updating the FQDN of a Hypervisor Host

  1. Place the hypervisor into maintenance mode so the virtual machines are live migrated to another hypervisor. See Section 6.5.7, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another hypervisor. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Update the host name:
    • For RHEL-based hosts:
      • For Red Hat Enterprise Linux 6:
        Edit the /etc/sysconfig/network file, update the host name, and save.
        # vi /etc/sysconfig/network
        HOSTNAME=NEW_FQDN
      • For Red Hat Enterprise Linux 7:
        Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide.
        # hostnamectl set-hostname NEW_FQDN
    • For Red Hat Enterprise Virtualization Hypervisors (RHEV-H):
      In the text user interface, select the Network screen, press the right arrow key, and enter a new host name in the Hostname field. Select <Save> and press Enter.
  4. Reboot the host.
  5. Re-register the host with the Manager. See Manually Adding a Hypervisor from the Administration Portal in the Installation Guide for more information.
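Before re-registering the host (step 5), you can confirm from a root shell that the new name is in effect and resolves correctly; the Manager must be able to resolve the host's FQDN. A quick check, using newhost.example.com as a placeholder for the new FQDN:
# hostname --fqdn
newhost.example.com
# getent hosts newhost.example.com
If getent returns nothing, update DNS or /etc/hosts before re-registering the host.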

5.5.7. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

Procedure 5.25. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

  1. Place the Hypervisor into maintenance mode so the virtual machines are live migrated to another hypervisor. See Section 6.5.7, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another hypervisor. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Log in to your Hypervisor as the admin user.
  4. Press F2, select OK, and press Enter to enter the rescue shell.
  5. Modify the IP address by editing the /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt file. For example:
    # vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    ...
    BOOTPROTO=none
    IPADDR=10.x.x.x
    PREFIX=24
    ...
  6. Restart the network service and verify that the IP address has been updated.
    • For Red Hat Enterprise Linux 6:
      # service network restart
      # ifconfig ovirtmgmt
    • For Red Hat Enterprise Linux 7:
      # systemctl restart network.service
      # ip addr show ovirtmgmt
  7. Type exit to exit the rescue shell and return to the text user interface.
  8. Re-register the host with the Manager. See Manually Adding a Hypervisor from the Administration Portal in the Installation Guide for more information.