Chapter 4. OpenStack Networking Concepts

OpenStack Networking includes system services that manage core functions such as routing, DHCP, and metadata. Together, these services are included in the concept of the controller node, a conceptual role assigned to a physical server. A physical server is typically assigned the role of Network node, keeping it dedicated to the task of managing Layer 3 routing for network traffic to and from instances. In OpenStack Networking, you can have multiple physical hosts performing this role, allowing for redundant service in the event of hardware failure. For more information, see the chapter on Layer 3 High Availability.

4.1. Installing OpenStack Networking (neutron)

4.1.1. Supported installation

The OpenStack Networking component is installed as part of a Red Hat OpenStack Platform director deployment. Refer to the Red Hat OpenStack Platform director installation guide for more information.

4.2. OpenStack Networking diagram

This diagram depicts a sample OpenStack Networking deployment, with a dedicated OpenStack Networking node performing L3 routing and DHCP, and running the advanced services FWaaS and LBaaS. Two Compute nodes run the Open vSwitch agent (neutron-openvswitch-agent) and have two physical network cards each: one for tenant traffic and another for management connectivity. The OpenStack Networking node has a third network card specifically for provider traffic:

Figure: Example OpenStack Networking deployment

4.3. Security Groups

Security groups and rules filter the type and direction of network traffic sent to (and received from) a given neutron port. This provides an additional layer of security to complement any firewall rules present on the Compute instance. The security group is a container object with one or more security rules. A single security group can manage traffic to multiple compute instances. Ports created for floating IP addresses, OpenStack Networking LBaaS VIPs, and instances are associated with a security group. If none is specified, then the port is associated with the default security group. By default, this group will drop all inbound traffic and allow all outbound traffic. Additional security rules can be added to the default security group to modify its behavior or new security groups can be created as necessary.
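
For example, a security group and rule can be created with the neutron CLI and then applied to a port; a minimal sketch, where the group name, rule parameters, and port ID are illustrative:

neutron security-group-create web --description "Allow inbound web traffic"
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 web
neutron port-update <port-id> --security-group web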

4.4. Open vSwitch

Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for industry standard NetFlow, OpenFlow, and sFlow. Open vSwitch is also able to integrate with physical switches using layer 2 features, such as STP, LACP, and 802.1Q VLAN tagging. Tunneling with VXLAN and GRE is supported with Open vSwitch version 1.11.0-1.el6 or later.

Note

Do not use LACP with OVS-based bonds, as this configuration is problematic and unsupported. Instead, consider using bond_mode=balance-slb as a replacement for this functionality. In addition, you can still use LACP with Linux bonding.

Note

To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond may be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
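
For reference, the following ovs-vsctl commands sketch how a bridge with a balance-slb bond could be created manually; the bridge and interface names are examples, and in a director-based deployment this configuration is generated for you:

ovs-vsctl add-br br-bond
ovs-vsctl add-bond br-bond bond1 eth2 eth3 bond_mode=balance-slb
ovs-vsctl list-ports br-bond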

4.5. Modular Layer 2 (ML2)

ML2 is the OpenStack Networking core plug-in introduced in OpenStack’s Havana release. Superseding the previous model of monolithic plug-ins, ML2’s modular design enables the concurrent operation of mixed network technologies. The monolithic Open vSwitch and Linux Bridge plug-ins have been deprecated and removed; their functionality has instead been reimplemented as ML2 mechanism drivers.

Note

ML2 is the default OpenStack Networking plug-in, with Open vSwitch configured as the default mechanism driver.
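
This default is typically reflected in /etc/neutron/neutron.conf by the core_plugin option; a minimal sketch (the value may also appear as the full Ml2Plugin class path, and in director deployments it is managed for you):

[DEFAULT]
core_plugin = ml2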

4.5.1. The reasoning behind ML2

Previously, OpenStack Networking deployments were only able to use the plug-in that had been selected at implementation time. For example, a deployment running the Open vSwitch plug-in was only able to use Open vSwitch exclusively; it wasn’t possible to simultaneously run another plug-in such as linuxbridge. This was found to be a limitation in environments with heterogeneous requirements.

4.5.2. ML2 network types

Multiple network segment types can be operated concurrently. In addition, these network segments can interconnect using ML2’s support for multi-segmented networks. Ports are automatically bound to the segment with connectivity; it is not necessary to bind them to a specific segment. Depending on the mechanism driver, ML2 supports the following network segment types:

  • flat
  • GRE
  • local
  • VLAN
  • VXLAN

The various Type drivers are enabled in the ML2 section of the ml2_conf.ini file:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
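
A closely related option in the same section, tenant_network_types, selects which of the enabled types are used when projects create their own networks; the value shown below is illustrative rather than a recommendation:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan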

4.5.3. ML2 Mechanism Drivers

Plug-ins have been reimplemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing.

Note

Refer to the Release Notes for the list of supported mechanism drivers.

The various mechanism drivers are enabled in the ML2 section of the ml2_conf.ini file. For example:

[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

Note

If your deployment uses Red Hat OpenStack Platform director, then these settings are managed by director and should not be changed manually.

4.6. Network Back Ends in OpenStack

The Red Hat OpenStack Platform offers two distinctly different networking back ends: Nova networking and OpenStack Networking (neutron). Nova networking has been deprecated in the OpenStack technology roadmap, but still remains currently available. OpenStack Networking is considered the core software-defined networking (SDN) component of OpenStack’s forward-looking roadmap and is under active development. It is important to consider that there is currently no migration path between Nova networking and OpenStack Networking. This would impact an operator’s plan to deploy Nova networking with the intention of upgrading to OpenStack Networking at a later date. At present, any attempt to switch between these technologies would need to be performed manually, and would likely require planned outages.

Note

Nova networking is not available for deployment using the Red Hat OpenStack Platform Director.

4.6.1. Choose OpenStack Networking (neutron)

  • If you require an overlay network solution: OpenStack Networking supports GRE or VXLAN tunneling for virtual machine traffic isolation. With GRE or VXLAN, no VLAN configuration is required on the network fabric and the only requirement from the physical network is to provide IP connectivity between the nodes. Furthermore, VXLAN or GRE allows a theoretical scale limit of 16 million unique IDs which is far beyond the 4094 limitation of 802.1q VLAN ID. Nova networking bases the network segregation on 802.1q VLANs and does not support tunneling with GRE or VXLAN.
  • If you require overlapping IP addresses between tenants: OpenStack Networking uses the network namespace capabilities in the Linux kernel, which allow different tenants to use the same subnet range (for example, 192.168.1.0/24) on the same Compute node without any risk of overlap or interference. This is suited to large multi-tenancy deployments. By comparison, Nova networking offers only flat topologies, where the subnets used by all tenants must be coordinated to avoid overlap.
  • If you require a Red Hat-certified third-party OpenStack Networking plug-in: By default, Red Hat OpenStack Platform 8 uses the open source ML2 core plug-in with the Open vSwitch (OVS) mechanism driver. Based on the physical network fabric and other network requirements, third-party OpenStack Networking plug-ins can be deployed instead of the default ML2/Open vSwitch driver due to the pluggable architecture of OpenStack Networking. Red Hat is constantly working to enhance our Partner Certification Program to certify more OpenStack Networking plugins against Red Hat OpenStack Platform. You can learn more about our Certification Program and the certified OpenStack Networking plug-ins at http://marketplace.redhat.com.
  • If you require Firewall-as-a-Service (FWaaS) or Load-Balancing-as-a-Service (LBaaS): These network services are available only in OpenStack Networking and are not available for Nova networking. The dashboard allows tenants to manage these services with no need for administrator intervention.

4.6.2. Choose Nova Networking

  • If your deployment requires flat (untagged) or VLAN (802.1q tagged) networking: This implies scalability requirements (a theoretical scale limit of 4094 VLAN IDs, where in practice physical switches tend to support a much lower number), as well as management and provisioning requirements. Specific configuration is necessary on the physical network to trunk the required set of VLANs between the nodes.
  • If your deployment does not require overlapping IP addresses between tenants: This is usually suitable only for small, private deployments.
  • If you do not need a software-defined networking (SDN) solution, or the ability to interact with the physical network fabric.
  • If you do not need self-service VPN, Firewall, or Load-Balancing services.

4.7. L2 Population

The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design requires the acceptance of significant network and processing overhead. The alternative design introduced by the L2 Population driver implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a particular network only between the nodes that host the network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast.

1. Enable the L2 population driver by adding it to the list of mechanism drivers. You also need at least one tunneling type driver enabled: GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population

2. Enable L2 population in the openvswitch_agent.ini file. This must be enabled on each node running the L2 agent:

[agent]
l2_population = True

Note

To install ARP reply flows, you will need to configure the arp_responder flag. For example:

[agent]
l2_population = True
arp_responder = True

4.8. OpenStack Networking Services

By default, Red Hat OpenStack Platform includes the following components, which integrate with the ML2 plug-in and Open vSwitch to provide networking functionality in your deployment:

4.8.1. L3 Agent

The L3 agent is part of the openstack-neutron package. Network namespaces are used to provide each project with its own isolated layer 3 routers, which direct traffic and provide gateway services for the layer 2 networks; the L3 agent assists with managing these routers. The nodes on which the L3 agent is to be hosted must not have a manually-configured IP address on a network interface that is connected to an external network. Instead there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. These IP addresses will be assigned to the routers that provide the link between the internal and external networks. The range selected must be large enough to provide a unique IP address for each router in the deployment as well as each desired floating IP.
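
As a brief illustration of the routers that the L3 agent manages, a router can be created with the neutron CLI, given a gateway on an external network, and attached to a tenant subnet; the router, network, and subnet names below are examples only:

neutron router-create router1
neutron router-gateway-set router1 public01
neutron router-interface-add router1 private_subnet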

4.8.2. DHCP Agent

The OpenStack Networking DHCP agent manages the network namespaces that are spawned for each project subnet to act as a DHCP server. Each namespace runs a dnsmasq process that is capable of allocating IP addresses to the virtual machines on the network. If the agent is enabled and running when a subnet is created, then by default that subnet has DHCP enabled.
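
On the node running the DHCP agent, you can observe these namespaces and the addresses that dnsmasq serves; a quick check, where the network ID is a placeholder for a real network UUID:

ip netns | grep qdhcp
ip netns exec qdhcp-<network-id> ip addr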

4.8.3. Open vSwitch Agent

The Open vSwitch (OVS) neutron plug-in uses its own agent, which runs on each node and manages the OVS bridges. The ML2 plugin integrates with a dedicated agent to manage L2 networks. By default, Red Hat OpenStack Platform uses ovs-agent, which builds overlay networks using OVS bridges.
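
To see what the agent manages on a particular node, you can inspect the OVS bridges directly and confirm agent status with the neutron CLI; this is a verification sketch rather than a configuration step:

ovs-vsctl show
neutron agent-list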

4.9. Tenant and Provider networks

The following diagram presents an overview of the tenant and provider network types, and illustrates how they interact within the overall OpenStack Networking topology:

Figure: Tenant and provider network types

4.9.1. Tenant networks

Tenant networks are created by users for connectivity within projects. They are fully isolated by default and are not shared with other projects. OpenStack Networking supports a range of tenant network types:

  • Flat - All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
  • VLAN - OpenStack Networking allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other network infrastructure on the same layer 2 VLAN.
  • VXLAN and GRE tunnels - VXLAN and GRE use network overlays to support private communication between instances. An OpenStack Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses.

Note

You can configure QoS policies for tenant networks. For more information, see Chapter 12, Configure Quality-of-Service (QoS).
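
For example, a project user might create an isolated tenant network and subnet with the neutron CLI; the names and CIDR below are illustrative:

neutron net-create private
neutron subnet-create private 192.168.10.0/24 --name private_subnet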

4.9.2. Provider networks

Provider networks are created by the OpenStack administrator and map directly to an existing physical network in the data center. Useful network types in this category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider networks to be shared among tenants as part of the network creation process.

4.9.2.1. Flat provider networks

You can use flat provider networks to connect instances directly to the external network. This is useful if you have multiple physical networks (for example, physnet1 and physnet2) and separate physical interfaces (eth0 for physnet1 and eth1 for physnet2), and intend to connect each Compute and Network node to those external networks. If you want to use multiple VLAN-tagged interfaces on a single NIC to connect to multiple provider networks, see Section 9.2, “Using VLAN provider networks”.

4.9.2.2. Configure controller nodes

1. Edit /etc/neutron/plugin.ini (a symbolic link to /etc/neutron/plugins/ml2/ml2_conf.ini), add flat to the existing list of values, and set flat_networks to *. For example:

type_drivers = vxlan,flat
flat_networks = *

2. Create an external network as a flat network and associate it with the configured physical_network. Configuring it as a shared network (using --shared) will let other users create instances directly connected to it.

neutron net-create public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True --shared

3. Create a subnet using neutron subnet-create, or the dashboard. For example:

# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=192.168.100.20,end=192.168.100.100 --gateway=192.168.100.1 public01 192.168.100.0/24

4. Restart the neutron-server service to apply the change:

systemctl restart neutron-server

4.9.2.3. Configure the Network and Compute nodes

Perform these steps on the network node and compute nodes. This will connect the nodes to the external network, and allow instances to communicate directly with the external network.

1. Create an external network bridge (br-ex) and add an associated port (eth1) to it:

Create the external bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex:

DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

In /etc/sysconfig/network-scripts/ifcfg-eth1, configure eth1 to connect to br-ex:

DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Reboot the node or restart the network service for the changes to take effect.
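
To confirm that the bridge and port were created as expected (a quick check, not part of the documented procedure), list the ports attached to br-ex:

ovs-vsctl list-ports br-ex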

2. Configure physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges to the physical network:

bridge_mappings = physnet1:br-ex

Note

For more information on bridge mappings, see Chapter 13, Configure Bridge Mappings.

3. Restart the neutron-openvswitch-agent service on both the network and compute nodes for the changes to take effect:

systemctl restart neutron-openvswitch-agent

4.9.2.4. Configure the network node

1. Set external_network_bridge to an empty value in /etc/neutron/l3_agent.ini:

Previously, OpenStack Networking used external_network_bridge when only a single bridge was used for connecting to an external network. This value may now be set to a blank string, which allows multiple external network bridges. OpenStack Networking will then create a patch from each bridge to br-int.

# Name of bridge used for external network traffic. This should be set to
# empty value for the linux bridge
external_network_bridge =

2. Restart neutron-l3-agent for the changes to take effect.

systemctl restart neutron-l3-agent

Note

If there are multiple flat provider networks, then each of them should have a separate physical interface and bridge to connect them to the external network. You will need to configure the ifcfg-* scripts appropriately and use a comma-separated list for each network when specifying the mappings in the bridge_mappings option. For more information on bridge mappings, see Chapter 13, Configure Bridge Mappings.
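
For example (the bridge names and physical network labels are illustrative), two flat provider networks on separate interfaces might be mapped as follows:

bridge_mappings = physnet1:br-ex,physnet2:br-ex2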

4.10. Layer 2 and layer 3 networking

When designing your virtual network, anticipate where the majority of traffic is going to be sent. Network traffic moves faster within the same logical network than between different logical networks, because traffic between logical networks (which use different subnets) must pass through a router, resulting in additional latency.

Consider the diagram below which has network traffic flowing between instances on separate VLANs:

Figure: Network traffic routed between instances on separate VLANs

Note

Even a high performance hardware router is still going to add some latency to this configuration.

4.10.1. Use switching where possible

Switching occurs at a lower layer of the network (layer 2), so it can function much more quickly than routing, which occurs at layer 3. The preference should be to have as few hops as possible between systems that frequently communicate. For example, the following diagram depicts a switched network that spans two physical nodes, allowing the two instances to communicate directly without first passing through a router. Notice that the instances now share the same subnet, indicating that they are on the same logical network:

Figure: Switched network spanning two physical nodes

To allow instances on separate nodes to communicate as if they are on the same logical network, you need to use an encapsulation tunnel such as VXLAN or GRE. It is recommended that you adjust the MTU size end-to-end to accommodate the additional bits required for the tunnel header; otherwise, network performance can be negatively affected by fragmentation. For more information, see Configure MTU Settings.
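
As one illustration of such an adjustment, jumbo frames can be enabled in the ifcfg script for the interface that carries tunnel traffic; the interface name and MTU value below are assumptions for the sketch, and the Configure MTU Settings chapter remains the authoritative procedure:

# /etc/sysconfig/network-scripts/ifcfg-eth2 (interface carrying VXLAN or GRE traffic)
MTU=9000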

You can further improve the performance of VXLAN tunneling by using supported hardware that features VXLAN offload capabilities. The full list is available here: https://access.redhat.com/articles/1390483
