
Chapter 2. OpenStack networking concepts


OpenStack Networking provides system services that manage core functionality such as routing, DHCP, and metadata. Together, these services are included in the concept of the Controller node, which is a conceptual role assigned to a physical server. A physical server is typically assigned the role of Network node and is dedicated to managing Layer 3 routing for network traffic to and from instances. In OpenStack Networking, you can have multiple physical hosts performing this role, allowing for redundant service in the event of hardware failure. For more information, see the chapter on Layer 3 High Availability.

Note

Red Hat OpenStack Platform 11 added support for composable roles, allowing you to separate network services into a custom role. However, for simplicity, this guide assumes that a deployment uses the default controller role.

2.1. Installing OpenStack Networking (neutron)

The OpenStack Networking component is installed as part of a Red Hat OpenStack Platform director deployment. For more information about director deployment, see Director Installation and Usage.

2.2. OpenStack Networking diagram

This diagram depicts a sample OpenStack Networking deployment, with a dedicated OpenStack Networking node performing layer 3 routing and DHCP, and running Load Balancing as a Service (LBaaS). Two Compute nodes run the Open vSwitch agent (openvswitch-agent) and have two physical network cards each: one for tenant traffic, and another for management connectivity. The OpenStack Networking node has a third network card dedicated to provider traffic:

[Figure: example network]

2.3. Security groups

Security groups and rules filter the type and direction of network traffic that neutron ports send and receive. This provides an additional layer of security to complement any firewall rules present on the compute instance. The security group is a container object with one or more security rules. A single security group can manage traffic to multiple compute instances.

Ports created for floating IP addresses, OpenStack Networking LBaaS VIPs, and instances are associated with a security group. If you do not specify a security group, then the port is associated with the default security group. By default, this group drops all inbound traffic and allows all outbound traffic. However, traffic flows between instances that are members of the default security group, because the group has a remote group ID that points to itself.

To change the filtering behavior of the default security group, you can add security rules to the group, or create entirely new security groups.
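
For example, you might create a dedicated security group that permits inbound SSH and attach it to a port. The following commands are an illustrative sketch; the group name web and the port name web-port are placeholders, not values from this guide:

# openstack security group create web
# openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 web
# openstack port set --security-group web web-port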

2.4. Open vSwitch

Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for industry standard NetFlow, OpenFlow, and sFlow. OVS can also integrate with physical switches using layer 2 features, such as STP, LACP, and 802.1Q VLAN tagging. Open vSwitch version 1.11.0-1.el6 or later also supports tunneling with VXLAN and GRE.

For more information about network interface bonds, see the Network Interface Bonding chapter of the Advanced Overcloud Customization guide.

Note

To mitigate the risk of network loops in OVS, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
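
For example, the following ovs-vsctl commands sketch a layout that respects this constraint, with each bridge containing a single bond. The bridge, bond, and interface names are illustrative:

# ovs-vsctl add-br br-bond1
# ovs-vsctl add-bond br-bond1 bond1 eth1 eth2
# ovs-vsctl add-br br-bond2
# ovs-vsctl add-bond br-bond2 bond2 eth3 eth4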

2.5. Modular layer 2 (ML2) networking

ML2 is the OpenStack Networking core plug-in introduced in the OpenStack Havana release. Superseding the previous model of monolithic plug-ins, the ML2 modular design enables the concurrent operation of mixed network technologies. The monolithic Open vSwitch and Linux Bridge plug-ins have been deprecated and removed; their functionality is now implemented by ML2 mechanism drivers.

Note

ML2 is the default OpenStack Networking plug-in, with OVN configured as the default mechanism driver.

2.5.1. The reasoning behind ML2

Previously, OpenStack Networking deployments could use only the plug-in selected at implementation time. For example, a deployment running the Open vSwitch (OVS) plug-in was required to use the OVS plug-in exclusively. The monolithic plug-in did not support the simultaneous use of another plug-in such as linuxbridge. This limitation made it difficult to meet the needs of environments with heterogeneous requirements.

2.5.2. ML2 network types

Multiple network segment types can be operated concurrently. In addition, these network segments can interconnect using ML2 support for multi-segmented networks. Ports are automatically bound to the segment with connectivity; it is not necessary to bind ports to a specific segment. Depending on the mechanism driver, ML2 supports the following network segment types:

  • flat
  • GRE
  • local
  • VLAN
  • VXLAN
  • Geneve

Enable type drivers in the [ml2] section of the ml2_conf.ini file. For example:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve

2.5.3. ML2 mechanism drivers

Plug-ins are now implemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing.

Note

For the list of supported mechanism drivers, see Release Notes.

The default mechanism driver is OVN. Enable mechanism drivers in the [ml2] section of the ml2_conf.ini file. For example:

[ml2]
mechanism_drivers = ovn
Note

Red Hat OpenStack Platform director manages these settings. Do not change them manually.

2.6. ML2 type and mechanism driver compatibility

Mechanism driver    flat    gre    vlan    vxlan    geneve

ovn                 yes     no     yes     no       yes

openvswitch         yes     yes    yes     yes      no

2.7. Limits of the ML2/OVN mechanism driver

The following table describes features that Red Hat does not yet support with ML2/OVN. Red Hat plans to support each of these features in a future Red Hat OpenStack Platform release.

In addition, this release of the Red Hat OpenStack Platform (RHOSP) does not provide a supported migration from the ML2/OVS mechanism driver to the ML2/OVN mechanism driver. This RHOSP release does not support the OpenStack community migration strategy. Migration support is planned for a future RHOSP release.

Feature / Notes / Track this feature

Fragmentation / Jumbo Frames

OVN does not yet support sending ICMP "fragmentation needed" packets. Larger ICMP/UDP packets that require fragmentation do not work with ML2/OVN as they would with the ML2/OVS driver implementation. TCP traffic is handled by maximum segment size (MSS) clamping.

https://bugzilla.redhat.com/show_bug.cgi?id=1547074 (ovn-network)

https://bugzilla.redhat.com/show_bug.cgi?id=1702331 (Core ovn)

Port Forwarding

OVN does not support port forwarding.

https://bugzilla.redhat.com/show_bug.cgi?id=1654608

https://blueprints.launchpad.net/neutron/+spec/port-forwarding

Security Groups Logging API

ML2/OVN does not provide a log file that logs security group events such as an instance trying to execute restricted operations or access restricted ports in remote servers.

https://bugzilla.redhat.com/show_bug.cgi?id=1619266

End-to-End Encryption

Red Hat has not tested and does not yet support end-to-end TLS/SSL encryption between ML2/OVN services.

https://bugzilla.redhat.com/show_bug.cgi?id=1601926

QoS DSCP

ML2/OVN does not provide support for quality of service (QoS) differentiated services code point (DSCP) tagging and egress bandwidth limiting. Although these features are integrated in core OVN, they are not yet integrated and fully tested in the ML2/OVN mechanism driver.

https://bugzilla.redhat.com/show_bug.cgi?id=1546996

https://specs.openstack.org/openstack/neutron-specs/specs/newton/ml2-qos-with-dscp.html

Multicast

When you use ML2/OVN, multicast traffic is treated as broadcast traffic.

The integration bridge operates in FLOW mode, so IGMP snooping is not available. To support IGMP snooping, core OVN must first support it.

https://bugzilla.redhat.com/show_bug.cgi?id=1672278

SR-IOV

Presently, SR-IOV works only with the neutron DHCP agent deployed.

https://bugzilla.redhat.com/show_bug.cgi?id=1666684

Provisioning Baremetal Machines with OVN DHCP

The built-in DHCP server on OVN presently cannot provision baremetal nodes or serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging (--dhcp-match in dnsmasq), which the OVN DHCP server does not support.

https://bugzilla.redhat.com/show_bug.cgi?id=1622154

OVS-DPDK

OVS-DPDK is presently not supported with OVN.

 

2.8. Using the ML2/OVS mechanism driver instead of the default ML2/OVN driver

If your application requires the ML2/OVS mechanism driver, you can deploy the overcloud with the environment file neutron-ovs.yaml, which disables the default ML2/OVN mechanism driver and enables ML2/OVS.

2.8.1. Using ML2/OVS in a new RHOSP 15 deployment

In the overcloud deployment command, include the environment file neutron-ovs.yaml as shown in the following example.

-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml
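
For example, a minimal sketch of a deployment command that includes this environment file might look like the following; any other templates and environment files depend on your deployment:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml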

For more information about using environment files, see Including Environment Files in Overcloud Creation in the Advanced Overcloud Customization guide.

2.8.2. Upgrading from ML2/OVS in a previous RHOSP to ML2/OVS in RHOSP 15

To keep using ML2/OVS after an upgrade from a previous version of RHOSP that uses ML2/OVS, follow Red Hat’s upgrade procedure as documented, and do not perform the ML2/OVS-to-ML2/OVN migration.

The upgrade procedure includes adding -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml to the overcloud deployment command.

2.9. Configuring the L2 population driver

The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design incurs significant network and processing overhead. The L2 Population driver instead implements a partial mesh for ARP resolution and MAC learning traffic, and creates tunnels for a particular network only between the nodes that host that network. This traffic is sent only to the necessary agents by encapsulating it as a targeted unicast.

To enable the L2 Population driver, complete the following steps:

1. Enable the L2 population driver by adding it to the list of mechanism drivers. You must also enable at least one tunneling type driver: GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,l2population
Note

Neutron’s Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11. The Open vSwitch (OVS) plugin is the OpenStack Platform director default, and is recommended by Red Hat for general use.

2. Enable L2 population in the openvswitch_agent.ini file. Enable it on each node that contains the L2 agent:

[agent]
l2_population = True
Note

To install ARP reply flows, configure the arp_responder flag:

[agent]
l2_population = True
arp_responder = True

2.10. OpenStack Networking services

By default, Red Hat OpenStack Platform includes components that integrate with the ML2 and Open vSwitch plugin to provide networking functionality in your deployment:

2.10.1. L3 agent

The L3 agent is part of the openstack-neutron package. It uses network namespaces to provide each project with its own isolated layer 3 routers, which direct traffic and provide gateway services for the layer 2 networks. The L3 agent assists with managing these routers. The nodes that host the L3 agent must not have a manually configured IP address on a network interface that is connected to an external network. Instead, there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. Neutron assigns these IP addresses to the routers that provide the link between the internal and external networks. The IP range that you select must be large enough to provide a unique IP address for each router in the deployment, as well as each floating IP.
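
For example, the following commands create a router, attach a tenant subnet, and set an external gateway; the L3 agent then hosts the router in a qrouter network namespace on the Network node. The router, subnet, and network names are illustrative:

# openstack router create router1
# openstack router add subnet router1 private_subnet
# openstack router set --external-gateway public01 router1

You can list the resulting namespaces on the node that hosts the L3 agent with the ip netns command.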

2.10.2. DHCP agent

The OpenStack Networking DHCP agent manages the network namespaces that are spawned for each project subnet to act as a DHCP server. Each namespace runs a dnsmasq process that can allocate IP addresses to virtual machines on the network. If the agent is enabled and running when a subnet is created, then by default that subnet has DHCP enabled.
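
For example, on a node that runs the DHCP agent, you can confirm that a qdhcp namespace and a dnsmasq process exist for each DHCP-enabled network. These commands are an optional check, not a required configuration step:

# ip netns | grep qdhcp
# ps -ef | grep dnsmasq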

2.10.3. Open vSwitch agent

The Open vSwitch (OVS) neutron plug-in uses its own agent, which runs on each node and manages the OVS bridges. The ML2 plugin integrates with a dedicated agent to manage L2 networks. By default, Red Hat OpenStack Platform uses ovs-agent, which builds overlay networks using OVS bridges.
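
For example, you can inspect the bridges that the agent manages, typically including the integration bridge br-int and the tunnel bridge br-tun, with the following command:

# ovs-vsctl show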

2.11. Tenant and provider networks

The following diagram presents an overview of the tenant and provider network types, and illustrates how they interact within the overall OpenStack Networking topology:

[Figure: network types]

2.11.1. Tenant networks

Users create tenant networks for connectivity within projects. Tenant networks are fully isolated by default and are not shared with other projects. OpenStack Networking supports a range of tenant network types:

  • Flat - All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs.
  • VLAN - OpenStack Networking allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other network infrastructure on the same layer 2 VLAN.
  • VXLAN and GRE tunnels - VXLAN and GRE use network overlays to support private communication between instances. An OpenStack Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses.
Note

You can configure QoS policies for tenant networks. For more information, see Chapter 10, Configuring Quality of Service (QoS) policies.
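
As an illustrative example of a tenant network, a project user might create a network and subnet with the following commands; the names and address range are placeholders:

# openstack network create internal_net
# openstack subnet create --network internal_net --subnet-range 192.168.10.0/24 internal_subnet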

2.11.2. Provider networks

The OpenStack administrator creates provider networks. Provider networks map directly to an existing physical network in the data center. Useful network types in this category include flat (untagged) and VLAN (802.1Q tagged). You can also share provider networks among tenants as part of the network creation process.

2.11.2.1. Flat provider networks

You can use flat provider networks to connect instances directly to the external network. This is useful if you have multiple physical networks (for example, physnet1 and physnet2) and separate physical interfaces (eth0 for physnet1 and eth1 for physnet2), and intend to connect each Compute and Network node to those external networks. To connect multiple VLAN-tagged interfaces on a single NIC to multiple provider networks, see Section 7.3, “Using VLAN provider networks”.

2.11.2.2. Configuring networking for Controller nodes

1. Edit /etc/neutron/plugin.ini (a symbolic link to /etc/neutron/plugins/ml2/ml2_conf.ini) to add flat to the existing list of type drivers, and set flat_networks to *. For example:

[ml2]
type_drivers = vxlan,flat
[ml2_type_flat]
flat_networks = *

2. Create an external network as a flat network and associate it with the configured physical_network. Configure it as a shared network (using --share) to let other users create instances that connect to the external network directly.

# openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external public01

3. Create a subnet using the openstack subnet create command, or the dashboard. For example:

# openstack subnet create --no-dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --subnet-range 192.168.100.0/24 --network public01 public_subnet

4. Restart the neutron-server service to apply the change:

systemctl restart tripleo_neutron_api

2.11.2.3. Configuring networking for the Network and Compute nodes

Complete the following steps on the Network node and Compute nodes to connect the nodes to the external network, and allow instances to communicate directly with the external network.

1. Create an external network bridge (br-ex) and add an associated port (eth1) to it:

Create the external bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex:

DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

In /etc/sysconfig/network-scripts/ifcfg-eth1, configure eth1 to connect to br-ex:

DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Reboot the node or restart the network service for the changes to take effect.

2. Configure physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges to the physical network:

[ovs]
bridge_mappings = physnet1:br-ex
Note

For more information on bridge mappings, see Chapter 11, Configuring bridge mappings.

3. Restart the neutron-openvswitch-agent service on both the network and compute nodes to apply the changes:

systemctl restart neutron-openvswitch-agent
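
Optionally, verify that the agents are running and reporting as alive after the restart; this check is a suggestion, not part of the required procedure:

# openstack network agent list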

2.11.2.4. Configuring the Network node

1. Set external_network_bridge to an empty value in /etc/neutron/l3_agent.ini:

Setting external_network_bridge to an empty value allows multiple external network bridges. OpenStack Networking creates a patch from each bridge to br-int.

external_network_bridge =

2. Restart neutron-l3-agent for the changes to take effect.

systemctl restart neutron-l3-agent
Note

If there are multiple flat provider networks, then each of them must have a separate physical interface and bridge to connect them to the external network. Configure the ifcfg-* scripts appropriately and use a comma-separated list for each network when specifying the mappings in the bridge_mappings option. For more information on bridge mappings, see Chapter 11, Configuring bridge mappings.
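
For example, a sketch of a bridge_mappings value for two flat provider networks, each with its own bridge; the second physical network and bridge names are illustrative:

bridge_mappings = physnet1:br-ex,physnet2:br-ex2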

2.12. Layer 2 and layer 3 networking

When designing your virtual network, anticipate where the majority of traffic is going to be sent. Network traffic moves faster within the same logical network, rather than between multiple logical networks. This is because traffic between logical networks (using different subnets) must pass through a router, resulting in additional latency.

Consider the diagram below which has network traffic flowing between instances on separate VLANs:

[Figure: layers]

Note

Even a high performance hardware router adds latency to this configuration.

2.12.1. Use switching where possible

Because switching occurs at a lower level of the network (layer 2), it can function faster than the routing that occurs at layer 3. Design as few hops as possible between systems that communicate frequently. For example, the following diagram depicts a switched network that spans two physical nodes, allowing the two instances to communicate directly without first passing through a router. Note that the instances now share the same subnet, which indicates that they are on the same logical network:

[Figure: switching]

To allow instances on separate nodes to communicate as if they are on the same logical network, use an encapsulation tunnel such as VXLAN or GRE. Red Hat recommends adjusting the MTU size from end-to-end to accommodate the additional bits required for the tunnel header, otherwise network performance can be negatively impacted as a result of fragmentation. For more information, see Configure MTU Settings.
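
For example, VXLAN adds approximately 50 bytes of encapsulation overhead, so on a physical network with a 1500-byte MTU the tenant network MTU is typically 1450. The following command is an illustrative sketch only, because neutron normally derives network MTU values from the configured physical MTU:

# openstack network set --mtu 1450 internal_net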

You can further improve the performance of VXLAN tunneling by using supported hardware that features VXLAN offload capabilities. The full list is available here: https://access.redhat.com/articles/1390483
