Chapter 18. Common administrative networking tasks
Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS.
18.1. Configuring the L2 population driver
The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including agents that do not host the destination network, which incurs significant network and processing overhead. The L2 Population driver instead implements a partial mesh for ARP resolution and MAC learning traffic, and creates tunnels for a particular network only between the nodes that host that network. The driver encapsulates this traffic as targeted unicast and sends it only to the agents that need it.
To enable the L2 Population driver, complete the following steps:
1. Enable the L2 population driver by adding it to the list of mechanism drivers. You must also enable at least one tunneling driver: GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
mechanism_drivers = l2population
Note: The neutron Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11. The Open vSwitch (OVS) plugin is the OpenStack Platform director default, and Red Hat recommends it for general use.
2. Enable L2 population in the openvswitch_agent.ini file on each node that contains the L2 agent:
[agent]
l2_population = True
To install ARP reply flows, configure the arp_responder flag:
[agent]
l2_population = True
arp_responder = True
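After you restart the OVS agent so that it rereads openvswitch_agent.ini, you can spot-check that ARP responder flows were installed on the tunnel bridge. The following commands are a sketch: the service name and the br-tun bridge name are the usual defaults, but they are assumptions for your deployment, and on containerized deployments the agent runs in a container and is restarted through its tripleo systemd unit instead.
$ sudo systemctl restart neutron-openvswitch-agent
$ sudo ovs-ofctl dump-flows br-tun | grep -i arp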
18.2. Tuning keepalived to avoid VRRP packet loss
If the number of highly available (HA) routers on a single host is high, when an HA router failover occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This overflow stops Open vSwitch (OVS) from responding to and forwarding those VRRP messages.
To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.
Procedure
Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.
Example
$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
Tip: The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.
In the YAML environment file, increase the VRRP advertisement interval using the ha_vrrp_advert_int argument with a value specific to your site. (The default is 2 seconds.)
You can also set values for gratuitous ARP messages:
ha_vrrp_garp_master_repeat - The number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.)
ha_vrrp_garp_master_delay - The delay for the second set of gratuitous ARP messages after the lower priority advert is received in the master state. (The default is 5 seconds.)
Example
parameter_defaults:
  ControllerExtraConfig:
    neutron::agents::l3::ha_vrrp_advert_int: 7
    neutron::config::l3_agent_config:
      DEFAULT/ha_vrrp_garp_master_repeat:
        value: 5
      DEFAULT/ha_vrrp_garp_master_delay:
        value: 5
Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.
Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-neutron-environment.yaml
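To confirm that the new interval took effect, you can inspect the keepalived configuration that the L3 agent renders for an HA router on a Controller node. The path shown is the usual Networking service state directory, but treat it as an assumption for your deployment; the advert_int value in the matching lines should equal the interval that you set.
$ sudo grep advert_int /var/lib/neutron/ha_confs/*/keepalived.conf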
18.3. Specifying the name that DNS assigns to ports
You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) dns_domain for ports extension (dns_domain_ports).
You enable the dns_domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value, openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.
Also, when the dns_domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
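For example, after the extension is enabled and the overcloud is redeployed, you can assign or change the DNS name of an existing port with port set; the DNS name and the port reference used here are placeholders only:
$ openstack port set --dns-name my_port <port-name-or-id>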
Procedure
Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.
Example
$ source ~/stackrc
Create a custom YAML environment file (my-neutron-environment.yaml).
Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
Tip: The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary.
In the environment file, add a parameter_defaults section. Under this section, add the dns_domain for ports extension, dns_domain_ports.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
Note: If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These extensions are incompatible, and you cannot define both simultaneously.
Also in the parameter_defaults section, add your domain name (example.com) using the NeutronDnsDomain parameter.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
  NeutronDnsDomain: "example.com"
Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file.
Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-neutron-environment.yaml
Verification
Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS name (my_port) to the port.
Example
$ source ~/overcloudrc
$ openstack port create --network public --dns-name my_port new_port
Display the details for your port (new_port).
Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
Output
+-------------------------+----------------------------------------------+
| Field                   | Value                                        |
+-------------------------+----------------------------------------------+
| dns_assignment          | fqdn='my_port.example.com',                  |
|                         | hostname='my_port',                          |
|                         | ip_address='10.65.176.113'                   |
| dns_domain              | example.com                                  |
| dns_name                | my_port                                      |
| name                    | new_port                                     |
+-------------------------+----------------------------------------------+
Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier with NeutronDnsDomain.
Create a new VM instance (my_vm) using the port (new_port) that you just created.
Example
$ openstack server create --image rhel --flavor m1.small --port new_port my_vm
Display the details for your port (new_port).
Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
Output
+-------------------------+----------------------------------------------+
| Field                   | Value                                        |
+-------------------------+----------------------------------------------+
| dns_assignment          | fqdn='my_vm.example.com',                    |
|                         | hostname='my_vm',                            |
|                         | ip_address='10.65.176.113'                   |
| dns_domain              | example.com                                  |
| dns_name                | my_vm                                        |
| name                    | new_port                                     |
+-------------------------+----------------------------------------------+
Note that the Compute service changes the dns_name attribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).
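If you want to see how dnsmasq learned the name, you can also inspect the host file that the DHCP agent writes for the network on the node that runs the DHCP agent. The path below is the typical Networking service state directory and is an assumption for your deployment; substitute the UUID of the public network:
$ sudo grep my_vm /var/lib/neutron/dhcp/<network-uuid>/host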
18.4. Assigning DHCP attributes to ports
You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension (extra_dhcp_opt) to configure ports of DHCP clients with DHCP attributes. For example, you can add a PXE boot option such as tftp-server, server-ip-address, or bootfile-name to a DHCP client port.
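As an illustration only, after the extension is enabled by the procedure in this section, a PXE boot port might be created with options similar to the following; the network name, TFTP server address, boot file name, and port name are placeholder values:
$ openstack port create --network provisioning \
  --extra-dhcp-option name=tftp-server,value=192.0.2.50 \
  --extra-dhcp-option name=bootfile-name,value=pxelinux.0 \
  pxe_port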
The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6.
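For example, a hedged sketch of supplying a DHCPv6 option at port creation time; the option name, address, network, and port name are assumptions and must match your environment:
$ openstack port create --network public \
  --extra-dhcp-option name=dns-server,value=2001:db8::53,ip-version=6 \
  ipv6_dns_port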
When a VM instance starts, the RHOSP Networking service supplies port information to the instance by using the DHCP protocol. If you add DHCP information to a port that is already connected to a running instance, the instance uses the new DHCP port information only after it is restarted.
Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu, server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to the DHCP specification.
Prerequisites
- You must have RHOSP administrator privileges.
Procedure
- Log in to the undercloud host as the stack user. Source the undercloud credentials file:
$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-octavia-environment.yaml
Your environment file must contain the keyword parameter_defaults. Under this keyword, add the extra DHCP option extension, extra_dhcp_opt.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.
Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
-e <your_environment_files> \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
-e /home/stack/templates/my-octavia-environment.yaml
Verification
Source your credentials file.
Example
$ source ~/overcloudrc
Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP specification to the new port.
Example
$ openstack port create --extra-dhcp-option \
  name=domain-name,value=test.domain --extra-dhcp-option \
  name=ntp-server,value=192.0.2.123 --network public new_port
Display the details for your port (new_port).
Example
$ openstack port show new_port -c extra_dhcp_opts
Sample output
+-----------------+-----------------------------------------------------------------+
| Field           | Value                                                           |
+-----------------+-----------------------------------------------------------------+
| extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
|                 | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123'  |
+-----------------+-----------------------------------------------------------------+
18.5. Enabling NUMA affinity on ports
To enable users to create instances with NUMA affinity on the port, you must load the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extension, port_numa_affinity_policy.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud host as the stack user. Source the undercloud credentials file:
$ source ~/stackrc
To enable the port_numa_affinity_policy extension, open the environment file where the NeutronPluginExtensions parameter is defined, and add port_numa_affinity_policy to the list:
parameter_defaults:
  NeutronPluginExtensions: "qos,port_numa_affinity_policy"
Add the environment file that you modified to the stack with your other environment files, and redeploy the overcloud:
Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
$ openstack overcloud deploy --templates \
-e <your_environment_files> \
-e /home/stack/templates/<custom_environment_file>.yaml
Verification
Source your credentials file.
Example
$ source ~/overcloudrc
Create a new port.
When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port:
- --numa-policy-required - NUMA affinity policy required to schedule this port.
- --numa-policy-preferred - NUMA affinity policy preferred to schedule this port.
- --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.
Example
$ openstack port create --network public \
  --numa-policy-legacy myNUMAAffinityPort
Display the details for your port.
Example
$ openstack port show myNUMAAffinityPort -c numa_affinity_policy
Sample output
When the extension is loaded, the Value column should read legacy, preferred, or required. If the extension has failed to load, Value reads None:
+----------------------+--------+
| Field                | Value  |
+----------------------+--------+
| numa_affinity_policy | legacy |
+----------------------+--------+
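With the port in place, users can launch an instance that uses it in the usual way. The image and flavor names below mirror earlier examples in this chapter and, like the instance name, are assumptions for your environment:
$ openstack server create --image rhel --flavor m1.small \
  --port myNUMAAffinityPort myNUMAInstance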
18.6. Loading kernel modules
Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances.
By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that heat stores configuration information about the kernel modules that are required for features like GRE tunneling. Later, during normal module management, these required kernel modules are loaded.
Procedure
On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
Tip: Heat uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.
In the YAML environment file under parameter_defaults, set ExtraKernelModules to the name of the module that you want to load.
Example
parameter_defaults:
  ComputeParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
  ControllerParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.
Important: The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-modules-environment.yaml
Verification
If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node:
Example
$ sudo lsmod | grep nf_conntrack_proto_gre
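If you also want to confirm that the module is configured to load persistently, one possible check, assuming a typical modules-load.d layout on the overcloud nodes, is to search for the module name in that directory:
$ sudo grep -r nf_conntrack_proto_gre /etc/modules-load.d/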