Chapter 21. Common administrative networking tasks
Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS.
21.1. Configuring the L2 population driver
The L2 Population driver is used in Networking service (neutron) ML2/OVS environments to enable broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design incurs significant network and processing overhead. The alternative design introduced by the L2 Population driver implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a particular network only between the nodes that host the network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast.
Prerequisites
- You must have RHOSP administrator privileges.
- The Networking service must be using the ML2/OVS mechanism driver.
Procedure
Log in to the undercloud host as the stack user. Source the undercloud credentials file:

$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-environment.yaml
Your environment file must contain the keyword parameter_defaults. Under this keyword, add the following lines:

parameter_defaults:
  NeutronMechanismDrivers: ['openvswitch', 'l2population']
  NeutronEnableL2Pop: 'True'
  NeutronEnableARPResponder: true
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.
Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /home/stack/templates/my-environment.yaml
Verification
Obtain the IDs for the OVS agents.
$ openstack network agent list -c ID -c Binary
Sample output
+--------------------------------------+---------------------------+
| ID                                   | Binary                    |
+--------------------------------------+---------------------------+
| 003a8750-a6f9-468b-9321-a6c03c77aec7 | neutron-openvswitch-agent |
| 02bbbb8c-4b6b-4ce7-8335-d1132df31437 | neutron-l3-agent          |
| 0950e233-60b2-48de-94f6-483fd0af16ea | neutron-openvswitch-agent |
| 115c2b73-47f5-4262-bc66-8538d175029f | neutron-openvswitch-agent |
| 2a9b2a15-e96d-468c-8dc9-18d7c2d3f4bb | neutron-metadata-agent    |
| 3e29d033-c80b-4253-aaa4-22520599d62e | neutron-dhcp-agent        |
| 3ede0b64-213d-4a0d-9ab3-04b5dfd16baa | neutron-dhcp-agent        |
| 462199be-0d0f-4bba-94da-603f1c9e0ec4 | neutron-sriov-nic-agent   |
| 54f7c535-78cc-464c-bdaa-6044608a08d7 | neutron-l3-agent          |
| 6657d8cf-566f-47f4-856c-75600bf04828 | neutron-metadata-agent    |
| 733c66f1-a032-4948-ba18-7d1188a58483 | neutron-l3-agent          |
| 7e0a0ce3-7ebb-4bb3-9b89-8cccf8cb716e | neutron-openvswitch-agent |
| dfc36468-3a21-4a2d-84c3-2bc40f224235 | neutron-metadata-agent    |
| eb7d7c10-69a2-421e-bd9e-aec3edfe1b7c | neutron-openvswitch-agent |
| ef5219b4-ee49-4635-ad04-048291209373 | neutron-sriov-nic-agent   |
| f36c7af0-e20c-400b-8a37-4ffc5d4da7bd | neutron-dhcp-agent        |
+--------------------------------------+---------------------------+
Using an ID from one of the OVS agents, confirm that the L2 Population driver is set on the OVS agent.
Example
This example verifies the configuration of the L2 Population driver on the neutron-openvswitch-agent with ID 003a8750-a6f9-468b-9321-a6c03c77aec7:

$ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep l2_population
Sample output
"l2_population": true,
Ensure that the ARP responder feature is enabled for the OVS agent.
Example
$ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep arp_responder_enabled
Sample output
"arp_responder_enabled": true,
Additional resources
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
21.2. Tuning keepalived to avoid VRRP packet loss
If the number of highly available (HA) routers on a single host is high, when an HA router failover occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This overflow stops Open vSwitch (OVS) from responding and forwarding those VRRP messages.
To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.
Procedure
Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

Example
$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
Tip
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.
In the YAML environment file, increase the VRRP advertisement interval using the ha_vrrp_advert_int argument with a value specific to your site. (The default is 2 seconds.)

You can also set values for gratuitous ARP messages:

- ha_vrrp_garp_master_repeat - the number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.)
- ha_vrrp_garp_master_delay - the delay for the second set of gratuitous ARP messages after the lower priority advert is received in the master state. (The default is 5 seconds.)
Example
parameter_defaults:
  ControllerExtraConfig:
    neutron::agents::l3::ha_vrrp_advert_int: 7
    neutron::config::l3_agent_config:
      DEFAULT/ha_vrrp_garp_master_repeat:
        value: 5
      DEFAULT/ha_vrrp_garp_master_delay:
        value: 5
Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-neutron-environment.yaml
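Although this procedure does not include a separate verification section, you can confirm the new advertisement interval on a Controller node that hosts HA routers. The following is a minimal sketch, assuming the default Networking service state path /var/lib/neutron; the location of the keepalived configuration files can vary by release:

$ sudo grep advert_int /var/lib/neutron/ha_confs/*/keepalived.conf

The command should return the advert_int value that you configured, for example 7, for each HA router.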
Additional resources
- 2.1.2 Data Forwarding Rules, Subsection 2 in RFC 4541
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
21.3. Specifying the name that DNS assigns to ports
You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) DNS domain for ports extension (dns_domain_ports).

You enable the DNS domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value, openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.
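For example, after redeployment you can assign a DNS name to an existing port with port set. This is a minimal sketch; my_port is a sample DNS name and <port> is the name or ID of an existing port:

$ openstack port set --dns-name my_port <port>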
You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to internally resolve names for ports in your RHOSP environment. Using the NeutronDnsDomain default value, openstacklocal, means that the Networking service does not internally resolve port names for DNS.

Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
Procedure
Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

Example
$ source ~/stackrc
Create a custom YAML environment file (my-neutron-environment.yaml).

Note
Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
Tip
The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary.
In the environment file, add a parameter_defaults section. Under this section, add the DNS domain for ports extension, dns_domain_ports.

Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
Note
If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These extensions are incompatible, and both extensions cannot be defined simultaneously.

Also in the parameter_defaults section, add your domain name (example.com) using the NeutronDnsDomain parameter.

Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
  NeutronDnsDomain: "example.com"
Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file.

Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-neutron-environment.yaml
Verification
Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS name (my_port) to the port.

Example
$ source ~/overcloudrc
$ openstack port create --network public --dns-name my_port new_port
Display the details for your port (new_port).

Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
Output
+-------------------------+----------------------------------------------+
| Field                   | Value                                        |
+-------------------------+----------------------------------------------+
| dns_assignment          | fqdn='my_port.example.com',                  |
|                         | hostname='my_port',                          |
|                         | ip_address='10.65.176.113'                   |
| dns_domain              | example.com                                  |
| dns_name                | my_port                                      |
| name                    | new_port                                     |
+-------------------------+----------------------------------------------+
Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier with NeutronDnsDomain.

Create a new VM instance (my_vm) using the port (new_port) that you just created.

Example
$ openstack server create --image rhel --flavor m1.small --port new_port my_vm
Display the details for your port (new_port).

Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
Output
+-------------------------+----------------------------------------------+
| Field                   | Value                                        |
+-------------------------+----------------------------------------------+
| dns_assignment          | fqdn='my_vm.example.com',                    |
|                         | hostname='my_vm',                            |
|                         | ip_address='10.65.176.113'                   |
| dns_domain              | example.com                                  |
| dns_name                | my_vm                                        |
| name                    | new_port                                     |
+-------------------------+----------------------------------------------+
Note that the Compute service changes the dns_name attribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).
Additional resources
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
- port in the Command line interface reference
- server create in the Command line interface reference
21.4. Assigning DHCP attributes to ports
You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension (extra_dhcp_opt) to configure ports of DHCP clients with DHCP attributes. For example, you can add a PXE boot option such as tftp-server, server-ip-address, or bootfile-name to a DHCP client port.
The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6.
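For example, the following is a minimal sketch of creating a port with an IPv6 DHCP option. The network name public, the option name dns-server, the address 2001:db8::53, and the port name new_v6_port are sample values; valid option names depend on the DHCP specification:

$ openstack port create --network public \
  --extra-dhcp-option name=dns-server,value=2001:db8::53,ip-version=6 \
  new_v6_port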
When a VM instance starts, the RHOSP Networking service supplies port information to the instance using the DHCP protocol. If you add DHCP information to a port that is already connected to a running instance, the instance uses the new DHCP port information only after the instance is restarted.
Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu, server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to the DHCP specification.
Prerequisites
- You must have RHOSP administrator privileges.
Procedure
Log in to the undercloud host as the stack user. Source the undercloud credentials file:

$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-environment.yaml
Your environment file must contain the keyword parameter_defaults. Under this keyword, add the extra DHCP option extension, extra_dhcp_opt.

Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.
Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
  -e /home/stack/templates/my-environment.yaml
Verification
Source your credentials file.
Example
$ source ~/overcloudrc
Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP specification to the new port.

Example
$ openstack port create --extra-dhcp-option \
  name=domain-name,value=test.domain --extra-dhcp-option \
  name=ntp-server,value=192.0.2.123 --network public new_port
Display the details for your port (new_port).

Example
$ openstack port show new_port -c extra_dhcp_opts
Sample output
+-----------------+-----------------------------------------------------------------+
| Field           | Value                                                           |
+-----------------+-----------------------------------------------------------------+
| extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
|                 | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123'  |
+-----------------+-----------------------------------------------------------------+
Additional resources
- OVN supported DHCP options
- Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
- port create in the Command line interface reference
- port show in the Command line interface reference
21.5. Enabling NUMA affinity on ports
To enable users to create instances with NUMA affinity on the port, you must load the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extension, port_numa_affinity_policy.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
Log in to the undercloud host as the stack user. Source the undercloud credentials file:

$ source ~/stackrc
To enable the port_numa_affinity_policy extension, open the environment file where the NeutronPluginExtensions parameter is defined, and add port_numa_affinity_policy to the list:

parameter_defaults:
  NeutronPluginExtensions: "qos,port_numa_affinity_policy"
Add the environment file that you modified to the stack with your other environment files, and redeploy the overcloud:
Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /home/stack/templates/<custom_environment_file>.yaml
Verification
Source your credentials file.
Example
$ source ~/overcloudrc
Create a new port.
When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port:
- --numa-policy-required - NUMA affinity policy required to schedule this port.
- --numa-policy-preferred - NUMA affinity policy preferred to schedule this port.
- --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.

Example

$ openstack port create --network public \
  --numa-policy-legacy myNUMAAffinityPort
Display the details for your port.

Example
$ openstack port show myNUMAAffinityPort -c numa_affinity_policy
Sample output
When the extension is loaded, the Value column should read legacy, preferred, or required. If the extension has failed to load, Value reads None:

+----------------------+--------+
| Field                | Value  |
+----------------------+--------+
| numa_affinity_policy | legacy |
+----------------------+--------+
Additional resources
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
- Creating an instance with NUMA affinity on the port in the Creating and managing instances guide
21.6. Loading kernel modules
Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances.
By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that heat stores configuration information about the required kernel modules needed for features like GRE tunneling. Later, during normal module management, these required kernel modules are loaded.
Procedure
On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
Tip
Heat uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.
In the YAML environment file under parameter_defaults, set ExtraKernelModules to the name of the module that you want to load.

Example
parameter_defaults:
  ComputeParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
  ControllerParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-modules-environment.yaml
Verification
If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node:

Example
$ sudo lsmod | grep nf_conntrack_proto_gre
Additional resources
- Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
- Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide
21.7. Limiting queries to the metadata service
To protect the RHOSP environment against cyber threats such as denial of service (DoS) attacks, the Networking service (neutron) offers administrators the ability to limit the rate at which VM instances can query the Compute metadata service. Administrators do this by assigning values to a set of parameters in the metadata_rate_limiting section of the neutron.conf configuration file. The Networking service uses these parameters to configure HAProxy servers to perform the rate limiting. The HAProxy servers run inside L3 routers and DHCP agents in the OVS back end, and inside the metadata service in the OVN back end.
Prerequisites
- You have access to the RHOSP Compute nodes and permission to update configuration files.
- Your RHOSP environment uses IPv4 networking. Currently, the Networking service does not support metadata rate limiting on IPv6 networks.
- This procedure requires you to restart the OVN metadata service or the OVS metadata agent. Schedule this activity for a maintenance window to minimize the operational impact of any potential disruption.
Procedure
On every Compute node, in the metadata_rate_limiting section of /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf, set values for the following parameters:

- rate_limit_enabled - enables you to limit the rate of metadata requests. The default value is false. Set the value to true to enable metadata rate limiting.
- ip_versions - the IP version, 4, used for metadata IP addresses on which you want to control query rates. RHOSP does not yet support metadata rate limiting for IPv6 networks.
- base_window_duration - the time span, in seconds, during which query requests are limited. The default value is 10 seconds.
- base_query_rate_limit - the maximum number of requests allowed during the base_window_duration. The default value is 10 requests.
- burst_window_duration - the time span, in seconds, during which a request rate higher than the base_query_rate_limit is allowed. The default value is 10 seconds.
- burst_query_rate_limit - the maximum number of requests allowed during the burst_window_duration. The default value is 10 requests.

Example
In this example, the Networking service is configured for a base time and rate that allows instances to query the IPv4 metadata service IP address 6 times over a 60-second period. The Networking service is also configured for a burst time and rate that allows a higher rate of 2 queries during shorter periods of 10 seconds each:
[metadata_rate_limiting]
rate_limit_enabled = True
ip_versions = 4
base_window_duration = 60
base_query_rate_limit = 6
burst_window_duration = 10
burst_query_rate_limit = 2
Restart the metadata service.
Depending on the Networking service mechanism driver your deployment uses, do one of the following:
ML2/OVN
On the Compute nodes, restart tripleo_ovn_metadata_agent.service.

ML2/OVS

On the Compute nodes, restart tripleo_neutron_metadata_agent.service.
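For example, the following is a minimal sketch of restarting the ML2/OVN metadata service on a Compute node, assuming the service is managed by systemd, which is the default in containerized RHOSP deployments:

$ sudo systemctl restart tripleo_ovn_metadata_agent.service

For ML2/OVS, restart tripleo_neutron_metadata_agent.service instead.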