Chapter 21. Common administrative networking tasks
Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS.
21.1. Configuring the L2 population driver
The L2 Population driver is used in Networking service (neutron) ML2/OVS environments to enable broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including agents that do not host the destination network. This design incurs significant network and processing overhead. The L2 Population driver instead implements a partial mesh for ARP resolution and MAC learning traffic, and it creates tunnels for a particular network only between the nodes that host that network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast.
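To make the scaling difference concrete, the following sketch compares the number of tunnels in a full mesh with the partial mesh that the L2 Population driver builds. The agent and network names are hypothetical, and this is an illustrative model, not Networking service code:

```python
from itertools import combinations

# Hypothetical mapping of agents (hosts) to the overlay networks they host.
agents = {
    "compute-0": {"net-a"},
    "compute-1": {"net-a", "net-b"},
    "compute-2": {"net-b"},
    "compute-3": {"net-b"},
}

# Full mesh: every agent tunnels to every other agent, regardless of networks.
full_mesh = set(combinations(sorted(agents), 2))

# Partial mesh (L2 Population): a tunnel exists between two agents only if
# they host at least one overlay network in common.
partial_mesh = {
    pair for pair in combinations(sorted(agents), 2)
    if agents[pair[0]] & agents[pair[1]]
}

print(len(full_mesh), len(partial_mesh))  # → 6 4
```

Even in this four-node toy example the partial mesh drops a third of the tunnels; on large clouds, where most hosts do not share networks, the savings grow accordingly.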
Prerequisites
- You must have RHOSP administrator privileges.
- The Networking service must be using the ML2/OVS mechanism driver.
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

   $ source ~/stackrc

3. Create a custom YAML environment file.

   Example

   $ vi /home/stack/templates/my-environment.yaml

4. Your environment file must contain the keyword parameter_defaults. Under this keyword, add the following lines:

   parameter_defaults:
     NeutronMechanismDrivers: ['openvswitch', 'l2population']
     NeutronEnableL2Pop: 'True'
     NeutronEnableARPResponder: true

5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   Example

   $ openstack overcloud deploy --templates \
     -e <your_environment_files> \
     -e /home/stack/templates/my-environment.yaml
Verification
1. Obtain the IDs for the OVS agents:

   $ openstack network agent list -c ID -c Binary

   Sample output

   +--------------------------------------+---------------------------+
   | ID                                   | Binary                    |
   +--------------------------------------+---------------------------+
   | 003a8750-a6f9-468b-9321-a6c03c77aec7 | neutron-openvswitch-agent |
   | ...                                  | ...                       |
   +--------------------------------------+---------------------------+
2. Using an ID from one of the OVS agents, confirm that the L2 Population driver is set on the OVS agent.

   Example

   This example verifies the configuration of the L2 Population driver on the neutron-openvswitch-agent with ID 003a8750-a6f9-468b-9321-a6c03c77aec7:

   $ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep l2_population

   Sample output

   "l2_population": true,
3. Ensure that the ARP responder feature is enabled for the OVS agent.

   Example

   $ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep arp_responder_enabled

   Sample output

   "arp_responder_enabled": true,
21.2. Tuning keepalived to avoid VRRP packet loss
If the number of highly available (HA) routers on a single host is high, when an HA router failover occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This overflow stops Open vSwitch (OVS) from responding to and forwarding those VRRP messages.
To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.
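To see how the advertisement interval affects failover behavior, recall that VRRP (RFC 3768) declares the master dead after roughly three missed advertisements plus a priority-based skew time. The following sketch computes that master-down interval; the priority value is an arbitrary example:

```python
def vrrp_master_down_interval(advert_int: float, priority: int) -> float:
    """Master_Down_Interval per RFC 3768: 3 * advert_int + skew_time,
    where skew_time = (256 - priority) / 256 seconds."""
    skew_time = (256 - priority) / 256
    return 3 * advert_int + skew_time

# Default advertisement interval (2 s) versus a tuned interval of 7 s,
# for a backup router with example priority 100.
print(vrrp_master_down_interval(2, 100))  # → 6.609375
print(vrrp_master_down_interval(7, 100))  # → 21.609375
```

The trade-off is visible in the numbers: a longer ha_vrrp_advert_int sends fewer VRRP packets (easing IRQ pressure), but backups take proportionally longer to detect a failed master.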
Procedure

1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

   Example

   $ source ~/stackrc
2. Create a custom YAML environment file.

   Example

   $ vi /home/stack/templates/my-neutron-environment.yaml

   Tip: The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.
3. In the YAML environment file, increase the VRRP advertisement interval by setting the ha_vrrp_advert_int argument to a value specific to your site. (The default is 2 seconds.)

   You can also set values for gratuitous ARP messages:

   ha_vrrp_garp_master_repeat
       The number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.)
   ha_vrrp_garp_master_delay
       The delay for the second set of gratuitous ARP messages after the lower priority advert is received in the master state. (The default is 5 seconds.)

   Example

   parameter_defaults:
     ControllerExtraConfig:
       neutron::agents::l3::ha_vrrp_advert_int: 7
4. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   Example

   $ openstack overcloud deploy --templates \
     -e [your-environment-files] \
     -e /home/stack/templates/my-neutron-environment.yaml
21.3. Specifying the name that DNS assigns to ports
You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) DNS domain for ports extension (dns_domain_ports).
You enable the DNS domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value, openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.
You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to internally resolve names for ports in your RHOSP environment. Using the NeutronDnsDomain default value, openstacklocal, means that the Networking service does not internally resolve port names for DNS.
Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
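The naming behavior described above reduces to simple string handling: the port FQDN is the DNS name joined to the NeutronDnsDomain value, and the Compute service later replaces the DNS name with the instance hostname. The following sketch illustrates this; it is plain string manipulation, not Networking service code:

```python
def port_fqdn(dns_name: str, dns_domain: str) -> str:
    # dns_assignment reports the DNS name concatenated with the domain.
    return f"{dns_name}.{dns_domain}"

dns_domain = "example.com"   # set with the NeutronDnsDomain parameter
dns_name = "my_port"         # set with `openstack port create --dns-name`

print(port_fqdn(dns_name, dns_domain))  # → my_port.example.com

# When the port is attached to a booting instance, the Compute service
# overwrites dns_name with the instance hostname.
dns_name = "my_vm"
print(port_fqdn(dns_name, dns_domain))  # → my_vm.example.com
```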
Procedure

1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools.

   Example

   $ source ~/stackrc
2. Create a custom YAML environment file (my-neutron-environment.yaml).

   Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.

   Example

   $ vi /home/stack/templates/my-neutron-environment.yaml

   Tip: The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary.
3. In the environment file, add a parameter_defaults section. Under this section, add the DNS domain for ports extension, dns_domain_ports.

   Example

   parameter_defaults:
     NeutronPluginExtensions: "qos,port_security,dns_domain_ports"

   Note: If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These extensions are incompatible, and both extensions cannot be defined simultaneously.
4. Also in the parameter_defaults section, add your domain name (example.com) using the NeutronDnsDomain parameter.

   Example

   parameter_defaults:
     NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
     NeutronDnsDomain: "example.com"
5. Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file.

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   Example

   $ openstack overcloud deploy --templates \
     -e [your-environment-files] \
     -e /home/stack/templates/my-neutron-environment.yaml
Verification
1. Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS name (my_port) to the port.

   Example

   $ source ~/overcloudrc
   $ openstack port create --network public --dns-name my_port new_port
2. Display the details for your port (new_port).

   Example

   $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

   Sample output

   +----------------+-------------------------------------------------+
   | Field          | Value                                           |
   +----------------+-------------------------------------------------+
   | dns_assignment | fqdn='my_port.example.com', hostname='my_port', |
   |                | ip_address='...'                                |
   | dns_domain     | example.com                                     |
   | dns_name       | my_port                                         |
   | name           | new_port                                        |
   +----------------+-------------------------------------------------+

   Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier with NeutronDnsDomain.
3. Create a new VM instance (my_vm) using the port (new_port) that you just created.

   Example

   $ openstack server create --image rhel --flavor m1.small --port new_port my_vm
4. Display the details for your port (new_port) again.

   Example

   $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

   Sample output

   +----------------+-------------------------------------------------+
   | Field          | Value                                           |
   +----------------+-------------------------------------------------+
   | dns_assignment | fqdn='my_vm.example.com', hostname='my_vm',     |
   |                | ip_address='...'                                |
   | dns_domain     | example.com                                     |
   | dns_name       | my_vm                                           |
   | name           | new_port                                        |
   +----------------+-------------------------------------------------+

   Note that the Compute service changes the dns_name attribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).
21.4. Assigning DHCP attributes to ports
You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension (extra_dhcp_opt) to configure DHCP client ports with DHCP attributes. For example, you can add a PXE boot option such as tftp-server, server-ip-address, or bootfile-name to a DHCP client port.
The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6.
When a VM instance starts, the RHOSP Networking service supplies port information to the instance by using the DHCP protocol. If you add DHCP information to a port that is already connected to a running instance, the instance uses the new DHCP port information only after it is restarted.
Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu, server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to the DHCP specification.
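As described above, extra_dhcp_opt is an array of option objects. The following sketch builds that structure for the options used in the verification steps later in this section; the exact serialized form (key names such as ip_version) is an assumption for illustration:

```python
# Build the extra_dhcp_opt array for a port: one object per DHCP option,
# each with an opt_name and an opt_value (values taken from the
# verification example in this section).
extra_dhcp_opts = [
    {"opt_name": "domain-name", "opt_value": "test.domain"},
    {"opt_name": "ntp-server", "opt_value": "192.0.2.123"},
    # For an IPv6 option, include the third member for the IP version,
    # for example: {"opt_name": "dns-server",
    #               "opt_value": "2001:db8::1", "ip_version": 6}
]

names = [opt["opt_name"] for opt in extra_dhcp_opts]
print(names)  # → ['domain-name', 'ntp-server']
```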
Prerequisites
- You must have RHOSP administrator privileges.
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

   $ source ~/stackrc

3. Create a custom YAML environment file.

   Example

   $ vi /home/stack/templates/my-environment.yaml

4. Your environment file must contain the keyword parameter_defaults. Under this keyword, add the extra DHCP option extension, extra_dhcp_opt.

   Example

   parameter_defaults:
     NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"

5. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   Example

   $ openstack overcloud deploy --templates \
     -e <your_environment_files> \
     -e /home/stack/templates/my-environment.yaml
Verification

1. Source your credentials file.

   Example

   $ source ~/overcloudrc

2. Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP specification to the new port.

   Example

   $ openstack port create --extra-dhcp-option \
     name=domain-name,value=test.domain --extra-dhcp-option \
     name=ntp-server,value=192.0.2.123 --network public new_port

3. Display the details for your port (new_port).

   Example

   $ openstack port show new_port -c extra_dhcp_opts

   Sample output

   +-----------------+-----------------------------------------------------------------+
   | Field           | Value                                                           |
   +-----------------+-----------------------------------------------------------------+
   | extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
   |                 | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123'  |
   +-----------------+-----------------------------------------------------------------+
21.5. Enabling NUMA affinity on ports
To enable users to create instances with NUMA affinity on the port, you must load the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extension, port_numa_affinity_policy.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

   $ source ~/stackrc

3. To enable the port_numa_affinity_policy extension, open the environment file where the NeutronPluginExtensions parameter is defined, and add port_numa_affinity_policy to the list:

   parameter_defaults:
     NeutronPluginExtensions: "qos,port_numa_affinity_policy"

4. Add the environment file that you modified to the stack with your other environment files, and redeploy the overcloud:

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   $ openstack overcloud deploy --templates \
     -e <your_environment_files> \
     -e /home/stack/templates/<custom_environment_file>.yaml
Verification
1. Source your credentials file.

   Example

   $ source ~/overcloudrc

2. Create a new port. When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port:

   --numa-policy-required - NUMA affinity policy required to schedule this port.
   --numa-policy-preferred - NUMA affinity policy preferred to schedule this port.
   --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.

   Example

   $ openstack port create --network public \
     --numa-policy-legacy myNUMAAffinityPort
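The three --numa-policy-* options map directly onto the values that later appear in the port's numa_affinity_policy attribute, as shown in the sample output below. The following sketch records that mapping; it is illustrative only, and the None case for a port created without any of these options is an assumption:

```python
# Mapping from `openstack port create` option to the resulting
# numa_affinity_policy attribute value on the port.
numa_policy_options = {
    "--numa-policy-required": "required",
    "--numa-policy-preferred": "preferred",
    "--numa-policy-legacy": "legacy",
}

# A port created without any of these options is assumed to show None.
print(numa_policy_options.get("--numa-policy-legacy"))  # → legacy
```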
3. Display the details for your port.

   Example

   $ openstack port show myNUMAAffinityPort -c numa_affinity_policy

   Sample output

   When the extension is loaded, the Value column should read legacy, preferred, or required. If the extension has failed to load, Value reads None:

   +----------------------+--------+
   | Field                | Value  |
   +----------------------+--------+
   | numa_affinity_policy | legacy |
   +----------------------+--------+
21.6. Loading kernel modules
Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances.
By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that heat stores configuration information about the kernel modules required for features like GRE tunneling. Later, during normal module management, these kernel modules are loaded.
Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

   Example

   $ vi /home/stack/templates/my-modules-environment.yaml

   Tip: Heat uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates.

2. In the YAML environment file under parameter_defaults, set ExtraKernelModules to the name of the module that you want to load.

   Example

   parameter_defaults:
     ExtraKernelModules:
       nf_conntrack_proto_gre: {}

3. Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file.

   Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

   Example

   $ openstack overcloud deploy --templates \
     -e [your-environment-files] \
     -e /home/stack/templates/my-modules-environment.yaml
Verification

If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node:

Example

$ sudo lsmod | grep nf_conntrack_proto_gre
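If you need to check many Compute nodes, the lsmod verification can be scripted. The following sketch parses lsmod-style text; the sample output is hypothetical and hard-coded rather than collected from a node:

```python
def loaded_modules(lsmod_output: str) -> set:
    """Return the module names from `lsmod` output: the first column of
    every line after the 'Module  Size  Used by' header."""
    lines = lsmod_output.strip().splitlines()[1:]
    return {line.split()[0] for line in lines}

# Hypothetical sample of lsmod output on a Compute node.
sample = """\
Module                  Size  Used by
nf_conntrack_proto_gre 16384  0
openvswitch           155648  8
"""

print("nf_conntrack_proto_gre" in loaded_modules(sample))  # → True
```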
21.7. Limiting queries to the metadata service
To protect the RHOSP environment against cyber threats such as denial of service (DoS) attacks, the Networking service (neutron) offers administrators the ability to limit the rate at which VM instances can query the Compute metadata service. Administrators do this by assigning values to a set of parameters in the metadata_rate_limiting section of the neutron.conf configuration file. The Networking service uses these parameters to configure HAProxy servers to perform the rate limiting. The HAProxy servers run inside L3 routers and DHCP agents in the OVS back end, and inside the metadata service in the OVN back end.
Prerequisites
- You have access to the RHOSP Compute nodes and permission to update configuration files.
- Your RHOSP environment uses IPv4 networking. Currently, the Networking service does not support metadata rate limiting on IPv6 networks.
- This procedure requires you to restart the OVN metadata service or the OVS metadata agent. Schedule this activity for a maintenance window to minimize the operational impact of any potential disruption.
Procedure
1. On every Compute node, in the metadata_rate_limiting section of /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf, set values for the following parameters:

   rate_limit_enabled
       Enables you to limit the rate of metadata requests. The default value is false. Set the value to true to enable metadata rate limiting.
   ip_versions
       The IP version, 4, used for metadata IP addresses on which you want to control query rates. RHOSP does not yet support metadata rate limiting for IPv6 networks.
   base_window_duration
       The time span, in seconds, during which query requests are limited. The default value is 10 seconds.
   base_query_rate_limit
       The maximum number of requests allowed during the base_window_duration. The default value is 10 requests.
   burst_window_duration
       The time span, in seconds, during which a request rate higher than the base rate is allowed. The default value is 10 seconds.
   burst_query_rate_limit
       The maximum number of requests allowed during the burst_window_duration. The default value is 10 requests.

   Example

   In this example, the Networking service is configured for a base time and rate that allows instances to query the IPv4 metadata service IP address 6 times over a 60 second period. The Networking service is also configured for a burst time and rate that allows a higher rate of 2 queries during shorter periods of 10 seconds each:

   [metadata_rate_limiting]
   rate_limit_enabled = True
   ip_versions = 4
   base_window_duration = 60
   base_query_rate_limit = 6
   burst_window_duration = 10
   burst_query_rate_limit = 2
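One way to read the base and burst parameters together is as two sliding-window caps that a request must satisfy at the same time. The following sketch reflects that interpretation using the values from the example above; it is an illustration only, not the HAProxy configuration that the Networking service actually generates:

```python
def allowed(request_times, now,
            base_window=60, base_limit=6,
            burst_window=10, burst_limit=2):
    """Return True if a request at `now` stays within both the base cap
    (here 6 per 60 s) and the burst cap (here 2 per 10 s)."""
    in_base = sum(1 for t in request_times if now - t < base_window)
    in_burst = sum(1 for t in request_times if now - t < burst_window)
    return in_base < base_limit and in_burst < burst_limit

history = [0.0, 1.0]              # two queries in the last few seconds
print(allowed(history, now=2.0))  # → False: burst cap of 2 per 10 s reached
print(allowed(history, now=15.0)) # → True: the burst window has passed
```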
2. Restart the metadata service. Depending on the Networking service mechanism driver that your deployment uses, do one of the following:

   ML2/OVN
       On the Compute nodes, restart tripleo_ovn_metadata_agent.service.
   ML2/OVS
       On the Compute nodes, restart tripleo_neutron_metadata_agent.service.