Chapter 9. Using Quality of Service (QoS) policies to manage data traffic
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Platform (RHOSP) networks.
You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.
Internal network owned ports, such as DHCP and internal router ports, are excluded from network policy application.
You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to.
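For example, assuming that a QoS policy named bw-limiter and a project network named private already exist (both names are placeholders), you can attach the policy at the network level so that ports on that network with no policy of their own inherit it:

$ openstack network set --qos-policy bw-limiter private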
9.1. QoS rules
You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat OpenStack Platform (RHOSP) Networking service (neutron):
- Minimum bandwidth (minimum_bandwidth) - Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified bandwidth to each port on which the rule is applied.
- Bandwidth limit (bandwidth_limit) - Provides bandwidth limitations on networks, ports, floating IPs, and router gateway IPs. If implemented, any traffic that exceeds the specified rate is dropped.
- DSCP marking (dscp_marking) - Marks network traffic with a Differentiated Services Code Point (DSCP) value.
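For orientation, the following commands sketch how one rule of each type is added to an existing policy. The policy name my-qos-policy and the numeric values are placeholders; each rule type is covered in detail later in this chapter:

$ openstack network qos rule create --type minimum-bandwidth \
  --min-kbps 1000000 --egress my-qos-policy

$ openstack network qos rule create --type bandwidth-limit \
  --max-kbps 50000 --max-burst-kbits 50000 --egress my-qos-policy

$ openstack network qos rule create --type dscp-marking \
  --dscp-mark 26 my-qos-policy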
QoS policies can be enforced in various contexts, including virtual machine instance placements, floating IP assignments, and gateway IP assignments.
Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress traffic (upload from instance), ingress traffic (download to instance), or both.
Supported traffic direction by mechanism driver

Rule [8] | ML2/OVS | ML2/SR-IOV | ML2/OVN
---|---|---|---
Minimum bandwidth | Egress only [4][5] | Egress only | Currently, no support [6]
Bandwidth limit | Egress [1][2] and ingress | Egress only [3] | Egress and ingress
DSCP marking | Egress only | N/A | Egress only [7]
[1] The OVS egress bandwidth limit is performed in the TAP interface and is traffic policing, not traffic shaping.
[2] In RHOSP 16.2.2 and later, the OVS egress bandwidth limit is supported in hardware offloaded ports by applying the QoS policy in the network interface using ip link commands.
[3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it.
[4] Rule applies only to non-tunnelled networks: flat and VLAN.
[5] The OVS egress minimum bandwidth is supported in hardware offloaded ports by applying the QoS policy in the network interface using ip link commands.
[6] https://bugzilla.redhat.com/show_bug.cgi?id=2060310
[7] ML2/OVN does not support DSCP marking on tunneled protocols.
[8] RHOSP does not support QoS for trunk ports.
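For reference, the kind of ip link command that footnotes [2] and [5] refer to for hardware offloaded ports resembles the following sketch; the device name, VF number, and rates are placeholders, and the agents normally apply this for you:

# ip link set enp4s0f1 vf 8 max_tx_rate 999 min_tx_rate 100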
Supported traffic direction by mechanism driver

Enforcement type | ML2/OVS | ML2/SR-IOV | ML2/OVN
---|---|---|---
Placement | Egress and ingress | Egress and ingress | Currently, no support
Supported traffic direction by mechanism driver

Enforcement type | ML2/OVS | ML2/OVN
---|---|---
Floating IP | Egress and ingress | Egress and ingress
Gateway IP | Egress and ingress | Currently, no support [1]
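As an illustration of the floating IP enforcement context, you can attach a bandwidth-limit policy directly to a floating IP. In the following sketch, the policy name bw-limiter and the address 203.0.113.15 are placeholders:

$ openstack floating ip set --qos-policy bw-limiter 203.0.113.15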
9.2. Configuring the Networking service for QoS policies
The quality of service feature in the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) is provided through the qos service plug-in. With the ML2/OVS and ML2/OVN mechanism drivers, qos is loaded by default. However, this is not true for ML2/SR-IOV.
When using the qos service plug-in with the ML2/OVS and ML2/SR-IOV mechanism drivers, you must also load the qos extension on their respective agents.
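One quick way to see whether the qos extension is already active on an agent is to inspect the agent record; this is a sketch, and the agent ID is a placeholder. The configuration field of the output typically lists the loaded extensions:

$ openstack network agent list --agent-type open-vswitch

$ openstack network agent show <agent_ID>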
The following list summarizes the tasks that you must perform to configure the Networking service for QoS. The task details follow this list:
For all types of QoS policies:
- Add the qos service plug-in.
- Add the qos extension for the agents (OVS and SR-IOV only).
Additional tasks for scheduling VM instances using minimum bandwidth policies only:
- Specify the hypervisor name if it differs from the name that the Compute service (nova) uses.
- Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node.
- (Optional) Mark vnic_types as not supported.
Additional task for DSCP marking policies on systems that use ML2/OVS with tunneling only:
- Set dscp_inherit to true.
Prerequisites
- You must be the stack user with access to the RHOSP undercloud.
Procedure
- Log in to the undercloud host as the stack user.
Source the undercloud credentials file:
$ source ~/stackrc
Confirm that the qos service plug-in is not already loaded:

$ openstack network qos policy list

If the qos service plug-in is not loaded, then you receive a ResourceNotFound error. If you do not receive the error, then the plug-in is loaded, and you do not need to perform the steps in this topic.
Create a YAML custom environment file.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
Your environment file must contain the keyword parameter_defaults. On a new line below parameter_defaults, add qos to the NeutronServicePlugins parameter:

parameter_defaults:
  NeutronServicePlugins: "qos"
If you use the ML2/OVS or ML2/SR-IOV mechanism driver, then you must also load the qos extension on the agent, by using either the NeutronAgentExtensions or the NeutronSriovAgentExtensions variable, respectively:

ML2/OVS

parameter_defaults:
  NeutronServicePlugins: "qos"
  NeutronAgentExtensions: "qos"

ML2/SR-IOV

parameter_defaults:
  NeutronServicePlugins: "qos"
  NeutronSriovAgentExtensions: "qos"
If you want to schedule VM instances by using minimum bandwidth QoS policies, then you must also do the following:
Add placement to the list of plug-ins and ensure the list also includes qos:

parameter_defaults:
  NeutronServicePlugins: "qos,placement"
If the hypervisor name matches the canonical hypervisor name used by the Compute service (nova), skip to step 7.iii.
If the hypervisor name does not match the canonical hypervisor name used by the Compute service, specify the alternative hypervisor name, using resource_provider_default_hypervisor:

ML2/OVS

parameter_defaults:
  NeutronServicePlugins: "qos,placement"
  ExtraConfig:
    Neutron::agents::ml2::ovs::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}

ML2/SR-IOV

parameter_defaults:
  NeutronServicePlugins: "qos,placement"
  ExtraConfig:
    Neutron::agents::ml2::sriov::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}
Important
Another method for setting the alternative hypervisor name is to use resource_provider_hypervisors:

ML2/OVS

parameter_defaults:
  ExtraConfig:
    Neutron::agents::ml2::ovs::resource_provider_hypervisors: "ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"

ML2/SR-IOV

parameter_defaults:
  ExtraConfig:
    Neutron::agents::ml2::sriov::resource_provider_hypervisors: "ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"
Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node that needs to provide a minimum bandwidth.
You can configure egress, ingress, or both, using the following formats:
Configure only egress bandwidth, in kbps:
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:,<bridge1>:<egress_kbps>:,...,<bridgeN>:<egress_kbps>:
Configure only ingress bandwidth, in kbps:
NeutronOvsResourceProviderBandwidths: <bridge0>::<ingress_kbps>,<bridge1>::<ingress_kbps>,...,<bridgeN>::<ingress_kbps>
Configure both egress and ingress bandwidth, in kbps:
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:<ingress_kbps>,<bridge1>:<egress_kbps>:<ingress_kbps>,...,<bridgeN>:<egress_kbps>:<ingress_kbps>
Example - OVS agent
To configure the resource provider ingress and egress bandwidths for the OVS agent, add the following configuration to your network environment file:
parameter_defaults:
  ...
  NeutronBridgeMappings: physnet0:br-physnet0
  NeutronOvsResourceProviderBandwidths: br-physnet0:10000000:10000000
Example - SRIOV agent
To configure the resource provider ingress and egress bandwidths for the SRIOV agent, add the following configuration to your network environment file:
parameter_defaults:
  ...
  NeutronML2PhysicalNetworkMtus: physnet0:1500,physnet1:1500
  NeutronSriovResourceProviderBandwidths: ens5:40000000:40000000,ens6:40000000:40000000
Optional: To mark vnic_types as not supported when multiple ML2 mechanism drivers support them by default and multiple agents are being tracked in the Placement service, also add the following configuration to your environment file:

Example - OVS agent

parameter_defaults:
  ...
  NeutronOvsVnicTypeBlacklist: direct

Example - SRIOV agent

parameter_defaults:
  ...
  NeutronSriovVnicTypeBlacklist: direct
If you want to create DSCP marking policies and use ML2/OVS with a tunneling protocol (VXLAN or GRE), then under NeutronAgentExtensions, add the following lines:

parameter_defaults:
  ...
  ControllerExtraConfig:
    neutron::config::server_config:
      agent/dscp_inherit:
        value: true

When dscp_inherit is true, the Networking service copies the DSCP value of the inner header to the outer header.
Run the deployment command and include the core heat templates, other environment files, and this new custom environment file.
Important
The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/templates/my-neutron-environment.yaml
Verification
Confirm that the qos service plug-in is loaded:

$ openstack network qos policy list

If the qos service plug-in is loaded, then you do not receive a ResourceNotFound error.
Additional resources
- Extension drivers for the RHOSP Networking service
- Environment files in the Director Installation and Usage guide
- Including environment files in overcloud creation in the Director Installation and Usage guide
- Section 9.3.1, “Using Networking service back-end enforcement to enforce minimum bandwidth”
- Section 9.3.2, “Scheduling instances by using minimum bandwidth QoS policies”
- Section 9.4, “Limiting network traffic by using QoS policies”
- Section 9.5, “Prioritizing network traffic by using DSCP marking QoS policies”
9.3. Controlling minimum bandwidth by using QoS policies
For the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), a guaranteed minimum bandwidth QoS rule can be enforced in two distinct contexts: Networking service back-end enforcement and resource allocation scheduling enforcement.
The network back end, ML2/OVS or ML2/SR-IOV, attempts to guarantee that each port on which the rule is applied has no less than the specified network bandwidth.
When you use resource allocation scheduling bandwidth enforcement, the Compute service (nova) only places VM instances on hosts that support the minimum bandwidth.
You can apply QoS minimum bandwidth rules using Networking service back-end enforcement, resource allocation scheduling enforcement, or both.
The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum bandwidth QoS policies.
ML2 mechanism driver | Agent | VNIC types |
---|---|---|
ML2/SR-IOV | sriovnicswitch | direct |
ML2/OVS | openvswitch | normal |
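For illustration, when a port carries a minimum bandwidth QoS policy and resource allocation scheduling is in use, the Compute service only schedules the instance onto a host whose resource providers can satisfy the requested bandwidth. In the following sketch, the flavor, image, port, and instance names are placeholders:

$ openstack server create --flavor m1.large --image rhel-9 \
  --port port-with-min-bw --wait min-bw-instance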
9.3.1. Using Networking service back-end enforcement to enforce minimum bandwidth
You can guarantee a minimum bandwidth for network traffic for ports by applying Red Hat OpenStack Platform (RHOSP) quality of service (QoS) policies to the ports. These ports must be backed by a flat or VLAN physical network.
Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) does not support minimum bandwidth QoS rules.
Prerequisites
- The RHOSP Networking service (neutron) must have the qos service plug-in loaded. (This is the default.)
- Do not mix ports with and without bandwidth guarantees on the same physical interface, because this might cause denial of necessary resources (starvation) to the ports without a guarantee.
Tip
Create host aggregates to separate ports with bandwidth guarantees from those ports without bandwidth guarantees.
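For example, one way to follow this tip is to put the Compute nodes that host guaranteed ports into their own host aggregate. In the following sketch, the aggregate name, host name, and property are placeholders:

$ openstack aggregate create min-bw-hosts
$ openstack aggregate add host min-bw-hosts compute-0.localdomain
$ openstack aggregate set --property min_bw=true min-bw-hosts

You can then key flavors to that property (for example, through the aggregate_instance_extra_specs flavor extra specs used by the Compute scheduler) so that guaranteed and non-guaranteed workloads land on different hosts.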
Procedure
Source your credentials file.
Example
$ source ~/overcloudrc
Confirm that the qos service plug-in is loaded in the Networking service:

$ openstack network qos policy list

If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos service plug-in before you can continue. For more information, see Section 9.2, “Configuring the Networking service for QoS policies”.
Identify the ID of the project you want to create the QoS policy for:
$ openstack project list
Sample output
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo     |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin    |
+----------------------------------+----------+
Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:

$ openstack network qos policy create --share \
  --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw
Configure the rules for the policy.
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw:

$ openstack network qos rule create \
  --type minimum-bandwidth --min-kbps 40000000 \
  --ingress guaranteed_min_bw

$ openstack network qos rule create \
  --type minimum-bandwidth --min-kbps 40000000 \
  --egress guaranteed_min_bw
Configure a port to apply the policy to.
Example
In this example, the guaranteed_min_bw policy is applied to port ID 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12:

$ openstack port set --qos-policy guaranteed_min_bw \
  56x9aiw1-2v74-144x-c2q8-ed8w423a6s12
Verification
ML2/SR-IOV
Using root access, log in to the Compute node, and show the details of the virtual functions that are held in the physical function.
Example
# ip -details link show enp4s0f1
Sample output
50: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000
    link/ether 98:03:9b:9d:73:74 brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:9d:73:75 promiscuity 0 minmtu 68 maxmtu 9978
    bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 98:03:9b:9d:73:75 queue_id 0 addrgenmode eui64 numtxqueues 320 numrxqueues 40 gso_max_size 65536 gso_max_segs 65535 portname p1 switchid 74739d00039b0398
    vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 4 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 5 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 6 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 7 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 8 link/ether fa:16:3e:2a:d2:7f brd ff:ff:ff:ff:ff:ff, tx rate 999 (Mbps), max_tx_rate 999Mbps, spoof checking off, link-state disable, trust off, query_rss off
    vf 9 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
ML2/OVS
Using root access, log in to the Compute node, and show the tc rules and classes on the physical bridge interface.
Example
# tc class show dev mx-bond
Sample output
class htb 1:11 parent 1:fffe prio 0 rate 4Gbit ceil 34359Mbit burst 9000b cburst 8589b
class htb 1:1 parent 1:fffe prio 0 rate 72Kbit ceil 34359Mbit burst 9063b cburst 8589b
class htb 1:fffe root rate 34359Mbit ceil 34359Mbit burst 8589b cburst 8589b
Additional resources
- network qos policy create in the Command Line Interface Reference
- network qos rule create in the Command Line Interface Reference
- port set in the Command Line Interface Reference
9.3.2. Scheduling instances by using minimum bandwidth QoS policies
You can apply a minimum bandwidth QoS policy to a port to guarantee that the host on which its Red Hat OpenStack Platform (RHOSP) VM instance is spawned has a minimum network bandwidth.
Prerequisites
- The RHOSP Networking service (neutron) must have the qos and placement service plug-ins loaded. The qos service plug-in is loaded by default.
- The Networking service must support the following API extensions (see the check after this list):
  - agent-resources-synced
  - port-resource-request
  - qos-bw-minimum-ingress
- You must use the ML2/OVS or ML2/SR-IOV mechanism drivers.
- You can only modify a minimum bandwidth QoS policy when there are no instances using any of the ports the policy is assigned to. The Networking service cannot update the Placement API usage information if a port is bound.
- The Placement service must support microversion 1.29.
- The Compute service (nova) must support microversion 2.72.
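One quick way to confirm that the API extensions listed in the prerequisites are present is to filter the Networking service extension list; this is a sketch, so adjust it to your environment:

$ openstack extension list --network -c Alias -f value | \
  grep -E 'agent-resources-synced|port-resource-request|qos-bw-minimum-ingress'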
Procedure
Source your credentials file.
Example
$ source ~/overcloudrc
Confirm that the qos service plug-in is loaded in the Networking service:

$ openstack network qos policy list

If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos service plug-in before you can continue. For more information, see Section 9.2, “Configuring the Networking service for QoS policies”.
Identify the ID of the project you want to create the QoS policy for:
$ openstack project list
Sample output
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo     |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin    |
+----------------------------------+----------+
Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:

$ openstack network qos policy create --share \
  --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw
Configure the rules for the policy.
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw:

$ openstack network qos rule create \
  --type minimum-bandwidth --min-kbps 40000000 \
  --ingress guaranteed_min_bw

$ openstack network qos rule create \
  --type minimum-bandwidth --min-kbps 40000000 \
  --egress guaranteed_min_bw
Configure a port to apply the policy to.
Example
In this example, the guaranteed_min_bw policy is applied to port ID 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12:

$ openstack port set --qos-policy guaranteed_min_bw \
  56x9aiw1-2v74-144x-c2q8-ed8w423a6s12
Verification
- Log in to the undercloud host as the stack user.
Source the undercloud credentials file:
$ source ~/stackrc
List all the available resource providers:
$ openstack --os-placement-api-version 1.17 resource provider list
Sample output
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| uuid                                 | name                                                | generation | root_provider_uuid                   | parent_provider_uuid                 |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | dell-r730-014.localdomain                           | 28         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | None                                 |
| 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | dell-r730-063.localdomain                           | 18         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | None                                 |
| e2f5082a-c965-55db-acb3-8daf9857c721 | dell-r730-063.localdomain:NIC Switch agent          | 0          | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 |
| d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | dell-r730-014.localdomain:NIC Switch agent          | 0          | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 |
| f1ca35e2-47ad-53a0-9058-390ade93b73e | dell-r730-063.localdomain:NIC Switch agent:enp6s0f1 | 13         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | e2f5082a-c965-55db-acb3-8daf9857c721 |
| e518d381-d590-5767-8f34-c20def34b252 | dell-r730-014.localdomain:NIC Switch agent:enp6s0f1 | 19         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | d2fb0ef4-2f45-53a8-88be-113b3e64ba1b |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
Check the bandwidth a specific resource provides.
(undercloud)$ openstack --os-placement-api-version 1.17 \ resource provider inventory list <rp_uuid>
Example
In this example, the bandwidth provided by interface enp6s0f1 on the host dell-r730-014 is checked, using the resource provider UUID e518d381-d590-5767-8f34-c20def34b252:

[stack@dell-r730-014 nova]$ openstack --os-placement-api-version 1.17 \
  resource provider inventory list e518d381-d590-5767-8f34-c20def34b252
Sample output
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| resource_class             | allocation_ratio | min_unit | max_unit   | reserved | step_size | total    |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| NET_BW_EGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
| NET_BW_IGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
To check claims against the resource provider when instances are running, run the following command:
(undercloud)$ openstack --os-placement-api-version 1.17 \ resource provider show --allocations <rp_uuid>
Example
In this example, claims against the resource provider are checked on the host dell-r730-014, using the resource provider UUID e518d381-d590-5767-8f34-c20def34b252:

[stack@dell-r730-014 nova]$ openstack --os-placement-api-version 1.17 \
  resource provider show --allocations e518d381-d590-5767-8f34-c20def34b252 \
  -f value -c allocations
Sample output
{3cbb9e07-90a8-4154-8acd-b6ec2f894a83: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 8848b88b-4464-443f-bf33-5d4e49fd6204: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 9a29e946-698b-4731-bc28-89368073be1a: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 a6c83b86-9139-4e98-9341-dc76065136cc: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}},
 da60e33f-156e-47be-a632-870172ec5483: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 eb582a0e-8274-4f21-9890-9a0d55114663: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}}}
Additional resources
- network qos policy create in the Command Line Interface Reference
- network qos rule create in the Command Line Interface Reference
- port set in the Command Line Interface Reference
9.4. Limiting network traffic by using QoS policies
You can create a Red Hat OpenStack Platform (RHOSP) Networking service (neutron) quality of service (QoS) policy that limits the bandwidth on your RHOSP networks, ports, or floating IPs, and drops any traffic that exceeds the specified rate.
Prerequisites
- The Networking service must have the qos service plug-in loaded. (The plug-in is loaded by default.)
Procedure
Source your credentials file.
Example
$ source ~/overcloudrc
Confirm that the qos service plug-in is loaded in the Networking service:

$ openstack network qos policy list

If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos service plug-in before you can continue. For more information, see Section 9.2, “Configuring the Networking service for QoS policies”.
Identify the ID of the project you want to create the QoS policy for:
$ openstack project list
Sample output
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo     |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin    |
+----------------------------------+----------+
Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named bw-limiter is created for the admin project:

$ openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 bw-limiter
Configure the rules for the policy.
Note
You can add more than one rule to a policy, as long as the type or direction of each rule is different. For example, you can specify two bandwidth-limit rules, one with egress and one with ingress direction.
Example
In this example, QoS ingress and egress rules are created for the policy named bw-limiter with a bandwidth limit of 50000 kbps and a maximum burst size of 50000 kbps:

$ openstack network qos rule create --type bandwidth-limit \
  --max-kbps 50000 --max-burst-kbits 50000 --ingress bw-limiter

$ openstack network qos rule create --type bandwidth-limit \
  --max-kbps 50000 --max-burst-kbits 50000 --egress bw-limiter
You can create a port with a policy attached to it, or attach a policy to a pre-existing port.
Example - create a port with a policy attached
In this example, the policy bw-limiter is associated with port port2:

$ openstack port create --qos-policy bw-limiter --network private port2
Sample output
+-----------------------+--------------------------------------------------+
| Field                 | Value                                            |
+-----------------------+--------------------------------------------------+
| admin_state_up        | UP                                               |
| allowed_address_pairs |                                                  |
| binding_host_id       |                                                  |
| binding_profile       |                                                  |
| binding_vif_details   |                                                  |
| binding_vif_type      | unbound                                          |
| binding_vnic_type     | normal                                           |
| created_at            | 2022-07-04T19:20:24Z                             |
| data_plane_status     | None                                             |
| description           |                                                  |
| device_id             |                                                  |
| device_owner          |                                                  |
| dns_assignment        | None                                             |
| dns_name              | None                                             |
| extra_dhcp_opts       |                                                  |
| fixed_ips             | ip_address='192.0.2.210', subnet_id='292f8c-...' |
| id                    | f51562ee-da8d-42de-9578-f6f5cb248226             |
| ip_address            | None                                             |
| mac_address           | fa:16:3e:d9:f2:ba                                |
| name                  | port2                                            |
| network_id            | 55dc2f70-0f92-4002-b343-ca34277b0234             |
| option_name           | None                                             |
| option_value          | None                                             |
| port_security_enabled | False                                            |
| project_id            | 98a2f53c20ce4d50a40dac4a38016c69                 |
| qos_policy_id         | 8491547e-add1-4c6c-a50e-42121237256c             |
| revision_number       | 6                                                |
| security_group_ids    | 0531cc1a-19d1-4cc7-ada5-49f8b08245be             |
| status                | DOWN                                             |
| subnet_id             | None                                             |
| tags                  | []                                               |
| trunk_details         | None                                             |
| updated_at            | 2022-07-04T19:23:00Z                             |
+-----------------------+--------------------------------------------------+
Example - attach a policy to a pre-existing port
In this example, the policy bw-limiter is associated with port1:

$ openstack port set --qos-policy bw-limiter port1
Verification
Confirm that the bandwidth limit policy is applied to the port.
Obtain the policy ID.
Example
In this example, the QoS policy bw-limiter is queried:

$ openstack network qos policy show bw-limiter
Sample output
+-------------------+-------------------------------------------------------------------+
| Field             | Value                                                             |
+-------------------+-------------------------------------------------------------------+
| description       |                                                                   |
| id                | 8491547e-add1-4c6c-a50e-42121237256c                              |
| is_default        | False                                                             |
| name              | bw-limiter                                                        |
| project_id        | 98a2f53c20ce4d50a40dac4a38016c69                                  |
| revision_number   | 4                                                                 |
| rules             | [{u'max_kbps': 50000, u'direction': u'egress',                    |
|                   | u'type': u'bandwidth_limit',                                      |
|                   | u'id': u'0db48906-a762-4d32-8694-3f65214c34a6',                   |
|                   | u'max_burst_kbps': 50000,                                         |
|                   | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'},       |
|                   | [{u'max_kbps': 50000, u'direction': u'ingress',                   |
|                   | u'type': u'bandwidth_limit',                                      |
|                   | u'id': u'faabef24-e23a-4fdf-8e92-f8cb66998834',                   |
|                   | u'max_burst_kbps': 50000,                                         |
|                   | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}]       |
| shared            | False                                                             |
+-------------------+-------------------------------------------------------------------+
Query the port, and confirm that its policy ID matches the one obtained in the previous step.
Example
In this example, port1 is queried:

$ openstack port show port1
Sample output
+-------------------------+--------------------------------------------------------------------+
| Field                   | Value                                                              |
+-------------------------+--------------------------------------------------------------------+
| admin_state_up          | UP                                                                 |
| allowed_address_pairs   | ip_address='192.0.2.128', mac_address='fa:16:3e:e1:eb:73'          |
| binding_host_id         | compute-2.redhat.local                                             |
| binding_profile         |                                                                    |
| binding_vif_details     | port_filter='True'                                                 |
| binding_vif_type        | ovs                                                                |
| binding_vnic_type       | normal                                                             |
| created_at              | 2022-07-04T19:07:56                                                |
| data_plane_status       | None                                                               |
| description             |                                                                    |
| device_id               | 53abd2c4-955d-4b44-b6ad-f106e3f15df0                               |
| device_owner            | compute:nova                                                       |
| dns_assignment          | fqdn='host-192-0-2-213.openstacklocal.', hostname='my-host3',      |
|                         | ip_address='192.0.2.213'                                           |
| dns_domain              | None                                                               |
| dns_name                |                                                                    |
| extra_dhcp_opts         |                                                                    |
| fixed_ips               | ip_address='192.0.2..213', subnet_id='641d1db2-3b40-437b-b87b-63   |
|                         | 079a7063ca'                                                        |
|                         | ip_address='2001:db8:0:f868:f816:3eff:fee1:eb73', subnet_id='c7ed0 |
|                         | 70a-d2ee-4380-baab-6978932a7dcc'                                   |
| id                      | 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12                               |
| location                | cloud='', project.domain_id=, project.domain_name=, project.id='7c |
|                         | b99d752fdb4944a2208ec9ee019226', project.name=, region_name='regio |
|                         | nOne', zone=                                                       |
| mac_address             | fa:16:3e:e1:eb:73                                                  |
| name                    | port2                                                              |
| network_id              | 55dc2f70-0f92-4002-b343-ca34277b0234                               |
| port_security_enabled   | True                                                               |
| project_id              | 98a2f53c20ce4d50a40dac4a38016c69                                   |
| propagate_uplink_status | None                                                               |
| qos_policy_id           | 8491547e-add1-4c6c-a50e-42121237256c                               |
| resource_request        | None                                                               |
| revision_number         | 6                                                                  |
| security_group_ids      | 4cdeb836-b5fd-441e-bd01-498d758704fd                               |
| status                  | ACTIVE                                                             |
| tags                    |                                                                    |
| trunk_details           | None                                                               |
| updated_at              | 2022-07-04T19:11:41Z                                               |
+-------------------------+--------------------------------------------------------------------+
Additional resources
- network qos rule create in the Command Line Interface Reference
- network qos rule set in the Command Line Interface Reference
- network qos rule delete in the Command Line Interface Reference
- network qos rule list in the Command Line Interface Reference
9.5. Prioritizing network traffic by using DSCP marking QoS policies
You can use differentiated services code point (DSCP) to implement quality of service (QoS) policies on your Red Hat OpenStack Platform (RHOSP) network by embedding relevant values in the IP headers. The RHOSP Networking service (neutron) QoS policies can use DSCP marking to manage only egress traffic on neutron ports and networks.
Prerequisites
- The Networking service must have the qos service plug-in loaded. (This is the default.)
- You must use the ML2/OVS or ML2/OVN mechanism drivers.
Procedure
Source your credentials file.
Example
$ source ~/overcloudrc
Confirm that the qos service plug-in is loaded in the Networking service:

$ openstack network qos policy list

If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must configure the Networking service before you can continue. For more information, see Section 9.2, “Configuring the Networking service for QoS policies”.
Identify the ID of the project you want to create the QoS policy for:
$ openstack project list
Sample output
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo     |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin    |
+----------------------------------+----------+
Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named qos-web-servers is created for the admin project:

$ openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers
Create a DSCP rule and apply it to a policy.
Example
In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-servers policy:

$ openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers
Sample output
Created a new dscp_marking_rule:
+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| dscp_mark | 18                                   |
| id        | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+
You can change the DSCP value assigned to a rule.
Example
In this example, the DSCP mark value is changed to 22 for the rule d7f976ec-7fab-4e60-af70-f59bf88198e6 in the qos-web-servers policy:

$ openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6
You can delete a DSCP rule.
Example
In this example, the DSCP rule d7f976ec-7fab-4e60-af70-f59bf88198e6 in the qos-web-servers policy is deleted:

$ openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6
Verification
Confirm that the DSCP rule is applied to the QoS policy.
Example
In this example, the DSCP rule d7f976ec-7fab-4e60-af70-f59bf88198e6 is applied to the QoS policy qos-web-servers:

$ openstack network qos rule list qos-web-servers
Sample output
+-----------+--------------------------------------+
| dscp_mark | id                                   |
+-----------+--------------------------------------+
| 18        | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+
Additional resources
- network qos rule create in the Command Line Interface Reference
- network qos rule set in the Command Line Interface Reference
- network qos rule delete in the Command Line Interface Reference
- network qos rule list in the Command Line Interface Reference
9.6. Applying QoS policies to projects by using Networking service RBAC
With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), you can add a role-based access control (RBAC) for quality of service (QoS) policies. As a result, you can apply QoS policies to individual projects.
Prerequisites
- You must have one or more QoS policies available.
Procedure
Create an RHOSP Networking service RBAC policy associated with a specific QoS policy, and assign it to a specific project:
$ openstack network rbac create --type qos_policy --target-project <project_name | project_ID> --action access_as_shared <QoS_policy_name | QoS_policy_ID>
Example
For example, you might have a QoS policy that allows for lower-priority network traffic, named bw-limiter. Using a RHOSP Networking service RBAC policy, you can apply the QoS policy to a specific project:

$ openstack network rbac create --type qos_policy --target-project 80bf5732752a41128e612fe615c886c6 --action access_as_shared bw-limiter
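Verification

To confirm the result, you can list the Networking service RBAC entries for QoS policies and inspect a specific entry; in this sketch, <rbac_ID> stands for whatever ID the list command returns:

$ openstack network rbac list --type qos_policy

$ openstack network rbac show <rbac_ID>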
Additional resources
- network rbac create in the Command Line Interface Reference
- Section 9.3.1, “Using Networking service back-end enforcement to enforce minimum bandwidth”
- Section 9.3.2, “Scheduling instances by using minimum bandwidth QoS policies”
- Section 9.4, “Limiting network traffic by using QoS policies”
- Section 9.5, “Prioritizing network traffic by using DSCP marking QoS policies”