Chapter 7. Using Quality of Service (QoS) policies to manage data traffic
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.
Ports owned by internal network services, such as DHCP ports and internal router ports, are excluded from network policy application.
You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to.
7.1. QoS rules
You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron):
- Minimum bandwidth (`minimum_bandwidth`): Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified bandwidth to each port on which the rule is applied.
- Bandwidth limit (`bandwidth_limit`): Provides bandwidth limitations on networks, ports, floating IPs (FIPs), and router gateway IPs. If implemented, any traffic that exceeds the specified rate is dropped.
- DSCP marking (`dscp_marking`): Marks network traffic with a differentiated services code point (DSCP) value.
- Minimum packet rate (`minimum_packet_rate`): Provides minimum packet transmission rate constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified packet transmission rate to each port on which the rule is applied. Currently, only placement enforcement is supported.
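As a concrete sketch of these rule types, the following commands use the standard OpenStack CLI to create a policy and attach rules to it. The policy name `gold` and the rule values are illustrative placeholders, not values from this guide:

```shell
# Create a QoS policy; "gold" is an example name.
$ openstack network qos policy create gold

# Bandwidth limit rule: cap egress traffic at 3000 kbps with a
# 300 kbit burst allowance; traffic above the limit is dropped.
$ openstack network qos rule create --type bandwidth-limit \
    --max-kbps 3000 --max-burst-kbits 300 --egress gold

# Minimum bandwidth rule: best-effort guarantee of at least
# 1000 kbps egress for ports that use this policy.
$ openstack network qos rule create --type minimum-bandwidth \
    --min-kbps 1000 --egress gold

# DSCP marking rule: tag egress traffic with DSCP 26 (AF31).
$ openstack network qos rule create --type dscp-marking \
    --dscp-mark 26 gold
```

A policy can hold several rules at once, as here; each rule is enforced independently according to the driver support tables below.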
QoS policies can be enforced in various contexts, including virtual machine instance placements, floating IP assignments, and gateway IP assignments.
Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress traffic (upload from instance), ingress traffic (download to instance), or both.
In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports. For more information, see Section 7.2, “Configuring the Networking service for QoS policies”.
The following table shows the supported traffic direction by mechanism driver for each rule type.

| Rule [1] | ML2/SR-IOV | ML2/OVN |
| --- | --- | --- |
| Minimum bandwidth | Egress only | Egress only [2] |
| Bandwidth limit | Egress only [3] | Egress and ingress |
| DSCP marking | N/A | Egress only [4] |
[1] RHOSO does not support QoS for trunk ports.
[2] In ML2/OVN deployments, minimum bandwidth rules are enforced in the physical device. You cannot configure this enforcement on bond interfaces.
[3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it.
[4] ML2/OVN does not support DSCP marking on tunneled protocols.
The following table shows the supported traffic direction by mechanism driver for placement enforcement.

| Enforcement type | ML2/SR-IOV | ML2/OVN |
| --- | --- | --- |
| Placement | Egress and ingress | Technology preview [1] |
[1] See OSPRH-507.
The following table shows the supported traffic direction by mechanism driver for floating IP and gateway IP enforcement.

| Enforcement type | ML2/OVN |
| --- | --- |
| Floating IP | Egress and ingress |
| Gateway IP | Egress and ingress |
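To show how these enforcement contexts are exercised, the following hedged example attaches an existing policy to a port, a network, and a floating IP. The policy name `gold` and the IDs in angle brackets are placeholders:

```shell
# Attach the policy to a single port.
$ openstack port set --qos-policy gold <port-id>

# Attach the policy to a project network; ports on the network
# with no specific policy attached inherit it.
$ openstack network set --qos-policy gold <network>

# Attach the policy to a floating IP.
$ openstack floating ip set --qos-policy gold <floating-ip-id>
```

Removing a policy follows the same pattern, for example `openstack port unset --qos-policy <port-id>`.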
7.2. Configuring the Networking service for QoS policies
The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. With the ML2/OVN mechanism driver, qos is loaded by default. However, this is not true for ML2/SR-IOV.
When using the `qos` service plug-in with the ML2/SR-IOV mechanism driver, you must also load the `qos` extension on the respective agents.
The following list summarizes the tasks that you must perform to configure the Networking service for QoS. The task details follow this list:
For all types of QoS policies:
- Add the `qos` service plug-in.
- Add the `qos` extension for the agents (SR-IOV only).
- In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports.
Additional tasks for scheduling VM instances using minimum bandwidth policies only:
- Specify the hypervisor name if it differs from the name that the Compute service (nova) uses.
- Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node.
- (Optional) Mark `vnic_types` as not supported.
Additional task for DSCP marking policies:
- Enable `edpm_ovn_encap_tos`. By default, `edpm_ovn_encap_tos` is disabled.
Prerequisites
- You have the `oc` command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- If you are using the ML2/SR-IOV mechanism driver, you must enable the `qos` agent extension on the Compute nodes, also referred to as the RHOSO data plane. For more information, see Configuring the Networking service for QoS policies for SR-IOV.
- Add the required QoS configuration. Place the configuration in the `edpm_network_config_template` under `ansibleVars`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: my-data-plane-node-set
  spec:
    ...
    nodeTemplate:
      ...
      ansible:
        ansibleVars:
          edpm_network_config_template: |
            ---
            OvnHardwareOffloadedQos: true
  ...
  ```

- If you want to create DSCP marking policies, add `edpm_ovn_encap_tos: 1` under `ansibleVars`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: my-data-plane-node-set
  spec:
    ...
    nodeTemplate:
      ...
      ansible:
        ansibleVars:
          edpm_network_config_template: |
            ---
            OvnHardwareOffloadedQos: true
          edpm_ovn_encap_tos: 1
  ...
  ```

  When `edpm_ovn_encap_tos` is enabled (has a value of `1`), the Networking service copies the DSCP value of the inner header to the outer header. The default is `0`.

- Save the `OpenStackDataPlaneNodeSet` CR definition file.
- Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

  ```shell
  $ oc apply -f my_data_plane_node_set.yaml
  ```

- Verify that the data plane resource has been updated:

  ```shell
  $ oc get openstackdataplanenodeset
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   False    Deployment not started
  ```
- Create a file on your workstation to define the `OpenStackDataPlaneDeployment` CR, for example, `my_data_plane_deploy.yaml`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: my-data-plane-deploy
  ```

  Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR a unique and descriptive name that indicates the purpose of the modified node set.

- Add the `OpenStackDataPlaneNodeSet` CR that you modified:

  ```yaml
  spec:
    nodeSets:
      - my-data-plane-node-set
  ```

- Save the `OpenStackDataPlaneDeployment` CR deployment file.
- Deploy the modified `OpenStackDataPlaneNodeSet` CR:

  ```shell
  $ oc create -f my_data_plane_deploy.yaml -n openstack
  ```

  You can view the Ansible logs while the deployment executes:

  ```shell
  $ oc get pod -l app=openstackansibleee -n openstack -w
  $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  ```

- Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

  ```shell
  $ oc get openstackdataplanedeployment -n openstack
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   True     Setup Complete
  ```
- Repeat the `oc get` command until you see the `NodeSet Ready` message:

  ```shell
  $ oc get openstackdataplanenodeset -n openstack
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   True     NodeSet Ready
  ```

  For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verification
- Confirm that the `qos` service plug-in is loaded:

  ```shell
  $ openstack network qos policy list
  ```

  If the `qos` service plug-in is loaded, then you do not receive a `ResourceNotFound` error.
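As an additional check, the Networking service API advertises its loaded extensions. Assuming a configured OpenStack CLI environment, you can filter the extension list for the QoS aliases directly:

```shell
# List Networking service API extensions and filter for QoS; a
# loaded qos plug-in advertises aliases such as "qos".
$ openstack extension list --network | grep -i qos
```

An empty result indicates the plug-in is not loaded.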
7.3. Configuring the Networking service for QoS policies for SR-IOV
The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. If your Networking service ML2 mechanism driver is SR-IOV, then you must also load the qos extension driver for the NIC switch agent, neutron-sriov-nic-agent, which runs on the Compute nodes, also referred to as the RHOSO data plane.
Prerequisites
- You have the `oc` command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Open the `OpenStackDataPlaneNodeSet` CR definition file for the node set you want to update, for example, `my_data_plane_node_set.yaml`.
- Add the required QoS configuration, `NeutronSriovAgentExtensions: "qos"`. Place the configuration in the `edpm_network_config_template` under `ansibleVars`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: my-data-plane-node-set
  spec:
    ...
    nodeTemplate:
      ...
      ansible:
        ansibleVars:
          edpm_network_config_template: |
            ---
            NeutronSriovAgentExtensions: "qos"
  ...
  ```

- Save the `OpenStackDataPlaneNodeSet` CR definition file.
- Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

  ```shell
  $ oc apply -f my_data_plane_node_set.yaml
  ```

- Verify that the data plane resource has been updated:

  ```shell
  $ oc get openstackdataplanenodeset
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   False    Deployment not started
  ```
- Create a file on your workstation to define the `OpenStackDataPlaneDeployment` CR, for example, `my_data_plane_deploy.yaml`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: my-data-plane-deploy
  ```

  Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR a unique and descriptive name that indicates the purpose of the modified node set.

- Add the `OpenStackDataPlaneNodeSet` CR that you modified:

  ```yaml
  spec:
    nodeSets:
      - my-data-plane-node-set
  ```

- Save the `OpenStackDataPlaneDeployment` CR deployment file.
- Deploy the modified `OpenStackDataPlaneNodeSet` CR:

  ```shell
  $ oc create -f my_data_plane_deploy.yaml -n openstack
  ```

  You can view the Ansible logs while the deployment executes:

  ```shell
  $ oc get pod -l app=openstackansibleee -n openstack -w
  $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  ```

- Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

  ```shell
  $ oc get openstackdataplanedeployment -n openstack
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   True     Setup Complete
  ```
- Repeat the `oc get` command until you see the `NodeSet Ready` message:

  ```shell
  $ oc get openstackdataplanenodeset -n openstack
  ```

  Sample output:

  ```
  NAME                     STATUS   MESSAGE
  my-data-plane-node-set   True     NodeSet Ready
  ```

  For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verification
Confirm that the NIC switch agent, neutron-sriov-nic-agent, has loaded the qos extension.
- Obtain the UUID for the NIC switch agent:

  ```shell
  $ openstack network agent list
  ```

- With the `neutron-sriov-nic-agent` UUID, run the following command:

  ```shell
  $ openstack network agent show <uuid>
  ```

  Example:

  ```shell
  $ openstack network agent show 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f \
    --max-width 70
  ```

  Sample output: You should see an agent object with a field called `configuration`. When the `qos` extension is loaded, the `extensions` field contains `qos` in its list.

  ```
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | admin_state_up    | UP                                           |
  | agent_type        | NIC Switch agent                             |
  | alive             | :-)                                          |
  | availability_zone | None                                         |
  | binary            | neutron-sriov-nic-agent                      |
  | configuration     | {device_mappings: {}, devices: 0, extensi    |
  |                   | ons: [qos], resource_provider_bandwidths:    |
  |                   | {}, resource_provider_hypervisors: {}, reso  |
  |                   | urce_provider_inventory_defaults: {allocatio |
  |                   | n_ratio: 1.0, min_unit: 1, step_size: 1,     |
  |                   | reserved: 0}}                                |
  | created_at        | 2024-08-08 08:22:57                          |
  | description       | None                                         |
  | ha_state          | None                                         |
  | host              | edpm-compute-0.ctlplane.example.com          |
  | id                | 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f         |
  | last_heartbeat_at | 2024-08-08 08:24:27                          |
  | resources_synced  | None                                         |
  | started_at        | 2024-08-08 08:22:57                          |
  | topic             | N/A                                          |
  +-------------------+----------------------------------------------+
  ```