Chapter 7. Configuring an SR-IOV deployment
In your Red Hat OpenStack Platform NFV deployment, you can achieve higher performance with single root I/O virtualization (SR-IOV) when you configure direct access from your instances to a shared PCIe resource through virtual resources.
This section includes examples that you must modify for your topology and use case. For more information, see Hardware requirements for NFV.
Prerequisites
- A RHOSP undercloud. You must install and configure the undercloud before you can deploy the overcloud. For more information, see Installing and managing Red Hat OpenStack Platform with director.
Note: RHOSP director modifies SR-IOV configuration files through the key-value pairs that you specify in templates and custom environment files. You must not modify the SR-IOV files directly.
- Access to the undercloud host and credentials for the stack user.
- Access to the hosts that contain the NICs. Ensure that you keep the NIC firmware updated. yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
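For example, you can display the driver and currently installed firmware version of a NIC with ethtool, and compare it against your vendor's release notes. The interface name here is illustrative:
$ ethtool -i <interface_name>
The firmware-version field in the output shows the firmware that is currently installed.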
Procedure
Use Red Hat OpenStack Platform (RHOSP) director to install and configure RHOSP in an SR-IOV environment. The high-level steps are:
- Create a network configuration file, network_data.yaml, to configure the physical network for your overcloud, by following the instructions in Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director.
- Generate roles and image files.
- Configure PCI passthrough devices for SR-IOV.
- Add role-specific parameters and configuration overrides.
- Create a bare metal nodes definition file.
- Create a NIC configuration template for SR-IOV.
- (Optional) Partition NICs.
- Provision overcloud networks and VIPs. For more information, see:
  - Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide.
  - Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- Provision overcloud bare metal nodes. For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- Deploy an SR-IOV overcloud.
7.1. Generating roles and image files for SR-IOV
Red Hat OpenStack Platform (RHOSP) director uses roles to assign services to nodes. When deploying RHOSP in an SR-IOV environment, ComputeSriov is a default role provided with your RHOSP installation that includes the NeutronSriovAgent service, in addition to the default compute services.
The undercloud installation requires an environment file to determine where to obtain container images and how to store them.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Generate a new roles data file named roles_data_compute_sriov.yaml that includes the Controller and ComputeSriov roles:
  $ openstack overcloud roles generate \
    -o /home/stack/templates/roles_data_compute_sriov.yaml \
    Controller ComputeSriov
Note: If you are using multiple technologies in your RHOSP environment, such as OVS-DPDK, SR-IOV, and OVS hardware offload, generate just one roles data file that includes all the roles:
$ openstack overcloud roles generate \
  -o /home/stack/templates/roles_data.yaml \
  Controller ComputeOvsDpdk ComputeOvsDpdkSriov \
  Compute:ComputeOvsHwOffload
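As a quick sanity check, you can list the role names that the generated file contains. This example assumes the standard roles data layout, in which each role entry begins with a top-level name key:
$ grep '^- name:' /home/stack/templates/roles_data_compute_sriov.yaml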
- To generate an images file, run the openstack tripleo container image prepare command. The following inputs are needed:
  - The roles data file that you generated in an earlier step, for example, roles_data_compute_sriov.yaml.
  - The SR-IOV environment file appropriate for your Networking service mechanism driver:
    - ML2/OVN environments: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml
    - ML2/OVS environments: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml
Example
In this example, the overcloud_images.yaml file is generated for an ML2/OVN environment:
$ sudo openstack tripleo container image prepare \
  --roles-file ~/templates/roles_data_compute_sriov.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml \
  -e ~/containers-prepare-parameter.yaml \
  --output-env-file=/home/stack/templates/overcloud_images.yaml
- Note the path and file name of the roles data file and the images file that you have created. You use these files later when you deploy your overcloud.
Additional resources
- Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director
- Preparing container images in Installing and managing Red Hat OpenStack Platform with director
7.2. Configuring PCI passthrough devices for SR-IOV
When deploying Red Hat OpenStack Platform for an SR-IOV environment, you must configure the PCI passthrough devices for the SR-IOV compute nodes in a custom environment file.
Prerequisites
- Access to one or more physical servers that contain the PCI cards.
- Access to the undercloud host and credentials for the stack user.
Procedure
- Use one of the following commands on the physical server that has the PCI cards:
  If your overcloud is deployed:
  $ lspci -nn -s <pci_device_address>
  Sample output
  3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)
  If your overcloud has not been deployed:
  $ openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'
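If you do not yet know the PCI device address to pass to lspci -s, you can, for example, list all Ethernet controllers with their vendor and product IDs and identify the SR-IOV NIC in the output:
$ lspci -nn | grep -i ethernet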
- Retain the vendor and product IDs for PCI passthrough devices on the SR-IOV compute nodes. You will need these IDs in a later step.
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Create a custom environment YAML file, for example, sriov-overrides.yaml. Configure the PCI passthrough devices for the SR-IOV compute nodes by adding the following content to the file:
  parameter_defaults:
    ComputeSriovParameters:
      ...
      NovaPCIPassthrough:
        - vendor_id: "<vendor_id>"
          product_id: "<product_id>"
          address: <NIC_address>
          physical_network: "<physical_network>"
      ...
- Replace <vendor_id> with the vendor ID of the PCI device.
- Replace <product_id> with the product ID of the PCI device.
- Replace <NIC_address> with the address of the PCI device.
- Replace <physical_network> with the name of the physical network the PCI device is located on.
Note: Do not use the devname parameter when you configure PCI passthrough, because the device name of a NIC can change. To create a Networking service (neutron) port on a PF, specify the vendor_id, the product_id, and the PCI device address in NovaPCIPassthrough, and create the port with the --vnic-type direct-physical option. To create a Networking service port on a virtual function (VF), specify the vendor_id and product_id in NovaPCIPassthrough, and create the port with the --vnic-type direct option. The values of the vendor_id and product_id parameters might be different between physical function (PF) and VF contexts.
- Also in the custom environment file, ensure that PciPassthroughFilter and AggregateInstanceExtraSpecsFilter are in the list of filters for the NovaSchedulerEnabledFilters parameter, which the Compute service (nova) uses to filter a node:
  parameter_defaults:
    ComputeSriovParameters:
      ...
      NovaPCIPassthrough:
        - vendor_id: "<vendor_id>"
          product_id: "<product_id>"
          address: <NIC_address>
          physical_network: "<physical_network>"
      ...
    NovaSchedulerEnabledFilters:
      - AvailabilityZoneFilter
      - ComputeFilter
      - ComputeCapabilitiesFilter
      - ImagePropertiesFilter
      - ServerGroupAntiAffinityFilter
      - ServerGroupAffinityFilter
      - PciPassthroughFilter
      - AggregateInstanceExtraSpecsFilter
- Note the path and file name of the custom environment file that you have created. You use this file later when you deploy your overcloud.
Additional resources
- Guidelines for configuring NovaPCIPassthrough in Configuring the Compute service for instance creation
7.3. Adding role-specific parameters and configuration overrides
You can add role-specific parameters for the SR-IOV Compute nodes and override default configuration values in a custom environment YAML file that Red Hat OpenStack Platform (RHOSP) director uses when deploying your SR-IOV environment.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Open the custom environment YAML file that you created in Section 7.2, “Configuring PCI passthrough devices for SR-IOV”, or create a new one.
- Add role-specific parameters for the SR-IOV Compute nodes to the custom environment file.
  Example
  ComputeSriovParameters:
    IsolCpusList: 9-63,73-127
    KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127
    NovaReservedHostMemory: 4096
    NovaComputeCpuSharedSet: 0-8,64-72
    NovaComputeCpuDedicatedSet: 9-63,73-127
- Review the configuration defaults that RHOSP director uses to configure SR-IOV. These defaults are provided in a file that varies based on your mechanism driver:
  ML2/OVN: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml
  ML2/OVS: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml
- If you need to override any of the configuration defaults, add your overrides to the custom environment file. This custom environment file is, for example, where you can add Nova PCI whitelist values or set the network type.
Example
In this example, the Networking service (neutron) network type is set to VLAN, and ranges are added for the tenants:
parameter_defaults:
  NeutronNetworkType: 'vlan'
  NeutronNetworkVLANRanges:
    - tenant:22:22
    - tenant:25:25
  NeutronTunnelTypes: ''
- If you created a new custom environment file, note its path and file name. You use this file later when you deploy your overcloud.
Additional resources
- Supported custom roles in the Customizing your Red Hat OpenStack Platform deployment guide
7.4. Creating a bare metal nodes definition file for SR-IOV
Use Red Hat OpenStack Platform (RHOSP) director and a definition file to provision your bare metal nodes for your SR-IOV environment. In the bare metal nodes definition file, define the quantity and attributes of the bare metal nodes that you want to deploy and assign overcloud roles to these nodes. Also define the network layout of the nodes.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Create a bare metal nodes definition file, such as overcloud-baremetal-deploy.yaml, as instructed in Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- In the bare metal nodes definition file, add a declaration for the Ansible playbook cli-overcloud-node-kernelargs.yaml. The playbook contains kernel arguments to use when you provision bare metal nodes.
- name: ComputeSriov
  ...
  ansible_playbooks:
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
  ...
- If you want to set any extra Ansible variables when running the playbook, use the extra_vars property to set them.
  Note: The variables that you add to extra_vars should be the same role-specific parameters for the SR-IOV Compute nodes that you added to the custom environment file earlier in Section 7.3, “Adding role-specific parameters and configuration overrides”.
  Example
- name: ComputeSriov
  ...
  ansible_playbooks:
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
      extra_vars:
        kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt isolcpus=9-63,73-127'
        tuned_isolated_cores: '9-63,73-127'
        tuned_profile: 'cpu-partitioning'
        reboot_wait_timeout: 1800
- Note the path and file name of the bare metal nodes definition file that you have created. You use this file later when you configure your NICs and as the input file for the overcloud node provision command when you provision your nodes.
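For illustration only, a typical provisioning invocation that consumes this definition file and writes the output environment file used later by the deploy command might look like the following. The file names are examples, and the --network-config option assumes that you want to apply your network configuration during provisioning:
$ openstack overcloud node provision \
  --stack overcloud \
  --network-config \
  --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
  /home/stack/templates/overcloud-baremetal-deploy.yaml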
Additional resources
- Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director
- Tested NICs for NFV
- Bare-metal node provisioning attributes in the Installing and managing Red Hat OpenStack Platform with director guide
7.5. Creating a NIC configuration template for SR-IOV
Define your NIC configuration templates by modifying copies of the sample Jinja2 templates that ship with Red Hat OpenStack Platform (RHOSP).
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Copy a sample network configuration template: choose the NIC configuration Jinja2 template from the examples in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory that most closely matches your NIC requirements, and modify it as needed.
- In your NIC configuration template, for example, single_nic_vlans.j2, add your PF and VF interfaces. To create SR-IOV VFs, configure the interfaces as standalone NICs.
  Example
...
- type: sriov_pf
  name: enp196s0f0np0
  mtu: 9000
  numvfs: 16
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
  promisc: false
...
Note: The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, the modification might cause a disruption for the running instances that have an SR-IOV port on that PF. In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.
- Add the custom network configuration template to the bare metal nodes definition file that you created in Section 7.4, “Creating a bare metal nodes definition file for SR-IOV”.
Example
- name: ComputeSriov
  count: 2
  hostname_format: compute-%index%
  defaults:
    networks:
      - network: internal_api
        subnet: internal_api_subnet
      - network: tenant
        subnet: tenant_subnet
      - network: storage
        subnet: storage_subnet
    network_config:
      template: /home/stack/templates/single_nic_vlans.j2
...
- Note the path and file name of the NIC configuration template that you have created. You use this file later if you want to partition your NICs.
Next steps
- If you want to partition your NICs, proceed to Section 7.6, “Configuring NIC partitioning”.
Otherwise, perform these steps:
- Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide
- Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Section 7.8, “Deploying an SR-IOV overcloud”
7.6. Configuring NIC partitioning
You can reduce the number of NICs that you need for each host by configuring single root I/O virtualization (SR-IOV) virtual functions (VFs) for Red Hat OpenStack Platform (RHOSP) management networks and provider networks. When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. This feature has been validated on Intel Fortville NICs and Mellanox CX-5 NICs.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
- Ensure that NICs, their applications, the VF guest, and OVS reside on the same NUMA Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations.
- Ensure that you keep the NIC firmware updated. yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Open the NIC configuration template, for example single_nic_vlans.j2, that you created earlier in Section 7.5, “Creating a NIC configuration template for SR-IOV”.
  Tip: As you complete the steps in this section, you can refer to Section 7.7, “Example configurations for NIC partitions”.
- Add an entry for the interface type sriov_pf to configure a physical function that the host can use:
  - type: sriov_pf
    name: <interface_name>
    use_dhcp: false
    numvfs: <number_of_vfs>
    promisc: <true/false>
- Replace <interface_name> with the name of the interface.
- Replace <number_of_vfs> with the number of VFs.
- Optional: Replace <true/false> with true to set promiscuous mode, or false to disable promiscuous mode. The default value is true.
Note: The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.
parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.-
Replace
- Add an entry for the interface type sriov_vf to configure virtual functions that the host can use:
  - type: <bond_type>
    name: internal_bond
    bonding_options: mode=<bonding_option>
    use_dhcp: false
    members:
      - type: sriov_vf
        device: <pf_device_name>
        vfid: <vf_id>
      - type: sriov_vf
        device: <pf_device_name>
        vfid: <vf_id>
  - type: vlan
    vlan_id:
      get_param: InternalApiNetworkVlanID
    spoofcheck: false
    device: internal_bond
    addresses:
      - ip_netmask:
          get_param: InternalApiIpSubnet
    routes:
      list_concat_unique:
        - get_param: InternalApiInterfaceRoutes
- Replace <bond_type> with the required bond type, for example, linux_bond. You can apply VLAN tags on the bond for other bonds, such as ovs_bond.
- Replace <bonding_option> with one of the following supported bond modes:
  - active-backup
  - balance-slb
  Note: LACP bonds are not supported.
- Specify sriov_vf as the interface type to bond in the members section.
  Note: If you are using an OVS bridge as the interface type, you can configure only one OVS bridge on the sriov_vf of a sriov_pf device. More than one OVS bridge on a single sriov_pf device can result in packet duplication across VFs, and decreased performance.
Replace
<pf_device_name>
with the name of the PF device. -
If you use a
linux_bond
, you must assign VLAN tags. If you set a VLAN tag, ensure that you set a unique tag for each VF associated with a singlesriov_pf
device. You cannot have two VFs from the same PF on the same VLAN. -
Replace
<vf_id>
with the ID of the VF. The applicable VF ID range starts at zero, and ends at the maximum number of VFs minus one. - Disable spoof checking.
-
Apply VLAN tags on the
sriov_vf
forlinux_bond
over VFs.
- To reserve VFs for instances, include the NovaPCIPassthrough parameter in an environment file.
  Example
  NovaPCIPassthrough:
    - address: "0000:19:0e.3"
      trusted: "true"
      physical_network: "sriov1"
    - address: "0000:19:0e.0"
      trusted: "true"
      physical_network: "sriov2"
RHOSP director identifies the host VFs, and derives the PCI addresses of the VFs that are available to the instance.
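For example, after the PF is configured with numvfs, you can confirm that the host sees the VFs and note their PCI addresses. The exact device strings vary by NIC vendor:
$ lspci -nn | grep -i "virtual function"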
- Enable IOMMU on all nodes that require NIC partitioning.
  Example
  For example, if you want NIC partitioning for Compute nodes, enable IOMMU using the KernelArgs parameter for that role:
  parameter_defaults:
    ComputeParameters:
      KernelArgs: "intel_iommu=on iommu=pt"
Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment.
Ensure that you add this NIC configuration template, for example
single_nic_vlans.j2
, to the bare metal nodes definition file that you created in Section 7.4, “Creating a bare metal nodes definition file for SR-IOV”.
Next steps
- Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide
- Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Section 7.8, “Deploying an SR-IOV overcloud”
7.7. Example configurations for NIC partitions
Refer to these example configurations when you want to partition NICs in a Red Hat OpenStack Platform SR-IOV environment.
Linux bond over VFs
The following example configures a Linux bond over VFs, disables spoofcheck, and applies VLAN tags to sriov_vf:
- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  members:
    - type: sriov_vf
      device: eno2
      vfid: 1
      vlan_id:
        get_param: InternalApiNetworkVlanID
      spoofcheck: false
    - type: sriov_vf
      device: eno3
      vfid: 1
      vlan_id:
        get_param: InternalApiNetworkVlanID
      spoofcheck: false
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
  routes:
    list_concat_unique:
      - get_param: InternalApiInterfaceRoutes
OVS bridge on VFs
The following example configures an OVS bridge on VFs:
- type: ovs_bridge
  name: br-bond
  use_dhcp: true
  members:
    - type: vlan
      vlan_id:
        get_param: TenantNetworkVlanID
      addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
      routes:
        list_concat_unique:
          - get_param: ControlPlaneStaticRoutes
    - type: ovs_bond
      name: bond_vf
      ovs_options: "bond_mode=active-backup"
      members:
        - type: sriov_vf
          device: p2p1
          vfid: 2
        - type: sriov_vf
          device: p2p2
          vfid: 2
OVS user bridge on VFs
The following example configures an OVS user bridge on VFs and applies VLAN tags to ovs_user_bridge:
- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  mtu: 9000
  ovs_extra:
    - str_replace:
        template: set port br-link0 tag=_VLAN_TAG_
        params:
          _VLAN_TAG_:
            get_param: TenantNetworkVlanID
  addresses:
    - ip_netmask:
        list_concat_unique:
          - get_param: TenantInterfaceRoutes
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      ovs_extra:
        - set port dpdkbond0 bond_mode=balance-slb
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: sriov_vf
              device: eno2
              vfid: 3
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: sriov_vf
              device: eno3
              vfid: 3
7.8. Deploying an SR-IOV overcloud
The last step in configuring your Red Hat OpenStack Platform (RHOSP) overcloud in an SR-IOV environment is to run the openstack overcloud deploy command. Inputs to the command include all of the various overcloud templates and environment files that you constructed.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
- You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
  $ source ~/stackrc
- Enter the openstack overcloud deploy command.
  It is important to list the inputs to the openstack overcloud deploy command in a particular order. The general rule is to specify the default heat template files first, followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties.
  Add your inputs to the openstack overcloud deploy command in the following order:
  - A custom network definition file that contains the specifications for your SR-IOV network on the overcloud, for example, network_data.yaml.
    For more information, see Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide.
  - A roles file that contains the Controller and ComputeSriov roles that RHOSP director uses to deploy your SR-IOV environment.
    Example: roles_data_compute_sriov.yaml
    For more information, see Section 7.1, “Generating roles and image files for SR-IOV”.
  - An output file from provisioning your overcloud networks.
    Example: overcloud-networks-deployed.yaml
    For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An output file from provisioning your overcloud VIPs.
    Example: overcloud-vip-deployed.yaml
    For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An output file from provisioning bare-metal nodes.
    Example: overcloud-baremetal-deployed.yaml
    For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An images file that director uses to determine where to obtain container images and how to store them.
    Example: overcloud_images.yaml
    For more information, see Section 7.1, “Generating roles and image files for SR-IOV”.
  - An environment file for the Networking service (neutron) mechanism driver and router scheme that your environment uses:
    ML2/OVN
    - Distributed virtual routing (DVR): neutron-ovn-dvr-ha.yaml
    - Centralized virtual routing: neutron-ovn-ha.yaml
    ML2/OVS
    - Distributed virtual routing (DVR): neutron-ovs-dvr.yaml
    - Centralized virtual routing: neutron-ovs.yaml
  - An environment file for SR-IOV, depending on your mechanism driver:
    ML2/OVN: neutron-ovn-sriov.yaml
    ML2/OVS: neutron-sriov.yaml
    Note: If you also have an OVS-DPDK environment, and want to locate OVS-DPDK and SR-IOV instances on the same node, include the following environment files in your deployment script:
    ML2/OVN: neutron-ovn-dpdk.yaml
    ML2/OVS: neutron-ovs-dpdk.yaml
  - One or more custom environment files that contain your configuration for:
    - PCI passthrough devices for the SR-IOV nodes
    - role-specific parameters for the SR-IOV nodes
    - overrides of default configuration values for the SR-IOV environment
    Example: sriov-overrides.yaml
    For more information, see:
    - Section 7.2, “Configuring PCI passthrough devices for SR-IOV”
    - Section 7.3, “Adding role-specific parameters and configuration overrides”
Example
This excerpt from a sample openstack overcloud deploy command demonstrates the proper ordering of the command’s inputs for an SR-IOV, ML2/OVN environment that uses DVR:
$ openstack overcloud deploy \
  --log-file overcloud_deployment.log \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --stack overcloud \
  -n /home/stack/templates/network_data.yaml \
  -r /home/stack/templates/roles_data_compute_sriov.yaml \
  -e /home/stack/templates/overcloud-networks-deployed.yaml \
  -e /home/stack/templates/overcloud-vip-deployed.yaml \
  -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
  -e /home/stack/templates/overcloud-images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml \
  -e /home/stack/templates/sriov-overrides.yaml
- Run the openstack overcloud deploy command.
  When the overcloud creation is finished, RHOSP director provides details to help you access your overcloud.
Verification
- Perform the steps in Validating your overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide.
- To verify that your NICs are partitioned properly, do the following:
  - Log in to the overcloud Compute node as tripleo-admin and check the number of VFs:
    Example
    In this example, the number of VFs for both p4p1 and p4p2 is 10:
    $ sudo cat /sys/class/net/p4p1/device/sriov_numvfs
    10
    $ sudo cat /sys/class/net/p4p2/device/sriov_numvfs
    10
  - Show the OVS connections:
    $ sudo ovs-vsctl show
Sample output
You should see output similar to the following:
b6567fa8-c9ec-4247-9a08-cbf34f04c85f
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-sriov2
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port phy-br-sriov2
            Interface phy-br-sriov2
                type: patch
                options: {peer=int-br-sriov2}
        Port br-sriov2
            Interface br-sriov2
                type: internal
    Bridge br-sriov1
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port phy-br-sriov1
            Interface phy-br-sriov1
                type: patch
                options: {peer=int-br-sriov1}
        Port br-sriov1
            Interface br-sriov1
                type: internal
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tenant
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port br-tenant
            tag: 305
            Interface br-tenant
                type: internal
        Port phy-br-tenant
            Interface phy-br-tenant
                type: patch
                options: {peer=int-br-tenant}
        Port dpdkbond0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:18:0e.0"}
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:18:0a.0"}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port vxlan-98140025
            Interface vxlan-98140025
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="152.20.0.229", out_key=flow, remote_ip="152.20.0.37"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port vxlan-98140015
            Interface vxlan-98140015
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="152.20.0.229", out_key=flow, remote_ip="152.20.0.21"}
        Port vxlan-9814009f
            Interface vxlan-9814009f
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="152.20.0.229", out_key=flow, remote_ip="152.20.0.159"}
        Port vxlan-981400cc
            Interface vxlan-981400cc
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="152.20.0.229", out_key=flow, remote_ip="152.20.0.204"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port int-br-tenant
            Interface int-br-tenant
                type: patch
                options: {peer=phy-br-tenant}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port int-br-sriov1
            Interface int-br-sriov1
                type: patch
                options: {peer=phy-br-sriov1}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-sriov2
            Interface int-br-sriov2
                type: patch
                options: {peer=phy-br-sriov2}
        Port vhu4142a221-93
            tag: 1
            Interface vhu4142a221-93
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_sockets/vhu4142a221-93"}
    ovs_version: "2.13.2"
  - Log in to your SR-IOV Compute node as tripleo-admin and check the Linux bonds:
    $ cat /proc/net/bonding/<bond_name>
Sample output
You should see output similar to the following:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno3v1
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eno3v1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 4e:77:94:bd:38:d2
Slave queue ID: 0

Slave Interface: eno4v1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 4a:74:52:a7:aa:7c
Slave queue ID: 0
  - List the OVS bonds:
    $ sudo ovs-appctl bond/show
Sample output
You should see output similar to the following:
---- dpdkbond0 ----
bond_mode: balance-slb
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 9491 ms
lacp_status: off
lacp_fallback_ab: false
active slave mac: ce:ee:c7:58:8e:b2(dpdk1)

slave dpdk0: enabled
  may_enable: true

slave dpdk1: enabled
  active slave
  may_enable: true
- If you used NovaPCIPassthrough to pass VFs to instances, test by deploying an SR-IOV instance.
Additional resources
- Creating your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- overcloud deploy in the Command line interface reference
- Section 7.10, “Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment”
7.9. Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment
For better performance in your Red Hat OpenStack Platform (RHOSP) SR-IOV or OVS TC-flower hardware offload environment, deploy guests that have CPU pinning and huge pages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata.
Prerequisites
- A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment.
- Your RHOSP overcloud must be configured for the AggregateInstanceExtraSpecsFilter. For more information, see Section 7.2, “Configuring PCI passthrough devices for SR-IOV”.
Procedure
- Create an aggregate group, and add relevant hosts. Define metadata, for example, sriov=true, that matches the defined flavor metadata.
  $ openstack aggregate create sriov_group
  $ openstack aggregate add host sriov_group compute-sriov-0.localdomain
  $ openstack aggregate set --property sriov=true sriov_group
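You can confirm the aggregate membership and metadata, for example:
$ openstack aggregate show sriov_group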
- Create a flavor.
  $ openstack flavor create <flavor> --ram <size_mb> --disk <size_gb> \
    --vcpus <number>
- Set additional flavor properties. Note that the defined metadata, sriov=true, matches the defined metadata on the SR-IOV aggregate.
  $ openstack flavor set --property sriov=true \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=1GB <flavor>
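To confirm that the flavor carries the properties that the AggregateInstanceExtraSpecsFilter matches against the aggregate metadata, you can, for example, display the flavor:
$ openstack flavor show <flavor>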
7.10. Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment
You use several commands to create an instance in a Red Hat OpenStack Platform (RHOSP) SR-IOV or an OVS TC-flower hardware offload environment.
Use host aggregates to separate high performance Compute hosts. For more information, see Section 7.9, “Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment”.
Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on Compute nodes in the Configuring the Compute service for instance creation guide.
Prerequisites
- A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment.
Procedure
- Create a flavor.
  $ openstack flavor create <flavor_name> --ram <size_mb> \
    --disk <size_gb> --vcpus <number>
  Tip: You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec hw:pci_numa_affinity_policy to your flavor. For more information, see Flavor metadata in Configuring the Compute service for instance creation.
- Create the network and the subnet:
  $ openstack network create <network_name> \
    --provider-physical-network tenant \
    --provider-network-type vlan --provider-segment <vlan_id>
  $ openstack subnet create <name> --network <network_name> \
    --subnet-range <ip_address_cidr> --dhcp
- Create a virtual function (VF) port or physical function (PF) port:
  VF port:
  $ openstack port create --network <network_name> \
    --vnic-type direct <port_name>
  PF port that is dedicated to a single instance:
  This PF port is a Networking service (neutron) port, but it is not controlled by the Networking service, and it is not visible as a network adapter because it is a PCI device that is passed through to the instance.
  $ openstack port create --network <network_name> \
    --vnic-type direct-physical <port_name>
- Create an instance.
  $ openstack server create --flavor <flavor> --image <image_name> \
    --nic port-id=<id> <instance_name>
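To confirm that the instance is active and that its port was bound as an SR-IOV device, you can, for example, check the server status and the port binding details. For a VF port, the binding_vif_type value is typically hw_veb:
$ openstack server show <instance_name> -c status
$ openstack port show <port_name> -c binding_vif_type -c binding_vnic_type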
Additional resources
- flavor create in the Command line interface reference
- network create in the Command line interface reference
- subnet create in the Command line interface reference
- port create in the Command line interface reference
- server create in the Command line interface reference