Chapter 7. Configuring an SR-IOV deployment
In your Red Hat OpenStack Platform NFV deployment, you can achieve higher performance with single root I/O virtualization (SR-IOV) when you configure direct access from your instances to a shared PCIe resource through virtual resources.
This section includes examples that you must modify for your topology and use case. For more information, see Hardware requirements for NFV.
Prerequisites
A RHOSP undercloud.
You must install and configure the undercloud before you can deploy the overcloud. For more information, see Installing and managing Red Hat OpenStack Platform with director.
Note: RHOSP director modifies SR-IOV configuration files through the key-value pairs that you specify in templates and custom environment files. You must not modify the SR-IOV files directly.
- Access to the undercloud host and credentials for the stack user.
- Access to the hosts that contain the NICs.
- Ensure that you keep the NIC firmware updated. Yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
Procedure
Use Red Hat OpenStack Platform (RHOSP) director to install and configure RHOSP in an SR-IOV environment. The high-level steps are:
- Create a network configuration file, network_data.yaml, to configure the physical network for your overcloud, by following the instructions in Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director.
- Generate roles and image files.
- Configure PCI passthrough devices for SR-IOV.
- Add role-specific parameters and configuration overrides.
- Create a bare metal nodes definition file.
- Create a NIC configuration template for SR-IOV.
- (Optional) Partition NICs.
- Provision overcloud networks and VIPs.
For more information, see:
- Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide.
- Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- Provision overcloud bare metal nodes.
For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- Deploy an SR-IOV overcloud.
7.1. Generating roles and image files for SR-IOV
Red Hat OpenStack Platform (RHOSP) director uses roles to assign services to nodes. When deploying RHOSP in an SR-IOV environment, ComputeSriov is a default role provided with your RHOSP installation that includes the NeutronSriovAgent service, in addition to the default compute services.
The undercloud installation requires an environment file to determine where to obtain container images and how to store them.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Generate a new roles data file named roles_data_compute_sriov.yaml that includes the Controller and ComputeSriov roles:

  $ openstack overcloud roles \
    generate -o /home/stack/templates/roles_data_compute_sriov.yaml \
    Controller ComputeSriov

  Note: If you are using multiple technologies in your RHOSP environment, such as OVS-DPDK, SR-IOV, and OVS hardware offload, you generate just one roles data file to include all the roles:

  $ openstack overcloud roles generate -o /home/stack/templates/\
    roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov \
    Compute:ComputeOvsHwOffload

- To generate an images file, you run the openstack tripleo container image prepare command. The following inputs are needed:

  - The roles data file that you generated in an earlier step, for example, roles_data_compute_sriov.yaml.
  - The SR-IOV environment file appropriate for your Networking service mechanism driver:
    - ML2/OVN environments: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml
    - ML2/OVS environments: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml

  Example

  In this example, the overcloud_images.yaml file is being generated for an ML2/OVN environment:

  $ sudo openstack tripleo container image prepare \
    --roles-file ~/templates/roles_data_compute_sriov.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml \
    -e ~/containers-prepare-parameter.yaml \
    --output-env-file=/home/stack/templates/overcloud_images.yaml
- Note the path and file name of the roles data file and the images file that you have created. You use these files later when you deploy your overcloud.
Next steps
7.2. Configuring PCI passthrough devices for SR-IOV
When deploying Red Hat OpenStack Platform for an SR-IOV environment, you must configure the PCI passthrough devices for the SR-IOV compute nodes in a custom environment file.
Prerequisites
- Access to one or more physical servers that contain the PCI cards.
- Access to the undercloud host and credentials for the stack user.
Procedure
- Use one of the following commands on the physical server that has the PCI cards:

  - If your overcloud is deployed:

    $ lspci -nn -s <pci_device_address>

    Sample output

    3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)

  - If your overcloud has not been deployed:

    $ openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'
- Retain the vendor and product IDs for PCI passthrough devices on the SR-IOV compute nodes. You will need these IDs in a later step.
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Create a custom environment YAML file, for example, sriov-overrides.yaml. Configure the PCI passthrough devices for the SR-IOV compute nodes by adding the following content to the file:
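  A minimal sketch of that content, assuming a ComputeSriov role; the angle-bracket placeholders are filled in according to the substeps that follow:

  parameter_defaults:
    ComputeSriovParameters:
      NovaPCIPassthrough:
        # Placeholders below are replaced with the vendor ID, product ID,
        # PCI address, and physical network identified in the earlier step.
        - vendor_id: "<vendor_id>"
          product_id: "<product_id>"
          address: <NIC_address>
          physical_network: "<physical_network>"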
  - Replace <vendor_id> with the vendor ID of the PCI device.
  - Replace <product_id> with the product ID of the PCI device.
  - Replace <NIC_address> with the address of the PCI device.
  - Replace <physical_network> with the name of the physical network the PCI device is located on.

  Note: Do not use the devname parameter when you configure PCI passthrough because the device name of a NIC can change. To create a Networking service (neutron) port on a PF, specify the vendor_id, the product_id, and the PCI device address in NovaPCIPassthrough, and create the port with the --vnic-type direct-physical option. To create a Networking service port on a virtual function (VF), specify the vendor_id and product_id in NovaPCIPassthrough, and create the port with the --vnic-type direct option. The values of the vendor_id and product_id parameters might be different between physical function (PF) and VF contexts.
- Also in the custom environment file, ensure that PciPassthroughFilter and AggregateInstanceExtraSpecsFilter are in the list of filters for the NovaSchedulerEnabledFilters parameter, which the Compute service (nova) uses to filter nodes. See the example after this procedure.
- Note the path and file name of the custom environment file that you have created. You use this file later when you deploy your overcloud.
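A minimal sketch of the NovaSchedulerEnabledFilters setting referenced in the procedure, assuming the default Compute scheduler filters plus the two required filters; adjust the list for your environment:

parameter_defaults:
  NovaSchedulerEnabledFilters:
    # Default filters are illustrative; keep the filters your deployment needs.
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - AggregateInstanceExtraSpecsFilter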
Next steps
7.3. Adding role-specific parameters and configuration overrides
You can add role-specific parameters for the SR-IOV Compute nodes and override default configuration values in a custom environment YAML file that Red Hat OpenStack Platform (RHOSP) director uses when deploying your SR-IOV environment.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Open the custom environment YAML file that you created in Section 7.2, “Configuring PCI passthrough devices for SR-IOV”, or create a new one.
- Add role-specific parameters for the SR-IOV Compute nodes to the custom environment file.
Example
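  A minimal sketch of role-specific parameters, assuming a ComputeSriov role; the CPU lists, hugepage counts, and reserved memory are illustrative and must be sized for your hardware:

  parameter_defaults:
    ComputeSriovParameters:
      # Illustrative values only; tune for your NUMA layout and workloads.
      IsolCpusList: 9-63,73-127
      KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=100 intel_iommu=on iommu=pt isolcpus=9-63,73-127"
      TunedProfileName: "cpu-partitioning"
      NovaComputeCpuDedicatedSet: ['9-63,73-127']
      NovaReservedHostMemory: 4096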
- Review the configuration defaults that RHOSP director uses to configure SR-IOV. These defaults are provided in a file that varies based on your mechanism driver:
  - ML2/OVN: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml
  - ML2/OVS: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml
- If you need to override any of the configuration defaults, add your overrides to the custom environment file. For example, this custom environment file is where you can add Nova PCI whitelist values or set the network type.
Example
In this example, the Networking service (neutron) network type is set to VLAN and ranges are added for the tenants:
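  A minimal sketch of this override, assuming a physical network named tenant and illustrative VLAN ranges; adjust the names and ranges for your deployment:

  parameter_defaults:
    NeutronNetworkType: 'vlan'
    NeutronNetworkVLANRanges:
      # Illustrative physical network name and VLAN IDs.
      - tenant:22:22
      - tenant:25:25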
- If you created a new custom environment file, note its path and file name. You use this file later when you deploy your overcloud.
Next steps
7.4. Creating a bare metal nodes definition file for SR-IOV
Use Red Hat OpenStack Platform (RHOSP) director and a definition file to provision your bare metal nodes for your SR-IOV environment. In the bare metal nodes definition file, define the quantity and attributes of the bare metal nodes that you want to deploy and assign overcloud roles to these nodes. Also define the network layout of the nodes.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Create a bare metal nodes definition file, such as overcloud-baremetal-deploy.yaml, as instructed in Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- In the bare metal nodes definition file, add a declaration to the Ansible playbook, cli-overcloud-node-kernelargs.yaml. The playbook contains kernel arguments to use when you provision bare metal nodes.

  - name: ComputeSriov
    ...
    ansible_playbooks:
      - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
    ...

- If you want to set any extra Ansible variables when running the playbook, use the extra_vars property to set them.

  Note: The variables that you add to extra_vars should be the same role-specific parameters for the SR-IOV Compute nodes that you added to the custom environment file earlier in Section 7.3, “Adding role-specific parameters and configuration overrides”.

  Example
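  A minimal sketch of the extra_vars declaration, assuming kernel arguments and a TuneD profile that match the role-specific parameters from Section 7.3; the values are illustrative:

  - name: ComputeSriov
    ...
    ansible_playbooks:
      - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml
        extra_vars:
          # Illustrative values; keep these consistent with ComputeSriovParameters.
          kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=100 intel_iommu=on iommu=pt isolcpus=9-63,73-127'
          tuned_profile: 'cpu-partitioning'
          tuned_isolated_cores: '9-63,73-127'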
- Note the path and file name of the bare metal nodes definition file that you have created. You use this file later when you configure your NICs and as the input file for the overcloud node provision command when you provision your nodes.
Next steps
7.5. Creating a NIC configuration template for SR-IOV
Define your NIC configuration templates by modifying copies of the sample Jinja2 templates that ship with Red Hat OpenStack Platform (RHOSP).
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Copy a sample network configuration template. Copy a NIC configuration Jinja2 template from the examples in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory. Choose the one that most closely matches your NIC requirements. Modify it as needed.
- In your NIC configuration template, for example, single_nic_vlans.j2, add your PF and VF interfaces. To create SR-IOV VFs, configure the interfaces as standalone NICs.

  Example
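  A minimal sketch of a PF entry added to the network_config section of the template; the interface name and the number of VFs are illustrative:

  - type: sriov_pf
    name: enp196s0f0np0   # illustrative PF interface name
    mtu: 9000
    numvfs: 16            # illustrative VF count
    use_dhcp: false
    defroute: false
    nm_controlled: true
    hotplug: true
    promisc: false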
  Note: The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, the modification might cause a disruption for the running instances that have an SR-IOV port on that PF. In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.

- Add the custom network configuration template to the bare metal nodes definition file that you created in Section 7.4, “Creating a bare metal nodes definition file for SR-IOV”.
Example
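  A minimal sketch of the template reference in the bare metal nodes definition file; the path and node count are illustrative:

  - name: ComputeSriov
    count: 1
    defaults:
      network_config:
        # Path to the NIC configuration template created in this procedure.
        template: /home/stack/templates/single_nic_vlans.j2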
- Note the path and file name of the NIC configuration template that you have created. You use this file later if you want to partition your NICs.
Next steps
- If you want to partition your NICs, proceed to Section 7.6, “Configuring NIC partitioning”.
Otherwise, perform these steps:
- Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide
- Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Section 7.8, “Deploying an SR-IOV overcloud”
7.6. Configuring NIC partitioning
You can reduce the number of NICs that you need for each host by configuring single root I/O virtualization (SR-IOV) virtual functions (VFs) for Red Hat OpenStack Platform (RHOSP) management networks and provider networks. When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. This feature has been validated on Intel Fortville NICs, and Mellanox CX-5 NICs.
Starting with RHOSP 17.1.4, all kernel console logging arguments are removed because console logging can cause unacceptable latency issues in Compute workloads.
If your RHOSP 17.1.3 or earlier deployment includes a filter rule in nftables or iptables with a LOG action, and the kernel command line (/proc/cmdline) has console=ttyS0, logging actions can cause substantial latency in packet transmission.
If your 17.1.3 or earlier deployment has this configuration and you observe excessive latency, apply the workaround described in Knowledgebase solution Sometimes receiving packet(e.g. ICMP echo) has latency, around 190 ms.
If you update to RHOSP 17.1.4, perform the steps in the Knowledgebase solution first.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
- Ensure that NICs, their applications, the VF guest, and OVS reside on the same NUMA Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations.
- Ensure that you keep the NIC firmware updated. Yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  $ source ~/stackrc

- Open the NIC configuration template, for example single_nic_vlans.j2, that you created earlier in Section 7.5, “Creating a NIC configuration template for SR-IOV”.

  Tip: As you complete the steps in this section, you can refer to Section 7.7, “Example configurations for NIC partitions”.
- Add an entry for the interface type sriov_pf to configure a physical function that the host can use:

  - type: sriov_pf
    name: <interface_name>
    use_dhcp: false
    numvfs: <number_of_vfs>
    promisc: <true/false>
  - Replace <interface_name> with the name of the interface.
  - Replace <number_of_vfs> with the number of VFs.
  - Optional: Replace <true/false> with true to set promiscuous mode, or false to disable promiscuous mode. The default value is true.
  Note: The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.
- Add an entry for the interface type sriov_vf to configure virtual functions that the host can use:
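  A minimal sketch of a Linux bond over two VFs, using placeholder values that you replace as described in the substeps that follow:

  - type: <bond_type>
    name: internal_bond        # illustrative bond name
    bonding_options: mode=<bonding_option>
    use_dhcp: false
    members:
      - type: sriov_vf
        device: <pf_device_name>
        vfid: <vf_id>
        vlan_id: <vlan_id>     # required for linux_bond over VFs
        spoofcheck: false
      - type: sriov_vf
        device: <pf_device_name>
        vfid: <vf_id>
        vlan_id: <vlan_id>
        spoofcheck: false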
  - Replace <bond_type> with the required bond type, for example, linux_bond. You can apply VLAN tags on the bond for other bonds, such as ovs_bond.
  - Replace <bonding_option> with one of the following supported bond modes:
    - active-backup
    - balance-slb

    Note: LACP bonds are not supported.

  - Specify the sriov_vf as the interface type to bond in the members section.

    Note: If you are using an OVS bridge as the interface type, you can configure only one OVS bridge on the sriov_vf of a sriov_pf device. More than one OVS bridge on a single sriov_pf device can result in packet duplication across VFs, and decreased performance.

  - Replace <pf_device_name> with the name of the PF device.
  - If you use a linux_bond, you must assign VLAN tags. If you set a VLAN tag, ensure that you set a unique tag for each VF associated with a single sriov_pf device. You cannot have two VFs from the same PF on the same VLAN.
  - Replace <vf_id> with the ID of the VF. The applicable VF ID range starts at zero and ends at the maximum number of VFs minus one.
  - Disable spoof checking.
  - Apply VLAN tags on the sriov_vf for linux_bond over VFs.
- To reserve VFs for instances, include the NovaPCIPassthrough parameter in an environment file.

  Example
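  A minimal sketch, assuming illustrative VF PCI addresses and physical network names that you replace with your own:

  parameter_defaults:
    ComputeSriovParameters:
      NovaPCIPassthrough:
        # Illustrative VF addresses and physical networks.
        - address: "0000:19:0e.3"
          trusted: "true"
          physical_network: "sriov1"
        - address: "0000:19:0e.0"
          trusted: "true"
          physical_network: "sriov2"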
  RHOSP director identifies the host VFs, and derives the PCI addresses of the VFs that are available to the instance.
- Enable IOMMU on all nodes that require NIC partitioning.

  Example

  For example, if you want NIC partitioning for Compute nodes, enable IOMMU using the KernelArgs parameter for that role:

  parameter_defaults:
    ComputeParameters:
      KernelArgs: "intel_iommu=on iommu=pt"

  Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment.
- Ensure that you add this NIC configuration template, for example single_nic_vlans.j2, to the bare metal nodes definition file that you created in Section 7.4, “Creating a bare metal nodes definition file for SR-IOV”.
Next steps
- Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide
- Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide
- Section 7.8, “Deploying an SR-IOV overcloud”
7.7. Example configurations for NIC partitions
Refer to these example configurations when you want to partition NICs in a Red Hat OpenStack Platform SR-IOV environment.
Linux bond over VFs
The following example configures a Linux bond over VFs, disables spoofcheck, and applies VLAN tags to sriov_vf:
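A minimal sketch of this configuration, assuming illustrative PF device names, VF IDs, VLAN tag, and address:

- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  addresses:
    - ip_netmask: 10.0.0.10/24   # illustrative address
  members:
    - type: sriov_vf
      device: p2p1               # illustrative PF device
      vfid: 1
      vlan_id: 500               # illustrative VLAN tag on the VF
      spoofcheck: false
    - type: sriov_vf
      device: p2p2
      vfid: 1
      vlan_id: 500
      spoofcheck: false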
OVS bridge on VFs
The following example configures an OVS bridge on VFs:
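A minimal sketch of this configuration, assuming illustrative PF device names and VF IDs:

- type: ovs_bridge
  name: br-bond
  use_dhcp: true
  members:
    - type: vlan
      vlan_id: 500               # illustrative VLAN
      addresses:
        - ip_netmask: 10.0.0.20/24
    - type: ovs_bond
      name: bond_vf
      ovs_options: "bond_mode=active-backup"
      members:
        - type: sriov_vf
          device: p2p1           # illustrative PF device
          vfid: 2
        - type: sriov_vf
          device: p2p2
          vfid: 2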
OVS user bridge on VFs
The following example configures an OVS user bridge on VFs and applies VLAN tags to ovs_user_bridge:
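A minimal sketch of this configuration, assuming an OVS-DPDK bond and illustrative device names, VF IDs, and VLAN tag:

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  ovs_extra:
    - set port br-link0 tag=501   # illustrative VLAN tag on the user bridge
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      rx_queue: 2
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: sriov_vf
              device: p2p1        # illustrative PF device
              vfid: 3
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: sriov_vf
              device: p2p2
              vfid: 3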
If you use an OVS user bridge without an OVS-DPDK bond and there is an OVS-DPDK port under the bridge, you must set ovs_extra under the ovs_dpdk_port.
7.8. Deploying an SR-IOV overcloud
The last step in configuring your Red Hat OpenStack Platform (RHOSP) overcloud in an SR-IOV environment is to run the openstack overcloud deploy command. Inputs to the command include all of the various overcloud templates and environment files that you constructed.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
- You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

  $ source ~/stackrc

- Enter the openstack overcloud deploy command.

  It is important to list the inputs to the openstack overcloud deploy command in a particular order. The general rule is to specify the default heat template files first followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties.

  Add your inputs to the openstack overcloud deploy command in the following order:

  - A custom network definition file that contains the specifications for your SR-IOV network on the overcloud, for example, network-data.yaml. For more information, see Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide.
  - A roles file that contains the Controller and ComputeSriov roles that RHOSP director uses to deploy your SR-IOV environment.

    Example: roles_data_compute_sriov.yaml

    For more information, see Section 7.1, “Generating roles and image files for SR-IOV”.
  - An output file from provisioning your overcloud networks.

    Example: overcloud-networks-deployed.yaml

    For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An output file from provisioning your overcloud VIPs.

    Example: overcloud-vip-deployed.yaml

    For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An output file from provisioning bare-metal nodes.

    Example: overcloud-baremetal-deployed.yaml

    For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  - An images file that director uses to determine where to obtain container images and how to store them.

    Example: overcloud_images.yaml

    For more information, see Section 7.1, “Generating roles and image files for SR-IOV”.
  - An environment file for the Networking service (neutron) mechanism driver and router scheme that your environment uses:

    ML2/OVN
    - Distributed virtual routing (DVR): neutron-ovn-dvr-ha.yaml
    - Centralized virtual routing: neutron-ovn-ha.yaml

    ML2/OVS
    - Distributed virtual routing (DVR): neutron-ovs-dvr.yaml
    - Centralized virtual routing: neutron-ovs.yaml
  - An environment file for SR-IOV, depending on your mechanism driver:

    ML2/OVN
    - neutron-ovn-sriov.yaml

    ML2/OVS
    - neutron-sriov.yaml

    Note: If you also have an OVS-DPDK environment, and want to locate OVS-DPDK and SR-IOV instances on the same node, include the following environment files in your deployment script:

    ML2/OVN
    - neutron-ovn-dpdk.yaml

    ML2/OVS
    - neutron-ovs-dpdk.yaml
  - One or more custom environment files that contain your configuration for:

    - PCI passthrough devices for the SR-IOV nodes
    - role-specific parameters for the SR-IOV nodes
    - overrides of default configuration values for the SR-IOV environment

    Example: sriov-overrides.yaml

    For more information, see:
    - Section 7.2, “Configuring PCI passthrough devices for SR-IOV”
    - Section 7.3, “Adding role-specific parameters and configuration overrides”
Example
  This excerpt from a sample openstack overcloud deploy command demonstrates the proper ordering of the command’s inputs for an SR-IOV, ML2/OVN environment that uses DVR:
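  A minimal sketch of such an excerpt, assuming the example file names used earlier in this chapter; substitute your own paths:

  $ openstack overcloud deploy \
    --log-file overcloud_deployment.log \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    --stack overcloud \
    -n /home/stack/templates/network-data.yaml \
    -r /home/stack/templates/roles_data_compute_sriov.yaml \
    -e /home/stack/templates/overcloud-networks-deployed.yaml \
    -e /home/stack/templates/overcloud-vip-deployed.yaml \
    -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
    -e /home/stack/templates/overcloud_images.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml \
    -e /home/stack/templates/sriov-overrides.yaml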
- Run the openstack overcloud deploy command.

  When the overcloud creation is finished, RHOSP director provides details to help you access your overcloud.
Verification
- Perform the steps in Validating your overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide.
- To verify that your NICs are partitioned properly, do the following:

  - Log in to the overcloud Compute node as tripleo-admin and check the number of VFs:

    Example

    In this example, the number of VFs for both p4p1 and p4p2 is 10:
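    A minimal check, assuming the PF device names from this example:

    $ sudo cat /sys/class/net/p4p1/device/sriov_numvfs
    10
    $ sudo cat /sys/class/net/p4p2/device/sriov_numvfs
    10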
  - Show the OVS connections:

    $ sudo ovs-vsctl show

    Review the output to confirm that your OVS bridges and the attached VF and bond ports are present.

  - Log in to your SR-IOV Compute node as tripleo-admin and check the Linux bonds:

    $ cat /proc/net/bonding/<bond_name>

    Review the output to confirm the bond mode and that the member VF interfaces are up.
  - List the OVS bonds:

    $ sudo ovs-appctl bond/show

    Review the output to confirm the bond mode and the status of each bond member.
- If you used NovaPCIPassthrough to pass VFs to instances, test by deploying an SR-IOV instance.
7.9. Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment
For better performance in your Red Hat OpenStack Platform (RHOSP) SR-IOV or OVS TC-flower hardware offload environment, deploy guests that have CPU pinning and huge pages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata.
Prerequisites
- A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment.
- Your RHOSP overcloud must be configured for the AggregateInstanceExtraSpecsFilter. For more information, see Section 7.2, “Configuring PCI passthrough devices for SR-IOV”.
Procedure
- Create an aggregate group, and add relevant hosts. Define metadata, for example, sriov=true, that matches defined flavor metadata.

  $ openstack aggregate create sriov_group
  $ openstack aggregate add host sriov_group compute-sriov-0.localdomain
  $ openstack aggregate set --property sriov=true sriov_group

- Create a flavor.

  $ openstack flavor create <flavor> --ram <size_mb> --disk <size_gb> \
    --vcpus <number>

- Set additional flavor properties. Note that the defined metadata, sriov=true, matches the defined metadata on the SR-IOV aggregate.

  $ openstack flavor set --property sriov=true \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=1GB <flavor>
7.10. Creating an instance in an SR-IOV or an OVS TC-flower hardware offload environment
You use several commands to create an instance in a Red Hat OpenStack Platform (RHOSP) SR-IOV or an OVS TC-flower hardware offload environment.
Use host aggregates to separate high performance Compute hosts. For more information, see Section 7.9, “Creating host aggregates in an SR-IOV or an OVS TC-flower hardware offload environment”.
Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on Compute nodes in the Configuring the Compute service for instance creation guide.
Prerequisites
- A RHOSP overcloud configured for an SR-IOV or an OVS hardware offload environment.
Procedure
- Create a flavor.

  $ openstack flavor create <flavor_name> --ram <size_mb> \
    --disk <size_gb> --vcpus <number>

  Tip: You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec hw:pci_numa_affinity_policy to your flavor. For more information, see Flavor metadata in Configuring the Compute service for instance creation.

- Create the network and the subnet:
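  A minimal sketch of the network and subnet creation, assuming a VLAN provider network on a physical network named tenant; replace the names, segment ID, and CIDR with your own values:

  $ openstack network create --provider-physical-network tenant \
    --provider-network-type vlan --provider-segment <vlan_id> <network_name>
  $ openstack subnet create --network <network_name> \
    --subnet-range <cidr> --dhcp <subnet_name>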
- Create a virtual function (VF) port or physical function (PF) port:
  - VF port:

    $ openstack port create --network <network_name> \
      --vnic-type direct <port_name>

  - PF port that is dedicated to a single instance:

    This PF port is a Networking service (neutron) port but is not controlled by the Networking service, and is not visible as a network adapter because it is a PCI device that is passed through to the instance.

    $ openstack port create --network <network_name> \
      --vnic-type direct-physical <port_name>
- Create an instance.

  $ openstack server create --flavor <flavor> --image <image_name> \
    --nic port-id=<id> <instance_name>