Chapter 12. Enabling RT-KVM for NFV Workloads
To facilitate installing and configuring Red Hat Enterprise Linux Real Time KVM (RT-KVM), Red Hat OpenStack Platform provides the following features:
- A real-time Compute node role that provisions Red Hat Enterprise Linux for real-time.
- The additional RT-KVM kernel module.
- Automatic configuration of the Compute node.
12.1. Planning for your RT-KVM Compute nodes
When planning for RT-KVM Compute nodes, ensure that the following tasks are completed:
- Use Red Hat certified servers for your RT-KVM Compute nodes. For more information, see Red Hat Enterprise Linux for Real Time certified servers.
- Register your undercloud and attach a valid Red Hat OpenStack Platform subscription. For more information, see Registering the undercloud and attaching subscriptions in Installing and managing Red Hat OpenStack Platform with director.
- Enable the repositories that are required for the undercloud, such as the rhel-9-server-nfv-rpms repository for RT-KVM, and update the system packages to the latest versions.
  Note: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository. For more information, see Enabling repositories for the undercloud in Installing and managing Red Hat OpenStack Platform with director.
Building the real-time image
Install the libguestfs-tools package on the undercloud to get the virt-customize tool:

  (undercloud) [stack@undercloud-0 ~]$ sudo dnf install libguestfs-tools

Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:

  $ sudo systemctl disable --now iscsid.socket
Extract the images:

  (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/overcloud-hardened-uefi-full-17.1.x86_64.tar
  (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent-17.1.x86_64.tar
Copy the default image:

  (undercloud) [stack@undercloud-0 ~]$ cp overcloud-hardened-uefi-full.qcow2 overcloud-realtime-compute.qcow2
Register your image to enable Red Hat repositories relevant to your customizations. Replace [username] and [password] with valid credentials in the following example:

  (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 \
    --run-command 'subscription-manager register --username=[username] --password=[password]' \
    --run-command 'subscription-manager release --set 9.0'
Note: For security, you can remove credentials from the history file if they are used on the command prompt. You can delete individual lines in history using the history -d command followed by the line number.

Find a list of pool IDs from your account’s subscriptions, and attach the appropriate pool ID to your image:

  (undercloud) [stack@undercloud-0 ~]$ sudo subscription-manager list --all --available | less
  ...
  (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
    'subscription-manager attach --pool [pool-ID]'
Add the repositories necessary for Red Hat OpenStack Platform with NFV.
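A sketch of this repository step, assuming RHOSP 17.1 on RHEL 9; the repository IDs below are assumptions to verify against your subscription before use:

```shell
virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
'subscription-manager repos \
--enable=rhel-9-for-x86_64-baseos-eus-rpms \
--enable=rhel-9-for-x86_64-appstream-eus-rpms \
--enable=rhel-9-for-x86_64-nfv-rpms \
--enable=openstack-17.1-for-rhel-9-x86_64-rpms \
--enable=fast-datapath-for-rhel-9-x86_64-rpms'
```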
Create a script to configure real-time capabilities on the image.
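A minimal sketch of such a script, assuming the real-time kernel packages are available from the repositories enabled in the previous steps; the package names and grubby invocation are assumptions to verify for your release:

```shell
#!/bin/bash
set -eux
# Swap the standard kernel for the real-time kernel and the RT-KVM module
# (assumed package names; confirm against your subscribed repositories).
dnf -v -y --setopt=protected_packages= erase kernel.$(uname -m)
dnf -v -y install kernel-rt.x86_64 kernel-rt-kvm.x86_64 tuned-profiles-nfv-host
# Boot the real-time kernel by default.
grubby --set-default /boot/vmlinuz*rt*
```

Save the script as rt.sh so that the virt-customize command in the next step can run it.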
Run the script to configure the real-time image:
  (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log
Note: If you see the line "grubby fatal error: unable to find a suitable template" in the rt.sh script output, you can ignore this error.

Examine the virt-customize.log file that resulted from the previous command to check that the packages installed correctly using the rt.sh script.

Relabel SELinux:
  (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel
Extract vmlinuz and initrd:

  (undercloud) [stack@undercloud-0 ~]$ mkdir image
  (undercloud) [stack@undercloud-0 ~]$ guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
  (undercloud) [stack@undercloud-0 ~]$ cp image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64 ./overcloud-realtime-compute.vmlinuz
  (undercloud) [stack@undercloud-0 ~]$ cp image/boot/initramfs-3.10.0-862.rt56.804.el7.x86_64.img ./overcloud-realtime-compute.initrd
  (undercloud) [stack@undercloud-0 ~]$ guestunmount image
Note: The software version in the vmlinuz and initramfs filenames varies with the kernel version.

Upload the image:
  (undercloud) [stack@undercloud-0 ~]$ openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2
You now have a real-time image you can use with the ComputeOvsDpdkRT
composable role on your selected Compute nodes.
Modifying BIOS settings on RT-KVM Compute nodes
To reduce latency on your RT-KVM Compute nodes, disable all options for the following parameters in your Compute node BIOS settings:
- Power Management
- Hyper-Threading
- CPU sleep states
- Logical processors
12.2. Configuring OVS-DPDK with RT-KVM
12.2.1. Designating nodes for Real-time Compute
To designate nodes for Real-time Compute, create a new role file to configure the Real-time Compute role, and configure the bare-metal nodes with a Real-time Compute resource class to tag the Compute nodes for real-time.
The following procedure applies to new overcloud nodes that you have not yet provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes in Installing and managing Red Hat OpenStack Platform with director.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

  [stack@director ~]$ source ~/stackrc
- Based on the /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml file, create a compute-real-time.yaml environment file that sets the parameters for the ComputeRealTime role.
- Generate a new roles data file named roles_data_rt.yaml that includes the ComputeRealTime role, along with any other roles that you need for the overcloud. The following example generates the roles data file roles_data_rt.yaml, which includes the roles Controller, Compute, and ComputeRealTime:

  (undercloud)$ openstack overcloud roles generate \
  -o /home/stack/templates/roles_data_rt.yaml \
  ComputeRealTime Compute Controller
- Update the roles_data_rt.yaml file for the ComputeRealTime role.
- Register the ComputeRealTime nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in Installing and managing Red Hat OpenStack Platform with director.
- Inspect the node hardware:

  (undercloud)$ openstack overcloud node introspect --all-manageable --provide

  For more information, see Creating an inventory of the bare-metal node hardware in Installing and managing Red Hat OpenStack Platform with director.
- Tag each bare-metal node that you want to designate for ComputeRealTime with a custom ComputeRealTime resource class:

  (undercloud)$ openstack baremetal node set \
    --resource-class baremetal.RTCOMPUTE <node>

  Replace <node> with the name or UUID of the bare-metal node.
- Add the ComputeRealTime role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes.

  Replace <role_topology_file> with the name of the topology file to use for the ComputeRealTime role, for example, myRoleTopology.j2. You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Defining custom network interface templates in Installing and managing Red Hat OpenStack Platform with director. To use the default network definition settings, do not include network_config in the role definition.

  For more information about the properties you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes in Installing and managing Red Hat OpenStack Platform with director. For an example node definition file, see Example node definition file in Installing and managing Red Hat OpenStack Platform with director.
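The node definition entry for the role can be sketched as follows; the counts and paths are illustrative assumptions, and <role_topology_file> is the placeholder discussed above:

```yaml
- name: Controller
  count: 3
- name: Compute
  count: 0
- name: ComputeRealTime
  count: 1
  defaults:
    resource_class: baremetal.RTCOMPUTE
    network_config:
      template: /home/stack/templates/nic-config/<role_topology_file>
```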
- Create the following Ansible playbook to configure the kernel during the node provisioning, and save the playbook as /home/stack/templates/fix_rt_kernel.yaml.
- Include /home/stack/templates/fix_rt_kernel.yaml as a playbook in the ComputeOvsDpdkSriovRT role definition in your node provisioning file.
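A sketch of what the fix_rt_kernel.yaml playbook might contain, assuming the image already ships the real-time kernel and only the default boot entry needs correcting after provisioning; the host group and task are assumptions:

```yaml
- name: Make the real-time kernel the default boot entry
  hosts: allovercloud
  any_errors_fatal: true
  tasks:
    - name: Set the RT kernel as the grub default (assumed approach)
      become: true
      ansible.builtin.shell: grubby --set-default /boot/vmlinuz*rt*
```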
- Provision the new nodes for your role:

  (undercloud)$ openstack overcloud node provision \
  [--stack <stack>] \
  [--network-config] \
  --output <deployment_file> \
  /home/stack/templates/overcloud-baremetal-deploy.yaml
- Optional: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. The default is overcloud.
- Optional: Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

  (undercloud)$ watch openstack baremetal node list
- If you ran the provisioning command without the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

  parameter_defaults:
    ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
    ComputeRealTimeNetworkConfigTemplate: /home/stack/templates/nic-configs/<rt_compute>.j2
    ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

  Replace <rt_compute> with the name of the file that contains the network topology of the ComputeRealTime role, for example, computert.yaml to use the default network topology.
- Add your environment file to the stack with your other environment files and deploy the overcloud.
12.2.2. Configuring OVS-DPDK parameters
Under parameter_defaults, set the tunnel type to vxlan, and the network type to vxlan,vlan:

  NeutronTunnelTypes: 'vxlan'
  NeutronNetworkType: 'vxlan,vlan'
Under parameter_defaults, set the bridge mapping:

  # The OVS logical->physical bridge mappings to use.
  NeutronBridgeMappings:
    - dpdk-mgmt:br-link0
parameter_defaults
, set the role-specific parameters for theComputeOvsDpdkSriov
role:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteTo prevent failures during guest creation, assign at least one CPU with sibling thread on each NUMA node. In the example, the values for the
OvsPmdCoreList
parameter denote cores 2 and 22 from NUMA 0, and cores 3 and 23 from NUMA 1.NoteThese huge pages are consumed by the virtual machines, and also by OVS-DPDK using the
OvsDpdkSocketMemory
parameter as shown in this procedure. The number of huge pages available for the virtual machines is theboot
parameter minus theOvsDpdkSocketMemory
.You must also add
hw:mem_page_size=1GB
to the flavor you associate with the DPDK instance.NoteOvsDpdkMemoryChannels
is a required setting for this procedure. For optimum operation, ensure you deploy DPDK with appropriate parameters and values.Configure the role-specific parameters for SR-IOV:
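The huge-page accounting in the note above is simple subtraction; the sketch below uses illustrative per-NUMA-node values (examples, not recommendations), assuming 1 GB huge pages:

```shell
# Huge pages allocated at boot on one NUMA node, in GB (example value).
boot_hugepages_gb=64
# Portion of OvsDpdkSocketMemory assigned to the same node, in GB (example value).
ovs_dpdk_socket_memory_gb=3
# Huge pages left for virtual machines on that node:
# the boot allocation minus OVS-DPDK's share.
vm_hugepages_gb=$(( boot_hugepages_gb - ovs_dpdk_socket_memory_gb ))
echo "Huge pages available for VMs: ${vm_hugepages_gb} GB"
```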
12.3. Launching an RT-KVM instance
Perform the following steps to launch an RT-KVM instance on a real-time enabled Compute node:
Create an RT-KVM flavor on the overcloud.
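A sketch of the flavor step with illustrative sizes, matching the r1.small flavor name used in the launch command below; the property values are assumptions to adapt:

```shell
$ openstack flavor create --ram 4096 --disk 20 --vcpus 4 r1.small
$ openstack flavor set \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_realtime=yes \
    --property hw:cpu_realtime_mask=^0-1 \
    --property hw:mem_page_size=1GB \
    r1.small
```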
Launch an RT-KVM instance:
  $ openstack server create --image <rhel> --flavor r1.small --nic net-id=<dpdk-net> test-rt

To verify that the instance uses the assigned emulator threads, run the following command:
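One way to check this, assuming access to the Compute node that hosts the instance, is to inspect the libvirt domain XML for emulator thread pinning (the domain name shown by virsh is an assumption; it can differ from the Nova instance name):

```shell
$ sudo virsh dumpxml <instance_name> | grep emulatorpin
```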