Chapter 11. Tuning a Red Hat OpenStack Platform environment
11.1. Pinning emulator threads
Emulator threads handle interrupt requests and non-blocking processes for virtual machine hardware emulation. These threads float across the CPUs that the guest uses for processing. If threads used for the poll mode driver (PMD) or real-time processing run on these guest CPUs, you can experience packet loss or missed deadlines.
You can separate emulator threads from VM processing tasks by pinning the threads to their own guest CPUs, increasing performance as a result.
To improve performance, reserve a subset of host CPUs for hosting emulator threads.
Procedure
Deploy an overcloud with NovaComputeCpuSharedSet defined for a given role. The value of NovaComputeCpuSharedSet applies to the cpu_shared_set parameter in the nova.conf file for hosts within that role.

parameter_defaults:
  ComputeOvsDpdkParameters:
    NovaComputeCpuSharedSet: "0-1,16-17"
    NovaComputeCpuDedicatedSet: "2-15,18-31"

Create a flavor to build instances with emulator threads separated into a shared pool.

openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <vcpus> <flavor>

Add the hw:emulator_threads_policy extra specification, and set the value to share. Instances created with this flavor run their emulator threads on the CPUs defined in the cpu_shared_set parameter in the nova.conf file.

openstack flavor set <flavor> --property hw:emulator_threads_policy=share
You must set the cpu_shared_set parameter in the nova.conf file to enable the share policy for this extra specification. Use heat to set this parameter, because manual edits to nova.conf do not persist across redeployments.
Verification
Identify the host and name for a given instance.

openstack server show <instance_id>

Use SSH to log on to the identified host as tripleo-admin.

ssh tripleo-admin@compute-1
[compute-1]$ sudo virsh dumpxml instance-00001 | grep 'emulatorpin cpuset'
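With the NovaComputeCpuSharedSet value from this procedure, the emulator threads are pinned to the shared CPU set, and output similar to the following confirms the pinning:

<emulatorpin cpuset='0-1,16-17'/>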
11.2. Configuring trust between virtual and physical functions
You can configure trust between physical functions (PFs) and virtual functions (VFs), so that VFs can perform privileged actions, such as enabling promiscuous mode, or modifying a hardware address.
Prerequisites
- An operational installation of Red Hat OpenStack Platform including director
Procedure
Complete the following steps to configure and deploy the overcloud with trust between physical and virtual functions:
Add the NeutronPhysicalDevMappings parameter in the parameter_defaults section to link the logical network name to the physical interface.

parameter_defaults:
  NeutronPhysicalDevMappings:
    - sriov2:p5p2

Add the new property, trusted, to the SR-IOV parameters.

Note: You must include double quotation marks around the value "true".
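The SR-IOV parameter block itself was not preserved in this copy. A minimal sketch of the trusted property, assuming it is applied through a NovaPCIPassthrough entry for the sriov2 physical network (the vendor and product IDs are illustrative):

parameter_defaults:
  ComputeSriovParameters:
    NovaPCIPassthrough:
      - vendor_id: "8086"
        product_id: "1572"
        physical_network: "sriov2"
        trusted: "true"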
11.3. Utilizing trusted VF networks
Create a network of type vlan.

openstack network create trusted_vf_network --provider-network-type vlan \
 --provider-segment 111 --provider-physical-network sriov2 \
 --external --disable-port-security

Create a subnet.

openstack subnet create --network trusted_vf_network \
 --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
 subnet-trusted_vf_network

Create a port. Set the vnic-type option to direct, and set trusted=true in the binding-profile option.

openstack port create --network trusted_vf_network \
 --vnic-type direct --binding-profile trusted=true \
 trusted_vf_network_port_trusted

Create an instance, and bind it to the previously created trusted port.

openstack server create --image rhel --flavor dpdk --network internal \
 --port trusted_vf_network_port_trusted --config-drive True --wait \
 rhel-dpdk-sriov_trusted
Verification
Confirm the trusted VF configuration on the hypervisor:

On the compute node where you created the instance, enter the following command:

# ip link
7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
    vf 6     MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
    vf 7     MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off

Verify that the trust status of the VF is trust on. The example output contains details of an environment that contains two ports. Note that vf 6 contains the text trust on.
You can disable spoof checking if you set port_security_enabled: false in the Networking service (neutron) network, or if you include the argument --disable-port-security when you run the openstack port create command.
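For example, to create the trusted port with port security disabled in a single step, reusing the names from this section:

openstack port create --network trusted_vf_network \
 --vnic-type direct --binding-profile trusted=true \
 --disable-port-security trusted_vf_network_port_trusted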
11.4. Preventing packet loss by managing RX-TX queue size
You can experience packet loss at high packet rates above 3.5 million packets per second (mpps) for many reasons, such as:
- a network interrupt
- a system management interrupt (SMI)
- packet processing latency in the virtual network function (VNF)
To prevent packet loss, increase the queue size from the default of 512 to a maximum of 1024.
Prerequisites
- Access to the undercloud host and credentials for the stack user.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

$ source ~/stackrc

- Create a custom environment YAML file and, under parameter_defaults, add the following definitions to increase the RX and TX queue size:

parameter_defaults:
  NovaLibvirtRxQueueSize: 1024
  NovaLibvirtTxQueueSize: 1024

- Run the deployment command and include the core heat templates, other environment files, and the environment file that contains your RX and TX queue size changes:
Example
$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/my_tx-rx_queue_sizes.yaml
Verification
Observe the values for RX queue size and TX queue size in the nova.conf file:

$ egrep "^[rt]x_queue_size" /var/lib/config-data/puppet-generated/\
nova_libvirt/etc/nova/nova.conf

You should see the following:

rx_queue_size=1024
tx_queue_size=1024

Check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the Compute host:
- Create a new instance.
Obtain the Compute host and instance name:
$ openstack server show testvm-queue-sizes \
  -c OS-EXT-SRV-ATTR:hypervisor_hostname -c OS-EXT-SRV-ATTR:instance_name

Sample output
You should see output similar to the following:
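The original sample output was not preserved in this copy; output resembles the following, where the hypervisor hostname is hypothetical:

+-------------------------------------+----------------------------------------+
| Field                               | Value                                  |
+-------------------------------------+----------------------------------------+
| OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-computeovsdpdk-0.localdomain |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000059                      |
+-------------------------------------+----------------------------------------+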
Log in to the Compute host and dump the instance definition.
Example
$ podman exec nova_libvirt virsh dumpxml instance-00000059

Sample output
You should see output similar to the following:
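The original snippet was not preserved in this copy; a representative vhostuser interface stanza showing the queue sizes (the MAC address and socket path are illustrative) looks like:

<interface type='vhostuser'>
  <mac address='fa:16:3e:00:00:01'/>
  <source type='unix' path='/var/lib/vhost_sockets/vhu1234abcd-12' mode='server'/>
  <model type='virtio'/>
  <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>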
11.5. Configuring a NUMA-aware vSwitch
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Before you implement a NUMA-aware vSwitch, examine the following components of your hardware configuration:
- The number of physical networks.
- The placement of PCI cards.
- The physical architecture of the servers.
Memory-mapped I/O (MMIO) devices, such as PCIe NICs, are associated with specific NUMA nodes. When a VM and the NIC are on different NUMA nodes, there is a significant decrease in performance. To increase performance, align PCIe NIC placement and instance processing on the same NUMA node.
Use this feature to ensure that instances that share a physical network are located on the same NUMA node. To optimize utilization of datacenter hardware, you must use multiple physnets.
To configure NUMA-aware networks for optimal server utilization, you must understand the mapping of the PCIe slot and the NUMA node. For detailed information on your specific hardware, refer to your vendor’s documentation. If you fail to plan or implement your NUMA-aware vSwitch correctly, you can cause the servers to use only a single NUMA node.
To prevent a cross-NUMA configuration, place the VM on the correct NUMA node, by providing the location of the NIC to Nova.
Prerequisites
- You have enabled the filter NUMATopologyFilter.
Procedure
- Set a new NeutronPhysnetNUMANodesMapping parameter to map the physical network to the NUMA node that you associate with the physical network.
- If you use tunnels, such as VXLAN or GRE, you must also set the NeutronTunnelNUMANodes parameter.

parameter_defaults:
  NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]}
  NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>

Example
Here is an example with two physical networks, with tunneled networks associated with NUMA node 0:
- one project network (tenant) associated with NUMA node 1
- one management network (mgmt) with affinity to both NUMA nodes
parameter_defaults:
  NeutronBridgeMappings:
    - tenant:br-link0
  NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]}
  NeutronTunnelNUMANodes: 0

In this example, assign the physnet of the device named eno2 to NUMA node 0.

# ethtool -i eno2
bus-info: 0000:18:00.1
# cat /sys/devices/pci0000:16/0000:16:02.0/0000:18:00.1/numa_node
0

Observe the physnet settings in the example heat template:
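The template itself was not preserved in this copy; a minimal sketch of the bridge and interface definitions, assuming the physnet bridge is br-link0 and is backed by eno2 (the bridge and port names are illustrative):

parameter_defaults:
  NeutronBridgeMappings: 'tenant:br-link0'

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
        - type: interface
          name: eno2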
Verification
Follow these steps to test your NUMA-aware vSwitch:
Observe the configuration in the file /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf:

[neutron_physnet_tenant]
numa_nodes=1
[neutron_tunnel]
numa_nodes=1

Confirm the new configuration with the lscpu command, and note the CPU ranges that it reports for each NUMA node (a sample of the relevant output lines follows this procedure):

$ lscpu

- Launch a VM with the NIC attached to the appropriate network.
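For reference, the NUMA-related lines of the lscpu output resemble the following; the values are host-specific:

NUMA node(s):        2
NUMA node0 CPU(s):   0-7,16-23
NUMA node1 CPU(s):   8-15,24-31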
11.6. Known limitations for NUMA-aware vSwitches
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Platform (RHOSP) network functions virtualization infrastructure (NFVi).
- You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies.
- If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node.
- Configure NUMA affinity for data plane networks, not management networks.
- NUMA affinity for tunneled networks is a global setting that applies to all VMs.
11.7. Quality of Service (QoS) in NFVi environments
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Platform (RHOSP) networks in a network functions virtualization infrastructure (NFVi).
In NFVi environments, QoS support is limited to the following rule types:
- minimum bandwidth on SR-IOV, if supported by the vendor.
- bandwidth limit on SR-IOV and OVS-DPDK egress interfaces.
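For example, a sketch of applying an egress bandwidth limit with standard Networking service commands; the policy name and limit values are illustrative:

openstack network qos policy create bw-limiter
openstack network qos rule create --type bandwidth-limit \
 --max-kbps 3000 --max-burst-kbits 300 --egress bw-limiter
openstack port set --qos-policy bw-limiter <port_id>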
11.8. Creating an HCI overcloud that uses DPDK
You can deploy your NFV infrastructure with hyperconverged nodes, by co-locating and configuring Compute and Ceph Storage services for optimized resource usage.
For more information about hyper-converged infrastructure (HCI), see Deploying a hyperconverged infrastructure.
The sections that follow provide examples of various configurations.
11.8.1. Example NUMA node configuration
For increased performance, place the tenant network and Ceph Object Storage Daemons (OSDs) in one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs in another NUMA node, such as NUMA-1.
CPU allocation:
| NUMA-0 | NUMA-1 |
|---|---|
| Number of Ceph OSDs * 4 HT | Guest vCPU for the VNF and non-NFV VMs |
| DPDK lcore - 2 HT | DPDK lcore - 2 HT |
| DPDK PMD - 2 HT | DPDK PMD - 2 HT |
Example of CPU allocation:
|  | NUMA-0 | NUMA-1 |
|---|---|---|
| Ceph OSD | 32,34,36,38,40,42,76,78,80,82,84,86 | |
| DPDK-lcore | 0,44 | 1,45 |
| DPDK-pmd | 2,46 | 3,47 |
| nova | | 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 |
11.8.2. Example Ceph configuration file
This section describes a sample Red Hat Ceph Storage configuration file. You can model your configuration file on this one by substituting values that are appropriate for your Red Hat OpenStack Platform environment.

Assign resources for Ceph Object Storage Daemon (OSD) processes with the following parameters. The values shown here are examples; adjust them as appropriate for your workload and hardware.
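The sample file itself was not preserved in this copy; a minimal sketch covering the parameters described in the callouts below, with illustrative values:

[osd]
osd_numa_node = 0
osd_memory_target_autotune = true

[mgr]
mgr/cephadm/autotune_memory_target_ratio = 0.2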
1. osd_numa_node: sets the affinity of Ceph processes to a NUMA node, for example, 0 for NUMA-0, 1 for NUMA-1, and so on. -1 sets the affinity to no NUMA node.
In this example, osd_numa_node is set to NUMA-0. As shown in Section 11.8.3, "Example DPDK configuration file", IsolCpusList contains odd-numbered CPUs on NUMA-1, after elements of OvsPmdCoreList are removed. Because the latency-sensitive Compute service (nova) workload is hosted on NUMA-1, you must isolate the Ceph workload on NUMA-0. This example assumes that both the disk controllers and the network interfaces for the storage network are on NUMA-0.
2. osd_memory_target_autotune: when set to true, the OSD daemons adjust their memory consumption based on the osd_memory_target configuration option.
3. autotune_memory_target_ratio: used to allocate memory for OSDs. The default is 0.7.
70% of the total RAM in the system is the starting point, from which any memory consumed by non-autotuned Ceph daemons is subtracted. When osd_memory_target_autotune is true for all OSDs, the remaining memory is divided between the OSDs. For HCI deployments, you can set mgr/cephadm/autotune_memory_target_ratio to 0.2 so that more memory is available for the Compute service. Adjust as needed to ensure that each OSD has at least 5 GB of memory.
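As a worked example, on a hypothetical hyperconverged node with 256 GB of RAM, 10 OSDs, and a ratio of 0.2, approximately 0.2 × 256 GB ≈ 51 GB is targeted for the OSDs, or about 5.1 GB per OSD, which just meets the 5 GB per OSD guideline.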
11.8.3. Example DPDK configuration file
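The example file itself was not preserved in this copy. A minimal sketch consistent with the callouts below, using illustrative values for a hypothetical 32-CPU host where CPUs 16-31 are the HT siblings of CPUs 0-15:

parameter_defaults:
  ComputeHCIOvsDpdkParameters:
    KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 intel_iommu=on iommu=pt"  # 1
    IsolCpusList: "2-15,18-31"        # 2: OvsPmdCoreList + NovaComputeCpuDedicatedSet
    OvsDpdkSocketMemory: "2048,2048"  # 3: MB per NUMA node, from the hugepage pool
    OvsPmdCoreList: "2,18,3,19"       # 4: HT sibling pairs local to the DPDK NICs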
1. KernelArgs: To calculate hugepages, subtract the value of the NovaReservedHostMemory parameter from the total memory.
2. IsolCpusList: Assign a set of CPU cores that you want to isolate from the host processes with this parameter. Add the value of the OvsPmdCoreList parameter to the value of the NovaComputeCpuDedicatedSet parameter to calculate the value for the IsolCpusList parameter.
3. OvsDpdkSocketMemory: Specify the amount of memory in MB to pre-allocate from the hugepage pool, per NUMA node, with the OvsDpdkSocketMemory parameter. For more information about calculating OVS-DPDK parameters, see OVS-DPDK parameters.
4. OvsPmdCoreList: Specify the CPU cores that are used for the DPDK poll mode drivers (PMD) with this parameter. Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces. Allocate 2 HT sibling threads for each NUMA node to calculate the value for the OvsPmdCoreList parameter.
11.8.4. Example nova configuration file
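The example file itself was not preserved in this copy. A minimal sketch consistent with the callouts below, continuing the illustrative values from Section 11.8.3:

parameter_defaults:
  ComputeHCIOvsDpdkParameters:
    NovaReservedHugePages: ["node:0,size:1GB,count:2", "node:1,size:1GB,count:2"]  # 1: matches OvsDpdkSocketMemory
    NovaReservedHostMemory: 19456  # 2: for example, 2 OSDs x 5 GB + 10 VMs x 0.5 GB + 4 GB, in MB
    NovaComputeCpuDedicatedSet: "4-15,20-31"  # 3: IsolCpusList minus OvsPmdCoreList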
1. NovaReservedHugePages: Pre-allocate memory in MB from the hugepage pool with the NovaReservedHugePages parameter. It is the same memory total as the value of the OvsDpdkSocketMemory parameter.
2. NovaReservedHostMemory: Reserve memory in MB for tasks on the host with the NovaReservedHostMemory parameter. Use the following guidelines to calculate the amount of memory that you must reserve:
   - 5 GB for each OSD.
   - 0.5 GB overhead for each VM.
   - 4 GB for general host processing. Ensure that you allocate sufficient memory to prevent potential performance degradation caused by cross-NUMA OSD operation.
3. NovaComputeCpuDedicatedSet: List the CPUs not found in OvsPmdCoreList or Ceph_osd_docker_cpuset_cpus with the NovaComputeCpuDedicatedSet parameter. The CPUs must be in the same NUMA node as the DPDK NICs.
11.8.5. Recommended configuration for HCI-DPDK deployments
| Block Device Type | OSDs, Memory, vCPUs per device |
|---|---|
| NVMe | Memory: 5 GB per OSD |
| SSD | Memory: 5 GB per OSD |
| HDD | Memory: 5 GB per OSD |
Use the same NUMA node for the following functions:
- Disk controller
- Storage networks
- Storage CPU and memory
Allocate another NUMA node for the following functions of the DPDK provider network:
- NIC
- PMD CPUs
- Socket memory
11.8.6. Deploying the HCI-DPDK overcloud
Follow these steps to deploy a hyperconverged overcloud that uses DPDK.
Prerequisites
- Red Hat OpenStack Platform (RHOSP) 17.1 or later.
- The latest version of Red Hat Ceph Storage 6.1.
Procedure
Generate the roles_data.yaml file for the Controller and the ComputeHCIOvsDpdk roles.

$ openstack overcloud roles generate -o ~/<templates>/roles_data.yaml \
  Controller ComputeHCIOvsDpdk
Create and configure a new flavor with the
openstack flavor createandopenstack flavor setcommands. Deploy Ceph by using RHOSP director and the Ceph configuration file.
Example
openstack overcloud ceph deploy --config initial-ceph.conf
$ openstack overcloud ceph deploy --config initial-ceph.confCopy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy the overcloud with the custom
roles_data.yamlfile that you generated.Example
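The full command was not preserved in this copy; a sketch with illustrative file paths, using the RBD-only Ceph environment file named in the note below:

$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \
  -e <other_environment_files>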
Important: This example deploys Ceph RBD (block storage) without Ceph RGW (object storage). To include RGW in the deployment, use cephadm.yaml instead of cephadm-rbd-only.yaml.
11.9. Synchronize your compute nodes with Timemaster
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Use time protocols to maintain a consistent timestamp between systems.
Red Hat OpenStack Platform (RHOSP) includes support for Precision Time Protocol (PTP) and Network Time Protocol (NTP).
You can use NTP to synchronize clocks in your network in the millisecond range, and you can use PTP to synchronize clocks to a higher, sub-microsecond, accuracy. An example use case for PTP is a virtual radio access network (vRAN) that contains multiple antennas which provide higher throughput with more risk of interference.
Timemaster is a program that uses ptp4l and phc2sys in combination with chronyd or ntpd to synchronize the system clock to NTP and PTP time sources. The phc2sys and ptp4l programs use Shared Memory Driver (SHM) reference clocks to send PTP time to chronyd or ntpd, which compares the time sources to synchronize the system clock.
The implementation of the PTPv2 protocol in the Red Hat Enterprise Linux (RHEL) kernel is linuxptp.
The linuxptp package includes the ptp4l program for PTP boundary clock and ordinary clock synchronization, and the phc2sys program for hardware time stamping. For more information about PTP, see: Introduction to PTP in the Red Hat Enterprise Linux System Administrator’s Guide.
Chrony is an implementation of the NTP protocol. The two main components of Chrony are chronyd, which is the Chrony daemon, and chronyc which is the Chrony command line interface.
For more information about Chrony, see Using the Chrony suite to configure NTP in the Red Hat Enterprise Linux System Administrator’s Guide.
The following image is an overview of a packet journey in a PTP configuration.
Figure 11.1. PTP packet journey overview
The following image is an overview of a packet journey in the Compute node in a PTP configuration.
Figure 11.2. PTP packet journey detail
11.9.1. Timemaster hardware requirements
Ensure that you have the following hardware functionality:
- You have configured the NICs with hardware timestamping capability.
- You have configured the switch to allow multicast packets.
- You have configured the switch to also function as a boundary or transparent clock.
You can verify the hardware timestamping with the command ethtool -T <device>.
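For example, abridged output for a NIC that supports hardware timestamping; the interface name and clock index are illustrative:

$ ethtool -T eno1
Time stamping parameters for eno1:
Capabilities:
        hardware-transmit
        hardware-receive
        hardware-raw-clock
PTP Hardware Clock: 0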
You can use either a transparent or boundary clock switch for better accuracy and less latency. You can use an uplink switch for the boundary clock. The boundary clock switch uses the 8-octet (64-bit) correctionField in the PTPv2 header to correct delay variations, and ensure greater accuracy on the end clock. In a transparent clock switch, the end clock calculates the delay variation, not the correctionField.
11.9.2. Configuring Timemaster
The default Red Hat OpenStack Platform (RHOSP) service for time synchronization in overcloud nodes is OS::TripleO::Services::Timesync.
Known limitations
- Enable NTP for virtualized controllers, and enable PTP for bare metal nodes.
- Virtio interfaces are incompatible, because ptp4l requires a compatible PTP device.
- Use a physical function (PF) for a VM with SR-IOV. A virtual function (VF) does not expose the registers necessary for PTP, and a VM uses kvm_ptp to calculate time.
- High Availability (HA) interfaces with multiple sources and multiple network paths are incompatible.
Procedure
To enable the Timemaster service on the nodes that belong to a role that you choose, replace the line that contains OS::TripleO::Services::Timesync with the line OS::TripleO::Services::TimeMaster in the roles_data.yaml file section for that role.

#- OS::TripleO::Services::Timesync
- OS::TripleO::Services::TimeMaster

Configure the heat parameters for the compute role that you use.
#Example
ComputeSriovParameters:
  PTPInterfaces: '0:eno1,1:eno2'
  PTPMessageTransport: 'UDPv4'

Include the new environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment:

$ openstack overcloud deploy \
  --templates \
  …
  -e <existing_overcloud_environment_files> \
  -e <new_environment_file> \
  …

- Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment.
- Replace <new_environment_file> with the new environment file or files that you want to include in the overcloud deployment process.
Verification
Use the command phc_ctl, installed with linuxptp, to query the NIC hardware clock.

# phc_ctl <clock_name> get
# phc_ctl <clock_name> cmp
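# 
Output resembles the following; the timestamps and offset are illustrative, and a steady offset from CLOCK_REALTIME close to the current TAI-UTC difference is expected for a disciplined clock:

phc_ctl[4.724]: clock time is 1676605303.129441 or Fri Feb 17 04:21:43 2023
phc_ctl[5.832]: offset from CLOCK_REALTIME is -37000000148ns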