3.2. Red Hat OpenStack Platform 10 Maintenance Releases - May 2018 Update


These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.

3.2.1. Enhancements

This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1578625
This enhancement backports collectd 5.8 to Red Hat OpenStack Platform 10. collectd 5.8 includes some additional features, such as ovs-events, ovs-stats, and extended libvirt statistics.
                  
* ovs-events: The ovs-events plugin monitors the link status of interfaces connected to Open vSwitch (OVS), dispatches the values to collectd, and sends a notification whenever a link state change occurs. The plugin uses the OVS database to receive link state change notifications. For more information, see Plugin ovs_events. A minimal configuration sketch appears at the end of this entry.
                  
* ovs-stats: The ovs-stats plugin collects statistics of OVS-connected interfaces. The plugin uses the OVSDB management protocol (RFC 7047) monitor mechanism to retrieve statistics from OVSDB. For more information, see Plugin ovs_stats.

* Extended libvirt: The libvirt plugin has been extended to support CMT, MBM, CPU pinning, utilization, and state metrics on the platform. By default, no extra statistics are reported. If enabled, the plugin reports more detailed statistics about the behaviour of virtual machines. For more information, see Plugin virt.

* hugepages: collectd reads the /sys/devices/system/node/*/hugepages and /sys/kernel/mm/hugepages directories to collect metrics on hugepages. By default, this option is enabled. For more information, see Plugin hugepages.

* rdt: The intel_rdt plugin collects information provided by the monitoring features of Intel Resource Director Technology (Intel(R) RDT), such as Cache Monitoring Technology (CMT) and Memory Bandwidth Monitoring (MBM). These features provide information about the utilization of shared resources. For more information, see Plugin intel_rdt.
                  
NOTE: To get the latest collectd packages, enable the opstools repository by running the following command:
$ sudo subscription-manager repos --enable=rhel-7-server-openstack-10-optools-rpms
                  
collectd is available as a Technology Preview in this release. For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.
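For reference, a minimal collectd configuration enabling the ovs_events plugin might look like the following sketch (socket path and interface names are illustrative; see the plugin documentation for all options):

  LoadPlugin ovs_events
  <Plugin ovs_events>
    # OVSDB connection details
    Address "127.0.0.1"
    Port "6640"
    Socket "/var/run/openvswitch/db.sock"
    # Interfaces to monitor for link status changes
    Interfaces "br-link" "dpdk0"
    SendNotification true
  </Plugin>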
BZ#1258832
With this release, it is now possible to deploy neutron with the OpenDaylight ML2 driver and OpenDaylight L3 DVR service plugin (no OVS agent or neutron L3 agent needed). A pre-defined environment file is provided for OpenDaylight deployments and can be found in `environments/neutron-opendaylight-l3.yaml`.
Note: The OpenDaylight controller itself is deployed and activated on the first overcloud controller node with the default roles. OpenDaylight can also be deployed on a custom role. In addition, this release does not support clustering of the OpenDaylight controller, so only a single instance may be deployed.
BZ#1315651
The High Availability architecture in this release has been simplified, resulting in a less invasive process when services need to be restarted. During scaling operations, only the necessary services are restarted; previously, a scaling operation required a restart of the entire cluster.
BZ#1337656
The OpenStack Data Processing service now supports version 2.3 of the HDP (Ambari) plug-in.
BZ#1365857
In this release, Red Hat OpenDaylight is available as a Technology Preview. This version is based on OpenDaylight Boron SR2.
BZ#1365865
The Red Hat OpenDaylight controller does not support clustering in this release, but High Availability is provided for the neutron API service by default.
BZ#1365874
Red Hat OpenDaylight now supports tenant-configurable security groups for IPv4 traffic. In the default setting, each tenant uses a security group that allows communication among instances associated with that group. Consequently, all egress traffic within the security group is allowed, while the ingress traffic from the outside is dropped.
BZ#1415828
This enhancement implements ProcessMonitor in the HaproxyNSDriver class (v2) to use the external_process module, which allows it to monitor and respawn the haproxy processes as needed. The LBaaS agent (v2) loads options related to external_process in order to take a configured action when the HAProxy process dies unexpectedly.
BZ#1415829
This enhancement adds the ability to automatically reschedule load balancers from dead LBaaS agents. Previously, load balancers could be scheduled across multiple LBaaS agents; however, if a hypervisor died, the load balancers scheduled to that node would cease operation. With this update, these load balancers are automatically rescheduled to a different agent.
This feature is turned off by default and is controlled by the `allow_automatic_lbaas_agent_failover` option.
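For example, to enable the feature, set the option in the neutron configuration and restart the service (a minimal sketch; the file location shown is the typical one and may differ in your deployment):

  [DEFAULT]
  # Reschedule load balancers away from LBaaS v2 agents that are reported dead
  allow_automatic_lbaas_agent_failover = True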
BZ#1469453
Previously, during any stack update operation, there was a unique identifier parameter value called DeployIdentifier that was set to a new timestamp value on every run. This caused puppet to be reapplied across all nodes in the deployment.

This fix adds a new CLI argument to "openstack overcloud deploy" called --skip-deploy-identifier. The new argument skips setting the DeployIdentifier value, so puppet is no longer forced to execute on every stack update.

In some scenarios, Puppet will still execute even if --skip-deploy-identifier is passed. Those scenarios include a change to the puppet manifest itself.

Performance of stack update operations, such as scale out, is greatly improved when passing the --skip-deploy-identifier argument, since puppet does not have to run.
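For example, a scale-out run might look like the following sketch (template and environment arguments are illustrative):

  $ openstack overcloud deploy --templates \
    -e ~/scale-out-env.yaml \
    --skip-deploy-identifier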
BZ#1480338
With this update, the OS::Nova::ServerGroup resource now allows the 'soft-affinity' and 'soft-anti-affinity' policies to be used in addition to the 'affinity' and 'anti-affinity' policies.
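A minimal heat snippet using one of the new policies might look like this (resource and group names are illustrative):

  resources:
    server_group:
      type: OS::Nova::ServerGroup
      properties:
        name: soft-anti-affinity-group
        policies: ['soft-anti-affinity']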
BZ#1488390
This update adds support for multiple Availability Zones within a single Block Storage (cinder) volume service; this is done by defining the AZ in each driver section.
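For example, a cinder.conf sketch with one AZ per backend, assuming the per-backend option is `backend_availability_zone` (verify the exact option name in your version):

  [backend1]
  volume_backend_name = backend1
  backend_availability_zone = az1

  [backend2]
  volume_backend_name = backend2
  backend_availability_zone = az2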
BZ#1498513
With this update, the `OS::Neutron::Port` resource now supports the 'baremetal' and 'direct-physical' (passthrough) vnic_type.
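For example, a heat snippet requesting a passthrough port might look like this (network and resource names are illustrative):

  resources:
    pf_port:
      type: OS::Neutron::Port
      properties:
        network: provider-net
        binding:vnic_type: direct-physical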
BZ#1503896
This release adds support for deploying Dell EMC VMAX Block Storage backend using the Red Hat OpenStack Platform director.
BZ#1508030
This update increases the default value of `fs.inotify.max_user_instances` to 1024. This update also allows you to manage the value through a heat template, using `InotifyIntancesMax`.
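For example, an environment file overriding the default might look like this sketch (the value 2048 is illustrative):

  parameter_defaults:
    InotifyIntancesMax: 2048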
BZ#1519867
This update adds support for remote snapshot attachment for backups. Some backends can attach remote snapshots but have inefficient `create volume from snapshot` operations, so the inability to remotely attach a snapshot prevented efficient snapshot or in-use volume backups when scaling out the backup service. As a result, the backup service can now scale out efficiently without having to be co-located with the volume service on the same node.
BZ#1547323
This feature adds the `rpc_response_timeout` option to the /etc/cinder/cinder.conf file, adding the ability to configure Cinder's RPC response timeout.
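For example (a sketch; 60 seconds matches the usual oslo.messaging default):

  [DEFAULT]
  # Seconds to wait for a response from an RPC call
  rpc_response_timeout = 60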

3.2.2. Release Notes

This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1403914
The Dashboard 'Help' button now directs users to the Red Hat OpenStack Platform documentation page (namely, https://access.redhat.com/documentation/en/red-hat-openstack-platform/).
BZ#1451714
Problem in detail:
In OSP 10 (OVS 2.5), the following issues occur:
1) tuned is configured with the wrong set of CPUs. The expected configuration is NeutronDpdkCoreList + NovaVcpuPinSet, but it has been configured as HostCpusList.
2) In post-config, the -l value of DPDK_OPTIONS is set to 0 and NeutronDpdkCoreList is configured as pmd-cpu-mask.

What needs to be corrected manually after the update?
1) Add the list of CPUs to be isolated (NeutronDpdkCoreList + NovaVcpuPinSet) to the tuned configuration file:

TUNED_CORES="<list of CPUs>"
sed -i "s/^isolated_cores=.*/isolated_cores=$TUNED_CORES/" $tuned_conf_path
tuned-adm profile cpu-partitioning

2) The lcore mask after the update will be set to 0. Get the CPU mask with the get_mask code from the first-boot script [1].
LCORE_MASK="<mask value output of get_mask>"
ovs-vsctl --no-wait set Open_vSwitch . other-config:dpdk-lcore-mask=$LCORE_MASK
BZ#1454624
Workaround: Before you upgrade or update OpenStack, delete the guest that is attached to the PF. Then you can proceed with the update or upgrade, and it will pass.

3.2.3. Known Issues

These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1295374
It is currently not possible to deploy Red Hat OpenStack Platform director 10 with VXLAN over VLAN tunneling, as the VLAN port is not compatible with the DPDK port.

As a workaround, after deploying the Red Hat OpenStack Platform director with VXLAN, do the following:

* Bring up the bridge and restart the agent:
# ifup br-link
# systemctl restart neutron-openvswitch-agent

* Add the local IP address to the br-link bridge:
# ip addr add <local_IP/PREFIX> dev br-link

* Tag the br-link port with the VLAN ID used as the tenant network VLAN ID:
# ovs-vsctl set port br-link tag=<VLAN-ID>
BZ#1366356
When using the userspace datapath (DPDK), some non-PMD threads run on the same CPU that runs the PMD (configured by `pmd-cpu-mask`). This causes the PMD to be preempted, which results in latency spikes, packet drops, and so on.

With this update, a fix is implemented within the post-install.yaml files available at: https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/network-functions-virtualization-configuration-guide/#ap-ovsdpdk-post-install.
BZ#1390065
When using OVS-DPDK, all bridges on the Compute node should be of type ovs_user_bridge. Red Hat OpenStack Platform director does not support mixing ovs_bridge and ovs_user_bridge, as doing so severely degrades OVS-DPDK performance.
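For reference, a DPDK bridge in the nic-config templates follows this os-net-config pattern (bridge, port, and NIC names are illustrative):

  - type: ovs_user_bridge
    name: br-link
    members:
      - type: ovs_dpdk_port
        name: dpdk0
        members:
          - type: interface
            name: nic3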
BZ#1394402
In order to reduce, as much as possible, interruptions to the CPUs allocated to Open vSwitch, virtual machine CPUs, or the VNF threads within the virtual machines, the CPUs should be isolated. However, CPUAffinity cannot prevent all kernel threads from running on these CPUs. To prevent most of the kernel threads, you must use the kernel boot option 'isolcpus=<cpulist>', which takes the same CPU list as 'nohz_full' and 'rcu_nocbs'. Because 'isolcpus' is engaged right at kernel boot, it can prevent many kernel threads from being scheduled on those CPUs. This can be applied on both the hypervisor and the guest. For example:

#!/bin/bash
# Extract the CPU list from the nohz_full boot parameter on the kernel command line
isol_cpus=$(awk '{ for (i = 1; i <= NF; i++) if ($i ~ /nohz/) print $i }' /proc/cmdline | cut -d"=" -f2)

if [ ! -z "$isol_cpus" ]; then
  # Add the same CPU list as isolcpus= to the default kernel's boot arguments
  grubby --update-kernel=$(grubby --default-kernel) --args=isolcpus=$isol_cpus
fi


The following snippet re-pins the emulator threads to the CPUAffinity CPU list. This is not recommended unless you experience specific performance problems.

#!/bin/bash
# Read the allowed CPU list from systemd's CPUAffinity setting
cpu_list=$(grep -e "^CPUAffinity=.*" /etc/systemd/system.conf | sed -e 's/CPUAffinity=//' -e 's/ /,/g')
if [ ! -z "$cpu_list" ]; then
  # List the names of all running domains
  virsh_list=$(virsh list | sed -e '1,2d' -e 's/\s\+/ /g' | awk -F" " '{print $2}')
  if [ ! -z "$virsh_list" ]; then
    # Re-pin the emulator threads of each domain to the allowed CPU list
    for vm in $virsh_list; do virsh emulatorpin $vm --cpulist $cpu_list; done
  fi
fi
BZ#1394537
After a `tuned` profile is activated, the `tuned` service must start before the `openvswitch` service does, in order to set the cores allocated to the PMD correctly.

As a workaround, you can adjust the `tuned` service unit by running the following script:

#!/bin/bash

tuned_service=/usr/lib/systemd/system/tuned.service

# Drop network.target from any After= line so tuned no longer waits for the network
grep -q "network.target" $tuned_service
if [ "$?" -eq 0 ]; then
  sed -i '/After=.*/s/network.target//g' $tuned_service
fi

# Ensure tuned is ordered before network.target and openvswitch.service
grep -q "Before=.*network.target" $tuned_service
if [ ! "$?" -eq 0 ]; then
  grep -q "Before=.*" $tuned_service
  if [ "$?" -eq 0 ]; then
    sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
  else
    sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
  fi
fi

systemctl daemon-reload
systemctl restart openvswitch
exit 0
BZ#1398323
The 'stack delete' command does not delete the mistral environment or the swift container corresponding to the deleted stack.

As a workaround, run "openstack overcloud plan delete" after deleting a stack.
BZ#1404749
During an upgrade from Red Hat OpenStack Platform (RHOSP) version 9 to version 10, credentials from RHOSP 9 are carried over until convergence, when the full upgrade is completed. This causes alarm evaluation to fail.

As a workaround, manually update the options in the '[service_credentials]' section:
1. Set auth_type to password:
   auth_type=password

2. The os_* options are no longer valid. Remove the os_* prefix from the following options:

os_username    - replace with username
os_tenant_name - replace with project_name
os_password    - replace with password
os_auth_url    - replace with auth_url
os_region_name - replace with region_name

3. Remove the 'v2.0' version from the auth_url:
   auth_url=http://[fd00:fd00:fd00:2000::10]:5000/

4. Restart the service: systemctl restart openstack-aodh-evaluator.service
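
After these steps, the updated section might look like the following sketch (values are illustrative; use your own credentials and endpoint):

  [service_credentials]
  auth_type = password
  username = <service username>
  project_name = service
  password = <password>
  auth_url = http://[fd00:fd00:fd00:2000::10]:5000/
  region_name = regionOne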

Aodh alarms will now be evaluated correctly.
BZ#1416070
Currently, Red Hat OpenStack Platform director 10 overcloud deployments with SR-IOV fail when using NIC IDs (for example, nic1, nic2, nic3, and so on) in the compute.yaml file.

As a workaround, use NIC names (for example, ens1f0, ens1f1, ens2f0, and so on) instead of NIC IDs to ensure the overcloud deployment completes successfully.
BZ#1416421
While creating the DPDK bond, `if-up` of the bond interface activates the member interfaces by itself; the individual members must not be brought up with `if-up` separately. As a result, the deployment fails with bonding in the OVS-DPDK use case.

As a workaround, you need to comment out the interfaces in the `impl_ifcfg.py` file as follows:
# if base_opt.primary_interface_name:
#     primary_name = base_opt.primary_interface_name
#     self.bond_primary_ifaces[base_opt.name] = primary_name
BZ#1481821
The default value of `pg_num` and `pgp_num` has been changed from 32 to 128.
Consequently, existing Ceph pools will be updated so that their `pg_num` and `pgp_num` change to 128, and the data will be rebalanced across the OSDs. Customized values previously set in custom Heat environment files will be preserved. To keep `pg_num` and `pgp_num` at their previous default values, add an extra environment file to the update or upgrade command. The file should have the following contents:

  parameter_defaults:
    ExtraConfig:
      ceph::profile::params::osd_pool_default_pg_num: 32
      ceph::profile::params::osd_pool_default_pgp_num: 32
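
Then pass the file on the update or upgrade command line, for example (file name is illustrative):

  $ openstack overcloud deploy --templates ... -e ~/pg-defaults.yaml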
BZ#1488517
RHEL overcloud images contain tuned version 2.8.
In OVS-DPDK and SR-IOV deployments, tuned installation and activation is done through the first-boot mechanism.

This installation and activation fails, as described in https://bugzilla.redhat.com/show_bug.cgi?id=1488369#c1

You need to reboot the compute node to enforce the tuned profile.
BZ#1489070
The new iptables version that ships with RHEL 7.4 includes a new --wait parameter. This parameter allows iptables commands issued in parallel to wait until a lock is released by the prior command. For OpenStack, the neutron service provides iptables locking, but only at the router level.

As such, when processing routers (for example, during a full sync after the l3 agent is started), some iptables commands issued by neutron may fail because they hit this lock and require the --wait parameter, which is not yet available in neutron. Routers affected by this can cause some floating IPs to malfunction, and some instances may be unable to access the metadata API during cloud-init.

We recommend that you do not upgrade to RHEL 7.4 until neutron is released with a fix that adopts the new iptables --wait parameter.
BZ#1549694
Deployments with OVS-DPDK experience performance degradation with the following package versions:

OVS: openvswitch-2.6.1-16.git20161206.el7ost.x86_64
kernel: 3.10.0-693.17.1.el7.x86_64.

3.2.4. Deprecated Functionality

The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1402497
Certain CLI arguments are considered deprecated and should not be used. The update still allows you to use the CLI arguments, but you must specify at least an environment file to set the `sat_repo`. You can use an `env` file to work around the issue before running the overcloud command:

1. cp -r /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration  .

2. Edit rhel-registration/environment-rhel-registration.yaml and set rhel_reg_org, rhel_reg_activation_key, rhel_reg_method, rhel_reg_sat_repo, and rhel_reg_sat_url according to your environment.

3. Run the deployment command with -e rhel-registration/rhel-registration-resource-registry.yaml -e rhel-registration/environment-rhel-registration.yaml
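For example, the full command might look like the following sketch (other arguments depend on your deployment):

  $ openstack overcloud deploy --templates \
    -e rhel-registration/rhel-registration-resource-registry.yaml \
    -e rhel-registration/environment-rhel-registration.yaml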

This workaround has been checked for both Red Hat Satellite 5 and 6, with repos present on the overcloud nodes upon successful deployment.