Chapter 2. Top New Features
This section provides an overview of the top new features in this release of Red Hat OpenStack Platform.
2.1. Compute
This section outlines the top new features for the Compute service.
- Multi-cell deployments using Cells v2
- OpenStack Compute is now powered by cells by default. You can configure larger deployments to use multiple cells, each with its own Compute nodes and database. Global services control placement and fail-safe operations, and the separation into cells helps improve security and process isolation.
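For example, each additional cell is registered with its own database and message queue connection. A minimal sketch using the upstream nova-manage tooling; the cell name and connection URLs here are illustrative:
# Register a new cell with its own database and message queue
nova-manage cell_v2 create_cell --name cell2 \
  --database_connection mysql+pymysql://nova:secret@cell2-db/nova \
  --transport-url rabbit://guest:secret@cell2-rabbit:5672/
# Confirm the cell layout
nova-manage cell_v2 list_cells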
- Colocation of pinned and floating instances on a single host
- You can now schedule instances with pinned CPUs (hw:cpu_policy=dedicated) on the same host as instances that use floating CPUs (hw:cpu_policy=shared). It is no longer necessary to use host aggregates to ensure these instance types run on separate hosts.
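For example, the CPU policy is set as a flavor extra spec; the flavor names below are illustrative:
# Flavor for instances with dedicated (pinned) CPUs
openstack flavor set pinned.medium --property hw:cpu_policy=dedicated
# Flavor for instances with shared (floating) CPUs
openstack flavor set shared.medium --property hw:cpu_policy=shared
Instances booted from these two flavors can now land on the same Compute host.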
- Live migration for instances with SR-IOV (Single root I/O virtualization)
- Instances configured with SR-IOV interfaces can now be live migrated. For direct mode SR-IOV interfaces, this operation incurs some network downtime because the interface is detached and reattached as part of the migration. This is not an issue for indirect mode SR-IOV interfaces.
- Live migration of pinned instances
- With the NUMA-aware live migration feature, you can now live migrate instances that have a NUMA topology, such as instances with pinned CPUs.
- Bandwidth-aware scheduling
- You can now create instances that request a guaranteed minimum bandwidth using a Quality of Service (QoS) policy. The Compute scheduling service selects a host for the instance that satisfies the guaranteed minimum bandwidth request.
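A minimal sketch of the workflow, assuming a physical-network-backed network named provider-net; all resource names here are illustrative:
# Create a QoS policy with a guaranteed minimum bandwidth rule (kbps)
openstack network qos policy create min-bw
openstack network qos rule create --type minimum-bandwidth --min-kbps 100000 --egress min-bw
# Attach the policy to a port, then boot an instance with that port
openstack port create --network provider-net --qos-policy min-bw bw-port
openstack server create --flavor m1.small --image rhel8 --nic port-id=bw-port bw-instance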
2.2. Networking
This section outlines the top new features for the Networking service.
- ACL support for Load-balancing service (Octavia)
- The Red Hat OpenStack Platform Load-balancing service (Octavia) now supports VIP access control lists (ACL) to limit incoming traffic to a listener to a set of allowed source IP addresses (CIDRs).
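For example, you can restrict a listener to a set of source CIDRs at creation time; the load balancer and listener names are illustrative:
# Only the listed CIDRs can reach this listener
openstack loadbalancer listener create --name listener1 \
  --protocol HTTPS --protocol-port 443 \
  --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24 lb1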
- OVN internal API TLS/SSL support
- Red Hat OpenStack Platform now supports the encryption of internal API traffic for OVN using Transport Layer Security (TLS).
- OVN deployment over IPv6
- Red Hat OpenStack Platform now supports deploying OVN on an IPv6 network.
2.3. Storage
This section outlines the top new features for the Storage service.
- Block Storage service changes keys when cloning volumes
- With this release, when the Block Storage service (cinder) clones encrypted volumes, it automatically changes the encryption key. This feature improves security in Red Hat OpenStack Platform by ensuring that the same encryption key is never used more than once.
- Image service manages deletion of encryption keys
- With this release, the Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a one-to-one relationship between an encryption key and a stored image. When the image is deleted, the associated encryption key is also deleted, which prevents unlimited resource consumption in the Key Management service.
- Director supports back end availability zone configuration
- This release adds director support to configure Block Storage back end availability zones. An availability zone is a provider-specific method of grouping cloud instances and services.
- Removal of Data Processing service (sahara)
- The Data Processing service (sahara) is deprecated in Red Hat OpenStack Platform (RHOSP) 15 and removed in RHOSP 16.0. Red Hat continues to offer support for the Data Processing service in RHOSP versions 13 and 15.
2.4. Ceph Storage
This section outlines the top new features for Ceph Storage.
- Red Hat Ceph Storage Upgrade
- To maintain compatibility with Red Hat Enterprise Linux 8, Red Hat OpenStack Platform 16.0 director deploys Red Hat Ceph Storage 4. You can use Red Hat OpenStack Platform 16.0 running on RHEL 8 to connect to a preexisting external Red Hat Ceph Storage 3 cluster running on RHEL 7.
2.5. Cloud Ops
This section outlines the top new features and changes for the Cloud Ops components.
- Service Telemetry Framework
Service Telemetry Framework (STF) provides the core components for a monitoring application framework for Red Hat OpenStack Platform. Its data storage components are deployed as an application on top of OpenShift 4.x and are managed by the Operator Lifecycle Manager. Data transport for metrics and events is provided using AMQ Interconnect. This first GA release of STF has the following features:
- Deployment of server-side Service Telemetry Framework as a micro-service application on OpenShift 4.3, leveraging the Operator Lifecycle Manager (OLM)
- Full client-side installation capability through Red Hat OpenStack Platform director with collectd as the infrastructure data collector, Ceilometer for OpenStack events data, and AMQ Interconnect as the transport layer application
- Performance metrics integration with Prometheus as the time-series database
- Events storage with Elasticsearch
- Alertmanager integration with a set of out-of-the-box alerts
- Infrastructure dashboard to visualize performance data with Grafana
- Multi-Cloud monitoring support with STF
2.6. Technology Previews
This section outlines features that are in technology preview in Red Hat OpenStack Platform 16.0.
For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope.
2.6.1. New Technology Previews
- Deploy and manage multiple overclouds from a single undercloud
This release includes the capability to deploy multiple overclouds from a single undercloud, as shown in the example after this list.
- Interact with a single undercloud to manage multiple distinct overclouds.
- Switch context on the undercloud to interact with different overclouds.
- Reduce redundant management nodes.
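A minimal sketch: each overcloud is a separate heat stack, selected with the --stack option, and typically carries its own environment files; the stack and file names here are illustrative:
# Deploy two distinct overclouds from the same undercloud
openstack overcloud deploy --templates --stack overcloud-one -e overcloud-one.yaml
openstack overcloud deploy --templates --stack overcloud-two -e overcloud-two.yaml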
- Undercloud minion
- This release contains the ability to install undercloud minions. An undercloud minion provides additional heat-engine and ironic-conductor services on a separate host. These additional services support the undercloud with orchestration and provisioning operations. Distributing undercloud operations across multiple hosts provides more resources to run an overcloud deployment, which can result in faster and larger deployments.
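A minimal sketch of a minion installation; the minion.conf parameter names follow the undercloud.conf conventions and are shown here as assumptions to verify against your release:
# minion.conf (excerpt, example values)
[DEFAULT]
enable_heat_engine = true
enable_ironic_conductor = true
# Run on the minion host
openstack undercloud minion install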
- Validation Framework
- Red Hat OpenStack Platform includes a validation framework to help verify the requirements and functionality of the undercloud and overcloud. The framework includes two types of validations:
- Manual Ansible-based validations, which you execute through the openstack tripleo validator command set.
- Automatic in-flight validations, which execute during the deployment process.
Director provides a new set of commands to list validations and run validations against the undercloud and overcloud. These commands are:
- openstack tripleo validator list
- openstack tripleo validator run
These commands interact with a set of Ansible-based tests from the openstack-tripleo-validations package. To enable this feature, set the enable_validations parameter to true in the undercloud.conf file and run openstack undercloud install.
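For example, the steps described above combine into the following minimal workflow:
# undercloud.conf (excerpt)
[DEFAULT]
enable_validations = true
# Apply the setting, then list and run the validations
openstack undercloud install
openstack tripleo validator list
openstack tripleo validator run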
- New director feature to create an active-active configuration for Block Storage service
With Red Hat OpenStack Platform director, you can now deploy the Block Storage service (cinder) in an active-active configuration if the back end driver supports this configuration. As of GA, only the Ceph RADOS Block Device (RBD) back end driver supports an active-active configuration.
The new cinder-volume-active-active.yaml file defines the active-active cluster name by assigning a value to the CinderVolumeCluster parameter. CinderVolumeCluster is a global Block Storage parameter, and prevents you from including clustered (active-active) and non-clustered back ends in the same deployment.
The cinder-volume-active-active.yaml file causes director to use the non-Pacemaker cinder-volume Orchestration service template, and adds the etcd service to your Red Hat OpenStack Platform deployment as a distributed lock manager (DLM).
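For example, assuming the file ships in the standard tripleo-heat-templates environments directory, you include it in the deployment command:
# Deploy the Block Storage service in active-active mode (RBD back end)
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-volume-active-active.yaml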
- New director parameter for configuring Block Storage service availability zones
- With Red Hat OpenStack Platform director, you can now configure different availability zones for Block Storage service (cinder) volume back ends. Director has a new parameter, CinderXXXAvailabilityZone, where XXX is associated with a specific back end.
- New Redfish BIOS management interface for Bare Metal service
Red Hat OpenStack Platform Bare Metal service (ironic) now has a BIOS management interface, with which you can inspect and modify a device’s BIOS configuration.
In Red Hat OpenStack Platform 16.0, the Bare Metal service supports BIOS management capabilities for data center devices that are Redfish API compliant. The Bare Metal service implements Redfish calls through the Python library, Sushy.
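For example, you can inspect the BIOS settings that the driver caches for a node; the node and setting names here are illustrative:
# List the cached BIOS settings for a node, then inspect one setting
openstack baremetal node bios setting list node-0
openstack baremetal node bios setting show node-0 BootMode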
- Deploying multiple Ceph clusters
- You can use director to deploy multiple Ceph clusters, either on nodes dedicated to running Ceph or hyper-converged, using separate heat stacks for each cluster. For edge sites, you can deploy a hyper-converged infrastructure (HCI) stack that uses Compute and Ceph storage on the same node. For example, you might deploy two edge stacks named HCI-01 and HCI-02, each in its own availability zone. As a result, each edge stack has its own Ceph cluster and Compute services.
- New Compute (nova) configuration added to enable memoryBacking source type file, with shared access
A new Compute (nova) parameter is available, QemuMemoryBackingDir, which specifies the directory in which to store the memory backing file when a libvirt memoryBacking element is configured with source type="file" and access mode="shared".
Note: The memoryBacking element is only available from libvirt 4.0.0 and QEMU 2.6.0.
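A sketch of setting the parameter in a custom environment file; the directory path is an illustrative example:
parameter_defaults:
  # Directory for libvirt memory backing files (source type="file", access mode="shared")
  QemuMemoryBackingDir: /var/lib/nova/memory_backing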
- Fencing for Redfish API
- Fencing is now available with Pacemaker for the Redfish API.
- Deploying bare metal over IPv6 with director
- If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes.
- Nova-less provisioning
In Red Hat OpenStack Platform 16.0, you can separate the provisioning and deployment stages of your deployment into distinct steps:
1. Provision your bare metal nodes.
   - Create a node definition file in yaml format.
   - Run the provisioning command, including the node definition file.
2. Deploy your overcloud.
   - Run the deployment command, including the heat environment file that the provisioning command generates.
The provisioning process provisions your nodes and generates a heat environment file that contains various node specifications, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command.
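A minimal sketch of this two-step workflow; the role names, counts, and file names are illustrative, and the deployment command is abbreviated:
# overcloud-baremetal-deploy.yaml: node definition file
- name: Controller
  count: 3
- name: Compute
  count: 2
# Step 1: provision the nodes and generate a heat environment file
openstack overcloud node provision \
  --output overcloud-baremetal-deployed.yaml overcloud-baremetal-deploy.yaml
# Step 2: deploy the overcloud, including the generated file
openstack overcloud deploy --templates -e overcloud-baremetal-deployed.yaml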
- networking-ansible trunk port support
- In Red Hat OpenStack Platform 16.0, you can use switch ports in trunk mode as well as access mode, and assign multiple VLANs to a switch port.
- networking-ansible Arista support
- In Red Hat OpenStack Platform 16.0, you can configure ML2 networking-ansible functionality with Arista Extensible Operating System (Arista EOS) switches.
- Redfish virtual media boot
- You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.
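For example, virtual media boot is selected per node through the node's boot interface; the node name here is illustrative:
# Switch a Redfish-managed node to virtual media boot
openstack baremetal node set node-0 --boot-interface redfish-virtual-media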