Release Notes
Release details for Red Hat OpenStack Platform 12
Abstract
Chapter 1. Introduction
1.1. About this Release
Note
1.2. Requirements
- Chrome
- Firefox
- Firefox ESR
- Internet Explorer 11 and later (with Compatibility Mode disabled)
Note
1.3. Deployment Limits
1.4. Database Size Management
1.5. Certified Drivers and Plug-ins
1.6. Certified Guest Operating Systems
1.7. Bare Metal Provisioning Supported Operating Systems
1.8. Hypervisor Support
Red Hat OpenStack Platform is supported only for use with the libvirt driver (using KVM as the hypervisor on Compute nodes).
1.9. Content Delivery Network (CDN) Channels
Warning
# subscription-manager repos --enable=[reponame]
# subscription-manager repos --disable=[reponame]
The following tables outline the channels you must enable to install Red Hat OpenStack Platform 12.
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms |
| Red Hat Enterprise Linux High Availability (for RHEL 7 Server) | rhel-ha-for-rhel-7-server-rpms |
| Red Hat OpenStack Platform 12 for RHEL 7 (RPMs) | rhel-7-server-openstack-12-rpms |
| Red Hat Enterprise Linux 7 Server - Extras (RPMs) | rhel-7-server-extras-rpms |
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server - Optional | rhel-7-server-optional-rpms |
| Red Hat OpenStack Platform 12 Operational Tools for RHEL 7 (RPMs) | rhel-7-server-openstack-12-optools-rpms |
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux for IBM Power, little endian | rhel-7-for-power-le-rpms |
| Red Hat OpenStack Platform 12 for RHEL 7 (RPMs) | rhel-7-server-openstack-12-for-power-le-rpms |
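For example, to enable the core repositories from the first table on a registered Red Hat Enterprise Linux 7 node, run subscription-manager once per repository name. This is a minimal sketch using only the repository names listed above:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-12-rpms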
The following table outlines the channels you must disable to ensure Red Hat OpenStack Platform 12 functions correctly.
| Channel | Repository Name |
|---|---|
| Red Hat CloudForms Management Engine | "cf-me-*" |
| Red Hat Enterprise Virtualization | "rhel-7-server-rhev*" |
| Red Hat Enterprise Linux 7 Server - Extended Update Support | "*-eus-rpms" |
Warning
1.10. Product Support
- Customer Portal
- The Red Hat Customer Portal offers a wide range of resources to help guide you through planning, deploying, and maintaining your OpenStack deployment. Facilities available via the Customer Portal include:
- Knowledge base articles and solutions.
- Technical briefs.
- Product documentation.
- Support case management.
Access the Customer Portal at https://access.redhat.com/.
- Mailing Lists
- Red Hat provides these public mailing lists that are relevant to OpenStack users:
- The rhsa-announce mailing list provides notification of the release of security fixes for all Red Hat products, including Red Hat OpenStack Platform. Subscribe at https://www.redhat.com/mailman/listinfo/rhsa-announce.
1.11. Key Changes to the Documentation Set
- Configuration Reference and Command-Line Interface Reference
- The Configuration Reference and Command-Line Interface Reference documents are not available with the general availability of Red Hat OpenStack Platform 12. Both documents have a dependency on source content generated by openstack.org. In the Pike release, the format and location of this content changed. As a result, the scope of the work required extended beyond the Red Hat OpenStack Platform 12 GA schedule. The documents will be compiled and published as an asynchronous release.
- Manual Installation Procedures
- The Manual Installation Procedures document has been removed from the documentation set starting from Red Hat OpenStack Platform 12. Red Hat supports installation processes performed only using the Red Hat OpenStack Platform director and the official documentation steps. Users can find manual installation information, for their reference, on the OpenStack website: https://docs.openstack.org/pike/install. These procedures will not be supported by Red Hat.
- Manual Upgrades
- Red Hat OpenStack Platform 12 does not support manual upgrade steps performed without director, and, as a result, no documentation is available for this scenario. For supported upgrade scenarios using Red Hat OpenStack Platform director, see Upgrading Red Hat OpenStack Platform.
- OpenStack Benchmarking Service Guide
- The OpenStack Benchmarking Service document was outdated and contained incorrect information. It has been removed from the documentation set across all versions. The following Bugzilla ticket requests a full review of the document: https://bugzilla.redhat.com/show_bug.cgi?id=1459469.
- Red Hat Ceph Storage for the Overcloud
- The Red Hat Ceph Storage for the Overcloud document has been replaced by two new guides, which describe two available options for working with Red Hat Ceph Storage in the overcloud: Integrating an Overcloud with an Existing Red Hat Ceph Cluster and Deploying an Overcloud with Containerized Red Hat Ceph.
- RPM-Based Overcloud Installation
- RPM packages for Red Hat OpenStack Platform 12 are still shipped alongside container images; however, the official installer provided by Red Hat, Red Hat OpenStack Platform director, does not support deployments of non-containerized (RPM-based) Red Hat OpenStack Platform 12. As a result, instructions for deploying an RPM-based overcloud are not provided in the documentation. For supported deployment scenarios, see Director Installation and Usage. Although installation procedures are not provided or supported for RPM-based deployments, environments resulting from manual deployment are still supported if they comply with Red Hat support policy: https://access.redhat.com/articles/2477851.
- VMware Integration Guide
- The VMware Integration Guide has been removed from the documentation set across all versions. The integration described in the document is no longer supported.
Chapter 2. Top New Features
2.1. Red Hat OpenStack Platform Director
- Prompt Changes
- Sourcing a settings file on the undercloud, such as stackrc or overcloudrc, changes Prompt String 1 (PS1) to include the cloud name. This helps identify the cloud currently being accessed. For example, if you source the stackrc file, the prompt appears with an (undercloud) prefix:
[stack@director-12 ~]$ source ~/stackrc
(undercloud) [stack@director-12 ~]$
- Registration through an HTTP Proxy
- The director provides updated templates to register your overcloud through an HTTP proxy.
- New Custom Roles Generation
- The director provides the ability to create a roles_data file from individual custom role files. This simplifies the management of individual custom roles. The director also includes a default set of role files to help you get started.
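For example, a roles_data file can be generated from the default role files with the roles generate command. This is a minimal sketch; the role names shown are examples, and the exact command syntax should be checked against the director documentation for this release:
(undercloud) $ openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute CephStorage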
- Node Blacklist
- The director now accepts a node blacklist using the DeploymentServerBlacklist parameter. This parameter isolates a list of nodes from receiving updated parameters and resources during the execution of openstack overcloud deploy. This parameter is useful for scaling out additional nodes while the existing nodes remain untouched during the deployment process.
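As a minimal sketch, the blacklist can be defined in an environment file and passed to the deploy command with -e; the node names below are hypothetical:
parameter_defaults:
  DeploymentServerBlacklist:
    - overcloud-compute-0
    - overcloud-compute-1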
- Composable Networks
- Previously, the Bare Metal service could only use networks defined in the director templates. It is now possible to compose custom networks for the director to create during an overcloud deployment or update. You can also now assign custom labels to the director's template-defined networks.
- UI: Improved Node Management
- The director's web UI now provides more detail for each node and additional functions to manage nodes. You can view this additional information on the Nodes screen of the director's web UI.
- UI: Improved Role Assignment
- The director's web UI includes a simplified role assignment for nodes. The UI uses a spinner to automatically assign a selected number of nodes per role. You can also manually assign specific nodes to roles in the Nodes screen of the director's web UI.
2.2. Containers
- Containerized Overcloud
- The Red Hat OpenStack Platform director now creates an overcloud consisting of containerized services. Users can implement the following sources for their container images:
- Remote from registry.access.redhat.com
- Locally from the undercloud (images initially pulled from registry.access.redhat.com)
- Red Hat Satellite 6 (images synchronized from registry.access.redhat.com)
The overcloud continues to support composable service infrastructure using containers to augment existing services. Note that the only services not containerized by default are:
- OpenStack Networking (neutron)
- OpenStack Block Storage (cinder)
- OpenStack Shared File Systems (manila)
Red Hat provides the containers for these services as a Technology Preview only.
- Containerized Upgrades
- The Red Hat OpenStack Platform director provides an upgrade path from a non-containerized Red Hat OpenStack Platform 11 overcloud to a containerized Red Hat OpenStack Platform 12 overcloud.
2.3. Bare Metal Service
- L3 Routed Spine/Leaf Network Topology
- With spine/leaf, the bare metal network now uses layer 3 routing. This new topology makes full use of connections through equal-cost multipathing (ECMP). You can use the new topology only with the Compute and Ceph storage roles. It is not yet possible to use this routing for the provisioning network.
- Node Auto-Discovery
- Previously, writing an instack.json file was the only way to add overcloud nodes in bulk. The Bare Metal Service can now discover unidentified nodes automatically, without an instack.json file.
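A minimal sketch of how this might be enabled on the undercloud, assuming the enable_node_discovery option in undercloud.conf; verify the exact option name and related discovery settings for your version before relying on it:
[DEFAULT]
enable_node_discovery = true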
- Redfish Support
- The Redfish API is an open standard for the management of hardware. The Bare Metal service now includes the Redfish API driver. To manage servers compliant with the Redfish protocol, set the driver property to redfish.
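For example, a Redfish-managed node might be enrolled along the following lines. The address, system ID, and credentials are illustrative, and the exact driver_info fields should be confirmed against the Bare Metal service documentation:
(undercloud) $ openstack baremetal node create --driver redfish \
    --driver-info redfish_address=https://bmc.example.com \
    --driver-info redfish_system_id=/redfish/v1/Systems/1 \
    --driver-info redfish_username=admin \
    --driver-info redfish_password=secret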
- Whole-disk Overcloud image support
- The Bare Metal Service now supports whole-disk images for the overcloud. Previously, initrd and vmlinuz images were required in addition to the qcow2 image. Now, the Bare Metal Service can accept a single qcow2 image upload as a full disk image. You must build the whole-disk image before you deploy it.
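For example, assuming the --whole-disk option is available in your version of the client, a whole-disk overcloud image can be uploaded as follows:
(undercloud) $ openstack overcloud image upload --whole-disk --image-path /home/stack/images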
2.4. Block Storage
- Capacity-derived QoS Limits
- Users can now use volume types to set deterministic IOPS throughput based on the size of provisioned volumes. This simplifies how storage resources are allocated to users -- namely, through pre-determined (and, ultimately, highly predictable) throughput rates based on requested volume size.
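For example, a QoS specification that scales IOPS with volume size can be created and associated with a volume type; the specification and volume type names here are illustrative:
(overcloud) $ openstack volume qos create scaled-iops --consumer front-end --property total_iops_sec_per_gb=1000
(overcloud) $ openstack volume qos associate scaled-iops my-volume-type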
- Veritas HyperScale Support
- The Block Storage service now supports the HyperScale driver. HyperScale is a software-defined storage solution that uses a dual-plane architecture to decouple storage management tasks from workload processing at the Compute plane. This technique helps make efficient use of storage directly attached to Compute nodes, thereby minimizing total cost of ownership without compromising performance. Veritas HyperScale requires binaries, puppet modules, and Heat templates provided directly by Veritas. For an overview, see HyperScale for OpenStack; for deployment and usage documentation, see HyperScale for OpenStack guides for Linux.
2.5. Ceph Storage
- Containerized Ceph Deployment
- The director can now deploy a containerized Red Hat Ceph cluster. To do this, the director uses built-in heat templates and environment files that work with Ansible playbooks available through the ceph-ansible project.
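As a minimal sketch, containerized Ceph is typically enabled by including the ceph-ansible environment file in the deploy command; the path shown is the usual location in this release, but verify it against your installed templates:
(undercloud) $ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml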
- Improved Resource Management for HCI
- Previously, when deploying Hyper-Converged Infrastructure (HCI), users had to manually configure resource isolation on hyper-converged Compute nodes. The director can now use OpenStack Workflow to derive HCI-suitable CPU and RAM allocation settings and apply them.
2.6. Compute
- Emulator Thread Policies
- The Compute scheduler determines CPU resource utilization and places instances based on the number of virtual CPUs (vCPUs) in the flavor. A number of hypervisor operations are performed on the host on behalf of the guest instance. The libvirt driver implements a generic placement policy for KVM, which allows QEMU emulator threads to float across the same physical CPUs (pCPUs) that the vCPUs are running on. This leads to the emulator threads using time borrowed from the vCPU operations. With this release, Compute reserves one vCPU for running non-realtime workloads when you use the hw:emulator_threads_policy=isolate option. Before you enable the emulator threads placement policy on an instance flavor, you must set the hw:cpu_policy option to dedicated.
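For example, both properties can be set on a flavor before launching instances; the flavor name is illustrative:
(overcloud) $ openstack flavor set --property hw:cpu_policy=dedicated pinned-flavor
(overcloud) $ openstack flavor set --property hw:emulator_threads_policy=isolate pinned-flavor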
- Reserve (weight) SR-IOV Capable NUMA Nodes
- This release includes updates to the filter scheduler and resource tracker to place non-PCI instances on non-PCI NUMA nodes. Instances not bound to PCI devices are preferably placed on hosts without PCI devices. If there is no host without PCI devices, hosts with PCI devices are used. To enable this option, use the new PCI weigher included in nova.scheduler.weights.all_weighers. You can also enable this option manually using the filter_scheduler.weight_classes configuration option.
2.7. High Availability
- Containerized High Availability Reference Architecture
- The Instance High Availability (Instance HA) reference architecture is now provided in containers that you can deploy on a Red Hat Enterprise Linux Atomic Host with the Red Hat OpenStack director. The Instance HA configuration is provided in an agent container, which then deploys the application containers and shared services across the cluster. The following Instance HA components and managed services are now delivered as containers:
- Pacemaker
- Pacemaker_remote
- Corosync
- Ancillary supporting components
- Galera (MariaDB)
- RabbitMQ
- HAProxy
- Cinder-backup
- Cinder-volume
- Manila-share
- Redis
- Virtual-ips
- memcached
- Health Checks with httpchk
- Instance HA now uses the httpchk option to check the health of compatible service nodes in the cluster.
2.8. Identity
- Novajoin Availability for Infrastructure
- Novajoin allows you to enroll undercloud and overcloud nodes with Red Hat Identity Management (IdM). As a result, you can now use IdM features with your OpenStack deployment, including identities, Kerberos credentials, and access controls.
- TLS Coverage
- Red Hat OpenStack Platform 12 includes TLS support for MariaDB, RabbitMQ, and internal service endpoints.
2.9. Network Functions Virtualization
- Easy Heterogeneous Cluster Management
- Easy heterogeneous cluster management allows you to set different service parameter values to match varying node capabilities or tuning needs. For example, if node 1 had more RAM than node 2, you could not previously take advantage of the added RAM with two different Compute roles, since the service parameters were defined globally. You can now combine composable roles with role-specific parameters to define unique parameters that match the capabilities of different nodes or different tuning needs.
- OpenDaylight (Technology Preview)
- The OpenDaylight software-defined networking (SDN) controller is now integrated into Red Hat OpenStack Platform.
- OVS-DPDK Ease of Deployment
- The Red Hat OpenStack Platform simplifies OVS-DPDK deployments through a predefined Mistral workflow that automatically generates OVS-DPDK parameters. You now need to decide on only two simple parameters (the minimum number of CPU threads used for DPDK PMD, and the percentage of available memory to reserve for hugepages). Based on this information and the hardware introspection results of your bare metal nodes, the workflow calculates the remaining eight OVS-DPDK parameters needed for your deployment.
- NUMA Topology through Bare Metal Introspection
- To ease deployment, you can now retrieve the NUMA topology details from Compute nodes with the Bare Metal hardware inspection service. The retrieved NUMA topology details include NUMA nodes, associated RAM, NICs, and physical CPU cores with sibling pairs.
2.10. Object Storage
- Stand-Alone Object Storage Deployments
- With this release, users can now configure a new overcloud deployment to use an existing Object Storage cluster.
2.11. OpenDaylight (Technology Preview)
- Improved Red Hat OpenStack Platform director integration
- The Red Hat OpenStack Platform director installs and manages a complete OpenStack environment. With Red Hat OpenStack Platform 12, the director can deploy and configure OpenStack to work with OpenDaylight. OpenDaylight can run together with the OpenStack overcloud controller role, or in a separate custom role on a different node. In Red Hat OpenStack Platform 12, OpenDaylight is installed and run in containers. This provides greater flexibility for its maintenance and use.
- IPv6
- OpenDaylight in Red Hat OpenStack Platform 12 brings some feature parity in IPv6 use cases with the OpenStack neutron ML2/OVS implementation. These use cases include:
- IPv6 addressing support including SLAAC
- Stateless and Stateful DHCPv6
- IPv6 Security Groups with allowed address pairs
- IPv6 communication among virtual machines in the same network
- IPv6 East-West routing support
- VLAN-aware virtual machines
- VLAN-aware virtual machines (or virtual machines with trunking support) allow an instance to be connected to one or more networks over one virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port. Network trunking lets users create a port, associate it with a trunk, and launch an instance on that port. Later, additional networks can be attached to or detached from the instance dynamically without interrupting the operations of that instance.
- SNAT
- Red Hat OpenStack Platform 12 introduces conntrack-based SNAT, which uses OVS netfilter to maintain translations. One switch per router is selected as the NAPT switch and performs the centralized translation. All other switches send packets to the centralized switch for SNAT. If a NAPT switch goes down, an alternate switch is selected for the translations, but existing translations are lost on failover.
- SR-IOV Integration
- OpenDaylight in Red Hat OpenStack Platform 12 can be deployed with compute nodes that support SR-IOV. It is also possible to create mixed environments with both OVS and SR-IOV nodes in a single OpenDaylight installation. The SR-IOV deployment requires the neutron SR-IOV agent in order to configure the virtual functions (VFs), which are directly passed to the compute instance when it is deployed as a network port.
- Controller Clustering
- The OpenDaylight Controller in Red Hat OpenStack Platform 12 supports a cluster-based High Availability model. Several instances of the OpenDaylight Controller form a Controller Cluster. Together, they work as one logical controller. The service provided by the controller (viewed as a logical unit) will continue to operate as long as a majority of the controller instances are functional and able to communicate with each other. The Red Hat OpenDaylight Clustering model provides both High Availability and horizontal scaling: more nodes can be added to absorb more load, if necessary.
- OVS-DPDK
- OpenDaylight in Red Hat OpenStack Platform 12 may be deployed with Open vSwitch Data Plane Development Kit (DPDK) acceleration with director. This deployment offers higher data plane performance as packets are processed in user space rather than in the kernel.
- L2GW/HW-VTEP
- Red Hat OpenStack Platform 12 supports L2GW to integrate traditional bare-metal services into a neutron overlay. This is especially useful for bridging external physical workloads into a neutron tenant network, for bringing a bare metal server (managed by OpenStack) into a tenant network, and for bridging SR-IOV traffic into a VXLAN overlay. This takes advantage of the line-rate speed of SR-IOV and the benefits of an overlay network to interconnect SR-IOV virtual machines.
- The networking-odl Package
- Red Hat OpenStack Platform 12 offers a new version of the networking-odl package that brings important changes. It introduces port status update support, which provides accurate information on the port status and when the port is available for a virtual machine to use. The default port binding changes from network-topology based to pseudo-agent based. Network-topology port binding support is not available in this release. Customers using network-topology based port binding should migrate to pseudo-agent based port binding (pseudo-agentdb-binding).
2.12. OpenStack Data Processing Service
- OpenStack Data Processing Service Integration With Baremetal to Tenant
- The OpenStack Data Processing service can help improve performance by removing the hypervisor abstraction layer. The OpenStack Bare Metal Provisioning (ironic) service provides an API and a Compute driver to serve bare metal instances using the same Compute and OpenStack Networking APIs. This release adds support for installing and configuring the OpenStack Bare Metal Provisioning service and Compute to serve bare metal instances to the tenant. For Red Hat OpenStack Platform deployments with both virtual and bare metal instances, you need to use host aggregates as follows:
- One for all the bare metal hosts
- One for all the virtual Compute nodes
- Support for the Cloudera Distribution of Apache Hadoop (CDH) 5.11
- You can deploy the Cloudera Distribution of Apache Hadoop (CDH) 5.11 on Red Hat OpenStack Platform. CDH can store, process, and analyze large and diverse data volumes with the latest big data processing techniques such as Spark and Impala.
2.13. OpenStack Networking
- Native Open vSwitch Firewall Driver
- The OVS firewall driver has graduated from Technology Preview to full support. The conntrack-based firewall driver can be used to implement Security Groups. With conntrack, Compute instances are connected directly to the integration bridge for a more simplified architecture and improved performance.
- Layer-2 Gateway API
- The Layer-2 Gateway is a service plugin which allows you to bridge networks together so they appear as a single L2 broadcast domain. This update introduces support for the Layer-2 Gateway API.
- BGP/VPN API
- OpenStack Networking now supports BGPVPN capabilities. BGPVPN allows your instances to connect to your existing layer 3 VPN services. Once a BGPVPN network is created, you can associate it with a project, allowing the project's users to connect to the BGPVPN network.
2.14. Operations Tooling
- SSL Support in the Monitoring Agent
- You can now configure the Monitoring Agent (Sensu client) to connect to the RabbitMQ instance with SSL. To do this, you define the SSL connection parameters and certificates in the monitoring environment YAML file.
- Integration with Red Hat Enterprise Common Logging
- You can now use the Red Hat Enterprise Common Logging solution to collect logs from Red Hat OpenStack Platform. To do this, you configure the Log Collection Agent (Fluentd) to send the log files to the central logging collector.
- Containerized Monitoring and Logging Tools
- Some monitoring and logging tools are now provided in containers that you can deploy on a Red Hat Enterprise Linux Atomic Host with the Red Hat OpenStack director. The following operations tools are now delivered in containers:
- Availability monitoring (Sensu)
- Performance monitoring (Collectd)
- Log aggregation (Fluentd)
2.16. Telemetry
- OpenStack Telemetry Metrics (gnocchi) at Scale
- Telemetry previously used MongoDB and the Telemetry API to store metrics. While the performance was acceptable when storing the metrics, the usage was limited because you could not easily retrieve and exploit the stored information. The OpenStack Telemetry Metrics (gnocchi) service uses a new distributed selective acknowledgements (SACKs) mechanism and scheduling algorithm for the gnocchi-metricd daemon, improving performance at larger scale. The default settings are enhanced to adapt to cloud deployments of larger sizes.
- Intel Cache Monitoring Technology (CMT)
- Cache Monitoring Technology (CMT) allows you to monitor cache-related statistics on an Intel platform. Telemetry now supports CMT reporting using the collectd daemon. This release adds a new meter to collect the L3 cache usage statistics for each virtual machine. You can enable the cmt plugin with the LibvirtEnabledPerfEvents parameter in the nova-libvirt.yaml file.
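A minimal sketch of the parameter in an environment file; only the cmt event is taken from this release note, and any additional perf events depend on your hardware:
parameter_defaults:
  LibvirtEnabledPerfEvents:
    - cmt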
- Containerization of Telemetry Services
- With this release, Red Hat OpenStack Platform can create a cloud that uses containers to host its services. Each service is isolated within its own container on the host node. Each container connects to and shares the host's own network. As a result, the host node exposes the API ports of each service on its own network. Telemetry services can now be hosted in containers. This makes upgrades easier.
- OpenStack Telemetry Event Storage (panko) Deprecation
- The OpenStack Telemetry Event Storage service is now officially deprecated. Support for panko will be limited to usage from Red Hat CloudForms only. Red Hat does not recommend using panko outside of the Red Hat CloudForms use case. You can use the following options instead of using panko:
- Poll the OpenStack Telemetry Metrics (gnocchi) service instead of polling panko. This gives you access to the resource history.
- Use the OpenStack Telemetry Alarming (aodh) service to trigger alarms when an event arises. You can use OpenStack Messaging Service (zaqar) to store alarms in a queue if an application cannot be reached directly by the OpenStack Telemetry Alarming (aodh) service.
- Telemetry API and ceilometer-collector Deprecation
- The Telemetry API service is now deprecated. It is replaced by the OpenStack Telemetry Metrics (gnocchi) and OpenStack Telemetry Alarming (aodh) service APIs, and you should begin switching to those APIs instead. In Red Hat OpenStack Platform 12, the Telemetry API is disabled by default, with the option to enable it only if required. The ceilometer-collector service is also deprecated. You can now use the ceilometer-notification-agent daemon, because the Telemetry polling agent sends the messages from the sample file to the ceilometer-notification-agent daemon. NOTE: Ceilometer as a whole is not deprecated, just the Telemetry API service and the ceilometer-collector service.
2.17. Technology Previews
Note
2.17.1. New Technology Previews
- Octavia LBaaS
- Octavia is a new component that can be used as a back-end plug-in for the LBaaS v2 API and is intended to replace the current HAProxy-based implementation.
- Open Virtual Network (OVN)
- OVN is an Open vSwitch-based network virtualization solution for supplying network services to instances.
- Red Hat OpenStack Platform for POWER
- You can now deploy pre-provisioned overcloud Compute nodes on IBM POWER8 little endian hardware.
2.17.2. Previously Released Technology Previews
- Benchmarking Service - Introduction of a new plug-in type: Hooks
- Allows test scenarios to run as iterations, and provides timestamps (and other information) about executed actions in the rally report.
- Benchmarking Service - New Scenarios
- Benchmarking scenarios have been added for nova, cinder, magnum, ceilometer, manila, and neutron.
- Benchmarking Service - Refactor of the Verification Component
- Rally Verify is used to launch Tempest. It was refactored to cover a new model: verifier type, verifier, and verification results.
- Block Storage - Highly Available Active-Active Volume Service
- In previous releases, the openstack-cinder-volume service could only run in Active-Passive HA mode. Active-Active configuration is now available as a technology preview with this release. This configuration aims to provide a higher operational SLA and throughput.
- Block Storage - RBD Cinder Volume Replication
- The Ceph volume driver now includes RBD replication, which provides replication capabilities at the cluster level. This feature allows you to set a secondary Ceph cluster as a replication device; replicated volumes are then mirrored to this device. During failover, all replicated volumes are set to 'primary', and all new requests for those volumes will be redirected to the replication device. To enable this feature, use the parameter replication_device to specify a cluster that the Ceph back end should mirror to. This feature requires both primary and secondary Ceph clusters to have RBD mirroring set up between them. For more information, see http://docs.ceph.com/docs/master/rbd/rbd-mirroring/. At present, RBD replication does not feature a failback mechanism. In addition, the freeze option does not work as described, and replicated volumes are not automatically attached/detached to the same instance during failover.
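A minimal sketch of the Ceph back end section in cinder.conf, assuming a secondary cluster with its own configuration file and user; the back end name, paths, and user are illustrative:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,user:cinder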
- CephFS Integration - CephFS Native Driver Enhancements
- The CephFS driver is still available as a Technology Preview, and features the following enhancements:
- Read-only shares
- Access rules sync
- Backwards compatibility for earlier versions of CephFSVolumeClient
- Link Aggregation for Bare Metal Nodes
- This release introduces link aggregation for bare metal nodes. Link aggregation allows you to configure bonding on your bare metal node NICs to support failover and load balancing. This feature requires specific hardware switch vendor support that can be configured from a dedicated neutron plug-in. Verify that your hardware vendor switch supports the correct neutron plug-in. Alternatively, you can manually preconfigure switches to have bonds set up for the bare metal nodes. To enable nodes to boot off one of the bond interfaces, the switches need to support both LACP and LACP fallback (bond links fall back to individual links if a bond is not formed). Otherwise, the nodes will also need a separate provisioning and cleaning network.
- Benchmarking Service
- Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking and profiling. It can be used as a basic tool for an OpenStack CI/CD system that would continuously improve its SLA, performance and stability. It consists of the following core components:
- Server Providers - provide a unified interface for interaction with different virtualization technologies (LXS, Virsh etc.) and cloud suppliers. It does so via ssh access and in one L3 network
- Deploy Engines - deploy an OpenStack distribution before any benchmarking procedures take place, using servers retrieved from Server Providers
- Verification - runs specific set of tests against the deployed cloud to check that it works correctly, collects results & presents them in human readable form
- Benchmark Engine - allows you to write parameterized benchmark scenarios and run them against the cloud.
- Cells
- OpenStack Compute includes the concept of Cells, provided by the nova-cells package, for dividing computing resources. In this release, Cells v1 has been replaced by Cells v2. Red Hat OpenStack Platform deploys a "cell of one" as a default configuration, but does not support multi-cell deployments at this time.
- CephFS Native Driver for Manila
- The CephFS native driver allows the Shared File System service to export shared CephFS file systems to guests through the Ceph network protocol. Instances must have a Ceph client installed to mount the file system. The CephFS file system is included in Red Hat Ceph Storage 2 as a technology preview as well.
- DNS-as-a-Service (DNSaaS)
- Red Hat OpenStack Platform 12 includes a Technology Preview of DNS-as-a-Service (DNSaaS), also known as Designate. DNSaaS includes a REST API for domain and record management, is multi-tenanted, and integrates with OpenStack Identity Service (keystone) for authentication. DNSaaS includes a framework for integration with Compute (nova) and OpenStack Networking (neutron) notifications, allowing auto-generated DNS records. DNSaaS includes integration with the Bind9 back end.
- Firewall-as-a-Service (FWaaS)
- The Firewall-as-a-Service plug-in adds perimeter firewall management to OpenStack Networking (neutron). FWaaS uses iptables to apply firewall policy to all virtual routers within a project, and supports one firewall policy and logical firewall instance per project. FWaaS operates at the perimeter by filtering traffic at the OpenStack Networking (neutron) router. This distinguishes it from security groups, which operate at the instance level.
- Google Cloud Storage Backup Driver (Block Storage)
- The Block Storage service can now be configured to use Google Cloud Storage for storing volume backups. This feature presents an alternative to the costly maintenance of a secondary cloud simply for disaster recovery.
- Object Storage Service - At-Rest Encryption
- Objects can now be stored in encrypted form (using AES in CTR mode with 256-bit keys). This provides options for protecting objects and maintaining security compliance in Object Storage clusters.
- Object Storage Service - Erasure Coding (EC)
- The Object Storage service includes an EC storage policy type for devices with massive amounts of data that are infrequently accessed. The EC storage policy uses its own ring and configurable set of parameters designed to maintain data availability while reducing cost and storage requirements (by requiring about half of the capacity of triple-replication). Because EC requires more CPU and network resources, implementing EC as a policy allows you to isolate all the storage devices associated with your cluster's EC capability.
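A minimal sketch of an erasure-coded storage policy in swift.conf; the policy index, name, EC scheme, and fragment counts are illustrative and must match the ring you build for the policy:
[storage-policy:1]
name = ec104
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4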
- OpenDaylight Integration
- Red Hat OpenStack Platform 12 includes a technology preview of integration with the OpenDaylight SDN controller. OpenDaylight is a flexible, modular, and open SDN platform that supports many different applications. The OpenDaylight distribution included with Red Hat OpenStack Platform 12 is limited to the modules required to support OpenStack deployments using NetVirt, and is based on the upstream Carbon version. For more information, see the Red Hat OpenDaylight Product Guide and the Red Hat OpenDaylight Installation and Configuration Guide.
- Real Time KVM Integration
- Integration of real time KVM with the Compute service further enhances the vCPU scheduling guarantees that CPU pinning provides by reducing the impact of CPU latency resulting from causes such as kernel tasks running on host CPUs. This functionality is crucial to workloads such as network functions virtualization (NFV), where reducing CPU latency is highly important.
- Red Hat SSO
- This release includes a version of the keycloak-httpd-client-install package. This package provides a command-line tool that helps configure the Apache mod_auth_mellon SAML Service Provider as a client of the Keycloak SAML IdP.
Chapter 3. Release Information
3.1. Red Hat OpenStack Platform 12 GA
3.1.1. Enhancements
- BZ#1117883
This update provides the Docker image for the Keystone service.
- BZ#1276147
This update adds support to OpenStack Bare Metal (ironic) for the Emulex hardware iSCSI (be2iscsi) ramdisk.
- BZ#1277652
- BZ#1293435
Uploading to and downloading from Cinder volumes with Glance is now supported with the Cinder backend driver. Note: This update does not include support for Ceph RBD. Use the Ceph backend driver to perform RBD operations on Ceph volumes.
- BZ#1301549
The update adds a new validation to check the overcloud's network environment. This helps avoid any conflicts with IP addresses, VLANs, and allocation pool when deploying your overcloud.
- BZ#1334545
You can now set QoS IOPS limits that scale per GB size of the volume with the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb". For example, if you set the total_iops_sec_per_gb=1000 option, you will get 1000 IOPS for a 1GB volume, 2000 IOPS for a 2GB volume, and so on.
- BZ#1368512
The update adds a new validation to check the hardware resources on the undercloud before a deployment or upgrade. The validation ensures the undercloud meets the necessary disk space and memory requirements prior to a deployment or upgrade.
- BZ#1383576
This update adds an action to "Manage Nodes" through the director UI. This action switches nodes to a "manageable" state so the director can perform introspection through the UI.
- BZ#1406102
Director now supports the creation of custom networks during the deployment and update phases. These additional networks can be used for dedicated network controllers, Ironic baremetal nodes, system management, or to create separate networks for different roles. A single data file ('network_data.yaml') manages the list of networks that will be deployed. The role definition process then assigns the networks to the required roles.
- BZ#1430885
This update increases the granularity of the deployment progress bar. This is achieved with an increase in the nesting level that retrieves the stack resources. This provides more accurate progress of a deployment.
- BZ#1434929
Previously, the OS_IMAGE_API_VERSION and the OS_VOLUME_API_VERSION environment variables were not set, which forced Glance and Cinder to fall back to the default API versions. For Cinder, this was the older v2 API. With this update, the overcloudrc file now sets the environment variables to specify the API versions for Glance and Cinder.
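For illustration, the relevant lines in overcloudrc now look similar to the following; the exact version values depend on your deployment:
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=3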
3.1.2. Technology Preview
- BZ#1300425
With the Manila service, you can now create shares within Consistency Groups to guarantee snapshot consistency across multiple shares. Driver vendors must report this capability and implement its functions to work according to the back end. This feature is not recommended for production cloud environments, as it is still in its experimental stage.
- BZ#1418433
Containerized deployment of the OpenStack File Share Service (manila) is available as a technology preview in this release. By default, Manila, Cinder, and Neutron will still be deployed on bare metal machines.
- BZ#1513109
POWER-8 (ppc64le) Compute support is now available as a technology preview.
3.1.3. Release Notes
- BZ#1463355
When TLS everywhere is enabled, the HAProxy stats interface will also use TLS. As a result, you will need to access the interface through the individual node's ctlplane address, which is either the actual IP address or the FQDN (using the convention <node name>.ctlplane.<domain>, for example, overcloud-controller-0.ctlplane.example.com). This setting can be configured by the `CloudNameCtlplane` parameter in `tripleo-heat-templates`. Note that you can still use the `haproxy_stats_certificate` parameter from the HAproxy class, and it will take precedence if set.
3.1.4. Known Issues
- BZ#1552234
There is currently a known issue where you cannot use ACLs to make a container public for anonymous access. This issue arises when sending `POST` operations to Swift that specify a '*' value in the `X-Container-Read` or `X-Container-Write` settings.
- BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
- BZ#1384845
- BZ#1385347
- BZ#1486995
When using an NFS back end for the Image service (glance), attempting to create an image will fail with a permission error. This is because the user ID on the host and container differ, and also because puppet cannot mount the NFS endpoint successfully on the container.
- BZ#1487920
Encrypted volumes cannot attach correctly to instances in containerized environments. The Compute service runs "cryptsetup luksOpen", which waits for the udev device creation process to finish. This process does not actually finish, which causes the command to hang. Workaround: Restart the containerized Compute service with the docker option "--ipc=host".
- BZ#1508438
For containerized OpenStack services, configuration files are now installed in each container. However, some OpenStack services are not containerized yet, and configuration files for those services are still installed on the bare metal nodes. If you need to access or modify configuration files for containerized services, use /var/lib/config-data/<container name>/<config path>. For services that are not containerized yet, use /etc/<service>.
- BZ#1516911
In HP DL 360/380 Gen9, the DIMM format does not match the regex query. In order to PASS on this, you must cherry-pick the HW patches in comment #2.
- BZ#1519057
- BZ#1519536
You must manually discover the latest Docker image tags for current container images that are stored in Red Hat Satellite. For more information, see the Red Hat Satellite documentation: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_container_images#managing_container_images_with_docker_tags
- BZ#1520004
It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
- BZ#1522872
- BZ#1525520
For deployments using OVN as the ML2 mechanism driver, only nodes with connectivity to the external networks are eligible to schedule the router gateway ports on them. However, there is currently a known issue that will make all nodes as eligible, which becomes a problem when the Compute nodes do not have external connectivity. As a result, if a router gateway port is scheduled on Compute nodes without external connectivity, ingress and egress connections for the external networks will not work; in which case the router gateway port has to be rescheduled to a controller node. As a workaround, you can provide connectivity on all your compute nodes, or you can consider deleting NeutronBridgeMappings, or set it to datacentre:br-ex. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1525520 and https://bugzilla.redhat.com/show_bug.cgi?id=1510879.
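For example, the workaround value mentioned above can be set in an environment file passed to the deploy command:
parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex'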
3.1.5. Deprecated Functionality
- BZ#1417221
The Panko service is officially deprecated in OpenStack version 12. Support for panko will be limited to usage from cloudforms only. We do not recommend using panko outside of the cloudforms use case.
- BZ#1427719
VPN-as-a-Service (VPNaaS) was deprecated in Red Hat OpenStack Platform 11, and has now been removed in Red Hat OpenStack Platform 12.
- BZ#1489801
MongoDB is no longer used by Red Hat OpenStack Platform. Previously, it was used for Telemetry (which now uses Gnocchi) and Zaqar on the undercloud (which is moving to Redis). As a result, 'mongodb', 'puppet-mongodb', and 'v8' are no longer included.
- BZ#1510716
3.2. Red Hat OpenStack Platform 12 Maintenance Release January 2018
3.2.1. Known Issues
- BZ#1552234
There is currently a known issue where you cannot use ACLs to make a container public for anonymous access. This issue arises when sending `POST` operations to Swift that specify a '*' value in the `X-Container-Read` or `X-Container-Write` settings.
- BZ#1469434
When using the Docker CLI to report the state of running containers, the nova_migration_target container might be incorrectly reported as "unhealthy". This is due to an issue with the health check itself, and not with an accurate reflection of the state of the running container.
- BZ#1519057
- BZ#1520004
It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
- BZ#1525520
For deployments using OVN as the ML2 mechanism driver, only nodes with connectivity to the external networks are eligible to schedule the router gateway ports on them. However, there is a known issue that marks all nodes as eligible, which becomes a problem when the Compute nodes do not have external connectivity. As a result, if a router gateway port is scheduled on Compute nodes without external connectivity, ingress and egress connections for the external networks will not work; in which case the router gateway port has to be rescheduled to a controller node. As a workaround, you can provide connectivity on all your compute nodes, or you can consider deleting NeutronBridgeMappings, or set it to datacentre:br-ex. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1525520 and https://bugzilla.redhat.com/show_bug.cgi?id=1510879.
3.3. Red Hat OpenStack Platform 12 Maintenance Release 28 March 2018
3.3.1. Release Notes
- BZ#1488855
Due to the migration from puppet-ceph to ceph-ansible for the management of Ceph using Director, old puppet hieradata (such as ceph::profile::params::osds) needs to be migrated to the ceph-ansible format. Customizations for the Ceph deployment previously passed as hieradata from *ExtraConfig should be removed since they are ignored. Specifically, the deployment will stop if ceph::profile::params::osds is found, to ensure the devices list has been migrated to the format expected by ceph-ansible. Use the CephAnsibleExtraConfig and CephAnsibleDisksConfig parameters to pass arbitrary variables to ceph-ansible, such as devices and dedicated_devices.
3.3.2. Known Issues
- BZ#1552234
There is currently a known issue where you cannot use ACLs to make a container public for anonymous access. This issue arises when sending `POST` operations to Swift that specify a '*' value in the `X-Container-Read` or `X-Container-Write` settings.
3.4. Red Hat OpenStack Platform 12 Maintenance Release 20 August 2018
3.4.1. Enhancements
- BZ#1502860
This update helps operators locate log files after an upgrade from a non-containerized to a containerized deployment. If old log files are present when the upgrade begins, the old files are moved to a new file location. A readme.txt file is placed in the old file location. The file points to the new log file location. For example, if a /var/log/nova directory exists, a /var/log/nova/readme.txt file is created, advising the reader to look in the /var/log/containers/nova directory instead.
- BZ#1517278
This update prevents CPU pinning mismatches during Nova live migrations. Prior to the update, the scheduler did not check whether the guest CPU pinning configuration was supported on the host. A mismatch of CPU pinning caused errors during boot-up of the host. This failed scenario could be repeated over a series of potential hosts. A new condition in the NUMATopologyFilter filter identifies hosts with the proper CPU pinning capability. If no suitable hosts are available, the migration fails quickly with an error message.
- BZ#1547146
This change allows TripleO to deploy Cinder with a Dell EMC VNX backend.
- BZ#1554768
Cinder volume migration between different availability zones is now supported.
- BZ#1566611
Keystone user passwords generated by Heat resources such as WaitConditionHandle now meet more stringent regular expression-based password complexity requirements. The new passwords are 32-character random strings containing at least one uppercase and one lowercase letter, one digit, and one of the characters '!@#%^&*'. These passwords should pass the standard of virtually any regular expression-based password validation. Previously, generated passwords took the form of 32 hexadecimal digits, and thus never contained uppercase letters or special characters.
- BZ#1570941
Virtual CPUs (vCPUs) can be preempted by the hypervisor kernel thread even with strong partitioning in place (isolcpus, tuned). Preemptions are not frequent, a few per second, but with 256 descriptors per virtio queue, just one preemption of the vCPU can lead to packet drop, because the 256 slots are filled during the preemption. This is the case for network functions virtualization (NFV) VMs in which the per queue packet rate is above 1 Mpps (1 million packets per second). This release supports two new tunable options: 'rx_queue_size' and 'tx_queue_size'. Use these options to configure the RX queue size and TX queue size of virtio NICs, respectively, to reduce packet drop.
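For example, a minimal sketch of the corresponding nova.conf settings, assuming both options live in the [libvirt] section; the queue depth of 1024 is an illustrative value:
[libvirt]
# Illustrative values; supported sizes are typically powers of two such as 256, 512, or 1024
rx_queue_size = 1024
tx_queue_size = 1024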
- BZ#1571744
Nova's libvirt driver now allows the specification of granular CPU feature flags when configuring CPU models. One benefit of this change is the alleviation of a performance degradation experienced on guests running certain Intel-based virtual CPU models after application of the "Meltdown" CVE fixes. This guest performance impact is reduced by exposing the CPU feature flag 'PCID' ("Process-Context ID") to the *guest* CPU, provided that the PCID flag is available in the physical hardware itself. For usage details, refer to the documentation of ``[libvirt]/cpu_model_extra_flags`` in the ``nova.conf`` file.
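For illustration, a minimal nova.conf sketch that exposes PCID to guests; the CPU model shown is an assumption and must match a model that your hosts actually support and that lacks the flag by default:
[libvirt]
cpu_mode = custom
# 'IvyBridge' is only an example model; choose one appropriate for your hardware
cpu_model = IvyBridge
cpu_model_extra_flags = pcid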
3.4.2. Release Notes
- BZ#1558148
To reduce the time spent processing security group updates in the L2 agent, conntrack deletion is now performed in a set of worker threads instead of in the main agent thread.
- BZ#1591204
A new configuration option called bridge_mac_table_size has been added for the neutron OVS agent. This value is set on every Open vSwitch bridge managed by the openvswitch-neutron-agent. The value controls the maximum number of MAC addresses that can be learned on a bridge. The default value for this new option is 50,000, which should be enough for most systems. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch.
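As a sketch, the option can be set in the OVS agent configuration; the file path, section name, and value below are assumptions to adapt to your deployment:
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (illustrative)
[ovs]
# Raise the per-bridge MAC learning table limit above the 50,000 default
bridge_mac_table_size = 100000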
3.4.3. Known Issues
- BZ#1519536
You must manually discover the latest Docker image tags for current container images that are stored in Red Hat Satellite. For more information, see the Red Hat Satellite documentation: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_container_images#managing_container_images_with_docker_tags
3.5. Red Hat OpenStack Platform 12 Maintenance Release 5 December 2018
3.5.1. End of Life
Chapter 4. Technical Notes
4.1. RHEA-2017:3462 — Red Hat OpenStack Platform 12.0 Enhancement Advisory
diskimage-builder
- BZ#1489801
MongoDB is no longer used by Red Hat OpenStack Platform. Previously, it was used for Telemetry (which now uses Gnocchi) and Zaqar on the undercloud (which is moving to Redis). As a result, 'mongodb', 'puppet-mongodb', and 'v8' are no longer included.
opendaylight
- BZ#1344429
This update adds support for OpenDaylight, OVS-DPDK, and OpenStack in the NetVirt/OVSDB scenario. This feature allows users to set up virtualized networks for their tenants using OpenDaylight and OVS-DPDK.
- BZ#1414298
This update provides a new package of the OpenDaylight Carbon release that is used within Red Hat OpenStack Platform 12.
- BZ#1414313
With this update, High Availability clustering is enabled for both Neutron and the OpenDaylight controller.
- BZ#1420383
This update replaces the Java-based LevelDB with the JNI-based package, and provides the leveldbjni-all-1.8-15.5.el7ost.x86_64 package.
- BZ#1414431
The new conntrack-based SNAT implementation, enabled by default, uses the Linux netfilter framework to perform NAPT (Network Address Port Translation) and track the connection. The first packet in a flow is passed to netfilter to be translated with the external IP. Subsequent packets use netfilter for further inbound and outbound translation. In netfilter, the router ID is used as the zone ID, and each zone tracks the connection in its own table. The rest of the implementation remains the same. The conntrack mode also enables the new High Availability logic, which takes into account the weight associated with each switch. In addition, the switch always keeps one designated NAPT port open, which improves performance.
- BZ#1450894
This update adds ping6 support to the Neutron router internal interfaces for OpenStack using OpenDaylight.
openstack-cinder
- BZ#1334545
You can now set QoS IOPS limits that scale with the volume size in GB using the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb". For example, if you set total_iops_sec_per_gb=1000, a 1GB volume gets 1000 IOPS, a 2GB volume gets 2000 IOPS, and so on.
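For example, a hedged sketch using the openstack CLI; the QoS spec name 'per-gb-iops' and the volume type 'standard' are illustrative assumptions:
# Create a front-end QoS spec whose IOPS limit scales with volume size, then attach it to a volume type
openstack volume qos create --consumer front-end --property total_iops_sec_per_gb=1000 per-gb-iops
openstack volume qos associate per-gb-iops standard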
openstack-containers
- BZ#1517903
Previously, if containers were shut down unexpectedly, Apache left runtime files in the containers, which caused the containers to stay in a Restarting state after the host came back up. With TLS everywhere, this meant that the Glance and Swift services were unreachable after the host rebooted. This fix adds runtime cleanup to the container image startup scripts. Glance and Swift services now function normally after the host reboots when deployed with TLS everywhere.
openstack-neutron
- BZ#1490281
Some deployments use Neutron provider bridges for internal traffic, such as AMQP traffic, so the provider bridges are brought up at boot and initially behave like normal learning switches. Because ARP broadcast packets cross the patch ports between the integration bridge and the provider bridges, ARP storms could occur if several controllers were turned off ungracefully and then booted up simultaneously. The new systemd service neutron-destroy-patch-ports now runs at boot to remove the patch ports and break the connection between the integration bridge and the provider bridges. This prevents ARP storms; the patch ports are recreated after the openvswitch agent starts.
openstack-panko
- BZ#1417221
The Panko service is officially deprecated in Red Hat OpenStack Platform 12. Support for Panko is limited to usage from CloudForms only. We do not recommend using Panko outside of the CloudForms use case.
openstack-tripleo-common
- BZ#1276147
This update adds support to OpenStack Bare Metal (ironic) for the Emulex hardware iSCSI (be2iscsi) ramdisk.
- BZ#1434929
Previously, the OS_IMAGE_API_VERSION and the OS_VOLUME_API_VERSION environment variables were not set, which forced Glance and Cinder to fall back to the default API versions. For Cinder, this was the older v2 API. With this update, the overcloudrc file now sets the environment variables to specify the API versions for Glance and Cinder.
openstack-tripleo-heat-templates
- BZ#1487920
Encrypted volumes cannot attach correctly to instances in containerized environments. The Compute service runs "cryptsetup luksOpen", which waits for the udev device creation process to finish. This process does not actually finish, which causes the command to hang. Workaround: Restart the containerized Compute service with the docker option "--ipc=host".
- BZ#1513109
POWER-8 (ppc64le) Compute support is now available as a technology preview.
- BZ#1406102
Director now supports the creation of custom networks during the deployment and update phases. These additional networks can be used for dedicated network controllers, Ironic baremetal nodes, system management, or to create separate networks for different roles. A single data file ('network_data.yaml') manages the list of networks that will be deployed. The role definition process then assigns the networks to the required roles.
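For illustration, a single custom network entry in network_data.yaml might look like the following sketch; the network name, VLAN ID, subnet, and allocation pool values are assumptions:
# Illustrative custom network definition
- name: Management
  name_lower: management
  vip: false
  vlan: 60
  ip_subnet: '10.0.1.0/24'
  allocation_pools: [{'start': '10.0.1.4', 'end': '10.0.1.250'}]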
- BZ#1418433
Containerized deployment of the OpenStack File Share Service (manila) is available as a technology preview in this release. By default, Manila, Cinder, and Neutron will still be deployed on bare metal machines.
- BZ#1484467
Running Cinder services on bare metal machines and the Iscsid service in a container caused the services to have different iSCSI Qualified Name (IQN) values. Because the IQN is used to authenticate iSCSI connections, Cinder backup operations failed with an authentication error that was caused by an IQN mismatch. With this fix, the Iscsid service now runs on bare metal, and all other services, such as containerized Nova and non-containerized Cinder, are configured to use the correct IQN.
- BZ#1486995
When using an NFS back end for the Image service (glance), attempting to create an image will fail with a permission error. This is because the user ID on the host and container differ, and also because puppet cannot mount the NFS endpoint successfully on the container.
- BZ#1489484
Previously, the ceph-osd package was part of the common overcloud image, but was available only in a repository that requires the Ceph OSD entitlement. This entitlement is not required on OpenStack Controller and Compute nodes. The RPM dependency created by the ceph-osd package caused yum update to fail when the package was updated without the Ceph OSD entitlement. This fix removes the ceph-osd package from overcloud nodes that do not require it. The ceph-osd package is now only required on Ceph storage nodes, including hyperconverged nodes that run Ceph OSD and Compute services. Yum update now succeeds on nodes that do not require the ceph-osd package. Ceph storage and hyperconverged nodes that require the ceph-osd package still require the Ceph OSD entitlement.
openstack-tripleo-puppet-elements
- BZ#1270860
Using hardcoded machine IDs in templates creates multiple nodes with identical machine IDs. This prevents the Red Hat Storage Console from identifying multiple nodes. Workaround: Generate unique machine IDs on each node and then update the /etc/machine-id file. This will ensure that the Red Hat Storage Console can identify the nodes as unique.
- BZ#1384845
openstack-tripleo-ui
- BZ#1383576
This update adds an action to "Manage Nodes" through the director UI. This action switches nodes to a "manageable" state so the director can perform introspection through the UI.
- BZ#1430885
This update increases the granularity of the deployment progress bar. This is achieved with an increase in the nesting level that retrieves the stack resources. This provides more accurate progress of a deployment.
openstack-tripleo-validations
- BZ#1301549
The update adds a new validation to check the overcloud's network environment. This helps avoid conflicts with IP addresses, VLANs, and allocation pools when deploying your overcloud.
- BZ#1368512
The update adds a new validation to check the hardware resources on the undercloud before a deployment or upgrade. The validation ensures the undercloud meets the necessary disk space and memory requirements prior to a deployment or upgrade.
puppet-ironic
- BZ#1489192
Previously, the DHCP server configuration file for Ironic Inspector did not handle hosts that used UEFI and iPXE, which caused some UEFI and iPXE hosts to fail to boot during Ironic Introspection. This fix updates the DHCP server file `/etc/ironic-inspector/dnsmasq.conf` to handle UEFI and iPXE hosts, and now the hosts can properly boot during Ironic Introspection.
puppet-keystone
- BZ#1404324
The token flush cron job has been modified to run hourly instead of once a day. This was changed because of issues being raised in larger deployments, as the operation would take too long and sometimes even fail because the transaction was too large. Note that this only affects deployments using the UUID token provider.
puppet-tripleo
- BZ#1463355
When TLS everywhere is enabled, the HAProxy stats interface will also use TLS. As a result, you will need to access the interface through the individual node's ctlplane address, which is either the actual IP address or the FQDN (using the convention {node-name}.ctlplane.{domain}, for example, overcloud-controller-0.ctlplane.example.com). This setting can be configured with the `CloudNameCtlplane` parameter in `tripleo-heat-templates`. Note that you can still use the `haproxy_stats_certificate` parameter from the HAProxy class, and it will take precedence if set.
- BZ#1479751
Recent changes in Nova and Cinder resulted in Barbican being selected as the default encryption key manager, even when TripleO is not deploying Barbican. However, TripleO assumes that the legacy (fixed key) manager is active and selected for non-Barbican deployments. This led to broken volume encryption in non-Barbican deployments. This fix modifies the TripleO behavior to now actively configure Nova and Cinder to use the legacy key manager for non-Barbican deployments.
python-glance-store
- BZ#1293435
Uploading to and downloading from Cinder volumes with Glance is now supported with the Cinder backend driver. Note: This update does not include support for Ceph RBD. Use the Ceph backend driver to perform RBD operations on Ceph volumes.
python-openstackclient
- BZ#1478287
When showing the list of Neutron security groups, the Project column referenced the tenant ID instead of the project ID. This caused the Project column to appear blank. This fix changes the behavior of the operation to get the project ID, and now the list of Neutron security groups shows the relevant project ID in the Project column.
python-os-brick
- BZ#1503259
A race condition in the Python os.path.realpath method raised an unexpected exception. This caused an iSCSI disconnect method to unexpectedly fail. With this fix, the race condition exception is ignored. Because the symlink no longer exists, it is safe to ignore this exception. As a result, the disconnect operation succeeds, even when the race condition occurs.
python-tripleoclient
- BZ#1385347
qemu-kvm-rhev
- BZ#1498155
Hot-unplugging Virtual Function I/O (VFIO) devices previously failed when performed after hot-unplugging a vhost network device. This update fixes the underlying code, and the VFIO device is unplugged correctly in the described circumstances.
4.2. RHEA-2018:2331 — Red Hat OpenStack Platform 12.0 Enhancement Advisory August 2018
openstack-tripleo-common
- BZ#1518662
Additional non-controller upgrade attempts after a failed upgrade can fail during service validation if services are not running. To prevent such upgrade failures, you can skip service validation by passing the option "--skip-tags validation" to the Ansible invocation. For example: upgrade-non-controller.sh --upgrade compute-0 --ansible-opts "--skip-tags validation"
- BZ#1527205
- BZ#1549139
- BZ#1552759
openstack-tripleo-heat-templates
- BZ#1559151
- BZ#1559920
The file driver for Gnocchi now works as expected in containerized installations. Previously the host directory was not mounted in the container.
- BZ#1571348
Database credentials are no longer logged when a transient container initializes the MySQL database on disk during a fresh overcloud deployment. Logging verbosity was limited to prevent the logging of database credentials in the container's logs and in the journal.
- BZ#1597313
- BZ#1520453
An error in the NovaSchedulerLoggingSource variable in the puppet/services/nova-conductor.yaml file has been corrected to properly update logs during fluentd configuration. Previously, nova-scheduler.log was tailed twice and nova-conductor.log was not tailed at all.
- BZ#1556720
To prevent failures caused by a gnocchi-upgrade race condition, gnocchi-upgrade is now called from the bootstrap node instead of from multiple nodes. Previously, gnocchi-upgrade was called from each node where gnocchi-api is part of the role. This sometimes resulted in failures with the error shown in the following example:
2018-03-14 12:39:39,683 [1] ERROR oslo_db.sqlalchemy.exc_filters: DBAPIError exception wrapped from (pymysql.err.InternalError) (1050, u"Table 'archive_policy' already exists")
- BZ#1571435
- BZ#1586155
OpenStack director 13 can now successfully deploy an overcloud together with Ceph using OpenStack Platform 12 templates. Prior to this update, Ceph deployment failed during overcloud deployment step 2 because director failed to set the correct Ceph version. OpenStack Platform 12 templates now always deploy the Ceph Jewel release.
- BZ#1597972
This update adds the environment file /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml for OVS-DPDK deployments (for new installations and minor updates). Note: This environment file updates the parameter only for the ComputeOvsDpdk role. If any other custom role is used with OVS-DPDK, extend the environment file to those custom roles as well.
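As a minimal sketch, include the environment file with -e when running the deployment or update command; the bracketed placeholder stands for whatever environment files and options your deployment already uses:
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
  -e [your existing environment files]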
- BZ#1502860
- BZ#1508867
This update adds the service OS::TripleO::Services::NovaMigrationTarget to the service list of the ComputeOvsDpdk role in the roles_data.yaml. Prior to this update, the omission of the service caused Nova live migration to fail on the ComputeOvsDpdk roles. Before starting a minor update, ensure the service is present in the ComputeOvsDpdk role of the roles_data.yaml file.
- BZ#1547146
This change allows TripleO to deploy Cinder with a Dell EMC VNX backend.
- BZ#1585362
The TripleO environment files used for deploying Cinder's Netapp backend have been updated in this release to allow successful deployment of a Cinder Netapp backend. Prior to this update, obsolete data caused the overcloud deployment to fail.
- BZ#1589951
The default age for purging deleted database records has been corrected so that deleted records are purged from Cinder's database. Previously, the CinderCronDbPurgeAge value for Cinder's purge cron job used the wrong value and deleted records were not purged from Cinder's database when they reached the desired default age.
- BZ#1573808
puppet-nova
- BZ#1571744
puppet-tripleo
- BZ#1528632
Prior to this update, running a "stack update" operation on an existing stack to reassess the state of Heat resources caused a failure in the docker-puppet-rabbitmq container, which prevented users from running stack update operations. This update fixes the issue by changing the way puppet configuration is performed in the rabbitmq container.
- BZ#1585149
- BZ#1533511
- BZ#1590953
- BZ#1599410
During a version upgrade, Cinder's database synchronization is now executed only on the bootstrap node. This prevents database synchronization and upgrade failures that occurred when database synchronization was executed on all Controller nodes.
python-os-brick
- BZ#1572572
4.3. RHEA-2018:2332 — Red Hat OpenStack Platform 12.0 Security Advisory August 2018
openstack-nova
- BZ#1570941
Virtual CPUs (vCPUs) can be preempted by the hypervisor kernel thread even with strong partitioning in place (isolcpus, tuned). Preemptions are not frequent, a few per second, but with 256 descriptors per virtio queue, just one preemption of the vCPU can lead to packet drop, because the 256 slots are filled during the preemption. This is the case for network functions virtualization (NFV) VMs in which the per queue packet rate is above 1 Mpps (1 million packets per second). This release supports two new tunable options: 'rx_queue_size' and 'tx_queue_size'. Use these options to configure the RX queue size and TX queue size of virtio NICs, respectively, to reduce packet drop.
- BZ#1558706
Previously, the ability to set an admin password to the metadata service was not implemented for the libvirt driver causing the 'nova get-password' command to return nothing. This release enables setting an admin password to the metadata service for the libvirt driver. The admin password is saved to the metadata service, and the 'nova get-password' command returns that password.
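As an illustrative usage sketch of the command mentioned above; the server name 'myserver' and the private key path are assumptions, and the key is used to decrypt the stored password:
nova get-password myserver ~/.ssh/id_rsa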
- BZ#1563109
This update slows the initial stages of live migrations to eliminate packet loss. Previously, instances with LinuxBridge VIFs experienced packet loss during live migration. Neutron did not have enough time to complete the plugging of the VIFs and related networking infrastructure on the destination during live migration. Live migrations are now initially slowed to ensure Neutron has adequate time to wire up the VIFs on the destination. Once complete, Neutron sends an event to Nova, returning the migration to full speed. This requires Neutron 11.0.4 or greater on Pike when used with LinuxBridge VIFs to pick up the Icb039ae2d465e3822ab07ae4f9bc405c1362afba bugfix.
- BZ#1579785
Prior to this update, to re-discover a compute node record after deleting a host mapping from the API database, the compute node record had to be manually marked as unmapped. Otherwise, a compute node with the same hostname could not be mapped back to the cell from which it was removed. With this update, the compute node record is automatically marked as unmapped when you delete a host from a cell, enabling a compute node with the same hostname to be added to the cell during host discovery.
- BZ#1517278
- BZ#1539703
- BZ#1547578
Prior to this update, a volume detach operation performed under certain failure scenarios could result in the removal of a volume's libvirt definition without full removal of the associated logical volume (LUN) from the host. This allowed Cinder to incorrectly perform subsequent operations while the compute host still had active paths to the device. As of this update, even under a failure scenario, Nova compute attempts to disconnect the LUN from the host. The result is a better release of the logical volume on the host.