Chapter 4. Technical Notes
This chapter supplements the information contained in the text of Red Hat OpenStack Platform "Train" errata advisories released through the Content Delivery Network.
4.1. RHEA-2020:0283 — Red Hat OpenStack Platform 16.0 general availability advisory
The bugs contained in this section are addressed by advisory RHEA-2020:0283. Further information about this advisory is available at https://access.redhat.com/errata/RHEA-2020:0283.html.
Changes to the distribution component:
- In Red Hat OpenStack Platform 16.0, a part of the Telemetry service, the ceilometer client (that was deprecated in an earlier RHOSP release) is no longer supported and has been removed. Note that ceilometer continues to be a part of RHOSP as an agent-only service (no client and no API). (BZ#1518222)
Changes to the openstack-cinder component:
- Previously, when using Red Hat Ceph Storage as a back end for both the Block Storage service (cinder) volumes and backups, any attempt to perform a full backup—after the first full backup—instead resulted in an incremental backup without any warning. In Red Hat OpenStack Platform 16.0, a technology preview has fixed this issue. (BZ#1375207)
- The Red Hat OpenStack Platform Block Storage service (cinder) now automatically changes the encryption keys when cloning volumes. Note that this feature does not currently support Red Hat Ceph Storage as a cinder back end. (BZ#1545700)
Changes to the openstack-glance component:
- Previously, when an encrypted Block Storage service (cinder) volume image was deleted, its corresponding key was not deleted. In Red Hat OpenStack Platform 16.0, this issue has been resolved: when the Image service deletes a cinder volume image, it also deletes the key for the image. (BZ#1481814)
- In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Image service (glance) that pre-caches images so that operators can warm the cache before they boot an instance. (BZ#1706896)
Changes to the openstack-heat component:
- The Red Hat OpenStack Platform Orchestration service (heat) now includes a new resource type, OS::Glance::WebImage, used for creating an Image service (glance) image from a URL using the Glance v2 API. This new resource type replaces an earlier one, OS::Glance::Image. (BZ#1649264)
Changes to the openstack-keystone component:
- Keystone now supports a basic set of default roles (for example, admin, member, and reader) that are present in the system after a Red Hat OpenStack Platform director deployment. These default roles are incorporated into the default director policies across all authorization targets (for example, system, domains, and projects). (BZ#1228474)
Changes to the openstack-manila component:
- In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Shared File Systems service (manila) for IPv6 support in the CephFS NFS driver. (BZ#1575079)
Changes to the openstack-neutron component:
- Red Hat OpenStack Platform deployments that use the Linux bridge ML2 driver and agent are unprotected against Address Resolution Protocol (ARP) spoofing. The version of Ethernet bridge frame table administration (ebtables) that is part of Red Hat Enterprise Linux 8 is incompatible with the Linux bridge ML2 driver.
The Linux bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11 and should not be used.
Red Hat recommends that you instead use the ML2 Open Virtual Network (OVN) driver and services, which Red Hat OpenStack Platform director deploys by default. (BZ#1779221)
- You can now forward traffic from a TCP, UDP, or other protocol port of a floating IP address to a TCP, UDP, or other protocol port associated with one of the fixed IP addresses of a neutron port. Forwarded traffic is managed by an extension to the neutron API and by an OpenStack Networking plug-in. A floating IP address can have more than one forwarding definition configured. However, you cannot forward traffic for IP addresses that have a pre-existing association with an OpenStack Networking port. Traffic can only be forwarded for floating IP addresses that are managed by centralized routers on the network (legacy, HA, and DVR+HA).
To forward traffic for a port of a floating IP address, use the following OpenStack Networking plug-in command:
openstack floating ip port forwarding create --internal-ip-address <internal-ip-address> --port <port> --internal-protocol-port <port-number> --external-protocol-port <port-number> --protocol <protocol> <floating-ip>
--internal-ip-address <internal-ip-address> The fixed IPv4 internal IP address of the neutron port that will receive the forwarded traffic.
--port <port> The name or ID of the neutron port that will receive the forwarded traffic.
--internal-protocol-port <port-number> The protocol port number of the neutron fixed IP address that will receive the forwarded traffic.
--external-protocol-port <port-number> The protocol port number of the port of the floating IP address that will forward its traffic.
--protocol <protocol> The protocol that the port of the floating IP address uses (for example, TCP, UDP).
<floating-ip> The floating IP (IP address or ID) of the port that will forward its traffic.
Here is an example:
openstack floating ip port forwarding create \
    --internal-ip-address 192.168.1.2 \
    --port f7a08fe4-e79e-4b67-bbb8-a5002455a493 \
    --internal-protocol-port 18343 \
    --external-protocol-port 8343 \
    --protocol tcp \
    10.0.0.100

(BZ#1749483)
Changes to the openstack-nova component:
- With this enhancement, support for live migration of instances with a NUMA topology has been added. Previously, this action was disabled by default. It could be enabled using the '[workarounds] enable_numa_live_migration' config option, but this defaulted to False because live migrating such instances resulted in them being moved to the destination host without updating any of the underlying NUMA guest-to-host mappings or the resource usage. With the new NUMA-aware live migration feature, if the instance cannot fit on the destination, the live migration will be attempted on an alternate destination if the request is set up to have alternates. If the instance can fit on the destination, the NUMA guest-to-host mappings will be re-calculated to reflect its new host, and its resource usage updated. (BZ#1222414)
- With this enhancement, support for live migration of instances with attached SR-IOV-based neutron interfaces has been added. Neutron SR-IOV interfaces can be grouped into two categories: direct mode and indirect mode. Direct mode SR-IOV interfaces are directly attached to the guest and exposed to the guest OS. Indirect mode SR-IOV interfaces have a software interface, for example, a macvtap, between the guest and the SR-IOV device. This feature enables transparent live migration for instances with indirect mode SR-IOV devices. As there is no generic way to copy hardware state during a live migration, direct mode migration is not transparent to the guest. For direct mode interfaces, nova mimics the workflow already in place for suspend and resume: it detaches the direct mode interfaces before the migration and re-attaches them after the migration. As a result, instances with direct mode SR-IOV ports lose network connectivity during a migration unless a bond with a live migratable interface is created within the guest.
Previously, it was not possible to live migrate instances with SR-IOV-based network interfaces. This was problematic because live migration is frequently used for host maintenance and similar actions. Instead, the instance had to be cold migrated, which involves downtime for the guest.
With this enhancement, instances with SR-IOV-based network interfaces can now be live migrated. (BZ#1360970)
- In Red Hat OpenStack Platform 16.0, it is now possible to specify QoS minimum bandwidth rules when creating network interfaces. This enhancement ensures that the instance is guaranteed a specified value of a network’s available bandwidth. Currently, the only supported operations are resize and cold migrate. (BZ#1463838)
- The NUMATopologyFilter is now disabled when rebuilding instances. Previously, the filter always executed, and the rebuild succeeded only if a host had enough additional capacity for a second instance using the new image and the existing flavor. This was incorrect and unnecessary behavior. (BZ#1775246)
- You can now configure PCI NUMA affinity on an instance-level basis. This is required to configure NUMA affinity for instances with SR-IOV-based network interfaces. Previously, NUMA affinity was only configurable at a host-level basis for PCI passthrough devices. (BZ#1775575)
- With this enhancement, you can schedule dedicated (pinned) and shared (unpinned) instances on the same Compute node using the following parameters:
NovaComputeCpuDedicatedSet
A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. Replaces the NovaVcpuPinSet parameter, which is now deprecated.
NovaComputeCpuSharedSet
A comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, to determine the host CPUs that unpinned instances can be scheduled to, and to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy, hw:emulator_threads_policy=share. Note: This option previously existed, but its purpose has been extended with this feature.
It is no longer necessary to use host aggregates to ensure these instance types run on separate hosts. Also, the [DEFAULT] reserved_host_cpus config option is no longer necessary and can be unset.
To upgrade:
- For hosts that were previously used for pinned instances, migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet.
- For hosts that were previously used for unpinned instances, migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet.
- If there is no value set for NovaVcpuPinSet, assign all host cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances running there.
Once the upgrade is complete, it is possible to set both options on the same host. However, to do this, the host must first be drained of instances, because nova will not start when cores for an unpinned instance are not listed in NovaComputeCpuSharedSet, and vice versa. (BZ#1693372)
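The parameters above can be sketched in a Compute environment file. This is a minimal illustration: the filename and CPU ranges are hypothetical examples, not recommendations, and must be adjusted to the host topology.

```yaml
# Hypothetical environment file (for example, cpu-pinning.yaml):
# CPUs 2-19 are reserved for pinned instance vCPUs; CPUs 20-23 serve
# unpinned instances and offloaded emulator threads.
parameter_defaults:
  NovaComputeCpuDedicatedSet: '2-19'
  NovaComputeCpuSharedSet: '20-23'
```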
Changes to the openstack-octavia component:
- You can now use the Octavia API to create a VIP access control list (ACL) to limit incoming traffic to a listener to a set of allowed source IP addresses (CIDRs). Any other incoming traffic is rejected. For more information, see "Secure a load balancer with an access control list" in the "Networking Guide." (BZ#1691025)
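As an illustrative sketch of the ACL feature (the load balancer name, listener settings, and CIDRs below are assumed example values, not taken from this advisory), allowed source ranges can be supplied when creating a listener:

```shell
# Hypothetical example: only clients in 192.0.2.0/24 and 198.51.100.0/24
# may reach this listener; traffic from all other sources is rejected.
# 'lb1' is an assumed, pre-existing load balancer.
openstack loadbalancer listener create --name listener1 \
  --protocol HTTP --protocol-port 80 \
  --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24 \
  lb1
```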
- In Red Hat OpenStack Platform 16.0, the Load-balancing service (octavia) now has a technology preview for UDP protocol. (BZ#1703956)
Changes to the openstack-placement component:
- The Placement service has been extracted from the Compute (nova) service. It is now deployed and managed by the director, and runs as an additional container on the undercloud and on overcloud controller nodes. (BZ#1625244)
Changes to the openstack-tripleo-common component:
- In Red Hat OpenStack Platform 16.0, a Workflow service (mistral) task is available as a technology preview that allows you to implement password rotation by doing the following:
- Execute the rotate-password workflow to generate new passwords and store them in the plan environment.
- Redeploy your overcloud.
You can also obtain your passwords after you have changed them.
To implement password rotation, follow these steps:
Note: The workflow task modifies the default passwords. The task does not modify passwords that are specified in a user-provided environment file.
Execute the new workflow task to regenerate the passwords:
$ source ./stackrc
$ openstack workflow execution create tripleo.plan_management.v1.rotate_passwords '{"container": "overcloud"}'
This command generates new passwords for all passwords except for BarbicanSimpleCryptoKek, KeystoneFernet*, and KeystoneCredential*. There are special procedures to rotate these passwords.
You can also rotate only specific passwords. The following command rotates only the specified passwords:
$ openstack workflow execution create tripleo.plan_management.v1.rotate_passwords '{"container": "overcloud", "password_list": ["BarbicanPassword", "SaharaPassword", "ManilaPassword"]}'
Redeploy your overcloud:
$ ./overcloud-deploy.sh
To retrieve the passwords, including the newly generated ones, follow these steps:
Run the following command:
$ openstack workflow execution create tripleo.plan_management.v1.get_passwords '{"container": "overcloud"}'
You should see output from the command, similar to the following:
+--------------------+---------------------------------------------+
| Field              | Value                                       |
+--------------------+---------------------------------------------+
| ID                 | edcf9103-e1a8-42f9-85c1-e505c055e0ed        |
| Workflow ID        | 8aa2ac9b-22ee-4e7d-8240-877237ef0d0a        |
| Workflow name      | tripleo.plan_management.v1.rotate_passwords |
| Workflow namespace |                                             |
| Description        |                                             |
| Task Execution ID  | <none>                                      |
| Root Execution ID  | <none>                                      |
| State              | RUNNING                                     |
| State info         | None                                        |
| Created at         | 2020-01-22 15:47:57                         |
| Updated at         | 2020-01-22 15:47:57                         |
+--------------------+---------------------------------------------+
In the example output above, the value of State is RUNNING. State should eventually read SUCCESS.
Re-check the value of State:
$ openstack workflow execution show edcf9103-e1a8-42f9-85c1-e505c055e0ed
+--------------------+---------------------------------------------+
| Field              | Value                                       |
+--------------------+---------------------------------------------+
| ID                 | edcf9103-e1a8-42f9-85c1-e505c055e0ed        |
| Workflow ID        | 8aa2ac9b-22ee-4e7d-8240-877237ef0d0a        |
| Workflow name      | tripleo.plan_management.v1.rotate_passwords |
| Workflow namespace |                                             |
| Description        |                                             |
| Task Execution ID  | <none>                                      |
| Root Execution ID  | <none>                                      |
| State              | SUCCESS                                     |
| State info         | None                                        |
| Created at         | 2020-01-22 15:47:57                         |
| Updated at         | 2020-01-22 15:48:39                         |
+--------------------+---------------------------------------------+
When the value of State is SUCCESS, you can retrieve passwords:
$ openstack workflow execution output show edcf9103-e1a8-42f9-85c1-e505c055e0ed
You should see output similar to the following:
{
    "status": "SUCCESS",
    "message": {
        "AdminPassword": "FSn0sS1aAHp8YK2fU5niM3rxu",
        "AdminToken": "dTP0Wdy7DtblG80M54r4a2yoC",
        "AodhPassword": "fB5NQdRe37BaBVEWDHVuj4etk",
        "BarbicanPassword": "rn7yk7KPafKw2PWN71MvXpnBt",
        "BarbicanSimpleCryptoKek": "lrC3sGlV7-D7-V_PI4vbDfF1Ujm5OjnAVFcnihOpbCg=",
        "CeilometerMeteringSecret": "DQ69HdlJobhnGWoBC0jM3drPF",
        "CeilometerPassword": "qI6xOpofuiXZnG95iUe8Oxv5d",
        "CephAdminKey": "AQDGVPpdAAAAABAAZMP56/VY+zCVcDT81+TOjg==",
        "CephClientKey": "AQDGVPpdAAAAABAAanYtA0ggpcoCbS1nLeDN7w==",
        "CephClusterFSID": "141a5ede-21b4-11ea-8132-52540031f76b",
        "CephDashboardAdminPassword": "AQDGVPpdAAAAABAAKhsx630YKDhQrocS4o4KzA==",
        "CephGrafanaAdminPassword": "AQDGVPpdAAAAABAAKBojG+CO72B0TdBRR0paEg==",
        "CephManilaClientKey": "AQDGVPpdAAAAABAAA1TVHrTVCC8xQ4skG4+d5A=="
    }
}

(BZ#1600967)
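Since the workflow output is JSON, it can be post-processed with standard tools. A minimal sketch, assuming the output has been saved to a file named passwords.json (a hypothetical filename, not part of this procedure):

```shell
# Write a trimmed copy of the workflow output to a file (assumed step),
# then extract one rotated password from the "message" map with Python's
# standard json module.
cat > passwords.json <<'EOF'
{"status": "SUCCESS", "message": {"AdminPassword": "FSn0sS1aAHp8YK2fU5niM3rxu"}}
EOF
python3 -c 'import json; print(json.load(open("passwords.json"))["message"]["AdminPassword"])'
```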
Changes to the openstack-tripleo-heat-templates component:
- The nova-compute ironic driver tries to update a bare metal (BM) node while the node is being cleaned up. The cleaning takes approximately five minutes, but nova-compute attempts to update the node for only approximately two minutes. After the timeout, nova-compute stops and puts the nova instance into the ERROR state.
As a workaround, set the following configuration option for the nova-compute service:
[ironic]
api_max_retries = 180
As a result, nova-compute continues to attempt to update the BM node for longer and eventually succeeds. (BZ#1647005)
- There is a known issue in Red Hat OpenStack Platform 16.0 where metadata information required for configuring OpenStack instances is not available, and the instances might start without connectivity.
An ordering issue causes the haproxy wrappers not to be updated for the ovn_metadata_agent.
A possible workaround is for the cloud operator to run the following Ansible command to restart the ovn_metadata_agent on selected nodes after the update, to ensure that the ovn_metadata_agent uses an updated version of the haproxy wrapper script:

ansible -b <nodes> -i /usr/bin/tripleo-ansible-inventory -m shell -a "status=\`sudo systemctl is-active tripleo_ovn_metadata_agent\`; if test \"\$status\" == \"active\"; then sudo systemctl restart tripleo_ovn_metadata_agent; echo restarted; fi"

In this command, <nodes> can be a single node (for example, compute-0), all compute nodes (for example, compute*), or "all".

Because the ovn_metadata_agent is most commonly found on compute nodes, the following Ansible command restarts the agent on all compute nodes in the cloud:

ansible -b 'compute*' -i /usr/bin/tripleo-ansible-inventory -m shell -a "status=\`sudo systemctl is-active tripleo_ovn_metadata_agent\`; if test \"\$status\" == \"active\"; then sudo systemctl restart tripleo_ovn_metadata_agent; echo restarted; fi"

After you restart the ovn_metadata_agent services, they use the updated haproxy wrapper script, which enables them to provide metadata to VMs when they are started. Affected VMs that are already running should behave normally when they are restarted after the workaround has been applied. (BZ#1790467)
- Red Hat OpenStack Platform 16.0 director now supports multi-compute cell deployments. With this enhancement, your cloud is better positioned for scaling out, because each individual cell has its own database and message queue on a cell controller, which reduces the load on the central control plane. For more information, see "Scaling deployments with Compute cells" in the "Instances and Images" guide. (BZ#1328124)
- Starting with this update, OSP deployments have full encryption between all the OVN services. All OVN clients (ovn-controller, neutron-server and ovn-metadata-agent) now connect to the OVSDB server using Mutual TLS encryption. (BZ#1601926)
- In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Orchestration service (heat) for rsyslog changes:
- Rsyslog is configured to collect and forward container logs to be functionally equivalent to the fluentd installation.
- Administrators can configure rsyslog log forwarding in the same way as fluentd. (BZ#1623152)
- In Red Hat OpenStack Platform 16.0, you can now use director to specify an availability zone for the Block Storage service (cinder) back end type. (BZ#1700396)
- Red Hat OpenStack Platform director now enables you to deploy an additional node that can be used to add additional Bare Metal Provisioning conductor service resources for system provisioning during deployments. (BZ#1710093)
- In Red Hat OpenStack Platform 16.0, the Elastic Compute Cloud (EC2) API is no longer supported. The EC2 API support is now deprecated in director and will be removed in a future RHOSP release. (BZ#1754560)
- In Red Hat OpenStack Platform 16.0, the controller-v6.yaml file is removed. The routes that were defined in controller-v6.yaml are now defined in controller.yaml. (The controller.yaml file is a NIC configuration file that is rendered from values set in roles_data.yaml.)
Previous versions of Red Hat OpenStack Platform director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane.
To use both default routes, make sure that the controller definition in roles_data.yaml contains both networks in default_route_networks (for example, default_route_networks: ['External', 'ControlPlane']). (BZ#1631508)
- In Red Hat OpenStack Platform 16.0, you can now add custom Red Hat Ceph Storage configuration settings to any section of ceph.conf. Previously, custom settings were allowed only in the [global] section of ceph.conf. (BZ#1666973)
- In Red Hat OpenStack Platform 16.0, a new Orchestration service (heat) deployment parameter is available that enables administrators to turn on the nova metadata service on cell controllers:
parameter_defaults:
  NovaLocalMetadataPerCell: True
This new parameter automatically directs traffic from the OVN metadata agent on the cell computes to the nova metadata API service hosted on the cell controllers.
Depending on the RHOSP topology, the ability to run the metadata service on cell controllers can reduce the traffic on the central control plane. (BZ#1689816)
- In Red Hat OpenStack Platform 16.0, a technology preview has been added to the Orchestration service (heat). A new parameter, NovaSchedulerQueryImageType, controls whether the Compute service (nova) placement and scheduler components query placement for image type support (scheduler/query_placement_for_image_type_support).
When set to true (the default), NovaSchedulerQueryImageType excludes compute nodes that do not support the disk format of the image used in a boot request.
For example, the libvirt driver uses Red Hat Ceph Storage as an ephemeral back end and does not support qcow2 images (without an expensive conversion step). In this case, enabling NovaSchedulerQueryImageType ensures that the scheduler does not send requests to boot a qcow2 image to compute nodes that use Red Hat Ceph Storage. (BZ#1710634)
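A minimal sketch of setting this parameter in a deployment environment file; the filename is a hypothetical example:

```yaml
# Hypothetical environment file (for example, scheduler-image-type.yaml),
# passed to the deploy command with '-e'. true is already the default.
parameter_defaults:
  NovaSchedulerQueryImageType: true
```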
Changes to the puppet-tripleo component:
- Red Hat OpenStack Platform director now offers a technology preview for fence_redfish, a fencing agent for the Redfish API. (BZ#1699449)
Changes to the python-django-horizon component:
- In the Red Hat OpenStack Platform 16.0 dashboard (horizon), there is now a new form for changing a user’s password. This form automatically appears when a user tries to sign on with an expired password. (BZ#1628541)
Changes to the python-networking-ansible component:
- In Red Hat OpenStack Platform 16.0, a technology preview is added to the OpenStack Bare Metal service (ironic) to configure ML2 networking-ansible functionality with Arista Extensible Operating System (Arista EOS) switches. For more information, see "Enabling networking-ansible ML2 functionality," in the Bare Metal Provisioning guide. (BZ#1621701)
- In Red Hat OpenStack Platform 16.0, a technology preview has been added to modify switch ports to put them into trunking mode and assign more than one VLAN to them. (BZ#1622233)
Changes to the python-networking-ovn component:
- In Red Hat OpenStack Platform 16.0, live migrations with OVN enabled now succeed, because the flag live_migration_wait_for_vif_plug is enabled by default. Previously, live migrations failed because the system was waiting for OpenStack Networking (neutron) to send vif_plugged notifications. (BZ#1716335)
- Currently, the OVN load balancer does not open new connections when fetching data from members. The load balancer modifies the destination address and destination port and sends request packets to the members. As a result, it is not possible to define an IPv6 member while using an IPv4 load balancer address, and vice versa. There is currently no workaround for this issue. (BZ#1734301)
Changes to the python-novajoin component:
- Previously, when Novajoin lost its connection to the IPA server, it would immediately attempt to reconnect. Consequently, timing issues could arise and prevent the connection from being re-established. With this update, you can use retry_delay to set the number of seconds to wait before retrying the IPA server connection. As a result, this is expected to help mitigate the timing issues. (BZ#1767481)
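As a sketch, the option is set in the novajoin service configuration. The file path, section name, and value below are assumptions for illustration only:

```ini
# /etc/novajoin/join.conf (assumed path and section)
[DEFAULT]
# Seconds to wait before retrying the IPA server connection (example value).
retry_delay = 10
```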
Changes to the python-oslo-utils component:
- Previously, the regular expression for the oslo.util library was not updated, and it failed to recognize the output format from a newer version of the emulator, qemu (version 4.1.0). This fix in Red Hat OpenStack 16.0 updates the regular expression, and the oslo.util.imageutils library now functions properly. (BZ#1758302)
Changes to the python-tripleoclient component:
- Director has added the openstack undercloud minion install command that you can use to configure an additional host to augment the undercloud services. (BZ#1710089)
- Director now provides the ability to deploy an additional node that you can use to add additional heat-engine resources for deployment-related actions. (BZ#1710092)
- In Red Hat OpenStack Platform 16.0, you can now push, list, delete, and show (show metadata) images on the local registry.
To push images from a remote repository to the main repository:
$ sudo openstack tripleo container image push docker.io/library/centos
To list the contents of the repository:
$ openstack tripleo container image list
To delete images:
$ sudo openstack tripleo container image delete
To show metadata for an image:
$ openstack tripleo container image show

(BZ#1545855)
- With this enhancement, overcloud node deletion requires user confirmation before the action is performed, to reduce the likelihood that the action is performed unintentionally. The openstack overcloud node delete <node> command requires a Y/n confirmation before the action executes. You can bypass this by adding --yes to the command line. (BZ#1593057)
4.2. RHBA-2020:2114 — Red Hat OpenStack Platform 16.0.2 advisory
The bugs contained in this section are addressed by advisory RHBA-2020:2114. Further information about this advisory is available at https://access.redhat.com/errata/RHBA-2020:2114.html.
Changes to the openstack-tripleo-common component:
- This update fixes authentication timeouts caused by slow transfer of container images. Previously, undercloud and overcloud pulls against container sources that require authentication could fail and generate a 401 error if the image transfer exceeded five minutes. Now, if the container fetching process exceeds five minutes, the code attempts to re-authenticate, preventing the timeout. (BZ#1813520)
- A Grafana Ceph 4.1 dependency causes Ceph dashboard bugs. The Ceph dashboard requires Ceph 4.1 and a Grafana container based on ceph4-rhel8. Presently, Red Hat supports ceph3-rhel7.3. This discrepancy causes the following dashboard bugs:
When you navigate to Pools > Overall Performance, Grafana returns the following error:
TypeError: l.c[t.type] is undefined true
When you view a pool’s performance details (Pools > select a pool from the list > Performance Details) the Grafana bar is displayed along with other graphs and values, but it should not be there.
These bugs will be fixed after rebasing to a newer Grafana version. (BZ#1824093)
- This update fixes a bug that caused the upload-puppet-modules command to fail after the first invocation. A recent OpenStack command-line interface update changed the format of the JSON data that OpenStack outputs. The new format broke a script responsible for maintaining an internal URL used by the upload-puppet-modules command. The script has been fixed to correctly handle the JSON data. Now the upload-puppet-modules command functions correctly every time. (BZ#1808369)
Changes to the openstack-tripleo-heat-templates component:
- There is a known issue for the Red Hat OpenStack Platform Load-balancing service: the octavia_api and octavia_driver_agent containers fail to start when a node is rebooted.
The cause of this issue is that the directory /var/run/octavia does not exist when the node is rebooted.
To fix this issue, add the following line to the file /etc/tmpfiles.d/var-run-octavia.conf:
d /var/run/octavia 0755 root root - -
(BZ#1795956)
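The line above can be applied as follows. This is a sketch using standard systemd tooling; once the file is in place, the rule also takes effect automatically on every subsequent boot:

```shell
# Persist the tmpfiles rule, then ask systemd to create the directory now
# rather than waiting for the next reboot.
echo 'd /var/run/octavia 0755 root root - -' | sudo tee /etc/tmpfiles.d/var-run-octavia.conf
sudo systemd-tmpfiles --create /etc/tmpfiles.d/var-run-octavia.conf
```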
- RHOSP 16.0 works only with RHEL 8.1. Ensure that all hosts of your RHOSP deployment are pinned to RHEL 8.1 before running the update. For more information, see "Locking the environment to a Red Hat Enterprise Linux release" in the guide "Keeping Red Hat OpenStack Platform Updated."
- This update fixes a bug that prevented the display of Grafana layouts in Ceph dashboard iframes in high availability scenarios. Previously, the Grafana frontend could not be reached on the storage network, and GET requests got stuck. This fix moves the Grafana frontend to the same network used by the Ceph dashboard. Now the GET requests succeed and the Grafana layouts are available in the Ceph dashboard. (BZ#1815037)
- Director operations involving the RADOS gateway no longer require interaction with puppet-ceph. Previously, tripleo-heat-templates had a dependency on puppet-ceph for the RADOS gateway component deployment. The move to tripleo-ansible eliminates this dependency. (BZ#1695898)
Changes to the python-tripleoclient component:
- This update fixes a bug that caused the openstack overcloud node import command to fail with iPXE disabled (ipxe_enabled=False). With iPXE disabled, you must use the --http-boot argument to specify the location of kernel and ramdisk images (--http-boot /var/lib/ironic/tftpboot). Previously, the openstack overcloud node import command ignored the --http-boot argument, and the nodes failed to deploy. Now the openstack overcloud node import command responds correctly to the --http-boot argument and the nodes are deployed as expected. (BZ#1793175)