4.2. RHEA-2016:0604 - Red Hat OpenStack Platform 8 director Enhancement Advisory
The bugs described in this section are addressed by advisory RHEA-2016:0604. Further information about this advisory is available at https://access.redhat.com/errata/RHEA-2016:0604.html.
instack-undercloud
- BZ#1212158
This update enables OpenStack notifications. Previously, external consumers of OpenStack notifications could not interface with a director-deployed cloud because notifications were not enabled. Now the director enables notifications for external consumers.
- BZ#1223257
A misconfiguration of Ceilometer on the Undercloud caused hardware meters to work incorrectly. This fix provides a valid default Ceilometer configuration. Now Ceilometer hardware meters work as expected.
- BZ#1296295
Running "openstack undercloud install" attempted to delete and recreate the Undercloud's neutron subnet even if the subnet required no changes. If an Overcloud was already deployed, the subnet delete attempt failed since the subnet contained allocated ports. This caused the "openstack undercloud install" command to fail. This fix changes this behavior to only attempt to delete and recreate the subnet if the "openstack undercloud install" command has a configuration change to apply to the subnet. If an Overcloud is already deployed, the same error message still occurs since the director cannot delete the subnet. This is expected behavior though since we do not recommend change the subnet's configuration with an Overcloud already deployed. However, in cases with no subnet configuration changes, the "openstack undercloud install" command no longer fails with this error message.
Running "openstack undercloud install" attempted to delete and recreate the Undercloud's neutron subnet even if the subnet required no changes. If an Overcloud was already deployed, the subnet delete attempt failed since the subnet contained allocated ports. This caused the "openstack undercloud install" command to fail. This fix changes this behavior to only attempt to delete and recreate the subnet if the "openstack undercloud install" command has a configuration change to apply to the subnet. If an Overcloud is already deployed, the same error message still occurs since the director cannot delete the subnet. This is expected behavior though since we do not recommend change the subnet's configuration with an Overcloud already deployed. However, in cases with no subnet configuration changes, the "openstack undercloud install" command no longer fails with this error message.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - BZ#1298189
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - BZ#1315546
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - BZ#1312889
This update removes the Tuskar API service from Red Hat OpenStack Platform director 8. Previously, Tuskar was installed and configured on the Undercloud, including an endpoint in the Keystone service catalog. Now the RPM is no longer installed, the service is not configured, and the endpoint is not created in the service catalog.
openstack-ironic-inspector
- BZ#1282580
The director includes new functionality to allow automatic profile matching. Users can specify automatic matching between nodes and deployment roles based on data available from the introspection step. Users now use ironic-inspector introspection rules and new python-tripleoclient commands to assign profiles to nodes.
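For example, the workflow looks roughly like the following (the rules file path is a placeholder):

    # Import introspection rules that assign profiles based on introspected data:
    $ openstack baremetal introspection rule import rules.json
    # Introspect the nodes; matching rules tag them with a profile:
    $ openstack baremetal introspection bulk start
    # Review the resulting node-to-profile assignments:
    $ openstack overcloud profiles list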
- BZ#1270117
Previously, periodic iptables calls made by Ironic Inspector did not contain the -w option, which instructs iptables to wait for the xtables lock. As a consequence, periodic iptables updates occasionally failed. This update adds the -w option to the iptables calls, which prevents the periodic iptables updates from failing.
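The difference is equivalent to the following (the rule shown is illustrative):

    # Without -w, this fails immediately if another process holds the xtables lock:
    $ iptables -I INPUT -p udp --dport 67 -j ironic-inspector
    # With -w, iptables waits for the lock instead of failing:
    $ iptables -w -I INPUT -p udp --dport 67 -j ironic-inspector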
openstack-ironic-python-agent
- BZ#1283650
Log processing in the introspection ramdisk did not take into account non-Latin characters in logs. Consequently, the "logs" collector failed during introspection. With this update, log processing has been fixed to properly handle any encoding.
- BZ#1314642
The director uses a new ramdisk for inspection and deployment. This ramdisk included a new algorithm to pick the default root device for users not using root device hints. However, the chosen root device could change on redeployment, leading to failures. This fix reverts the ramdisk device logic to match OpenStack Platform director 7. Note that this does not mean the default root device is the same, as device names are not reliable. This behavior will also change again in a future release. Make sure to use root device hints if your nodes use multiple hard drives.
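For example, a root device hint can pin the root disk by serial number (the node UUID and serial below are placeholders):

    $ ironic node-update <node-uuid> add properties/root_device='{"serial": "61866da04f380d001ea4e13c121e4d6a"}'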
openstack-tripleo-heat-templates
- BZ#1295830
Pacemaker used a 100s timeout for service resources. However, a systemd timeout requires an additional timeout period after the initial timeout to accommodate a SIGTERM and then a SIGKILL. This fix increases the Pacemaker timeout to 200s to accommodate two full systemd timeout periods. Now the timeout period is long enough for systemd to perform a SIGTERM and then a SIGKILL.
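On a deployed controller, the operation timeouts can be inspected or adjusted with pcs (the resource name is illustrative):

    # Show the configured operations, including start/stop timeouts:
    $ pcs resource show openstack-nova-api
    # Set the 200s timeouts explicitly if needed:
    $ pcs resource update openstack-nova-api op start timeout=200s op stop timeout=200s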
- BZ#1311005
The notify=true parameter was previously missing from the RabbitMQ Pacemaker resource. Consequently, RabbitMQ instances were unable to rejoin the RabbitMQ cluster. This update adds support for notify=true to the pacemaker resource agent for RabbitMQ, and adds notify=true to OpenStack director. As a result, RabbitMQ instances are now able to rejoin the RabbitMQ cluster.
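On an existing cluster, the option can be verified or set manually with pcs (the clone resource name may differ):

    # Confirm notify=true appears in the clone's meta attributes:
    $ pcs resource show rabbitmq-clone
    # Set it by hand if absent:
    $ pcs resource meta rabbitmq-clone notify=true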
- BZ#1283632
The 'ceilometer' user lacked a role needed for some functionality, which caused some Ceilometer meters to function incorrectly. This fix adds the necessary role to the 'ceilometer' user. Now all Ceilometer meters work correctly.
- BZ#1299227
Prior to this update, the swift_device and swift_proxy_memcache URIs used for the swift ringbuilder and the swift proxy memcache server respectively were not properly formatted for IPv6 addresses, lacking the expected '[]' delimiting the IPv6 address. As a consequence, when deploying with IPv6 enabled for the overcloud, the deploy failed with "Error: Parameter name failed on Ring_object_device ...". Now, when IPv6 is enabled, the IP addresses used as part of the swift_device and swift_proxy_memcache URIs are correctly delimited with '[]'. As a result, deploying with IPv6 no longer fails on incorrect formatting for swift_device or swift_proxy_memcache.
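For example, a memcache server entry is now rendered as follows (the addresses are illustrative):

    192.0.2.10:11211                    # IPv4 form
    [fd00:fd00:fd00:2000::10]:11211     # IPv6 form, now correctly bracketed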
- BZ#1238807
This enhancement enables the distribution of per-node hieradata, matching the nodes from their UUID (as reported by 'dmidecode'). This allows you to scale CephStorage across nodes equipped with a different number/type of disks. As a result, CephStorage nodes can now be configured with non-homogeneous disk topologies. This is done by provisioning a different configuration hash for the ceph::profile::params::osds parameter.
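A minimal sketch, assuming the NodeDataLookup parameter used for per-node hieradata (the system UUIDs, as reported by 'dmidecode -s system-uuid' on each node, and the disk layouts are placeholders):

    # ceph-per-node.yaml
    parameter_defaults:
      NodeDataLookup: |
        {"32E87B4C-C4A7-418E-865B-191684A6883B":
           {"ceph::profile::params::osds": {"/dev/sdc": {}, "/dev/sdd": {}}},
         "EA6256CE-9D5C-4A2F-B9B1-0E6AEFF4CCE9":
           {"ceph::profile::params::osds": {"/dev/sdb": {}}}}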
- BZ#1242396
Previously, the os-collect-config utility only printed Puppet logs after Puppet had finished running. As a consequence, Puppet logs were not available for Puppet runs that were in progress. With this update, logs for Puppet runs are available even when a Puppet run is in progress. They can be found in the /var/run/heat-config/deployed/ directory.
- BZ#1266104
This update adds the neutron QoS (Quality of Service) extension to provide better control over tenant networking qualities and limits. Overclouds are now deployed with the Neutron QoS extension enabled.
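After deployment, the extension can be exercised with the Liberty-era neutron client along these lines (the policy name, limits, and port ID are illustrative):

    $ neutron qos-policy-create bw-limiter
    $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
    $ neutron port-update <port-id> --qos-policy bw-limiter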
- BZ#1320454
- BZ#1279615
This update allows enabling of the Neutron L2 population feature. This helps reduce the amount of broadcast traffic in Tenant networks. Set the NeutronEnableL2Pop parameter in an environment file's 'parameter_defaults' section to enable Neutron L2 population.
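A minimal environment file might look like this (the file name is arbitrary):

    # l2pop.yaml
    parameter_defaults:
      NeutronEnableL2Pop: 'True'

    # Then include it in the deployment:
    # openstack overcloud deploy --templates -e l2pop.yaml [...]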
- BZ#1225163
The director now properly enables notifications for external consumers.
- BZ#1259003
The domain name for overcloud nodes defaulted to 'localdomain'. For example: 'overcloud-compute-0.localdomain'. This enhancement provides a parameter (CloudDomain) to customize the domain name. Create an environment file with the CloudDomain parameter included in the 'parameter_defaults' section. If no domain name is defined, the Heat templates default to 'localdomain'.
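For example (the domain is a placeholder):

    # cloud-domain.yaml
    parameter_defaults:
      CloudDomain: 'example.com'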
- BZ#1273303
The director now supports the OpenStack Networking 'enable_isolated_metadata' option. This option allows VMs on isolated networks, or on networks using external routers, to access instance metadata.
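Assuming the corresponding Heat parameter (NeutronEnableIsolatedMetadata) is used to expose this option, an environment file would look like:

    # isolated-metadata.yaml
    parameter_defaults:
      NeutronEnableIsolatedMetadata: 'True'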
- BZ#1308422
Previously, '/v2.0' was missing from the end of the URL specified in the admin_auth_url setting in the [neutron] section of /etc/nova/nova.conf. This would prevent Nova from being able to boot instances because it could not connect to the Keystone catalog to query for the Neutron service endpoint to create and bind the port for instances. Now, '/v2.0' is correctly added to the end of the URL specified in the admin_auth_url setting, allowing instances to be started successfully after deploying an overcloud with the director.
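The corrected setting on an overcloud node looks like this (the address is illustrative):

    $ grep admin_auth_url /etc/nova/nova.conf
    admin_auth_url=http://192.0.2.11:35357/v2.0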
- BZ#1298247
- BZ#1266219
The Director can now deploy the Block Storage service with a Dell EqualLogic or Dell Storage Center appliance as a back end. For more information, see the Dell EqualLogic Back End Guide (https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/dell-equallogic-back-end-guide/) and the Dell Storage Center Back End Guide (https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/dell-storage-center-back-end-guide/dell-storage-center-back-end-guide).
os-cloud-config
- BZ#1288475
A bug in the Identity service's endpoint registration code failed to mark the Telemetry service as SSL-enabled. This prevented the Telemetry service endpoint from being registered as HTTPS. This update fixes the bug: the Identity service now correctly registers Telemetry, and Telemetry traffic is now encrypted as expected.
- BZ#1319878
When using Linux kernel mode for bridges and bonds (as opposed to Open vSwitch), the physical device was not detected for the VLAN interfaces. This, in turn, prevented the VLAN interfaces from working correctly. With this release, the os-net-config utility automatically detects the physical interface for a VLAN as long as the VLAN is a member of the physical bridge (that is, the VLAN must be in the 'members:' section of the bridge). As such, VLAN interfaces now work properly with both OVS bridges and Linux kernel bridges.
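A sketch of such a template fragment, with illustrative names, VLAN ID, and addressing:

    network_config:
      - type: linux_bridge
        name: br-storage
        members:
          - type: interface
            name: nic2
            primary: true
          - type: vlan
            vlan_id: 30
            addresses:
              - ip_netmask: 192.0.2.20/24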
- BZ#1316730
In previous releases, when VLAN interfaces were placed directly on a Linux kernel bond with no bridge, it was possible for the VLANs to start before the bond. When this occurred, the VLANs failed to start. With this release, the os-net-config utility now starts the physical network (namely, bridges first, then bonds and interfaces) before VLANs. This ensures that the VLANs have the interfaces necessary to start properly.
python-rdomanager-oscplugin
- BZ#1271250
In previous releases, a bug made it possible for failed nodes to be marked as available. Whenever this occurred, deployments failed because nodes were not in a proper state. This update backports an upstream patch to fix the bug.
python-tripleoclient
- BZ#1288544
Previously, bulk introspection only printed on-screen errors, but never returned a failure status code. This prevented introspection failures from being detected. This update changes the status code of errors to non-zero, which ensures that failed introspections can now be detected through their status codes.
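Scripts can now rely on the exit status, for example:

    $ openstack baremetal introspection bulk start \
        || { echo "introspection failed on at least one node"; exit 1; }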
- BZ#1261920
Previously, bulk introspection operated on nodes currently in maintenance mode. This could cause introspection to fail, or even break node maintenance (depending on the reason for node maintenance). With this release, bulk introspection now ignores nodes in maintenance mode.
- BZ#1246589
In older deployments using the python-rdomanager-oscplugin (not the python-tripleoclient) for Overcloud deployment, the dhcp_agents_per_network parameter for neutron was set to a minimum of 3, even for a non-HA single Controller deployment. This meant dhcp_agents_per_network was set to 3 when deploying with only 1 Controller. This fix takes the single Controller case into account: the director sets at most 3 dhcp_agents_per_network and never more than the number of Controllers. Now if you deploy in HA with 3 or more Controller nodes, the dhcp_agents_per_network parameter in neutron.conf on those Controller nodes is set to '3'. Alternatively, if you deploy in non-HA with only 1 Controller, the same parameter is set to '1'.
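The resulting value can be checked on a Controller node:

    # Expect 3 on an HA deployment with three or more Controllers, 1 otherwise:
    $ grep dhcp_agents_per_network /etc/neutron/neutron.conf
    dhcp_agents_per_network = 3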
rhel-osp-director
- BZ#1293979
Updating packages on the Undercloud left the Undercloud in an indeterminate state. This meant some Undercloud services were disabled after the package update and could not start again. As a workaround, run 'openstack undercloud install' to reconfigure all Undercloud services. After the command completes, the Undercloud services operate normally.
- BZ#1234601
- BZ#1249601
OpenStack Bare Metal (ironic) now supports deploying nodes in UEFI mode. This is due to requests from customers with servers that only support UEFI boot.
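To deploy a node in UEFI mode, the boot mode is set as a capability on the node and on the matching flavor, roughly as follows (the node UUID and flavor name are placeholders):

    $ ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi'
    $ nova flavor-key baremetal set capabilities:boot_mode='uefi'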
- BZ#1236372
A misconfiguration of the health check for Nova EC2 API caused HAProxy to believe the API was down. This meant the API was unreachable through HAProxy. This fix corrects the health check to query the API service state correctly. Now the Nova EC2 API is reachable through HAProxy.
- BZ#1265180
The director requires the 'baremetal' flavor, even if unused. Without this flavor, the deployment fails with an error. Now the Undercloud installation automatically creates the 'baremetal' flavor. With the flavor in place, the director does not report the error.
- BZ#1318583
Previously, the os_tenant_name variable in the Ceilometer configuration was incorrectly set to the 'admin' tenant instead of the 'service' tenant. This caused the ceilometer-central-agent to fail with the error "ERROR ceilometer.agent.manager Skipping tenant, keystone issue: User 739a3abf8504498e91044d6d2a6830b1 is unauthorized for tenant d097e6c45c494c2cbef4071c2c273a58". Now, Ceilometer is correctly configured to use the 'service' tenant.
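The corrected value can be verified in the Telemetry configuration:

    $ grep os_tenant_name /etc/ceilometer/ceilometer.conf
    os_tenant_name=service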
- BZ#1315467
Previously, the undercloud upgrade did not restart the openstack-nova-api service. As a result, subsequent overcloud upgrades failed with a timeout, reporting the error "ERROR: Timed out waiting for a reply to message ID 84a44ca3ed724eda991ba689cc364852". Now, the openstack-nova-api service is correctly restarted as part of the undercloud upgrade process, allowing the overcloud upgrade to proceed without encountering this timeout.
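On underclouds upgraded before this fix, the service can simply be restarted by hand:

    $ sudo systemctl restart openstack-nova-api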