Chapter 3. Release Information
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update.
3.1. Red Hat OpenStack Platform 14 GA
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
3.1.1. Enhancements
This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1241017
This update adds hostname and network name to the output of the 'openstack port list' command. The additional information makes it easier to associate Neutron ports and IP addresses with a particular host.
BZ#1402584
BZ#1410195
Heat templates now include the `CephClusterName` parameter. This parameter enables you to customize the Ceph cluster name, which you might need to do if you use an external Ceph cluster or a Ceph RBDMirror.
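A minimal environment-file sketch of this parameter; the cluster name `ceph2` here is only an example value:

```yaml
parameter_defaults:
  # Assumed example: match the name of your external Ceph cluster
  CephClusterName: ceph2
```

Pass the environment file to `openstack overcloud deploy` with `-e` as with any other parameter override.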
BZ#1462048
With this update, users can create application credentials to allow their applications to authenticate to keystone.
See https://docs.openstack.org/keystone/latest/user/application_credentials.html
BZ#1469073
BZ#1512941
This update supports two new tunable options, 'rx_queue_size' and 'tx_queue_size', that can be used to reduce packet drop.
Virtual CPUs (vCPUs) can be preempted by the hypervisor kernel thread even with strong partitioning in place (isolcpus, tuned). Preemptions are not frequent, a few per second, but with 256 descriptors per virtio queue, just one preemption of the vCPU can lead to packet drop, because the 256 slots are filled during the preemption. This is the case for network functions virtualization (NFV) VMs in which the per-queue packet rate is above 1 Mpps (1 million packets per second).
Use these options to configure the RX queue size and TX queue size of virtio NICs, respectively, to reduce packet drop.
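A sketch of how these options might look in the `[libvirt]` section of `nova.conf` on Compute nodes; the value `1024` is an example queue depth, not a recommendation:

```ini
[libvirt]
# Larger virtio queues absorb short vCPU preemptions (example values)
rx_queue_size = 1024
tx_queue_size = 1024
```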
BZ#1521176
Instance resources have three new attributes, `created_at`, `launched_at`, and `deleted_at`, to track the exact times that Nova creates, launches, and deletes instances.
BZ#1523328
OpenStack director now uses Ansible for software configuration of the overcloud nodes. Ansible provides a more familiar and debuggable operator experience during overcloud deployment. Ansible is used to replace the communication and transport of the software configuration deployment data between heat and the heat agent (os-collect-config) on the overcloud nodes.
Instead of os-collect-config running on each overcloud node and polling for deployment data from heat, the Ansible control node applies the configuration by running an ansible-playbook with an Ansible inventory file and a set of playbooks and tasks. The Ansible control node (the node running ansible-playbook) is the undercloud by default.
BZ#1547708
OpenStack Sahara now supports Cloudera Distribution Hadoop (CDH) plugin 5.13.
BZ#1547710
This update adds support for S3-compatible object stores in OpenStack Sahara.
BZ#1547954
With this release, Nova's libvirt driver now allows the specification of granular CPU feature flags when configuring CPU models.
One benefit of this change is the alleviation of a performance degradation experienced on guests running with certain Intel-based virtual CPU models after application of the "Meltdown" CVE fixes. This guest performance impact is reduced by exposing the CPU feature flag 'PCID' ("Process-Context ID") to the *guest* CPU, assuming that the PCID flag is available in the physical hardware itself.
For usage details, refer to the documentation of ``[libvirt]/cpu_model_extra_flags`` in ``nova.conf``.
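A minimal `nova.conf` sketch of the PCID use case described above; `IvyBridge` is an example CPU model that lacks PCID by default, and your hardware must actually expose the flag:

```ini
[libvirt]
cpu_mode = custom
# Example model; choose one matching your host hardware
cpu_model = IvyBridge
# Expose PCID to guests to mitigate Meltdown-fix overhead
cpu_model_extra_flags = pcid
```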
BZ#1562171
This update introduces multi-tenant bare metal networking with the "neutron" network interface.
By configuring the bare metal nodes with the "neutron" network interface, an operator can enable users to use isolated VLAN networks for provisioning and tenant traffic on bare metal nodes.
BZ#1639759
BZ#1654123
Red Hat OpenStack Platform 14 is now supported on IBM POWER9 CPUs. This support is provided with the `rhosp-director-images-ppc64lep9` and `rhosp-director-images-ipa-ppc64lep9` packages.
3.1.2. Technology Preview
The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.
BZ#1033180
This release adds a Technology Preview of the ability to attach a volume to multiple hosts or servers simultaneously in both cinder and nova with read/write (RW) mode when this feature is supported by the back end driver. This feature addresses the clustered application workloads use case that typically requires active/active or active/standby scenarios.
BZ#1550668
This feature enables forwarding tenant traffic based on DSCP marking from tenants encapsulated in the VXLAN IP header. This feature is a Technology Preview in OSP 14.
BZ#1614282
You can now configure automatic restart of instances on a Compute node if the compute node reboots without first migrating the instances. Nova and the libvirt-guests agent can be configured to gracefully shut down the instances and start them when the Compute node reboots.
New parameters:
NovaResumeGuestsStateOnHostBoot (True/False)
NovaResumeGuestsShutdownTimeout (default 300s)
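An environment-file sketch showing both parameters together; the values simply restate the defaults described above:

```yaml
parameter_defaults:
  # Resume guests that were running when the Compute node rebooted
  NovaResumeGuestsStateOnHostBoot: true
  # Seconds to wait for a graceful guest shutdown (default 300)
  NovaResumeGuestsShutdownTimeout: 300
```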
3.1.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1601613
The default value of `--http-boot` changed from `/httpboot` to `/var/lib/ironic/httpboot`, as containerized Ironic services expect.
BZ#1614810
With this update, logrotate's copytruncate is used by default for containerized service log rotation. The default period to keep old logs remains unchanged (14 days).
BZ#1640095
OpenStack Rally, previously included as a Technology Preview, is removed from this release.
BZ#1649679
When you use the web-download feature, the staging area, defined in the configuration using the `node_staging_uri` option, is not cleaned up properly. Ensure that `file` is part of the `stores` configuration option in the `glance_store` section of the glance-api.conf file.
BZ#1654405
When you use the image conversion feature, ensure that `file` is part of the `stores` configuration option in the `glance_store` section of the glance-api.conf file.
BZ#1654408
For glance image conversion, the glance-direct method is not enabled by default. To enable this feature, set `enabled_import_methods` to `[glance-direct,web-download]` or `[glance-direct]` in the DEFAULT section of glance-api.conf.
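The setting described above would look like this in `glance-api.conf`:

```ini
[DEFAULT]
# Enable both import methods; use [glance-direct] to enable only one
enabled_import_methods = [glance-direct,web-download]
```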
BZ#1654413
Glance image conversion is not enabled by default on a new install of Red Hat OpenStack Platform 14. To use this feature, edit the glance-image-import.conf file.
In the image_import_opts section, insert the following line:
image_import_plugins = ['image_conversion']
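In context, the resulting `glance-image-import.conf` fragment is:

```ini
[image_import_opts]
# Enable the image conversion plugin for imported images
image_import_plugins = ['image_conversion']
```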
BZ#1662042
OpenDaylight does not support IPv6 for tenant or provider networks. Therefore, use only IPv4 networks. You may experience issues related to floating IPs if IPv6 networks are used along with IPv4 networks.
3.1.4. Known Issues
These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1516911
The OvsDpdkMemoryChannels parameter cannot be derived through the DPDK derive parameters workflow. The value is set to 4 by default. You can change that value in your custom environment file to match your hardware.
BZ#1579052
When Octavia is configured to use a small Nova flavor, Amphorae (Nova instances) are created successfully but load balancers can get stuck in PENDING state for about 25 minutes. Instead, the load balancer should go to ERROR state and the Amphorae should be deleted.
As a workaround for small Nova flavors, tune the Octavia configuration options "connection_max_retries", "connection_retry_interval", "build_active_retries", and "build_retry_interval" in the [haproxy_amphora] section to more reasonable production values. This causes load balancers to transition from PENDING to ERROR state faster with a small Nova flavor.
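A sketch of the `[haproxy_amphora]` section in the Octavia configuration; the retry counts and intervals below are illustrative assumptions only, to be tuned for your environment:

```ini
[haproxy_amphora]
# Example values only: fewer retries fail misbehaving amphorae sooner
connection_max_retries = 25
connection_retry_interval = 5
build_active_retries = 25
build_retry_interval = 5
```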
BZ#1630480
Workflow triggers for generating OpenStack rc files are hardcoded in python-tripleoclient.
As a result, OpenStack-specific workflows are triggered after director deploys OpenShift. Users see OpenStack-specific URLs in stdout, and OpenStack rc files are created.
BZ#1639495
There is currently a known issue with fernet token rotation where the keys are not automatically deployed onto the overcloud. The workflow task `tripleo.fernet_keys.v1.rotate_fernet_keys` generates the keys but they are not successfully pushed to the overcloud. This issue is expected to be addressed in a future release. If you plan to perform rotation before this update, you can choose to follow one of these workarounds:
* Start os-collect-config on the overcloud nodes before running the rotation. You can then stop it afterwards if you do not need it for anything else.
* Enable os-collect-config on all overcloud nodes. You can choose to disable it once the update with the fix is released.
NOTE: If you do not need to rotate keys before the update comes out, then you do not need to do anything.
BZ#1640021
BZ#1640382
BZ#1640804
When you restart all three controller nodes, it might not be possible to launch tenant instances in the overcloud. A "DuplicateMessageError" message is logged in the overcloud logs.
As a workaround, on one of the overcloud controllers, run this command:
pcs resource restart rabbitmq-bundle
BZ#1643657
Director sets up port 443 on the HAProxy instance running on master nodes for proxying requests to the routers on Infra nodes. As a result, port 443 cannot be used on OpenShift master nodes for binding the OpenShift API, and the OpenShift API cannot be configured on port 443 in a director-deployed OpenShift environment.
BZ#1644889
BZ#1646707
In some OVS versions, `updelay` and `downdelay` bond settings are ignored, and the default settings are always used.
BZ#1647005
BZ#1652444
The `neutron_driver` parameter has the value `null` in the containers-prepare-parameter.yaml file. This might cause minor updates to the overcloud to fail in OpenDaylight deployments.
Workaround: Before you update the overcloud, set the value of the `neutron_driver` parameter to `odl`.
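A sketch of the workaround in `containers-prepare-parameter.yaml`; the surrounding `ContainerImagePrepare` structure is assumed to match a standard director-generated file:

```yaml
parameter_defaults:
  ContainerImagePrepare:
  - set:
      # Select OpenDaylight container images during image prepare
      neutron_driver: odl
```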
BZ#1653348
Scaling out a director-deployed OpenShift environment with an additional OpenShift master node fails with a message similar to: "The field 'vars' has an invalid value, which includes an undefined variable. The error was: 'openshift_master_etcd_urls' is undefined…"
BZ#1653466
Scaling out a director-deployed OpenShift environment with CNS enabled with an additional Infra node fails with a message similar to the following: "fatal: [openshift-master-2]: FAILED! => {"changed": false, "msg": "Error mounting /tmp/openshift-glusterfs-registry-c8qImT: Mount failed."
BZ#1659183
BZ#1660066
Director does not support triggering Red Hat Enterprise Linux OS and OpenShift Container Platform updates on director-deployed OpenShift environments. As a result, minor updates cannot be applied to director-deployed OpenShift environments.
BZ#1660475
After config-download generates the playbooks for the overcloud, running ansible-playbook with the --check parameter does not work. Expect an error about undefined stdout for ftype. This will be fixed in a future version.
BZ#1664165
BZ#1664698
A recent change made memory allocation for instances with NUMA topologies pagesize aware. With this change, memory for instances with NUMA topologies can no longer be oversubscribed.
Memory oversubscription is currently disabled for all instances with a NUMA topology, whereas previously only instances with hugepages were not allowed to use oversubscription. This affects instances with an explicit NUMA topology and those with an implicit topology. An instance can have an implicit NUMA topology due to the use of hugepages or CPU pinning.
If possible, avoid the use of explicit NUMA topologies. If CPU pinning is required, resulting in an implicit NUMA topology, there is no workaround.
3.1.5. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1668219
OpenDaylight was first made available in OSP 13 and is being deprecated in OSP 14.
Our combined OpenDaylight in OpenStack solution will no longer accept new feature enhancements, and we advise those who were looking for an OpenDaylight-integrated solution from Red Hat to seek alternatives.
OpenDaylight will continue to be supported and receive bug fixes for the duration of the OSP 14 deprecation cycle, with support planned to be completely dropped by the end of the OSP 13 lifecycle (June 27, 2021).
3.2. Red Hat OpenStack Platform 14 Maintenance Release - March 13, 2019
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
3.2.1. Enhancements
This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1645489
This enhancement adds the boolean parameter `NovaLibvirtVolumeUseMultipath`, which provides a value for the multipath configuration parameter `libvirt/volume_use_multipath` in the `nova.conf` file for Compute nodes. You can set this parameter for each Compute role. The default value is `False`.
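An environment-file sketch enabling the parameter described above:

```yaml
parameter_defaults:
  # Sets libvirt/volume_use_multipath in nova.conf on Compute nodes
  NovaLibvirtVolumeUseMultipath: true
```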
BZ#1658484
This enhancement sets the number of RPC workers to `1` by default in OVN tripleo deployments. The goal of this setting is to reduce the number of workers to save memory resources and the number of connections to OVSDB, in cases where the Neutron DHCP agent is not deployed alongside OVN services.
BZ#1673172
This enhancement adds the networking-ansible heat parameter `IronicDefaultNetworkInterface`, which determines the value of the `default_network_interface` parameter in the `ironic.conf` configuration file. This value is set to the `neutron` interface by default, which enables virtual networking through Neutron on bare metal nodes.
Note: The switches attached to the bare metal nodes must be programmable by the networking service if the `default_network_interface` is set to `neutron`.
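An environment-file sketch of this parameter; `flat` is shown as an example alternative for environments whose switches are not programmable:

```yaml
parameter_defaults:
  # Default is neutron; use flat if your switches are not programmable
  IronicDefaultNetworkInterface: flat
```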
3.2.2. Known Issues
These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1691449
3.3. Red Hat OpenStack Platform 14 Maintenance Release - April 30, 2019
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
3.3.1. Enhancements
This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1658192
This feature adds the capability to configure the Cinder Dell EMC StorageCenter driver to use multipath for volume-to-image and image-to-volume transfers. The feature includes a new parameter `CinderDellScMultipathXfer` with a default value of `True`. Enabling multipath transfers can reduce the total time of data transfers between volumes and images.
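An environment-file sketch; because the parameter defaults to `True`, you would only set it to disable multipath transfers:

```yaml
parameter_defaults:
  # Disable multipath for StorageCenter volume/image transfers
  CinderDellScMultipathXfer: false
```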
BZ#1677001
Previously, when using TLS Everywhere, your controller node was required to access IdM through the `ctlplane` network. As a result, if traffic was routed through a different network, then the overcloud deployment process would fail due to `getcert` errors. To address this, IdM enrolment has been moved into a composable service that runs within `host_prep_tasks`; this runs at the start of the deployment phase. Note that the script will simply exit if the instance has already been enrolled in IdM.
3.3.2. Known Issues
These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1653348
Scaling out a director-deployed OpenShift environment with an additional OpenShift master node fails with a message similar to: "The field 'vars' has an invalid value, which includes an undefined variable. The error was: 'openshift_master_etcd_urls' is undefined…"
BZ#1659183
BZ#1664698
A recent change made memory allocation for instances with NUMA topologies pagesize aware. With this change, memory for instances with NUMA topologies can no longer be oversubscribed.
Memory oversubscription is currently disabled for all instances with a NUMA topology, whereas previously only instances with hugepages were not allowed to use oversubscription. This affects instances with an explicit NUMA topology and those with an implicit topology. An instance can have an implicit NUMA topology due to the use of hugepages or CPU pinning.
If possible, avoid the use of explicit NUMA topologies. If CPU pinning is required, resulting in an implicit NUMA topology, there is no workaround.
BZ#1691449
3.3.3. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
BZ#1687884
As of this release, the director graphical user interface is deprecated. Bug fixes and support will be provided through the end of the OSP 13 lifecycle, but no new feature enhancements will be made.
3.4. Red Hat OpenStack Platform 14 Maintenance Release - July 1, 2019
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
3.4.1. Enhancements
This release of Red Hat OpenStack Platform features the following enhancements:
BZ#1698682
Red Hat OpenStack Platform director can now control Block Storage service (Cinder) snapshots on NFS back ends. A new director parameter, CinderNfsSnapshotSupport, has a default value of True.
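A minimal sketch of how such a director parameter is typically set, assuming a custom environment file (the file name and deploy options are illustrative):

```shell
# Write an environment file that disables NFS snapshot support
# (the parameter defaults to True, as noted above):
cat > cinder-nfs-snapshots.yaml <<'EOF'
parameter_defaults:
  CinderNfsSnapshotSupport: false
EOF

# Pass the environment file to the overcloud deployment:
openstack overcloud deploy --templates -e cinder-nfs-snapshots.yaml
```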
BZ#1701426
Prior to this release, the communication between haproxy and the Shared File Systems service (Manila) API was not secured when deployed with TLS everywhere. Support has been added for the Manila API to be configured with SSL certificates, allowing TLS on the internal API network. This feature is now automatically configured when TLS everywhere is enabled.
3.4.2. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1701423
The API for the OpenStack Shared File Systems service (Manila) now runs behind httpd. The Apache error and access logs for this service are available in `/var/log/containers/httpd/manila-api` on all nodes that run the Manila API container. The logs for the main API remain in `/var/log/containers/manila`.
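To inspect the log locations described above, something like the following can be used on a node running the Manila API container (exact log file names may vary by deployment):

```shell
# Apache access and error logs for the Manila API service:
ls /var/log/containers/httpd/manila-api/

# Logs for the main Manila API remain in the usual location:
ls /var/log/containers/manila/
```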
3.4.3. Known Issues
These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1644883
Previously, when the `PING` type health monitor was configured, HAProxy would silently use TCP connect instead. This is because Red Hat OpenStack Platform uses an older version of HAProxy that does not support external monitors. The setting `allow_ping_health_monitors` is now set to `False` by default.
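Given that `PING` monitors were silently downgraded to TCP connect checks, an explicit `TCP` health monitor expresses the actual behavior; a hedged sketch using the standard Octavia CLI (the pool name is hypothetical):

```shell
# Create a TCP health monitor instead of PING, matching the behavior
# HAProxy actually provides in this release:
openstack loadbalancer healthmonitor create \
  --delay 5 --timeout 3 --max-retries 3 \
  --type TCP pool1
```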
BZ#1660066
Director does not support triggering Red Hat Enterprise Linux OS or OpenShift Container Platform updates on director-deployed OpenShift environments. Minor updates cannot be applied to director-deployed OpenShift environments.
3.5. Red Hat OpenStack Platform 14 Maintenance Release - November 6, 2019
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
For information about the November 6, 2019 Red Hat OpenStack Platform 14 Maintenance Release, see the associated advisories at https://access.redhat.com/downloads/content/191/ver=14/rhel---7/14.0/x86_64/product-errata.