Chapter 3. Release Information
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform.
Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release will appear in the advisory text associated with each update.
3.1. Red Hat OpenStack Platform 10 GA
3.1.1. Enhancements
This release of Red Hat OpenStack Platform features the following enhancements:
- BZ#1188175
This enhancement adds support for virtual device role tagging. An instance's operating system may need extra information about the virtual devices attached to it; for example, in an instance with multiple virtual network interfaces, the guest operating system needs to distinguish between them in order to provision each one according to its intended usage. With this update, users can tag virtual devices when creating an instance. Those tags are presented to the instance (along with other device metadata) through the metadata API and, if enabled, through the config drive. For more information, see the chapter `Use Tagging for Virtual Device Identification` in the Red Hat OpenStack Platform 10 Networking Guide: https://access.redhat.com/documentation/en/red-hat-openstack-platform/
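The following is a minimal illustrative sketch of tagging devices at boot time and reading the tags back from inside the guest. The image, flavor, network, and volume identifiers are placeholders, and the boot-side tags assume Compute API microversion 2.32 or later:

    $ nova --os-compute-api-version 2.32 boot \
        --image <image> --flavor <flavor> \
        --nic net-id=<network-uuid>,tag=nic1 \
        --block-device source=volume,id=<volume-uuid>,dest=volume,tag=database \
        tagged-instance

    # From inside the guest, the tags appear under "devices" in the metadata:
    $ curl -s http://169.254.169.254/openstack/latest/meta_data.json \
        | python -c 'import json,sys; print(json.load(sys.stdin).get("devices"))'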
- BZ#1189551
This update adds the `real time` feature, which provides stronger guarantees for worst-case scheduler latency for vCPUs. This benefits tenants that need to run workloads sensitive to CPU execution latency and that require the guarantees offered by a real time KVM guest configuration.
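As an illustrative sketch only, a flavor for real-time guests might combine dedicated CPUs with the real-time properties shown below; the flavor name and sizing are assumptions, and real-time guests also require suitably configured compute hosts:

    $ openstack flavor create --vcpus 4 --ram 4096 --disk 20 rt.small
    $ openstack flavor set rt.small \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_realtime=yes \
        --property hw:cpu_realtime_mask=^0    # keep vCPU 0 as a non-real-time housekeeping CPU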
- BZ#1198602
This enhancement allows the `admin` user to view a list of the floating IPs allocated to instances using the admin console. This list spans all projects in the deployment. Previously, this information was only available from the command line.
- BZ#1233920
This enhancement adds support for virtual device role tagging. An instance's operating system may need extra information about the virtual devices attached to it; for example, in an instance with multiple virtual network interfaces, the guest operating system needs to distinguish between them in order to provision each one according to its intended usage. With this update, users can tag virtual devices when creating an instance. Those tags are presented to the instance (along with other device metadata) through the metadata API and, if enabled, through the config drive. For more information, see the chapter `Use Tagging for Virtual Device Identification` in the Red Hat OpenStack Platform 10 Networking Guide: https://access.redhat.com/documentation/en/red-hat-openstack-platform/
- BZ#1249836
With the 'openstack baremetal' utility, you can now specify which images to use during boot configuration. Specifically, the '--deploy-kernel' and '--deploy-ramdisk' options select a kernel and ramdisk image, respectively.
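For example, assuming the deploy images on the undercloud are named bm-deploy-kernel and bm-deploy-ramdisk (the default names), the boot configuration could be set as follows:

    $ openstack baremetal configure boot \
        --deploy-kernel bm-deploy-kernel \
        --deploy-ramdisk bm-deploy-ramdisk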
- BZ#1256850
The Telemetry API (ceilometer-api) now uses apache-wsgi instead of eventlet. When upgrading to this release, ceilometer-api will be migrated accordingly. This change provides greater flexibility for per-deployment performance and scaling adjustments, as well as straightforward use of SSL.
- BZ#1262070
You can now use the director to configure Ceph RBD as a Block Storage backup target. This allows you to deploy an overcloud where volumes are backed up to a Ceph target. By default, volume backups are stored in a Ceph pool called 'backups'. Backup settings are configured in the following environment file (on the undercloud): /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
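A sketch of including the backup environment file in a deployment command; the other environment file shown is an assumption and depends on your existing Ceph configuration:

    $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml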
- BZ#1279554
- BZ#1283336
Previously, in Red Hat Enterprise Linux OpenStack Platform 7, the networks that could be used on each role were fixed. Consequently, it was not possible to have a custom network topology with any network on any role. With this update, in Red Hat OpenStack Platform 8 and higher, any network may be assigned to any role. As a result, custom network topologies are now possible, but the ports for each role must be customized. Review the `environments/network-isolation.yaml` file in `openstack-tripleo-heat-templates` to see how to enable ports for each role in a custom environment file or in `network-environment.yaml`.
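As a sketch of the approach (the resource name and port template below are a hypothetical example; the mappings you need depend on your roles and networks), a custom environment file can map additional port resources for a role through the resource_registry:

    $ cat > custom-role-networks.yaml <<'EOF'
    resource_registry:
      # Hypothetical example: attach the Storage network to the Compute role
      OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
    EOF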
- BZ#1289502
This release adds support for two-factor authentication, providing stronger security for the reseller use case.
- BZ#1290251
- BZ#1309460
You can now use the director to deploy Ceph RadosGW as your object storage gateway. To do so, include /usr/share/openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml in your overcloud deployment. When you use this heat template, the default Object Storage service (swift) will not be deployed.
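For example, a deployment command that enables RadosGW might look like the following; the storage environment file is an assumption that depends on your Ceph setup:

    $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml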
- BZ#1314080
With this enhancement, `heat-manage` now supports a `reset_stack_status` subcommand. This was added to handle situations where `heat-engine` was unable to contact the database, causing any in-progress stacks to remain stuck due to outdated database information. Administrators can now run `heat-manage reset_stack_status` to reset a stuck stack so that it can be updated again.
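A minimal usage sketch, run on a node hosting heat-engine; the stack ID is a placeholder:

    $ openstack stack list                            # note the ID of the stuck stack
    $ sudo heat-manage reset_stack_status <stack-id>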
- BZ#1317669
This update includes a release file to identify the overcloud version deployed with OSP director. This gives a clear indication of the installed version and aids debugging. The overcloud-full image includes a new package (rhosp-release). Upgrades from older versions also install this RPM. All versions starting with OSP 10 will now have a release file. This only applies to Red Hat OpenStack Platform director-based installations; however, users can manually install the rhosp-release package to achieve the same result.
- BZ#1325680
- BZ#1325682
With this update, IP traffic can be managed by DSCP marking rules attached to QoS policies, which are in turn applied to networks and ports. This was added because different sources of traffic may require different levels of prioritization at the network level, especially when dealing with real-time information or critical control data. As a result, traffic from specific ports and networks can be marked with DSCP flags. Note that only Open vSwitch is supported in this release.
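A sketch of applying a DSCP mark with the neutron CLI; the policy name, DSCP value, and port ID are placeholders:

    $ neutron qos-policy-create dscp-marking-policy
    $ neutron qos-dscp-marking-rule-create dscp-marking-policy --dscp-mark 26
    $ neutron port-update <port-id> --qos-policy dscp-marking-policy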
- BZ#1328830
This update adds support for multiple theme configurations. This was added to allow a user to change a theme dynamically, using the front end. Some use-cases include the ability to toggle between a light and dark theme, or the ability to turn on a high contrast theme for accessibility reasons. As a result, users can now choose a theme at run time.
- BZ#1337782
This release now features Composable Roles. TripleO can now be deployed in a composable way, allowing customers to select what services should run on each node. This, in turn, allows support for more complex use-cases.
- BZ#1337783
Generic nodes can now be deployed during the hardware provisioning phase. These nodes are deployed with a generic operating system (namely, Red Hat Enterprise Linux); customers can then deploy additional services directly on these nodes.
- BZ#1343130
- BZ#1346401
It is now possible to confine 'ceph-osd' instances with SELinux policies. In OSP10, new deployments have SELinux configured in 'enforcing' mode on the Ceph Storage nodes.
- BZ#1347371
With this enhancement, RabbitMQ introduces the new HA feature of Queue Master distribution. One of the available strategies is `min-masters`, which places each new queue master on the node hosting the fewest masters. This was added because, when a controller becomes unavailable, queue masters are located on the remaining controllers during queue declarations. Once the lost controller becomes available again, masters of newly declared queues are not placed preferentially on the controller with fewer queue masters, so the distribution can become unbalanced, leaving one controller under significantly higher load after multiple failovers. As a result, this enhancement spreads the queues out across the controllers after a controller failover.
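For illustration only (the director configures this automatically; the policy name and queue pattern below are assumptions), the strategy can be expressed as a RabbitMQ policy:

    $ rabbitmqctl set_policy --apply-to queues ha-all '^(?!amq\.).*' \
        '{"ha-mode":"all","queue-master-locator":"min-masters"}'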
- BZ#1353796
With this update, you can now add nodes manually using the UI.
- BZ#1359192
With this update, the overcloud image ships with Red Hat Ceph Storage 2.0 installed.
- BZ#1366721
The Telemetry service (ceilometer) now uses gnocchi as its default meter dispatcher back end. Gnocchi is more scalable and better aligned with the future direction of the Telemetry service.
- BZ#1367678
This enhancement adds `NeutronOVSFirewallDriver`, a new parameter for configuring the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director. This was added because the neutron OVS agent supports a new mechanism for implementing security groups: the 'openvswitch' firewall. `NeutronOVSFirewallDriver` allows users to directly control which implementation is used: `hybrid` configures neutron to use the old iptables/hybrid-based implementation, while `openvswitch` enables the new flow-based implementation. The new firewall driver offers higher performance and reduces the number of interfaces and bridges used to connect guests to the project network. As a result, users can more easily evaluate the new security group implementation.
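A sketch of enabling the flow-based firewall through a custom environment file; the file name is arbitrary and the deployment command is abbreviated:

    $ cat > neutron-ovs-firewall.yaml <<'EOF'
    parameter_defaults:
      NeutronOVSFirewallDriver: openvswitch
    EOF
    $ openstack overcloud deploy --templates \
        -e neutron-ovs-firewall.yaml              # in addition to your existing environment files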
- BZ#1368218
- BZ#1371649
- BZ#1381628
As described in https://bugs.launchpad.net/tripleo/+bug/1630247, the Sahara service in upstream Newton TripleO is now disabled by default. As part of the upgrade procedure from Red Hat OpenStack Platform 9 to Red Hat OpenStack Platform 10, the Sahara services are enabled and retained by default. If the operator decides they do not want Sahara after the upgrade, they need to include the provided `-e 'major-upgrade-remove-sahara.yaml'` environment file as part of the deployment command for the controller upgrade and converge steps. Note that this environment file must be specified last, especially for the converge step, but it can be included in both steps to avoid confusion. In this case, the Sahara services are not restarted after the major upgrade. This approach allows Sahara services to be properly handled during the OSP 9 to OSP 10 upgrade: Sahara services are retained as part of the upgrade, and the operator can still explicitly disable Sahara if necessary.
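For example, the environment file can be appended last to the upgrade deployment command; the full template path is an assumption, and the other environment files depend on your deployment:

    $ openstack overcloud deploy --templates \
        -e <your existing environment files> \
        -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-remove-sahara.yaml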
- BZ#1383779
You can now use node-specific hiera to deploy Ceph Storage nodes that do not have the same list of block devices. As a result, you can use node-specific hiera entries within the overcloud deployment's heat templates to deploy non-uniform OSD servers.
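One possible sketch, assuming the per-node pre-deployment hook and the hiera key used for Ceph OSD disks; the system UUID and disk list are placeholders, and you should consult the Ceph storage documentation for your environment before relying on these exact names:

    $ cat > ceph-node-specific.yaml <<'EOF'
    resource_registry:
      OS::TripleO::CephStorageExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml
    parameter_defaults:
      NodeDataLookup: '{"<system-uuid>": {"ceph::profile::params::osds": {"/dev/sdb": {}, "/dev/sdc": {}}}}'
    EOF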
3.1.2. Technology Preview
The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/.
- BZ#1381227
This update contains the necessary components for testing the use of containers in OpenStack. This feature is available in this release as a Technology Preview.
3.1.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1377763
With Gnocchi 2.2, job dispatch is coordinated between controllers using Redis. As a result, you can expect improved processing of Telemetry measures.
- BZ#1385368
3.1.4. Known Issues
These known issues exist in Red Hat OpenStack Platform at this time:
- BZ#1204259
Glance is not configured with glance.store.http.Store as a known_store in /etc/glance/glance.conf. This means the glance client cannot create images with the --copy-from argument; these commands fail with a "400 Bad Request" error. As a workaround, edit /etc/glance/glance-api.conf, add glance.store.http.Store to the list in the "stores" configuration option, and then restart the openstack-glance-api service. This enables successful creation of glance images with the --copy-from argument.
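A sketch of the workaround; the section name and existing store list are assumptions and may differ in your environment:

    # In /etc/glance/glance-api.conf, add glance.store.http.Store to the
    # "stores" option (typically under the [glance_store] section), e.g.:
    #   stores = glance.store.filesystem.Store,glance.store.http.Store
    $ sudo systemctl restart openstack-glance-api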
- BZ#1239130
The director does not provide network validation before or during a deployment. This means a deployment with a bad network configuration can run for two hours with no output and can result in failure. A network validation script is currently in development and will be released in the future.
- BZ#1241644
When openstack-cinder-volume uses an LVM backend and the Overcloud nodes reboot, the file-backed loopback device is not recreated. As a workaround, manually recreate the loopback device:
$ sudo losetup /dev/loop2 /var/lib/cinder/cinder-volumes
Then restart openstack-cinder-volume. Note that openstack-cinder-volume only runs on one node at a time in a high availability cluster of Overcloud Controller nodes. However, the loopback device should exist on all nodes.
- BZ#1243306
Ephemeral storage is hard coded as true when using the NovaEnableRbdBackend parameter. This means NovaEnableRbdBackend instances cannot use cinder volumes backed by Ceph Storage. As a workaround, add the following to puppet/hieradata/compute.yaml to disable ephemeral storage:
nova::compute::rbd::ephemeral_storage: false
- BZ#1245826
The "openstack overcloud update stack" command returns immediately even though update operations continue in the background. Because the command is not interactive, the update can also appear to run forever. In these situations, run the command with the "-i" flag, which prompts the user whenever manual interaction is required.
- BZ#1249210
A timing issue sometimes causes Overcloud neutron services to not automatically start correctly. This means instances are not accessible. As a workaround, you can run the following command on the Controller node cluster:
$ sudo pcs resource debug-start neutron-l3-agent
Instances will then work correctly.
- BZ#1266565
Currently, certain setup steps require an SSH connection to the overcloud controllers and need to traverse VIPs to reach the Overcloud nodes. If your environment uses an external load balancer, these steps are not likely to connect successfully. You can work around this issue by configuring the external load balancer to forward port 22. As a result, the SSH connection to the VIP will succeed.
- BZ#1269005
In this release, Red Hat OpenStack Platform director only supports a High Availability (HA) overcloud deployment using three controller nodes.
- BZ#1274687
There is currently a known requirement that can arise when the director connects to the Public API to complete the final post-deployment configuration steps: the Undercloud node must have a route to the Public API, and the API must be reachable on the standard OpenStack API ports and on port 22 (SSH). To prepare for this requirement, check that the Undercloud will be able to reach the External network on the controllers, as this network is used for post-deployment tasks. As a result, the Undercloud can be expected to successfully connect to the Public API after deployment and perform the final configuration tasks, which are required for the newly created deployment to be managed using the Admin account.
- BZ#1282951
When deploying Red Hat OpenStack Platform director, the bare-metal nodes should be powered off, and the ironic `node-state` and `provisioning-state` must be correct. For example, if ironic lists a node as "Available, powered-on" but the server is actually powered off, the node cannot be used for deployment. You will need to ensure that the node state in ironic matches the actual node state: use "ironic node-set-power-state <node> [on|off]" and/or "ironic node-set-provisioning-state <node> available" to make the power state in ironic match the real state of the server, and ensure that the nodes are marked `Available`. Once the state in ironic is correct, ironic will be able to correctly manage the power state and deploy to the nodes.
- BZ#1293379
There is currently a known issue where network configuration changes can cause interface restarts, resulting in an interruption of network connectivity on overcloud nodes. Consequently, the network interruption can cause outages in the pacemaker controller cluster, leading to nodes being fenced (if fencing is configured). As a result, tripleo-heat-templates is designed to not apply network configuration changes on overcloud updates. By not applying any network configuration changes, the unintended consequence of a cluster outage is avoided.
- BZ#1293422
IBM x3550 M5 servers require firmware with minimum versions to work with Red Hat OpenStack Platform. Consequently, older firmware levels must be upgraded prior to deployment. Affected systems will need to upgrade to the following versions (or newer): DSA 10.1, IMM2 1.72, UEFI 1.10, Bootcode NA, Broadcom GigE 17.0.4.4a. After upgrading the firmware, deployment should proceed as expected.
- BZ#1302081
Address ranges entered for `AllocationPools` on IPv6 networks and IP allocation pools must be entered in a valid format according to RFC 5952. Consequently, invalid entries will result in an error. IPv6 addresses should be entered in a valid format: leading zeros can be omitted or entered in full, and repeating sequences of zeros may be replaced by "::". For example, an IP address of "fd00:0001:0000:0000:00a1:00b2:00c3:0010" may be represented as "fd00:1::a1:b2:c3:10", but not as "fd00:01::0b2:0c3:10", because the latter contains an invalid number of leading zeros (01, 0b2, 0c3). Each field must either omit leading zeros entirely or be fully padded.
- BZ#1312155
The controller_v6.yaml template contains a parameter for a Management network VLAN. This parameter is not supported in the current version of the director, and can be safely ignored along with any comments referring to the Management network. The Management network references do not need to be copied to any custom templates. This parameter will be supported in a future version.
- BZ#1323024
- BZ#1368279
When using Red Hat Ceph as a back end for ephemeral storage, the Compute service does not calculate the amount of available storage correctly. Specifically, Compute simply adds up the amount of available storage without factoring in replication. This results in grossly overstated available storage, which in turn could cause unexpected storage oversubscription. To determine the correct ephemeral storage capacity, query the Ceph service directly instead.
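For example, the following commands (run on a Ceph monitor node) report usage and the replication factor; the pool name 'vms' is the director default for ephemeral disks and is an assumption:

    $ ceph df
    $ ceph osd pool get vms size      # replication factor of the ephemeral pool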
- BZ#1372804
- BZ#1383627
Nodes that are imported using "openstack baremetal import --json instackenv.json" should be powered off prior to attempting import. If the nodes are powered on, ironic will not attempt to add the nodes or perform introspection. As a workaround, power off all overcloud nodes prior to running "openstack baremetal import --json instackenv.json". If the nodes are powered off, the import should work successfully.
- BZ#1383930
If using DHCP HA, the `NeutronDhcpAgentsPerNetwork` value should be set either to the number of dhcp-agents or to 3 (whichever is lower), using composable roles. If this is not done, the value defaults to `ControllerCount`, which may not be optimal because there may not be enough dhcp-agents running to spawn that many DHCP servers for each network.
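A sketch of setting the parameter in a custom environment file; the value of 3 follows the guidance above and the file name is arbitrary:

    $ cat > dhcp-agents.yaml <<'EOF'
    parameter_defaults:
      NeutronDhcpAgentsPerNetwork: 3
    EOF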
- BZ#1385034
- BZ#1391022
Red Hat Enterprise Linux 6 only contains GRUB Legacy, while OpenStack bare metal provisioning (ironic) only supports the installation of GRUB2. As a result, deploying a partition image with local boot will fail during the bootloader installation. As a workaround, if using RHEL 6 for bare metal instances, do not set boot_option to local in the flavor settings. You can also consider deploying a RHEL 6 whole disk image which already has GRUB Legacy installed.
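As a hedged sketch, a bare metal flavor (the name 'baremetal' is a placeholder) can be kept on network boot instead of local boot:

    $ openstack flavor set --property capabilities:boot_option=netboot baremetal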
- BZ#1396308
- BZ#1463059
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning.
- BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
3.1.5. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#1261539
Support for nova-network is deprecated as of Red Hat OpenStack Platform 9 and will be removed in a future release. When creating new environments, it is recommended to use OpenStack Networking (Neutron).
- BZ#1404907
In accordance with the upstream project, the LBaaS v1 API has been removed. Red Hat OpenStack Platform 10 supports only the LBaaS v2 API.