3.4. Known Issues
These known issues exist in Red Hat OpenStack at this time:
- BZ#1221034
Due to a known issue with the 'python-neutron-fwaas' package, Firewall-as-a-Service (FWaaS) may fail to work. This is a result of the 'python-neutron-fwaas' package missing the database upgrade 'versions' directory. In addition, upgrading the database schemas between version releases may not function correctly at this time.
- BZ#1244358
The Director uses misconfigured HAProxy settings when deploying the Bare Metal and Telemetry services with SSL enabled in the undercloud. This prevents some nodes from registering. To work around this, comment out 'option ssl-hello-chk' under the Bare Metal and Telemetry sections in /etc/haproxy/haproxy.cfg after installing the undercloud.
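The edit described above can be sketched as a sed expression. The snippet below demonstrates it against a temporary sample fragment so the expression can be verified before touching the real file; on the undercloud, apply the same edit to the Bare Metal and Telemetry sections of /etc/haproxy/haproxy.cfg (the section and server names here are examples).

```shell
# Comment out 'option ssl-hello-chk', preserving indentation.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
listen ironic
  option ssl-hello-chk
  server overcloud-ctrl 192.0.2.10:6385 check
EOF
sed -i 's/^\([[:space:]]*\)option ssl-hello-chk/\1# option ssl-hello-chk/' "$cfg"
grep 'ssl-hello-chk' "$cfg"
```

After applying the same edit to /etc/haproxy/haproxy.cfg, restart haproxy so the change takes effect.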
- BZ#1256630
The GlusterFS native driver of the File Share Service allows users to create shares of specified sizes. If no Red Hat Gluster volumes of the exact requested size exist, the driver chooses one with the nearest possible size and creates a share on the volume. Whenever this occurs, the resulting share will use the entire volume. For example, if a user requests a 1GB share and only 2GB, 3GB, and 4GB volumes are available, the driver will choose the 2GB volume as a back end for the share. The driver will also proceed with creating a 2GB share; the user will be able to use and mount the entire 2GB share. To work around this, implement File Share quotas for users. Doing so will prevent them from provisioning more file share storage than what they are entitled to.
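The quota workaround can be applied with the manila client; the tenant ID and limits below are example values, not recommendations:

```shell
# Cap a project's File Share Service allocation so users cannot consume
# an entire oversized back-end volume beyond their entitlement.
manila quota-update <tenant-id> --shares 5 --gigabytes 10
manila quota-show --tenant <tenant-id>
```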
- BZ#1272347
- BZ#1290949
By default, the number of heat-engine workers matches the number of cores on the undercloud. On a single-core undercloud, the single heat-engine worker is not enough to launch an overcloud stack, and stack creation deadlocks. To avoid this, give the undercloud at least two (virtual) cores; for virtual deployments, this should be two vCPUs, regardless of the cores on the bare metal host. If this is not possible, uncomment the num_engine_workers line in /etc/heat/heat.conf and restart openstack-heat-engine.
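A sketch of the configuration edit (heat.conf typically ships with num_engine_workers commented out; the sed pattern assumes that layout, so verify the file before editing):

```shell
# Uncomment the num_engine_workers line in /etc/heat/heat.conf,
# then restart heat-engine to pick up the change.
sudo sed -i 's/^#[[:space:]]*\(num_engine_workers[[:space:]]*=.*\)/\1/' /etc/heat/heat.conf
sudo systemctl restart openstack-heat-engine
```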
- BZ#1069157
At present, policy rules for volume extension prevent you from taking snapshots of in-use GlusterFS volumes. To work around this, you will have to manually edit those policy rules. To do so, open the Compute service's policy.json file and change "rule:admin_api" entries to "" for "compute_extension:os-assisted-volume-snapshots:create" and "compute_extension:os-assisted-volume-snapshots:delete". Afterwards, restart the Compute API service.
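The policy change can be sanity-checked with sed before applying it to the real file. The Compute policy file is commonly /etc/nova/policy.json (path assumed); the sample below reproduces just the two affected rules:

```shell
# Relax the two os-assisted-volume-snapshots rules from "rule:admin_api"
# to "" -- demonstrated on a sample excerpt of policy.json.
pol=$(mktemp)
cat > "$pol" <<'EOF'
{
    "compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
    "compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api"
}
EOF
sed -i 's/\("compute_extension:os-assisted-volume-snapshots:[a-z]*": \)"rule:admin_api"/\1""/' "$pol"
cat "$pol"
```

After making the same edit to the real policy.json, restart the Compute API service.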
- BZ#1220630
The underlying Database-as-a-Service (Trove) processes will not start if the service's back-end database is unreachable. To work around this, Database-as-a-Service must be deployed on the same node as its back-end database.
- BZ#1241424
Bare metal nodes can become locked in a given state if ironic-conductor stops abruptly, preventing users from deleting those nodes or changing their state. As a workaround, log in to the director's database and run the following query to return the node to the "available" state and remove the lock: UPDATE nodes SET provision_state="available", target_provision_state=NULL, reservation=NULL WHERE uuid=<node uuid>;
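The query can be run through the mysql client on the director. The database name 'ironic' and root access via sudo are assumptions, and <node uuid> remains a placeholder to fill in:

```shell
# Unlock the stuck node and return it to the "available" state.
sudo mysql ironic -e "UPDATE nodes \
  SET provision_state='available', target_provision_state=NULL, reservation=NULL \
  WHERE uuid='<node uuid>';"
```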
- BZ#1221076
Due to a known issue with the 'python-neutron-fwaas' package, Firewall-as-a-Service (FWaaS) may fail to work. This is a result of the 'python-neutron-fwaas' package missing the database upgrade 'versions' directory. In addition, upgrading the database schemas between version releases may not function correctly at this time.
- BZ#1247358
- BZ#1246525
- BZ#1296365
Multiple services attempt to configure NTP on the Overcloud, and the last one to do so configures it incorrectly, causing time synchronization issues across all Overcloud nodes. As a workaround, delete /usr/libexec/os-apply-config/templates/etc/ntp.conf from all Overcloud nodes and re-run the deployment command to re-apply the Puppet configuration. This workaround is required only when updating to 7.3 from an older version of Red Hat OpenStack Platform; new 7.3 deployments configure NTP correctly.
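A sketch of the workaround; the node addresses and the heat-admin user are examples, and the original deployment command must be re-run afterwards:

```shell
# Remove the stale template from every Overcloud node, then re-deploy
# so puppet re-applies the NTP configuration.
for node in 192.0.2.10 192.0.2.11; do
  ssh heat-admin@"$node" 'sudo rm -f /usr/libexec/os-apply-config/templates/etc/ntp.conf'
done
# Then re-run the original 'openstack overcloud deploy ...' command.
```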
- BZ#1236136
All keystone endpoints are on the External VIP. This means all API calls to keystone happen over the External VIP. There is no workaround at this time.
- BZ#1250043
- BZ#1205432
The OpenStack Dashboard (Horizon) is not configured to accept connections on its local IP address. This means you cannot browse the OpenStack Dashboard, including the Undercloud UI, by IP address. As a workaround, use the Undercloud's FQDN instead of its IP address. If access through the IP address is required, edit /etc/openstack-dashboard/local_settings, add the IP address to the ALLOWED_HOSTS setting, then restart the httpd service. The Dashboard is then reachable through the host IP address.
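A hedged sketch of the settings change; the IP address 192.0.2.1 and the exact format of the ALLOWED_HOSTS line are assumptions, so inspect the file before editing:

```shell
# Prepend the host IP to ALLOWED_HOSTS in the Dashboard settings,
# then restart httpd to pick up the change.
sudo sed -i "s/^ALLOWED_HOSTS = \[/ALLOWED_HOSTS = ['192.0.2.1', /" /etc/openstack-dashboard/local_settings
sudo systemctl restart httpd
```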
- BZ#1257291
With the glusterFS_native driver, providing or revoking 'cert'-based access to a share restarts a Red Hat Gluster Storage volume. This, in turn, will disrupt any ongoing I/O to existing mounts. To prevent any data loss, unmount a share on all clients before allowing or denying access to it.
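For example, on each client (the mount point, share ID, and access ID are placeholders):

```shell
# On every client instance: unmount the share first to avoid losing I/O.
sudo umount /mnt/share
# Then change access, e.g. from a machine with the manila CLI:
manila access-deny <share-id> <access-id>
```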
- BZ#1250130
The 'manila list' command shows information on all shares available from the File Share Service, including each share's Export Location field, which provides the information needed to compose its mount point entry in an instance. The field displays this information in the following format: user@host:/vol. The 'user@' prefix is unnecessary and should be ignored when composing the mount point entry.
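Stripping the prefix can be done with shell parameter expansion; the export location value below is an example:

```shell
# Given an Export Location of the form user@host:/vol, drop everything
# up to and including the '@' to get the mountable source.
export_location='admin@192.0.2.5:/share-vol'
mount_source=${export_location#*@}
echo "$mount_source"   # prints 192.0.2.5:/share-vol
# In the instance, mount it, e.g. (glusterfs client assumed):
#   sudo mount -t glusterfs "$mount_source" /mnt/share
```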
- BZ#1257304
With the File Share Service, when an attempt to create a snapshot of a provisioned share fails, an entry for the snapshot is still created. This entry is in an 'error' state, and any attempt to delete it will fail. To prevent this, avoid creating share snapshots while the back-end volume, service, or host is down.
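As a precaution, confirm the relevant share service is up before creating a snapshot (the share ID and snapshot name are placeholders):

```shell
# Verify back-end service state, then take the snapshot.
manila service-list
manila snapshot-create <share-id> --name snap1
```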
- BZ#1307125
If the database is not available when heat-engine starts, heat-engine fails to start. This can occur after updating, with yum alone, a machine that runs both the database and heat-engine, such as when updating the undercloud. As a workaround, start heat-engine explicitly by running 'systemctl start openstack-heat-engine.service'. Confirm that heat-engine is running with 'systemctl status openstack-heat-engine.service'.
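The two commands from the workaround, combined into a check-then-start step:

```shell
# Start heat-engine if it is not already active, then verify.
if ! systemctl is-active --quiet openstack-heat-engine.service; then
  sudo systemctl start openstack-heat-engine.service
fi
systemctl status openstack-heat-engine.service
```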
- BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.