3.4. Known Issues


These known issues exist in Red Hat OpenStack at this time:
BZ#1221034
Due to a known issue with the 'python-neutron-fwaas' package, Firewall-as-a-Service (FWaaS) may fail to work. This is a result of the 'python-neutron-fwaas' package missing the database upgrade 'versions' directory.
In addition, upgrading the database schemas between version releases may not function correctly at this time.
BZ#1244358
The Director uses misconfigured HAProxy settings when deploying the Bare Metal and Telemetry services with SSL enabled in the undercloud. This prevents some nodes from registering. 

To work around this, comment out 'option ssl-hello-chk' under the Bare Metal and Telemetry sections in /etc/haproxy/haproxy.cfg after installing the undercloud.
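
For example, the Bare Metal section would look similar to the following after the change (a sketch; the listener name, addresses, and ports are illustrative and vary by deployment). Apply the same change to the Telemetry section:

 listen ironic
   bind 192.0.2.2:6385
   # option ssl-hello-chk
   server 192.0.2.1 192.0.2.1:6385 check fall 5 inter 2000 rise 2

Restarting haproxy may be required for the change to take effect.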
BZ#1256630
The GlusterFS native driver of the File Share Service allows users to create shares of specified sizes. If no Red Hat Gluster volumes of the exact requested size exist, the driver chooses one with the nearest possible size and creates a share on the volume. Whenever this occurs, the resulting share will use the entire volume.

For example, if a user requests a 1GB share and only 2GB, 3GB, and 4GB volumes are available, the driver will choose the 2GB volume as a back end for the share. The driver will also proceed with creating a 2GB share; the user will be able to use and mount the entire 2GB share. 

To work around this, implement File Share quotas for users. Doing so will prevent them from provisioning more file share storage than what they are entitled to.
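
For example, the following command caps a tenant's File Share Service storage at 10GB (a hedged sketch; the tenant ID and limit are illustrative):

 manila quota-update <tenant_id> --gigabytes 10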
BZ#1272347
With this update, the default network where the 'KeystoneAdminVip' is placed was changed from 'InternalApi' to 'ctlplane' so that the post-deployment Identity service initialization step could be carried out by the Undercloud over the 'ctlplane' network. Relocating the 'KeystoneAdminVip' causes a cascading restart of the services pointing to the old 'KeystoneAdminVip'.

As a workaround, to make sure the 'KeystoneAdminVip' remains on the 'InternalApi' network, a customized 'ServiceNetMap' must be provided as a deployment parameter when launching an update from the 7.0 release. A sample Orchestration environment file passing a customized 'ServiceNetMap' is as follows:


parameters:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage_mgmt
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage

If any of the service-to-network bindings above have been customized in your deployment, those settings must be preserved as well.
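
For example, if the environment file above is saved as ~/service_net_map.yaml (a hypothetical path), pass it to the update command with the -e option, together with any other environment files and options used for the original deployment. A sketch of the invocation:

 openstack overcloud update stack overcloud -i --templates -e ~/service_net_map.yaml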

With this workaround in place, the 'KeystoneAdminVip' is not relocated to the 'ctlplane' network, and no service restarts are triggered.
BZ#1290949
By default, the number of heat-engine workers created matches the number of cores on the undercloud. If the undercloud has only one core, only one heat-engine worker is created; a single heat-engine worker is not enough to launch an overcloud stack, and this causes deadlocks when creating the overcloud stack.

To avoid this, it is recommended that the undercloud has at least two (virtual) cores. For virtual deployments, this means two vCPUs, regardless of the number of cores on the bare metal host. If this is not possible, uncomment the num_engine_workers line in /etc/heat/heat.conf and restart openstack-heat-engine.
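
For example (a sketch; the value of 4 is illustrative, and the exact commented line in heat.conf may differ):

 # In /etc/heat/heat.conf, uncomment and set:
 num_engine_workers = 4

 sudo systemctl restart openstack-heat-engine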
BZ#1069157
At present, policy rules for volume extension prevent you from taking snapshots of in-use GlusterFS volumes. To work around this, you will have to manually edit those policy rules.

To do so, open the Compute service's policy.json file and change "rule:admin_api" entries to "" for "compute_extension:os-assisted-volume-snapshots:create" and "compute_extension:os-assisted-volume-snapshots:delete". Afterwards, restart the Compute API service.
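
After the edit, the relevant entries in the Compute service's policy.json file (typically /etc/nova/policy.json) read:

 "compute_extension:os-assisted-volume-snapshots:create": "",
 "compute_extension:os-assisted-volume-snapshots:delete": "",

The Compute API service can then be restarted with, for example, 'systemctl restart openstack-nova-api.service'.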
BZ#1220630
The underlying Database-as-a-Service (Trove) processes will not start if the service's back-end database is unreachable. To work around this, Database-as-a-Service must be deployed on the same node as its back-end database.
BZ#1241424
Bare metal nodes can sometimes become locked in a particular state if ironic-conductor stops abruptly, which prevents users from deleting these nodes or changing their state. As a workaround, log into the director's database and use the following query to set the node back to the "available" state and remove the lock:

UPDATE nodes SET provision_state="available", target_provision_state=NULL, reservation=NULL WHERE uuid="<node uuid>";
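
For example, from the director (a hedged sketch; this assumes root socket access to the local MariaDB instance and that the database is named 'ironic'; '<node uuid>' is a placeholder):

 sudo mysql ironic -e 'UPDATE nodes SET provision_state="available", target_provision_state=NULL, reservation=NULL WHERE uuid="<node uuid>";'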
BZ#1221076
Due to a known issue with the 'python-neutron-fwaas' package, Firewall-as-a-Service (FWaaS) may fail to work. This is a result of the 'python-neutron-fwaas' package missing the database upgrade 'versions' directory.
In addition, upgrading the database schemas between version releases may not function correctly at this time.
BZ#1247358
In rare cases, RabbitMQ fails to start on deployment. As a workaround, manually start RabbitMQ on nodes:

[stack@director ~]$ ssh heat-admin@192.168.0.20
[heat-admin@overcloud-controller-0 ~]$ pcs resource debug-start rabbitmq

Then rerun the deployment command on the director. The deployment should now succeed.
BZ#1246525
On the Undercloud, HAProxy is configured to run an HTTP check against the openstack-ironic-api service every 2 seconds. The check causes openstack-ironic-api to log a traceback to stderr with the following errors:

 error: [Errno 104] Connection reset by peer
 error: [Errno 32] Broken pipe

Since the check runs every 2 seconds, these messages repeat frequently in /var/log/messages. As a workaround, switch to root permissions, edit /etc/haproxy/haproxy.cfg, and comment out the "option httpchk GET /" line from the ironic listener configuration:

 listen ironic
   bind 192.0.2.2:6385
   bind 192.0.2.3:6385
   # option httpchk GET /
   server 192.0.2.1 192.0.2.1:6385 check fall 5 inter 2000 rise 2

Save the file, then restart haproxy:

 sudo systemctl restart haproxy

After this, openstack-ironic-api no longer writes these tracebacks to stderr.
BZ#1296365
Multiple services attempt NTP configuration on the Overcloud, and the last service to run configures it incorrectly. This causes time synchronization issues across all Overcloud nodes. As a workaround, delete /usr/libexec/os-apply-config/templates/etc/ntp.conf from all Overcloud nodes and re-run the deployment command to re-apply the Puppet configuration; NTP is then configured correctly. This workaround is required when updating from an older version of Red Hat OpenStack Platform to 7.3, and is not necessary on new 7.3 deployments.
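
For example, a minimal sketch that removes the file over SSH, assuming the Overcloud nodes are reachable as 'heat-admin' (the addresses are illustrative):

 for ip in 192.0.2.10 192.0.2.11 192.0.2.12; do
   ssh heat-admin@$ip "sudo rm -f /usr/libexec/os-apply-config/templates/etc/ntp.conf"
 done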
BZ#1236136
All keystone endpoints are on the External VIP. This means all API calls to keystone happen over the External VIP. There is no workaround at this time.
BZ#1250043
When using the 'gluster_native' driver for File Share Service back ends, snapshot commands can fail ungracefully with a 'KeyError' if any of the following components are down on the back end cluster's nodes:

- Logical volume brick
- The glusterd service
- Red Hat Gluster Storage volume

In addition, the following could also cause the same error:

- An entire node in a cluster is down.
- An unsupported volume is used as a back end.

Specifically, these issues can cause the 'openstack-manila-share' service to produce a traceback with KeyError instead of producing a useful error message. When troubleshooting this error, consider these possible back end issues.
BZ#1205432
The OpenStack Dashboard (Horizon) is not configured to accept connections on its local IP address. This means you cannot browse the OpenStack Dashboard, including the Undercloud UI, by IP address. As a workaround, use the Undercloud's FQDN instead of the IP address. If access through the IP address is desired, edit /etc/openstack-dashboard/local_settings, add the IP address to the ALLOWED_HOSTS setting, then restart the httpd service. This makes the OpenStack Dashboard browsable through the host IP address.
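
For example, in /etc/openstack-dashboard/local_settings (a sketch; the address and hostname are illustrative):

 ALLOWED_HOSTS = ['192.0.2.1', 'undercloud.example.com']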
BZ#1257291
With the GlusterFS native driver, providing or revoking 'cert'-based access to a share restarts a Red Hat Gluster Storage volume. This, in turn, disrupts any ongoing I/O to existing mounts. To prevent data loss, unmount a share on all clients before allowing or denying access to it.
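
For example, on each client that currently has the share mounted (the mount point is hypothetical):

 sudo umount /mnt/share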
BZ#1250130
The 'manila list' command shows information on all shares available from the File Share Service. This command also shows the Export Location field of each one, which should provide information for composing its mount point entry in an instance. However, the field displays this information in the following format:

    user@host:/vol
    
The 'user@' prefix is unnecessary and should be ignored when composing the share's mount point entry.
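
For example, for a GlusterFS-backed share whose Export Location reads 'root@192.0.2.5:/share-vol' (hypothetical values), only the 'host:/vol' portion is used when mounting:

 mount -t glusterfs 192.0.2.5:/share-vol /mnt/share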
BZ#1257304
With the File Share Service, when an attempt to create a snapshot of a provisioned share fails, an entry for the snapshot is still created. However, this entry will be in an 'error' state, and any attempts to delete it will fail.

To prevent this, avoid creating share snapshots if the back end volume, service, or host is down.
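
For example, before creating a snapshot you can check the state of the backing volume from a Red Hat Gluster Storage node (a sketch; 'sharevol' is a hypothetical volume name):

 gluster volume status sharevol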
BZ#1307125
If the database is not available when heat-engine starts, heat-engine fails to start. This can occur when you use yum alone to update a machine that runs both the database and heat-engine, such as when updating the undercloud. As a workaround, start heat-engine explicitly by running 'systemctl start openstack-heat-engine.service'. You can confirm whether heat-engine is running with 'systemctl status openstack-heat-engine.service'.
BZ#1321179
OpenStack command-line clients that use 'python-requests' cannot currently validate certificates that have an IP address in the SAN field.