3.4. Known Issues
The known issues currently present in Red Hat OpenStack Platform include:
- BZ#1177611
A known issue has been identified in the interaction between High Availability (VRRP) routers and L2 Population. Currently, when connecting an HA router to a subnet, HA routers use a distributed port by design. Each router has the same port details on every node it is scheduled on, and only the master router has IPs configured on that port; the slaves have the port without any IPs configured. Consequently, L2 Population uses this stale information to advertise that the router is present on the node stated in the port binding information for that port. As a result, each node that has a port on that logical network creates a tunnel only to the node where the port is presumably bound, and a forwarding entry is set so that any traffic to that port is sent through the created tunnel. However, this may not succeed, as there is no guarantee that the master router is on the node specified in the port binding. Furthermore, even if the master router is in fact on that node, a failover event would cause it to migrate to another node and result in a loss of connectivity with the router.
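As a diagnostic sketch (not a fix), you can check which node currently hosts the active instance of an HA router with the neutron client; the router name router1 is a placeholder, and the ha_state column appears only in client versions that report HA router state:

    # List the L3 agents hosting the router; for HA routers, the ha_state
    # column shows which instance is active and which are standby.
    neutron l3-agent-list-hosting-router router1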
- BZ#1234601
- BZ#1237009
The swift proxy port is denied in the Undercloud firewall, which means the swift proxy only accepts connections from localhost. As a workaround, open the swift proxy port in the firewall:

    sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

This enables connections to the swift proxy from remote machines.
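To confirm the rule is in place and keep it across reboots, a sketch that assumes the undercloud persists rules with the iptables service (as on Red Hat Enterprise Linux 7 with iptables-services installed):

    # Confirm the rule is present in the INPUT chain
    sudo iptables -nL INPUT --line-numbers | grep 8080
    # Persist the running ruleset (iptables service assumed)
    sudo service iptables save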
- BZ#1268426
- BZ#1272591
The Undercloud uses the Public API to configure service endpoints during the post-deployment stage, which means the Undercloud needs to reach the Public API in order to complete the deployment. If the External uplink on the Undercloud is not on the same subnet as the Public API, the Undercloud requires a route to the Public API, and any firewall ACLs must allow this traffic. With this route in place, the Undercloud connects to the Public API and completes the post-deployment tasks.
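For illustration, a static route on the undercloud might look like the following; the subnet, gateway, and interface are placeholders for your environment:

    # Example route to the overcloud Public API network
    sudo ip route add 10.0.0.0/24 via 192.168.24.1 dev eth0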
- BZ#1290881
The default driver for the Block Storage service is the internal LVM software iSCSI driver, which is the volume back end that manages local volumes. However, the Cinder iSCSI LVM driver has significant performance issues. In production environments with high I/O activity, there are many potential issues that could affect performance or data integrity. Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver should be used only for single-node evaluations and proof-of-concept environments, and is only supported in that context.
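For reference, the default LVM back end is typically defined in /etc/cinder/cinder.conf along the following lines; the section name and option values shown here are illustrative, not prescriptive:

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    # Software iSCSI LVM driver: evaluation/PoC use only
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_helper = lioadm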
- BZ#1293979
Updating packages on the Undercloud can leave the Undercloud in an indeterminate state, meaning some Undercloud services are disabled after the package update and cannot start again. As a workaround, run 'openstack undercloud install' to reconfigure all Undercloud services. After the command completes, the Undercloud services operate normally.
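A quick way to spot affected services before re-running the installer (a sketch; exact unit names vary by release):

    # List undercloud services that failed to start after the update
    sudo systemctl list-units 'openstack-*' --state=failed
    # Reconfigure and restart all undercloud services
    openstack undercloud install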
- BZ#1295374
- BZ#1463061
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning.
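For example, a request such as the following (the backup and volume names are placeholders) silently produces a full backup under this configuration:

    # Requests an incremental backup; a full backup is taken instead
    cinder backup-create --incremental --name backup2 myvolume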
- BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
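To check whether a certificate is affected, you can inspect its Subject Alternative Name entries with openssl; the certificate path is a placeholder:

    # An "IP Address:" entry in the SAN output indicates the problematic case
    openssl x509 -in /path/to/cert.pem -noout -text | grep -A1 'Subject Alternative Name'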