3.3. Known Issues


These known issues exist in Red Hat OpenStack Platform at this time:
BZ#1237009
The swift proxy port is denied in the Undercloud firewall. This means the swift proxy only accepts connections from localhost. As a workaround, open the swift proxy port in the firewall:

$ sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

This enables connections to the swift proxy from remote machines.
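Note that a rule inserted this way does not persist across reboots. As a hedged follow-up, if the Undercloud's firewall rules are managed by the iptables service (the iptables-services package) rather than firewalld, you can save the running rule set so the rule survives a reboot:

----
# Save the current rules, including the swift proxy rule added above
$ sudo sh -c 'iptables-save > /etc/sysconfig/iptables'
----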
BZ#1268426
There is a known issue that can occur when IP address conflicts are identified on the Provisioning network. As a consequence, discovery and/or deployment tasks will fail for hosts which are assigned an IP address which is already in use.
You can work around this issue by performing a port scan of the Provisioning network from the Undercloud node. This helps validate whether the IP addresses used for the discovery and host IP ranges are available for allocation. You can perform this scan with the nmap utility. For example (replace 192.0.2.0/24 with the subnet of the Provisioning network, in CIDR format):
----
$ sudo yum install -y nmap
$ nmap -sn 192.0.2.0/24
----
If any of the IP addresses in use conflict with the IP ranges in undercloud.conf, you will need to either change the IP ranges or free up the IP addresses before running the introspection process or deploying the Overcloud nodes.
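As an optional follow-up, you can cross-check the scan results against the ranges configured for the Undercloud. The sketch below assumes the dhcp_start, dhcp_end, and discovery_iprange options in undercloud.conf; option names can differ between releases:

----
# List the addresses that responded to the scan (grepable output, hosts marked "Up")
$ nmap -sn 192.0.2.0/24 -oG - | awk '/Up$/{print $2}' > in-use-ips.txt

# Show the ranges the Undercloud expects to allocate from
$ grep -E 'dhcp_start|dhcp_end|discovery_iprange' ~/undercloud.conf

# Any address in in-use-ips.txt that falls inside those ranges must be freed,
# or the ranges must be changed, before introspection or deployment.
----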
BZ#1204259
Glance is not configured with glance.store.http.Store as a known_store in /etc/glance/glance-api.conf. This means the glance client cannot create images with the --copy-from argument; these commands fail with a "400 Bad Request" error. As a workaround, edit /etc/glance/glance-api.conf, add glance.store.http.Store to the list in the "stores" configuration option, then restart the openstack-glance-api service. This enables successful creation of glance images with the --copy-from argument.
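For example, the edit and restart could look like the following. This is a hedged sketch: it assumes the "stores" option lives in the [glance_store] section on this release and that the crudini utility is available; otherwise, edit the file by hand.

----
# Check the current value of the "stores" option
$ sudo crudini --get /etc/glance/glance-api.conf glance_store stores

# Append glance.store.http.Store to the existing list, for example:
$ sudo crudini --set /etc/glance/glance-api.conf glance_store stores glance.store.filesystem.Store,glance.store.http.Store

# Restart the API service to pick up the change
$ sudo systemctl restart openstack-glance-api
----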
BZ#1266565
Currently, certain setup steps require an SSH connection to the Overcloud Controllers and need to traverse VIPs to reach the Overcloud nodes.
If your environment uses an external load balancer, these steps are unlikely to connect successfully. You can work around this issue by configuring the external load balancer to forward port 22. As a result, the SSH connection to the VIP succeeds.
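How port 22 is forwarded depends on the load balancer in use. As an illustration only, if the external load balancer happens to be HAProxy, a listener along these lines would forward the VIP's port 22 to the Controller nodes (all addresses below are placeholders for your environment):

----
$ sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'

# Hypothetical listener: forward SSH on the external VIP to the Controller nodes
listen overcloud_ssh
  mode tcp
  bind 10.0.0.4:22
  server overcloud-controller-0 192.0.2.10:22 check
  server overcloud-controller-1 192.0.2.11:22 check
  server overcloud-controller-2 192.0.2.12:22 check
EOF
$ sudo systemctl reload haproxy
----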
BZ#1243188
The Undercloud installer uses the following hardcoded file paths:

* undercloud.conf is hardcoded to ~/undercloud.conf
* instack.answers is hardcoded to ~/instack.answers
* tripleo-undercloud-passwords is hardcoded to ~/tripleo-undercloud-passwords
* install-undercloud.log is hardcoded to ~/.instack/install-undercloud.log

These files must exist or the Undercloud installer will fail.
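A quick way to confirm that the expected files are present before running the installer is shown below; this sketch only reports missing paths, it does not create valid content for them:

----
$ for f in ~/undercloud.conf ~/instack.answers ~/tripleo-undercloud-passwords ~/.instack/install-undercloud.log; do
    [ -e "$f" ] || echo "missing: $f"
  done
----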
BZ#1274687
When the director connects to the Public API to complete the final post-deployment configuration steps, the Undercloud node must have a route to the Public API and must be able to reach it on the standard OpenStack API ports and on port 22 (SSH).
To prepare for this requirement, check that the Undercloud can reach the External network on the Controllers, as this network is used for post-deployment tasks. The Undercloud can then connect to the Public API after deployment and perform the final configuration tasks, which are required for the newly created deployment to be managed using the Admin account.
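One way to verify reachability once the Overcloud's External network is up is sketched below. The address and ports are placeholders; substitute your Public API VIP and the ports your endpoints actually use:

----
# Confirm there is a route to the External network
$ ping -c 3 10.0.0.4

# Confirm SSH and the API ports are reachable (for example, 22 and the Identity service port)
$ for port in 22 13000; do nc -z -w 5 10.0.0.4 $port && echo "port $port reachable"; done
----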
BZ#1225069
For security reasons, the Overcloud only allows SSH key-based access by default. You can set a root password on the disk image for the overcloud using the virt-customize tool, which is found in the Red Hat Enterprise Linux Extras channel. After installing the tool and downloading the Overcloud images, use the following command to change the root password:

$ virt-customize -a overcloud-full.qcow2 --root-password password:<my_root_password>

Perform this operation prior to uploading the images into glance with the "openstack overcloud image upload" command.
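For example, to install the tool and then upload the modified image afterwards (this assumes virt-customize is provided by the libguestfs-tools package on this release and that the Overcloud images are in the current directory):

----
# Install the tooling that provides virt-customize (package name assumed)
$ sudo yum install -y libguestfs-tools

# ... run the virt-customize command shown above against overcloud-full.qcow2 ...

# Then upload the images to glance on the Undercloud
$ openstack overcloud image upload
----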
BZ#1312155
The controller_v6.yaml template contains a parameter for a Management network VLAN. This parameter is not supported in the current version of the director, and can be safely ignored along with any comments referring to the Management network. The Management network references do not need to be copied to any custom templates.

This parameter will be supported in a future version.
BZ#1227955
Only NICs that have a live connection to a switch port are counted in the numbered NIC abstractions (nic1, nic2, and so on). As a workaround, the director includes a script that pings the first Controller on all interfaces from each node; if a node has a disconnected link at deployment time, this is detected so that it can be corrected. Another possible workaround is to use a mapping file for each host that maps NIC numbers to physical NICs, as shown in the sketch below. If one or more Overcloud nodes do not configure correctly due to a down link, this is detected and the Overcloud can be redeployed.
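As an illustration of the mapping-file workaround, os-net-config accepts a YAML mapping of NIC numbers to physical interfaces. The path, interface names, and delivery method below are assumptions for the sketch; how the file reaches each host depends on your deployment:

----
$ sudo mkdir -p /etc/os-net-config
$ sudo tee /etc/os-net-config/mapping.yaml <<'EOF'
# Map the abstract NIC numbers to this host's physical interfaces
interface_mapping:
  nic1: em1
  nic2: em2
EOF
----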
BZ#1177611
A known issue has been identified in the interaction between High Availability (VRRP) routers and L2 Population. Currently, when connecting an HA router to a subnet, HA routers use a distributed port by design. Each router has the same port details on each node that it is scheduled on, and only the master router has IP addresses configured on that port; all the slaves have the port without any IP addresses configured.
Consequently, L2 Population uses this stale information to advertise that the router is present on the node stated in the port binding information for that port.
As a result, each node that has a port on that logical network creates a tunnel only to the node where the port is presumably bound, and a forwarding entry is set so that any traffic to that port is sent through the created tunnel.
However, this may not succeed, as there is no guarantee that the master router is on the node specified in the port binding. Furthermore, even if the master router is on that node, a failover event would cause it to migrate to another node and result in a loss of connectivity with the router.
BZ#1241644
When openstack-cinder-volume uses an LVM backend and the Overcloud nodes reboot, the file-backed loopback device is not recreated. As a workaround, manually recreate the loopback device:

$ sudo losetup /dev/loop2 /var/lib/cinder/cinder-volumes

Then restart openstack-cinder-volume. Note that openstack-cinder-volume only runs on one node at a time in a high availability cluster of Overcloud Controller nodes. However, the loopback device should exist on all nodes.
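For example, on a node where the service is managed directly by systemd (in a Pacemaker-managed HA cluster, restart the corresponding cluster resource instead):

$ sudo systemctl restart openstack-cinder-volume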
BZ#1239130
The director does not provide network validation before or during a deployment. This means a deployment with a bad network configuration can run for two hours with no output and can result in failure. A network validation script is currently in development and will be released in the future.
BZ#1269005
In this release, Red Hat OpenStack Platform director only supports a High Availability (HA) overcloud deployment using three controller nodes.