Chapter 3. Known Issues
This chapter provides a list of known issues at the time of release.
- During a node reboot, the corresponding rhgs-server pod is restarted. When the pod comes back up, the brick is not mounted because the corresponding LV device is not available. To work around this issue, mount the brick(s) using the following command:
# mount -a --fstab /var/lib/heketi/fstab
Start the corresponding volume using the following command:
# gluster volume start <volume name> force
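To verify the workaround (a minimal sketch; <volume name> is the same placeholder as in the command above), confirm that the bricks are mounted again and that the volume reports its bricks as online. Because the brick mounts are defined in /var/lib/heketi/fstab, they typically appear under /var/lib/heketi:
# mount | grep heketi
# gluster volume status <volume name>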
- A few paths might be missing for an iSCSI mpath device. This happens either because the CHAP security values do not match or because iSCSI CSG: stage 0 has been skipped; both cause the iSCSI login negotiation to fail. To work around this issue, delete the app pod, which triggers the pod to restart. On startup, the pod updates the credentials and logs in again, as shown in the example below.
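For example, assuming the affected application pod is named <app-pod-name> in the project <project-name> (both are placeholders), deleting the pod causes its controller to recreate it, and the new pod retries the iSCSI login with the updated credentials:
# oc delete pod <app-pod-name> -n <project-name>
# oc get pods -n <project-name> -w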
- Sometimes in OCP, instead of the multipath mapper device, such as /dev/mapper/mpatha, an individual path, such as /dev/sdb, is used to mount the device. This happens because OCP currently does not wait for the default mpath checker timeout of 30 seconds; it waits only 10 seconds and then picks an individual path if the mapper device is not ready. As a result, the high availability and multipathing advantages of the block device are lost. To work around this issue, delete the app pod, which triggers the pod to restart. The pod then logs in again and undergoes a multipath check, which can be verified as shown below.
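As a rough verification sketch, after the pod is recreated, the multipath topology on the node hosting the pod can be inspected to confirm that the mapper device (for example, /dev/mapper/mpatha) is used and that all of its paths are listed as active:
# multipath -ll
# lsblk -o NAME,TYPE,MOUNTPOINT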
- Gluster-block operations (create/delete/modify), or a gluster-block-target service restart, performed while tcmu-runner is offline can trigger a netlink hang, with the targetcli process entering uninterruptible sleep (D state) indefinitely. To work around this issue, reboot the node to recover from this state; see the example below for how to confirm the hung state before rebooting.
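Before rebooting, the hung state can be confirmed with a quick check (a minimal sketch; the exact process listing depends on the node's environment). A targetcli process stuck in uninterruptible sleep shows D in the STAT column:
# ps -eo pid,stat,wchan:32,comm | grep targetcli
If the process is in the D state, reboot the node:
# systemctl reboot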
- The following two lines might be repeatedly logged in the rhgs-server-docker (gluster) container logs.
[MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.
[socket.c:701:__socket_rwv] 0-nfs: readv on /var/run/gluster/1ab7d02f7e575c09b793c68ec2a478a5.socket failed (Invalid argument)
These logs appear because glusterd is unable to start the NFS service. There is no functional impact, as NFS export is not supported in Containerized Red Hat Gluster Storage.