4.2. Known Issues


  • Mounting file systems on a guest using the -o nobarrier option is not recommended, even if the host is directly connected to enterprise-class storage.
  • When an LVM mirror suffers a device failure, a two-stage recovery takes place. The first stage involves removing the failed devices; this can result in the mirror being reduced to a linear device. The second stage, if the administrator has configured it, is to attempt to replace the failed devices. Note, however, that if other devices are available, there is no guarantee that the second stage will choose devices that were previously in use by the mirror and were not part of the failure.
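    Whether the second (replacement) stage is attempted is controlled by the mirror fault policies in lvm.conf. A minimal sketch, assuming the stock option names in the activation section; "allocate" asks LVM to replace a failed mirror leg or log from free extents in the volume group, while "remove" only drops the failed device:
    activation {
        mirror_image_fault_policy = "allocate"
        mirror_log_fault_policy = "allocate"
    }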
  • In Red Hat Enterprise Linux 5, InfiniBand support (specifically, the openib start script and the openib.conf file) was supplied by the openib package. In Red Hat Enterprise Linux 6, the openib package has been renamed to rdma. Additionally, the service has been renamed to rdma and the configuration file is now located at /etc/rdma/rdma.conf.
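    For example, to enable the renamed service at boot and start it immediately (a sketch assuming the standard SysV init tools shipped with Red Hat Enterprise Linux 6):
    chkconfig rdma on
    service rdma start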
  • The NFSv4 server in Red Hat Enterprise Linux 6 currently allows clients to mount using UDP and advertises NFSv4 over UDP with rpcbind. However, this configuration is not supported by Red Hat and violates the RFC 3530 standard.
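    Clients should therefore force TCP explicitly when mounting; a minimal sketch (the server name, export path, and mount point are assumptions):
    mount -t nfs4 -o proto=tcp server.example.com:/export /mnt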
  • If a device-mapper-multipath device is still open but all of its paths have been lost, a new table with no paths cannot be loaded for the device. Consequently, the multipath -ll command may return unusual output such as the following:
    mpatha (3600a59a0000c2fd0003079284c122fec) dm-0,
    size=2.0G hwhandler='0'
    |-+- policy='round-robin 0' prio=0 status=enabled
    | `- #:#:#:# -   #:#  failed faulty running
    `-+- policy='round-robin 0' prio=0 status=enabled
      |- #:#:#:# -   #:#  failed faulty running
      `- #:#:#:# -   #:#  failed faulty running
    
    Output of this type indicates that there are no paths to the device. The erroneous lines in the output that contain the string #:#:#:# will be removed in a future release.
  • The ext2 and ext3 filesystems do not use the page_mkwrite mechanism to intercept page faults, so the quota subsystem cannot account for the additional disk usage caused by memory-mapped writes when they are flushed to disk. Consequently, a user may exceed their disk block quota by issuing memory-mapped writes into a sparse region of a file. Note also that this is longstanding behavior in the ext2 and ext3 filesystems.
  • Parted in Red Hat Enterprise Linux 6 cannot handle Extended Address Volume (EAV) Direct Access Storage Devices (DASD) that have more than 65535 cylinders. Consequently, EAV DASD drives cannot be partitioned using parted, and installation on EAV DASD drives will fail. To work around this issue, complete the installation on a non-EAV DASD drive, then add the EAV device after installation using the tools provided in s390-utils, as sketched below.
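    A post-installation sketch using the low-level s390-utils tools (the device bus ID 0.0.1234, the device node /dev/dasdb, and the block size are assumptions; the low-level format is destructive, and whether fdasd can partition a given EAV volume depends on the s390-utils version):
    chccwdev -e 0.0.1234                # bring the DASD online
    dasdfmt -b 4096 -d cdl /dev/dasdb   # low-level format the volume
    fdasd -a /dev/dasdb                 # auto-create a single partition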
  • Systems with an Emulex FC controller that uses SLI-3 based firmware may encounter a kernel panic during installation. If the SAN disk is not required for installation, work around this issue by disconnecting the SAN connection from the Emulex FC controller. Note that this issue does not occur on SLI-4 based controllers. To determine the firmware interface of the adapter, run the following command:
    cat /sys/class/scsi_host/host{n}/fwrev
    
  • When multipath is configured to use user_friendly_names, it stores the binding between the wwid and the alias in /etc/multipath/bindings. When multipath creates devices during early boot (for example, when the root filesystem is on a multipath device), it looks at /etc/multipath/bindings in the initramfs. When it creates devices during normal operation, it looks at /etc/multipath/bindings in the root filesystem. Currently, these two files are not synchronized during initramfs creation. Because of this, naming conflicts may keep new multipath devices from being created after boot. To work around this, the bindings for the devices created by the initramfs must be copied into /etc/multipath/bindings after installation (a sketch follows the example below). The format of the bindings is:
    <alias><space><wwid>
    
    for example:
    mpatha 3600d0230000000000e13955cc3757801
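
    One way to copy the bindings out of the initramfs is sketched below, assuming the image is a gzip-compressed cpio archive as produced by dracut on Red Hat Enterprise Linux 6; review the resulting file for duplicate or conflicting aliases afterwards:
    zcat /boot/initramfs-$(uname -r).img | \
        cpio -i --quiet --to-stdout '*etc/multipath/bindings' >> /etc/multipath/bindings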
    
  • Direct Asynchronous IO (AIO) that is not issued on filesystem block boundaries, and that falls into a hole in a sparse file on ext4 or xfs filesystems, may corrupt file data if multiple I/O operations modify the same filesystem block. Specifically, if qemu-kvm is used with the aio=native IO mode over a sparse device image hosted on an ext4 or xfs filesystem, guest filesystem corruption will occur if partitions are not aligned with the host filesystem block size. Generally, do not use the aio=native option together with cache=none for QEMU. This issue can be avoided by using one of the following techniques (a short sketch of techniques 2 and 4 follows the list):
    1. Align AIOs on filesystem block boundaries, or do not write to sparse files using AIO on xfs or ext4 filesystems.
    2. KVM: Use a non-sparse system image file or allocate the space by zeroing out the entire file.
    3. KVM: Create the image using an ext3 host filesystem instead of ext4.
    4. KVM: Invoke qemu-kvm with aio=threads (this is the default).
    5. KVM: Align all partitions within the guest image to the host's filesystem block boundary (default 4k).
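    A short sketch of techniques 2 and 4 (the image path, image size, and remaining qemu-kvm options are assumptions):
    # 2. Fully allocate the raw image so guest writes never land in sparse holes.
    dd if=/dev/zero of=/var/lib/libvirt/images/guest.img bs=1M count=10240
    # 4. Use the thread-based AIO implementation for the guest drive.
    qemu-kvm -m 1024 -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none,aio=threads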
  • Mixing the iSCSI discoveryd mode and the normal discovery mode is not supported. When discoveryd mode is used, iscsid attempts to log in from all iSCSI ifaces found in /var/lib/iscsi/ifaces. If an iface cannot log in to the target, the log fills with failure messages every discoveryd_poll_inval seconds. To prevent this, delete the iface by running:
    iscsiadm -m iface -o delete -I ifacename
  • A change in the 2.6.31 Linux kernel made the net.ipv4.conf.default.rp_filter = 1 setting stricter about the traffic it accepts. Consequently, in Red Hat Enterprise Linux 6, if there are multiple interfaces on the same subnet and I/O is sent to the interface that is not the default route, the I/O will be dropped. Note that this applies to iSCSI iface binding when multiple interfaces are on the same subnet. To work around this, set the net.ipv4.conf.default.rp_filter parameter in /etc/sysctl.conf to 0 or 2, and reboot the machine.
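    For example, to switch the default to loose reverse-path filtering (a sketch; only the relevant line of /etc/sysctl.conf is shown):
    # /etc/sysctl.conf
    net.ipv4.conf.default.rp_filter = 2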
  • Attempting to run multiple LVM commands in quick succession might cause a backlog of these commands. Consequently, some of the requested operations might time out and subsequently fail.
  • dracut currently supports only one Fibre Channel over Ethernet (FCoE) connection for booting from the root device. Consequently, booting from a root device that spans multiple FCoE devices (for example, using RAID, LVM, or similar techniques) is not possible.
  • If an LVM volume requires physical volumes that are multipath or FCoE devices, the LVM volume will not automatically activate. To enable automatic LVM activation, create a udev rules file /etc/udev/rules.d/64-autolvm.rules with the following content:
    SUBSYSTEM!="block", GOTO="lvm_end"
    ACTION!="add|change", GOTO="lvm_end"
    KERNEL=="dm-[0-9]*", ACTION=="add", GOTO="lvm_end"
    ENV{ID_FS_TYPE}!="LVM*_member", GOTO="lvm_end"
    
    PROGRAM=="/bin/sh -c 'for i in $sys/$devpath/holders/dm-[0-9]*; do [ -e $$i ] && exit 0; done; exit 1;' ", \
        GOTO="lvm_end"
    
    RUN+="/bin/sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
    
    LABEL="lvm_end"
    
    Note, however, that this workaround may impact system performance.
  • The fscontext=, defcontext=, rootcontext=, and context= mount options should not be used for remount operations. Using these options can cause the remount of a manually mounted volume to fail, returning errors such as:
    mount: /dev/shm not mounted already, or bad option
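
    To apply a different context, unmount the volume and mount it again with the desired option rather than remounting; a minimal sketch (the device, mount point, and SELinux context are assumptions):
    umount /mnt/data
    mount -o context=system_u:object_r:samba_share_t:s0 /dev/sdb1 /mnt/data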
    