4. Release Notes for ppc
- This release includes WBEMSMT, a suite of web-based applications that provides a user-friendly management interface for Samba and DNS. For more information about WBEMSMT, refer to http://sblim.wiki.sourceforge.net/.
- Upgrading pm-utils from a Red Hat Enterprise Linux 5.1 Beta version of pm-utils will fail, resulting in the following error:
  error: unpacking of archive failed on file /etc/pm/sleep.d: cpio: rename
  To prevent this from occurring, delete the /etc/pm/sleep.d/ directory prior to upgrading. If /etc/pm/sleep.d contains any files, you can move those files to /etc/pm/hooks/.
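  For example, a minimal pre-upgrade clean-up might look like the following sketch (assuming any existing hook files should be preserved):
  mv /etc/pm/sleep.d/* /etc/pm/hooks/ 2>/dev/null   # preserve any custom hooks
  rmdir /etc/pm/sleep.d                             # remove the directory that blocks the upgrade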
- Hardware testing for the Mellanox MT25204 has revealed that an internal error occurs under certain high-load conditions. When the ib_mthca driver reports a catastrophic error on this hardware, it is usually related to an insufficient completion queue depth relative to the number of outstanding work requests generated by the user application.
  Although the driver will reset the hardware and recover from such an event, all existing connections are lost at the time of the error. This generally results in a segmentation fault in the user application. Further, if opensm is running at the time the error occurs, then it will have to be manually restarted in order to resume proper operation.
- Driver Update Disks now support Red Hat's Driver Update Program RPM-based packaging. If a driver disk uses the newer format, it is possible to include RPM packaged drivers that will be preserved across system updates.
  Please note that driver RPMs are copied only for the default kernel variant that is in use on the installed system. For example, installing a driver RPM on a system running the virtualized kernel will install the driver only for the virtualized kernel. The driver RPM will not be installed for any other installed kernel variant in the system.
  As such, on a system that has multiple kernel variants installed, you will need to boot the system on each kernel variant and install the driver RPM. For example, if your system has both bare-metal and virtualized kernels installed, boot your system using the bare-metal kernel and install the driver RPM. Then, reboot the system into the virtualized kernel and install the driver RPM again.
- During the lifetime of dom0, you cannot create guests (i.e. xm create) more than 32,750 times. For example, if you have guests rebooting in a loop, dom0 will fail to boot any guest after rebooting guests a total of 32,750 times. If this event occurs, restart dom0.
- The Red Hat Enterprise Linux 5.1 NFS server now supports referral exports. These exports are based on extensions to the NFSv4 protocol. Any NFS clients that do not support these extensions (namely, Red Hat Enterprise Linux releases prior to 5.1) will not be able to access these exports.
  As such, if an NFS client does not support these exports, any attempt to access these exports may fail with an I/O error. In some cases, depending on the client implementation, the failure may be more severe, including the possibility of a system crash.
  It is important that you take precautions to ensure that NFS referral exports are not accessed by clients that do not support them.
- GFS2 is an incremental advancement of GFS. This update applies several significant improvements that require a change to the on-disk file system format. GFS file systems can be converted to GFS2 using the gfs2_convert utility, which updates the metadata of a GFS file system accordingly (a sample conversion is shown after this note).
  While much improved since its introduction in Red Hat Enterprise Linux 5, GFS2 remains a Technology Preview. The release notes included in the distribution incorrectly state that GFS2 is fully supported. Nevertheless, benchmark tests indicate faster performance on the following:
  - heavy usage in a single directory and faster directory scans (Postmark benchmark)
  - synchronous I/O operations (the fstest benchmark indicates improved performance for messaging applications like TIBCO)
  - cached reads, as there is no longer any locking overhead
  - direct I/O to preallocated files
  - NFS file handle lookups
  - df, as allocation information is now cached
  In addition, GFS2 also features the following changes:
  - journals are now plain (though hidden) files instead of metadata; journals can now be dynamically added as additional servers mount a file system
  - quotas are now enabled and disabled by the mount option quota=<on|off|account>
  - quiesce is no longer needed on a cluster to replay journals for failure recovery
  - nanosecond timestamps are now supported
  - similar to ext3, GFS2 now supports the data=ordered mode
  - the attribute settings lsattr() and chattr() are now supported via standard ioctl()
  - file system sizes above 16TB are now supported
  - GFS2 is a standard file system, and can be used in non-clustered configurations
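  The following is a minimal conversion sketch; the device path and mount point are hypothetical, and the file system must be unmounted (and should be checked) before conversion:
  gfs_fsck /dev/myvg/mygfs                        # check the existing GFS file system first
  gfs2_convert /dev/myvg/mygfs                    # rewrite the on-disk metadata in GFS2 format
  mount -o quota=on /dev/myvg/mygfs /mnt/gfs2     # mount the converted file system with quotas enabled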
- Installing Red Hat Enterprise Linux 5.1 on HP BL860c blade systems may hang during the IP information request stage. This issue manifests when you have to select OK twice on the screen.
  If this occurs, reboot and perform the installation with Ethernet autonegotiation disabled. To do this, use the parameter ethtool="autoneg=off" when booting from the installation media. Doing so does not affect the final installed system.
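  For example, assuming the default boot prompt and boot image label of the installation media, the parameter can be passed as follows:
  boot: linux ethtool="autoneg=off"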
- The nohide export option is required on referral exports (i.e. exports that specify a referral server). This is because referral exports need to "cross over" a bound mount point. The nohide export option is required for such a "cross over" to be successful.
  For more information on bound mounts, refer to man exports 5.
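  As an illustration, a referral export entry in /etc/exports might look like the following; the export path, client specification, and referral server name are hypothetical:
  # refer clients to the same path on another server; nohide allows the bound mount to be crossed
  /export/data  *(ro,nohide,refer=/export/data@referral-server.example.com)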
- This update includes the lvm2 event monitoring daemon. If you are already using lvm2 mirroring, perform the following steps to ensure that all monitoring functions are upgraded properly:
  - Deactivate all mirrored lvm2 logical volumes before updating. To do this, use the command lvchange -a n <volume group or mirrored volume>.
  - Stop the old lvm2 event daemon using killall -HUP dmeventd.
  - Perform the upgrade of all related RPM packages, namely device-mapper and lvm2.
  - Reactivate all mirrored volumes again using lvchange -a y <volume group or mirrored volume>.
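  Taken together, the sequence might look like the following sketch; the volume group and mirrored volume names are hypothetical, and yum is only one of several ways to upgrade the packages:
  lvchange -a n myvg/mymirrorlv      # deactivate the mirrored volume
  killall -HUP dmeventd              # stop the old event daemon
  yum update device-mapper lvm2      # upgrade the related packages
  lvchange -a y myvg/mymirrorlv      # reactivate the mirrored volume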
- Rapid Virtualization Indexing (RVI) is now supported on 64-bit, 32-bit, and 32-bit PAE kernels. However, RVI can only translate 32-bit guest virtual addresses on the 32-bit PAE hypervisor.
  As such, if a guest is running a PAE kernel with more than 3840MB of RAM, a wrong address translation error will occur. This can crash the guest.
  It is recommended that you use the 64-bit kernel if you intend to run guests with more than 4GB of physical RAM under RVI.
- Running 16 cores or more using AMD Rev F processors may result in system resets when performing fully-virtualized guest installations.
- Installing the systemtap-runtime package will result in a transaction check error if the systemtap package is already installed. Further, upgrading Red Hat Enterprise Linux 5 to 5.1 will also fail if the systemtap package is already installed.
  As such, remove the systemtap package using the command rpm -e systemtap-0.5.12-1.el5 before installing systemtap-runtime or performing an upgrade.
- When setting up NFSROOT, BOOTPROTO must be set as BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth0.
  If your environment requires a different setting for BOOTPROTO, then temporarily set BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth0 before initially creating the initrd. You can reset the original value of BOOTPROTO after the initrd is created.
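  A minimal /etc/sysconfig/network-scripts/ifcfg-eth0 for building the initrd might therefore contain the following (the DEVICE and ONBOOT values are shown only as typical settings):
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=dhcp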
- nfsroot is fully supported in this update. This allows users to run Red Hat Enterprise Linux 5.1 with its root file system (/) mounted via NFS. nfsroot was originally introduced in Red Hat Enterprise Linux 5 as a subset of the Technology Preview feature Stateless Linux. The full implementation of Stateless Linux remains a Technology Preview.
  At present, nfsroot has the following restrictions:
  - Each client must have its own separate root file system on the NFS server. This restriction applies even when read-only root is in use.
  - SWAP is not supported over NFS.
  - SELinux cannot be enabled on nfsroot clients. In general, Red Hat does not recommend disabling SELinux. As such, customers must carefully consider the security implications of this action.
  The release notes included in the distribution of Red Hat Enterprise Linux 5.1 contain outdated instructions on how to set up nfsroot. Refer to the following procedure instead. This procedure assumes that your network device is eth0 and the associated network driver is tg3; you may need to adjust it according to your system configuration:
  - Create the initrd in your home directory using the following command:
    mkinitrd --with=tg3 --rootfs=nfs --net-dev=eth0 --rootdev=<nfs server ip>:/<path to nfsroot> ~/initrd-<kernel-version>.img <kernel-version>
    This initrd must be created using the Red Hat Enterprise Linux 5.1 kernel.
  - Next, create a zImage.initrd image from the initrd generated earlier. zImage.initrd is a compressed kernel and initrd in one image. Use the following command:
    mkzimage /boot/vmlinuz-<kernel-version> /boot/config-<kernel-version> /boot/System.map-<kernel-version> ~/initrd-<kernel-version>.img /usr/share/ppc64-utils/zImage.stub ~/zImage.initrd-<kernel-version>
  - Copy the created zImage.initrd-<kernel-version> to an exportable location on your tftp server.
  - Ensure that the exported nfsroot file system on the nfs server contains the necessary binaries and modules. These binaries and modules must correspond to the version of the kernel used to create the initrd in the first step.
  - Configure the DHCP server to point the client to the target zImage.initrd-<kernel-version>. To do this, add the following entries to the /etc/dhcpd.conf file of the DHCP server (a sample host entry is shown after this procedure):
    next-server <tftp hostname/IP address>;
    filename "<tftp-path>/zImage.initrd";
    Note that <tftp-path> should specify the path to the zImage.initrd from within the tftp-exported directory. For example, if the absolute path to the zImage.initrd is /tftpboot/mykernels/zImage.initrd and /tftpboot/ is the tftp-exported directory, then <tftp-path> should be mykernels/zImage.initrd.
  - Finally, set your system's boot configuration parameters to make it boot first from the network device (in this example, the network device is eth0).
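  As an illustration, a complete host entry in /etc/dhcpd.conf for one diskless client might look like the following; the host name, MAC address, and IP addresses are hypothetical examples only:
  host qs21-client1 {
      hardware ethernet 00:11:22:33:44:55;
      fixed-address 192.168.0.50;
      next-server 192.168.0.10;
      filename "mykernels/zImage.initrd-<kernel-version>";
  }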
  For more information about setting up a Red Hat Enterprise Linux 5.1 network installation for the BladeCenter QS21, refer to http://www-01.ibm.com/chips/techlib/techlib.nsf/products/Cell_Broadband_Engine.
- The QLogic iSCSI Expansion Card for the IBM Bladecenter provides both ethernet and iSCSI functions. Some parts on the card are shared by both functions. However, the current qla3xxx and qla4xxx drivers support ethernet and iSCSI functions individually. Neither driver supports the use of ethernet and iSCSI functions simultaneously.
  As such, using both ethernet and iSCSI functions simultaneously may hang the device. This could result in data loss and filesystem corruption on iSCSI devices, or network disruptions on other connected ethernet devices.
- When using virt-manager to add disks to an existing guest, duplicate entries may be created in the guest's /etc/xen/<domain name> configuration file. These duplicate entries will prevent the guest from booting. As such, you should remove these duplicate entries.
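  As an illustration only, a duplicated entry in the disk line of a hypothetical /etc/xen/myguest configuration file might look like the following; removing one of the two identical entries allows the guest to boot again:
  disk = [ "tap:aio:/var/lib/xen/images/myguest.img,xvda,w",
           "tap:aio:/var/lib/xen/images/myguest.img,xvda,w" ]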
- Repeatedly migrating a guest between two hosts may cause one host to panic. If a host is rebooted after migrating a guest out of the system and before migrating the same guest back, the panic will not occur.
- sysreport is being deprecated in favor of sos. To install sos, run yum install sos. This command installs sos and removes sysreport. It is recommended that you update any existing kickstart files to reflect this.
  After installing sos, use the command sosreport to invoke it. Using the command sysreport generates a warning that sysreport is now deprecated; continuing will invoke sosreport. If you need to use the sysreport tool specifically, use the command sysreport.legacy to invoke it.
  For more information about sosreport, refer to man sosreport and sosreport --help.
4.1. Installation-Related Notes
This section includes information specific to Anaconda and the installation of Red Hat Enterprise Linux 5.1.
To upgrade an already-installed Red Hat Enterprise Linux 5, you can use Red Hat Network to update those packages that have changed.
You may also use Anaconda to perform a fresh installation of Red Hat Enterprise Linux 5.1 or to perform an upgrade from the latest updated version of Red Hat Enterprise Linux 4 to Red Hat Enterprise Linux 5.1. Anaconda can also be used to upgrade an already-installed Red Hat Enterprise Linux 5.
- The minimum RAM required to install Red Hat Enterprise Linux 5.1 is now 1GB; the recommended RAM is 2GB. If a machine has less than 1GB RAM, the installation process may hang.
  Further, PPC machines that have 1GB of RAM experience significant performance issues under certain RAM-intensive workloads. For a Red Hat Enterprise Linux 5.1 system to perform RAM-intensive processes optimally, the machine should be equipped with 4GB of RAM. This ensures that the system has the same number of physical pages as PPC machines (using 512MB of RAM) running Red Hat Enterprise Linux 4.5 or earlier.
- If you are copying the contents of the Red Hat Enterprise Linux 5 CD-ROMs (in preparation for a network-based installation, for example), be sure to copy the CD-ROMs for the operating system only. Do not copy the Supplementary CD-ROM, or any of the layered product CD-ROMs, as this will overwrite files necessary for Anaconda's proper operation.
  The contents of the Supplementary CD-ROM and other layered product CD-ROMs must be installed after Red Hat Enterprise Linux 5.1 has been installed.
- When installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, do not use the kernel-xen kernel. Using this kernel on fully virtualized guests can cause your system to hang.
  If you are using an Installation Number when installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, be sure to deselect the Virtualization package group during the installation. The Virtualization package group option installs the kernel-xen kernel.
  Note that paravirtualized guests are not affected by this issue. Paravirtualized guests always use the kernel-xen kernel.
- If you are using the Virtualized kernel when upgrading from Red Hat Enterprise Linux 5 to 5.1, you must reboot after completing the upgrade. You should then boot the system using the updated Virtualized kernel.
  The hypervisors of Red Hat Enterprise Linux 5 and 5.1 are not ABI-compatible. If you do not boot the system after upgrading using the updated Virtualized kernel, the upgraded Virtualization RPMs will not match the running kernel.
4.1.1. Installation / Boot for iSCSI software initiator (open-iscsi)
iSCSI installation and boot was originally introduced in Red Hat Enterprise Linux 5 as a Technology Preview. This feature is now fully supported, with the restrictions described below.
This capability has three configurations depending on whether you are:
- using a hardware iSCSI initiator (such as the QLogic qla4xxx)
- using the open-iscsi initiator on a system with firmware boot support for iSCSI (such as iSCSI Boot Firmware, or a version of Open Firmware that features the iSCSI boot capability)
- using the open-iscsi initiator on a system with no firmware boot support for iSCSI
4.1.1.1. Using a Hardware iSCSI Initiator
If you are using a hardware iSCSI initiator, you can use the card's BIOS set-up utility to enter the IP address and other parameters required to obtain access to the remote storage. The logical units of the remote storage will be available in Anaconda as standard sd devices, with no additional set-up required.
If you need to determine the initiator's qualified name (IQN) in order to configure the remote storage server, follow these steps during installation:
- Go to the installer page where you select which disk drives to use for the installation.
- Click on.
- Click on.
- The iSCSI IQN will be displayed on that screen.
4.1.1.2. Using open-iscsi On A System With Firmware Boot Support for iSCSI
If you are using the open-iscsi software initiator on a system with firmware boot support for iSCSI, use the firmware's setup utility to enter the IP address and other parameters needed to access the remote storage. Doing this configures the system to boot from the remote iSCSI storage.
Currently, Anaconda does not access the iSCSI information held by the firmware. Instead, you must manually enter the target IP address during installation. To do so, determine the IQN of the initiator using the procedure described above. Afterwards, on the same installer page where the initiator IQN is displayed, specify the IP address of the iSCSI target you wish to install to.
After manually specifying the IP address of the iSCSI target, the logical units on the iSCSI targets will be available for installation. The initrd created by Anaconda will then contain the IQN and IP address of the iSCSI target.
If the IQN or IP address of the iSCSI target is changed in the future, enter the iBFT or Open Firmware set-up utility on each initiator and change the corresponding parameters. Afterwards, modify the initrd (stored in the iSCSI storage) for each initiator as follows:
- Expand the initrd using gunzip.
- Unpack it using cpio -i.
- In the init file, search for the line containing the string iscsistartup. This line also contains the IQN and IP address of the iSCSI target; update this line with the new IQN and IP address.
- Re-pack the initrd using cpio -o.
- Re-compress the initrd using gzip.
The ability of the operating system to obtain iSCSI information held by the Open Firmware / iBFT firmware is planned for a future release. Such an enhancement will remove the need to modify the initrd (stored in the iSCSI storage) for each initiator whenever the IP address or IQN of the iSCSI target is changed.
4.1.1.3. Using open-iscsi On A System With No Firmware Boot Support for iSCSI
If you are using the open-iscsi software initiator on a system with no firmware boot support for iSCSI, use a network boot capability (such as PXE/tftp). In this case, follow the same procedure described earlier to determine the initiator IQN and specify the IP address of the iSCSI target. Once completed, copy the initrd to the network boot server and set up the system for network boot.
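For example, assuming the network boot server exports /tftpboot over tftp (the host name and paths are hypothetical), the initrd can be copied as follows:
scp initrd-<kernel-version>.img bootserver.example.com:/tftpboot/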
Similarly, if the IP address or IQN of the iSCSI target is changed, the initrd should be modified accordingly as well. To do so, use the same procedure described earlier to modify the initrd for each initiator.