Appendix G. Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5
This appendix provides a procedure for upgrading a Red Hat cluster from RHEL 4 to RHEL 5. The procedure also includes the changes required for Red Hat GFS and CLVM. For more information about Red Hat GFS, refer to Global File System: Configuration and Administration. For more information about LVM for clusters, refer to LVM Administrator's Guide: Configuration and Administration.
Upgrading a Red Hat Cluster from RHEL 4 to RHEL 5 consists of stopping the cluster, converting the configuration from a GULM cluster to a CMAN cluster (only for clusters configured with the GULM cluster manager/lock manager), adding node IDs, and updating RHEL and cluster software. To upgrade a Red Hat Cluster from RHEL 4 to RHEL 5, follow these steps:
- Stop client access to cluster high-availability services.
- At each cluster node, stop the cluster software as follows:
- Stop all high-availability services.
- Run service rgmanager stop.
- Run service gfs stop, if you are using Red Hat GFS.
- Run service clvmd stop, if CLVM has been used to create clustered volumes.
  Note
  If clvmd is already stopped, an error message is displayed:
  # service clvmd stop
  Stopping clvm:                                             [FAILED]
  The error message is the expected result when running service clvmd stop after clvmd has stopped.
- Depending on the type of cluster manager (either CMAN or GULM), run the following command or commands:
  - CMAN: Run service fenced stop; service cman stop.
  - GULM: Run service lock_gulmd stop.
- Run service ccsd stop.
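For reference, the full per-node stop sequence can be run as a short script. This is a minimal sketch, assuming a CMAN cluster that uses both Red Hat GFS and CLVM; omit the gfs and clvmd lines if those services are not in use, and substitute service lock_gulmd stop for the fenced and cman lines on a GULM cluster:

  #!/bin/sh
  # Stop the cluster software on this node, in the order given above.
  service rgmanager stop   # high-availability services
  service gfs stop         # only if Red Hat GFS is used
  service clvmd stop       # only if CLVM is used; [FAILED] means it was already stopped
  service fenced stop      # CMAN clusters only
  service cman stop        # CMAN clusters only
  service ccsd stop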
- Disable cluster software from starting during reboot. At each node, run /sbin/chkconfig as follows:
  # chkconfig --level 2345 rgmanager off
  # chkconfig --level 2345 gfs off
  # chkconfig --level 2345 clvmd off
  # chkconfig --level 2345 fenced off
  # chkconfig --level 2345 cman off
  # chkconfig --level 2345 ccsd off
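The same commands can be issued in a single loop; this sketch simply mirrors the list above:

  # Disable every cluster service for runlevels 2, 3, 4, and 5 in one pass
  for svc in rgmanager gfs clvmd fenced cman ccsd; do
      chkconfig --level 2345 "$svc" off
  done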
- Edit the cluster configuration file as follows:
- At a cluster node, open /etc/cluster/cluster.conf with a text editor.
- If your cluster is configured with GULM as the cluster manager, remove the GULM XML elements (<gulm> and </gulm>) and their content from /etc/cluster/cluster.conf. GULM is not supported in Red Hat Cluster Suite for RHEL 5. Example G.1, “GULM XML Elements and Content” shows an example of GULM XML elements and content.
- At the <clusternode> element for each node in the configuration file, insert nodeid="number" after name="name". Use a number value unique to that node. Inserting it there follows the format convention of the <clusternode> element in a RHEL 5 cluster configuration file; a sketch follows this list.
  Note
  The nodeid parameter is required in Red Hat Cluster Suite for RHEL 5. The parameter is optional in Red Hat Cluster Suite for RHEL 4. If your configuration file already contains nodeid parameters, skip this step.
- When you have completed editing /etc/cluster/cluster.conf, save the file and copy it to the other nodes in the cluster (for example, using the scp command).
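As an illustration, a <clusternode> entry might change as shown below; the node name is hypothetical, and any other attributes in your file stay as they are:

  <!-- RHEL 4 entry, before editing: -->
  <clusternode name="node1.example.com">

  <!-- RHEL 5 entry, with a unique nodeid inserted after name: -->
  <clusternode name="node1.example.com" nodeid="1">

The edited file can then be copied to the other nodes with scp; for example (target host hypothetical):

  # scp /etc/cluster/cluster.conf root@node2.example.com:/etc/cluster/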
- If your cluster is a GULM cluster and uses Red Hat GFS, change the superblock of each GFS file system to use the DLM locking protocol. Use the gfs_tool command with the sb and proto options, specifying lock_dlm for the DLM locking protocol:
  gfs_tool sb device proto lock_dlm
  For example:
  # gfs_tool sb /dev/my_vg/gfs1 proto lock_dlm
  You shouldn't change any of these values if the filesystem is mounted.
  Are you sure? [y/n] y
  current lock protocol name = "lock_gulm"
  new lock protocol name = "lock_dlm"
  Done
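When several GFS file systems need converting, the command can be repeated in a loop. A minimal sketch, with hypothetical device paths; each invocation still asks for the y/n confirmation shown above, and the file systems must be unmounted:

  for dev in /dev/my_vg/gfs1 /dev/my_vg/gfs2; do
      gfs_tool sb "$dev" proto lock_dlm
  done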
- Update the software in the cluster nodes to RHEL 5 and Red Hat Cluster Suite for RHEL 5. You can acquire and update software through Red Hat Network channels for RHEL 5 and Red Hat Cluster Suite for RHEL 5.
- Run lvmconf --enable-cluster to enable clustered locking in the LVM configuration; a quick check is sketched below.
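To confirm the change, the locking_type setting in /etc/lvm/lvm.conf can be inspected; on a stock configuration, clustered locking corresponds to locking_type = 3 (exact spacing in the output may differ):

  # grep locking_type /etc/lvm/lvm.conf
      locking_type = 3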
- Enable cluster software to start upon reboot. At each node, run /sbin/chkconfig as follows:
  # chkconfig --level 2345 rgmanager on
  # chkconfig --level 2345 gfs on
  # chkconfig --level 2345 clvmd on
  # chkconfig --level 2345 cman on
- Reboot the nodes. The RHEL 5 cluster software should start as the nodes reboot. Upon verification that the Red Hat cluster is running (see the sketch below), the upgrade is complete.
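One way to verify that the cluster is running is with the RHEL 5 cluster status tools; the exact output depends on the cluster, so this is only a sketch:

  # cman_tool status   # reports cluster name, membership, and quorum state
  # clustat            # reports per-node and per-service status (provided by rgmanager)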
Example G.1. GULM XML Elements and Content
<gulm>
  <lockserver name="gulmserver1"/>
  <lockserver name="gulmserver2"/>
  <lockserver name="gulmserver3"/>
</gulm>