Appendix A. Upgrading GFS
To upgrade a node to Red Hat GFS 6.1 from earlier versions of Red Hat GFS, you must convert the GFS cluster configuration archive (CCA) to a Red Hat Cluster Suite cluster configuration system (CCS) configuration file (/etc/cluster/cluster.conf) and convert GFS pool volumes to LVM2 volumes.
This appendix contains instructions for upgrading from GFS 6.0 (or GFS 5.2.1) to Red Hat GFS 6.1, using GULM as the lock manager.
Note
You must retain GULM lock management for the upgrade to Red Hat GFS 6.1; that is, you cannot change from GULM lock management to DLM lock management during the upgrade to Red Hat GFS 6.1. However, after the upgrade to GFS 6.1, you can change lock managers.
The following procedure demonstrates upgrading to Red Hat GFS 6.1 from a GFS 6.0 (or GFS 5.2.1) configuration with an example pool configuration for a pool volume named argus:

poolname argus
subpools 1
subpool 0 512 1 gfs_data
pooldevice 0 0 /dev/sda1
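For reference, the procedure below also converts the CCA (stored in its own pool volume) into the CCS configuration file /etc/cluster/cluster.conf. The following is only a rough, hypothetical sketch of the shape a minimal GULM-based cluster.conf can take; the real file is produced by the ccs_tool upgrade command and reflects the nodes, lock servers, and fencing defined in your CCA, and every name shown here is a placeholder:

<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
    <gulm>
        <lockserver name="lock-01"/>
        <lockserver name="lock-02"/>
        <lockserver name="lock-03"/>
    </gulm>
    <clusternodes>
        <clusternode name="gfs-01">
            <fence>
                <!-- fencing methods carried over from the CCA -->
            </fence>
        </clusternode>
        <!-- one clusternode entry per cluster node -->
    </clusternodes>
    <fencedevices>
        <!-- fence device definitions carried over from the CCA -->
    </fencedevices>
</cluster>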
- Halt the GFS nodes and the lock server nodes as follows:
  - Unmount GFS file systems from all nodes.
  - Stop the lock servers; at each lock server node, stop the lock server as follows:
    # service lock_gulmd stop
  - Stop ccsd at all nodes; at each node, stop ccsd as follows:
    # service ccsd stop
  - Deactivate pools; at each node, deactivate GFS pool volumes as follows:
    # service pool stop
- Uninstall Red Hat GFS RPMs (see the package-removal sketch following this procedure).
- Install new software:
  - Install Red Hat Enterprise Linux version 4 software (or verify that it is installed).
  - Install Red Hat Cluster Suite and Red Hat GFS RPMs.
- At all GFS 6.1 nodes, create a cluster configuration file directory (/etc/cluster) and upgrade the CCA (in this example, located in /dev/pool/cca) to the new Red Hat Cluster Suite CCS configuration file format by running the ccs_tool upgrade command as shown in the following example:
  # mkdir /etc/cluster
  # ccs_tool upgrade /dev/pool/cca > /etc/cluster/cluster.conf
- At all GFS 6.1 nodes, start ccsd, run the lock_gulmd -c command, and start clvmd as shown in the following example:
  # ccsd
  # lock_gulmd -c
  Warning! You didn't specify a cluster name before --use_ccs
  Letting ccsd choose which cluster we belong to.
  # clvmd
  Note
  Ignore the warning message following the lock_gulmd -c command. Because the cluster name is already included in the converted configuration file, there is no need to specify a cluster name when issuing the lock_gulmd -c command.
- At all GFS 6.1 nodes, run vgscan as shown in the following example:
  # vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "argus" using metadata type pool
- At one GFS 6.1 node, convert the pool volume to an LVM2 volume by running the vgconvert command as shown in the following example:
  # vgconvert -M2 argus
  Volume group argus successfully converted
- At all GFS 6.1 nodes, run vgchange -ay as shown in the following example:
  # vgchange -ay
  1 logical volume(s) in volume group "argus" now active
- At the first node to mount a GFS file system, run the mount command with the upgrade option as shown in the following example:
  # mount -t gfs -o upgrade /dev/pool/argus /mnt/gfs1
Note
This step needs to be done only once, on the first mount of the GFS file system.
Note
If static minor numbers were used on pool volumes and the GFS 6.1 nodes are using LVM2 for other purposes (for example, the root file system), there may be problems activating the pool volumes under GFS 6.1 because of conflicting static minor numbers. Refer to the following Bugzilla report for more information:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=146035
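As noted in the uninstall step of the procedure above, the exact set of Red Hat GFS RPMs to remove varies with the GFS version and kernel variant installed, so no single rpm -e command line applies everywhere. A minimal sketch of one way to identify and then remove the old packages, assuming they follow the usual GFS, pool, ccs, and gulm naming, is:

# rpm -qa | grep -iE 'gfs|pool|ccs|gulm'
# rpm -e <each package reported by the previous command>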