Appendix A. Upgrading GFS

To upgrade a node to Red Hat GFS 6.1 from earlier versions of Red Hat GFS, you must convert the GFS cluster configuration archive (CCA) to a Red Hat Cluster Suite cluster configuration system (CCS) configuration file (/etc/cluster/cluster.conf) and convert GFS pool volumes to LVM2 volumes.
This appendix contains instructions for upgrading from GFS 6.0 (or GFS 5.2.1) to Red Hat GFS 6.1, using GULM as the lock manager.

Note

You must retain GULM lock management for the upgrade; that is, you cannot change from GULM lock management to DLM lock management during the upgrade to Red Hat GFS 6.1. After the upgrade, however, you can change lock managers.
The following procedure demonstrates upgrading to Red Hat GFS 6.1 from a GFS 6.0 (or GFS 5.2.1) configuration, using the following example pool configuration for a pool volume named argus:
poolname argus
subpools 1
subpool 0 512 1 gfs_data
pooldevice 0 0 /dev/sda1
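
For reference, the CCS configuration file that the ccs_tool upgrade command (step 3 of the procedure below) writes to /etc/cluster/cluster.conf is an XML file. The following is a minimal sketch of a GULM-based cluster.conf; the cluster name, node names, and fence device shown here are illustrative placeholders only, not values derived from the argus example:

<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
        <!-- GULM lock servers (typically an odd number of nodes) -->
        <gulm>
                <lockserver name="node01"/>
                <lockserver name="node02"/>
                <lockserver name="node03"/>
        </gulm>
        <clusternodes>
                <clusternode name="node01">
                        <fence>
                                <method name="single">
                                        <device name="apc1" port="1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node02">
                        <fence>
                                <method name="single">
                                        <device name="apc1" port="2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="apc1" agent="fence_apc" ipaddr="10.0.0.1" login="admin" passwd="password"/>
        </fencedevices>
</cluster>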
  1. Halt the GFS nodes and the lock server nodes as follows:
    1. Unmount GFS file systems from all nodes.
    2. Stop the lock servers; at each lock server node, stop the lock server as follows:
      # service lock_gulmd stop
    3. Stop ccsd at all nodes; at each node, stop ccsd as follows:
      # service ccsd stop
    4. Deactivate pools; at each node, deactivate GFS pool volumes as follows:
      # service pool stop
    5. Uninstall Red Hat GFS RPMs.
  2. Install new software:
    1. Install Red Hat Enterprise Linux version 4 software (or verify that it is installed).
    2. Install Red Hat Cluster Suite and Red Hat GFS RPMs.
  3. At all GFS 6.1 nodes, create a cluster configuration file directory (/etc/cluster) and upgrade the CCA (in this example, located in /dev/pool/cca) to the new Red Hat Cluster Suite CCS configuration file format by running the ccs_tool upgrade command as shown in the following example:
    # mkdir /etc/cluster
    # ccs_tool upgrade /dev/pool/cca > /etc/cluster/cluster.conf
  4. At all GFS 6.1 nodes, start ccsd, run the lock_gulmd -c command, and start clvmd as shown in the following example:
    # ccsd
    # lock_gulmd -c 
    Warning! You didn't specify a cluster name before --use_ccs
    Letting ccsd choose which cluster we belong to.
    # clvmd

    Note

    Ignore the warning message following the lock_gulmd -c command. Because the cluster name is already included in the converted configuration file, there is no need to specify a cluster name when issuing the lock_gulmd -c command.
  5. At all GFS 6.1 nodes, run vgscan as shown in the following example:
    # vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "argus" using metadata type pool
    
  6. At one GFS 6.1 node, convert the pool volume to an LVM2 volume by running the vgconvert command as shown in the following example:
    # vgconvert -M2 argus
      Volume group argus successfully converted
    
  7. At all GFS 6.1 nodes, run vgchange -ay as shown in the following example:
    # vgchange -ay
      1 logical volume(s) in volume group "argus" now active
    
  8. At the first node to mount a GFS file system, run the mount command with the upgrade option as shown in the following example:
    # mount -t gfs -o upgrade /dev/pool/argus /mnt/gfs1

    Note

This step needs to be done only once, on the first mount of the GFS file system. Subsequent mounts do not require the upgrade option; see the example following this procedure.

    Note

If static minor numbers were used on pool volumes and the GFS 6.1 nodes are using LVM2 for other purposes (for example, the root file system), conflicting static minor numbers may prevent the pool volumes from being activated under GFS 6.1. Refer to the following Bugzilla report for more information:
    https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=146035
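
After the initial upgrade mount, the file system is mounted in the usual way, without the upgrade option. The following is a minimal sketch of a follow-on mount and an illustrative /etc/fstab entry; it reuses the device path and mount point from step 8, although on a converted system the LVM2 device node for the volume may differ:

# mount -t gfs /dev/pool/argus /mnt/gfs1

/dev/pool/argus    /mnt/gfs1    gfs    defaults    0 0

Standard LVM2 reporting commands such as vgs and lvs can also be used to verify that the converted argus volume group is visible on every node.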