
6.4. Upgrading to Red Hat Gluster Storage 3.5


  1. Disable all repositories

    # subscription-manager repos --disable='*'
  2. Subscribe to the Red Hat Enterprise Linux 7 channel

    # subscription-manager repos --enable=rhel-7-server-rpms
  3. Check for stale Red Hat Enterprise Linux 6 packages

    Check for any stale Red Hat Enterprise Linux 6 packages remaining after the operating system upgrade:
    # rpm -qa | grep el6

    Important

    If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support to determine the appropriate course of action for these packages.
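This check (repeated later in the verification step) can be wrapped in a small helper that fails loudly when stale packages remain. The sketch below is illustrative and not part of the product; the `check_stale_el6` name is our own, and the function reads the package list on standard input so it can be fed by `rpm -qa`:

```shell
# Hypothetical helper: read a package list on stdin and fail if any
# el6 packages are present.
check_stale_el6() {
  stale=$(grep el6 || true)   # grep exits non-zero when nothing matches
  if [ -n "$stale" ]; then
    echo "Stale Red Hat Enterprise Linux 6 packages found:"
    echo "$stale"
    return 1
  fi
  echo "No stale el6 packages found."
}

# On the upgraded node:
#   rpm -qa | check_stale_el6
```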
  4. Update and reboot

    Update the Red Hat Enterprise Linux 7 packages and reboot.
    # yum update
    # reboot
  5. Verify the version number

    Ensure that the latest version of Red Hat Enterprise Linux 7 is shown when you view the `redhat-release` file:
    # cat /etc/redhat-release
  6. Subscribe to the required channels

    1. Subscribe to the Gluster channel:
      # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
    2. If you use Samba, enable its repository.
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
    3. If you use NFS-Ganesha, enable its repository.
      # subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
    4. If you use gdeploy, enable the Ansible repository:
      # subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
  7. Install and update Gluster

    1. If you used a Red Hat Enterprise Linux 7 ISO, install Red Hat Gluster Storage 3.5 using the following command:
      # yum install redhat-storage-server
      This is already installed if you used a Red Hat Gluster Storage 3.5 ISO based on Red Hat Enterprise Linux 7.
    2. Update Red Hat Gluster Storage to the latest packages using the following command:
      # yum update
  8. Verify the installation and update

    1. Check the current version number of the updated Red Hat Gluster Storage system:
      # cat /etc/redhat-storage-release

      Important

      The version number should be 3.5.
    2. Ensure that no Red Hat Enterprise Linux 6 packages are present:
      # rpm -qa | grep el6

      Important

      If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support to determine the appropriate course of action for these packages.
  9. Install and configure Firewalld

    1. Install and start the firewall daemon using the following commands:
      # yum install firewalld
      # systemctl start firewalld
    2. Add the Gluster process to the firewall:
      # firewall-cmd --zone=public --add-service=glusterfs --permanent
    3. Add the required services and ports to firewalld. For more information, see Considerations for Red Hat Gluster Storage.
    4. Reload the firewall using the following command:
      # firewall-cmd --reload
  10. Start the Gluster processes

    1. Start the glusterd process:
      # systemctl start glusterd
  11. Update Gluster op-version

    Update the Gluster op-version to the required maximum version using the following commands:
    # gluster volume get all cluster.max-op-version
    # gluster volume set all cluster.op-version op_version

    Note

    70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After upgrading the cluster.op-version, enable granular-entry-heal for the volume using the following command:
    # gluster volume heal $VOLNAME granular-entry-heal enable
    The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it comes into effect only after the op-version has been raised. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
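The two op-version commands above can be combined so that the cluster is raised to the maximum value it supports. The `parse_op_version` helper below is a sketch of our own (not a product command); it extracts the value column from the standard two-column output of `gluster volume get`:

```shell
# Hypothetical helper: extract the numeric value from the output of
# `gluster volume get all cluster.max-op-version`, which prints a
# two-column "Option  Value" table.
parse_op_version() {
  awk '/cluster\.max-op-version/ {print $2}'
}

# On a cluster node you would run (not executed here):
#   max=$(gluster volume get all cluster.max-op-version | parse_op_version)
#   gluster volume set all cluster.op-version "$max"
```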
  12. Set up Samba and CTDB

    If the Gluster setup on Red Hat Enterprise Linux 6 had Samba and CTDB configured, you should have the following available on the updated Red Hat Enterprise Linux 7 system:
    • CTDB volume
    • /etc/ctdb/nodes file
    • /etc/ctdb/public_addresses file
    Perform the following steps to reconfigure Samba and CTDB:
    1. Configure the firewall for Samba:
      # firewall-cmd --zone=public --add-service=samba --permanent
      # firewall-cmd --zone=public --add-port=4379/tcp --permanent
    2. Subscribe to the Samba channel:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
    3. Update Samba to the latest packages:
      # yum update
    4. Configure CTDB for Samba. For more information, see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip the volume creation step, because volumes that existed before the upgrade persist after the upgrade.
    5. In the following files, replace the value all in the statement META="all" with the volume name:
      /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
      /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
      For example, if the volume name is ctdb_volname, META="all" in these files should be changed to META="ctdb_volname".
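The substitution in this step can be scripted with sed. The `set_ctdb_meta` function below is a sketch (the name is ours); it takes the volume name followed by the hook-script paths:

```shell
# Hypothetical helper: replace META="all" with META="<volname>" in each
# file given on the command line. Edits files in place; keep a backup
# copy if you are unsure.
set_ctdb_meta() {
  volname=$1
  shift
  for f in "$@"; do
    sed -i "s/META=\"all\"/META=\"$volname\"/" "$f"
  done
}

# On the upgraded node, for a volume named ctdb_volname:
#   set_ctdb_meta ctdb_volname \
#     /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh \
#     /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
```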
    6. Restart the CTDB volume using the following commands:
      # gluster volume stop volume_name
      # gluster volume start volume_name
    7. Start the CTDB process:
      # systemctl start ctdb
    8. Share the volume over Samba if required. See Sharing Volumes over SMB.
  13. Start the volumes and geo-replication

    1. Start the required volumes using the following command:
      # gluster volume start volume_name
    2. Mount the meta-volume:
      # mount /var/run/gluster/shared_storage/

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
      If this command does not work, review the content of the /etc/fstab file and ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
      hostname:/gluster_shared_storage   /var/run/gluster/shared_storage/   glusterfs   defaults   0 0
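If the mount fails, the fstab entry can be checked mechanically. The validator below is a sketch (the `check_meta_fstab_line` name is ours) and assumes the pre-Batch-Update-3 mount point /var/run/gluster/shared_storage/; adjust the expected path to /run/gluster/shared_storage/ on 3.5 Batch Update 3 and later:

```shell
# Hypothetical validator: succeed only if the given fstab line mounts
# gluster_shared_storage with type glusterfs on the expected mount point.
check_meta_fstab_line() {
  printf '%s\n' "$1" | awk '
    $1 ~ /:\/gluster_shared_storage$/ &&
    $2 == "/var/run/gluster/shared_storage/" &&
    $3 == "glusterfs" { ok = 1 }
    END { exit !ok }'
}

# Example (not run against a real /etc/fstab):
#   grep gluster_shared_storage /etc/fstab | while read -r line; do
#     check_meta_fstab_line "$line" && echo "fstab entry looks correct"
#   done
```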
    3. Restore the geo-replication session:
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
      For more information on geo-replication, see Preparing to Deploy Geo-replication.