Chapter 8. Upgrading to Red Hat Gluster Storage 3.3
This chapter describes the procedure to upgrade to Red Hat Gluster Storage 3.3 from Red Hat Gluster Storage 3.1 or 3.2.
Upgrade support limitations
- Upgrading from Red Hat Enterprise Linux 6 based Red Hat Gluster Storage to Red Hat Enterprise Linux 7 based Red Hat Gluster Storage is not supported.
- Servers must be upgraded prior to upgrading clients.
- If you are upgrading from Red Hat Gluster Storage 3.1 Update 2 or earlier, you must upgrade servers and clients simultaneously.
- If you use NFS-Ganesha, your supported upgrade path to Red Hat Gluster Storage 3.3 depends on the version from which you are upgrading. If you are upgrading from version 3.1.x to 3.3, use Section 8.1, “Offline Upgrade to Red Hat Gluster Storage 3.3”. If you are upgrading from version 3.2 to 3.3, use Section 8.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.2 to Red Hat Gluster Storage 3.3”.
8.1. Offline Upgrade to Red Hat Gluster Storage 3.3
Warning
Before you upgrade, be aware of changed requirements that exist after Red Hat Gluster Storage 3.1.3. If you want to access a volume being provided by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing volumes from other client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 contained a number of changes that affect how the Distributed Hash Table works in order to improve directory consistency and remove the effects seen in BZ#1115367 and BZ#1118762.
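To check whether a given client is recent enough before mounting, a simple version comparison can be scripted. This is an illustrative sketch, not part of the guide's procedure, and the version strings shown are placeholders; on a real client the installed version can be read from `glusterfs --version`.

```shell
# Minimal sketch (assumption, not from the guide): compare a client's
# glusterfs version string against a required minimum using sort -V.
# version_ge A B succeeds when A >= B in version order.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Placeholder values; on a client, derive this from: glusterfs --version
client_version="3.7.9"
minimum="3.7.1"

if version_ge "$client_version" "$minimum"; then
    echo "client version $client_version meets the minimum $minimum"
else
    echo "client version $client_version is older than $minimum" >&2
fi
```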
Important
In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage 3.1 and higher, updating reloads firewall rules. All runtime-only changes made before the reload are lost, so ensure that any changes you want to keep are made persistently.
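One way to keep firewall changes across a reload is to pair every runtime rule with a matching `--permanent` rule. The sketch below is illustrative only: the port list is a placeholder (24007/tcp and 24008/tcp are shown as example glusterd ports, not the guide's authoritative list), and the commands are printed for review rather than executed.

```shell
# Hedged sketch: build runtime + persistent firewall-cmd pairs for a set of
# ports. Runtime-only rules are lost on reload; the --permanent copy is what
# survives. Commands are printed so they can be reviewed before running.
ports="24007/tcp 24008/tcp"   # placeholder list; substitute your own ports

cmds=""
for port in $ports; do
    cmds="${cmds}firewall-cmd --zone=public --add-port=${port}
firewall-cmd --zone=public --add-port=${port} --permanent
"
done
printf '%s' "$cmds"
```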
Procedure 8.1. Before you upgrade
- Back up the following configuration directory and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/swift
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
If you use NFS-Ganesha, back up the following files from all nodes:
/etc/ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Unmount the data partition(s) from the servers using the following command.
# umount mount-point
- Stop the glusterd service on all servers using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
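The backup step above can be sketched as a small script that copies each listed path, if it exists on the node, into a timestamped directory. `BACKUP_ROOT` is a placeholder: in practice, point it at a partition other than the operating system one.

```shell
# Hedged sketch of the pre-upgrade backup: copy each configuration path that
# exists on this node into a timestamped backup directory, preserving the
# original directory layout. BACKUP_ROOT is a placeholder location.
BACKUP_ROOT="/tmp/rhgs-backup-$(date +%Y%m%d%H%M%S)"
mkdir -p "$BACKUP_ROOT"

for path in /var/lib/glusterd /etc/swift /etc/samba /etc/ctdb /etc/glusterfs \
            /var/lib/samba /var/lib/ctdb \
            /var/run/gluster/shared_storage/nfs-ganesha; do
    if [ -e "$path" ]; then
        # --parents recreates the full source path under BACKUP_ROOT
        cp -a --parents "$path" "$BACKUP_ROOT"
    else
        echo "skipping $path (not present on this node)"
    fi
done
echo "backup written to $BACKUP_ROOT"
```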
Procedure 8.2. Upgrade using yum
- Verify that your system is not on the legacy Red Hat Network Classic update system.
# migrate-rhs-classic-to-rhsm --status
If you are still on Red Hat Network Classic, run the following command to migrate to Red Hat Subscription Manager.
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
Then verify that your status has changed.
# migrate-rhs-classic-to-rhsm --status
- If you use Samba:
- For Red Hat Enterprise Linux 6.7 or higher, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
For Red Hat Enterprise Linux 7, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster leads to data corruption. Stop the CTDB and SMB services and verify that they are stopped.
# service ctdb stop
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you want to migrate from Gluster NFS to NFS-Ganesha as part of this upgrade, perform the following additional steps.
- Stop and disable CTDB. This ensures that multiple versions of Samba do not run in the cluster during the update process, and avoids data corruption.
# systemctl stop ctdb
# systemctl disable ctdb
- Verify that the CTDB and NFS services are stopped:
# ps axf | grep -E '(ctdb|nfs)[d]'
- Delete the CTDB volume by executing the following command:
# gluster vol delete <ctdb_vol_name>
- Upgrade the server to Red Hat Gluster Storage 3.3.
# yum update
Wait for the update to complete.
- Reboot the server to ensure that kernel updates are applied.
- Ensure that the glusterd and pcsd services are started.
# systemctl start glusterd
# systemctl start pcsd
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31102
Note
31102 is the cluster.op-version value for the latest Red Hat Gluster Storage 3.3.1 glusterfs async release. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you want to migrate from Gluster NFS to NFS-Ganesha as part of this upgrade, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and use the information in the NFS-Ganesha section of the Red Hat Gluster Storage 3.3 Administration Guide to configure the NFS-Ganesha cluster.
- Start all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; done
- If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of ganesha.conf to the new /etc/ganesha/ganesha.conf file. The export information in the backed up file is similar to the following:
%include "/etc/ganesha/exports/export.v1.conf"
%include "/etc/ganesha/exports/export.v2.conf"
%include "/etc/ganesha/exports/export.v3.conf"
- Copy the backed up volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
# cp export.* /etc/ganesha/exports/
- Enable firewall settings for new services and ports. See the Red Hat Gluster Storage 3.3 Administration Guide for details: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/chap-getting_started.
- Enable the shared volume.
# gluster volume set all cluster.enable-shared-storage enable
- Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
- Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
# cd /var/run/gluster/shared_storage/
# mkdir nfs-ganesha
- If you use NFS-Ganesha:
- Copy the ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory, to the /var/run/gluster/shared_storage/nfs-ganesha directory.
# cd /etc/ganesha/
# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- Update the path of any export entries in the ganesha.conf file.
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
- Run the following command to clean up any existing cluster configuration:
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Booleans:
# setsebool -P ganesha_use_fusefs on
# setsebool -P gluster_use_execmem on
- Start the ctdb service (and the nfs-ganesha service, if used) and verify that all nodes are functional.
# systemctl start ctdb
# gluster nfs-ganesha enable
- If this deployment uses NFS-Ganesha, enable NFS-Ganesha on all volumes.
# gluster volume set volname ganesha.enable on
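The sed substitution used to re-point ganesha.conf export entries can be tried safely on a scratch copy before touching the shared-storage file. In this sketch the include lines are hypothetical examples; only the substitution itself matches the upgrade step.

```shell
# Dry run of the ganesha.conf path rewrite on a scratch copy. The export
# file names below are hypothetical examples.
tmpdir=$(mktemp -d)
cat > "$tmpdir/ganesha.conf" <<'EOF'
%include "/etc/ganesha/exports/export.v1.conf"
%include "/etc/ganesha/exports/export.v2.conf"
EOF

# Same substitution as the upgrade step: re-point /etc/ganesha at the
# shared-storage copy of the configuration.
sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' \
    "$tmpdir/ganesha.conf"

cat "$tmpdir/ganesha.conf"
```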
Procedure 8.3. Before you upgrade
- Back up the following configuration directory and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/swift
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
If you use NFS-Ganesha, back up the following files from all nodes:
/etc/ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Unmount the data partition(s) from the servers using the following command.
# umount mount-point
- Stop the glusterd service on all servers using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
Procedure 8.4. Upgrade using Satellite
- Create an Activation Key at the Red Hat Network Satellite Server, and associate it with the following channels. For more information, see Section 2.5, “Installing from Red Hat Satellite Server”.
- For Red Hat Enterprise Linux 6.7 or higher:
Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
Child channels:
RHEL Server Scalable File System (v. 6 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
If you use Samba, add the following channel:
Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
- For Red Hat Enterprise Linux 7:
Base Channel: Red Hat Enterprise Linux Server (v.7 for 64-bit x86_64)
Child channels:
RHEL Server Scalable File System (v. 7 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 7 for x86_64)
If you use Samba, add the following channel:
Red Hat Gluster 3 Samba (RHEL 7 for x86_64)
- Unregister your system from Red Hat Network Satellite by following these steps:
- Log in to the Red Hat Network Satellite server.
- Click the Systems tab in the top navigation bar, and then click the name of the old or duplicated system in the System List.
- Click the delete system link in the top-right corner of the page.
- Confirm the system profile deletion by clicking the Delete System button.
- Run the following command on your Red Hat Gluster Storage server, using your credentials and the Activation Key you prepared earlier. This re-registers the system to the Red Hat Gluster Storage 3.3 channels on the Red Hat Network Satellite Server.
# rhnreg_ks --username username --password password --force --activationkey Activation Key ID
- Verify that the channel subscriptions have been updated. On Red Hat Enterprise Linux 6.7 and higher, look for the following channels, as well as the rhel-x86_64-server-6-rh-gluster-3-samba channel if you use Samba.
# rhn-channel --list
rhel-x86_64-server-6
rhel-x86_64-server-6-rhs-3
rhel-x86_64-server-sfs-6
On Red Hat Enterprise Linux 7, look for the following channels, as well as the rhel-x86_64-server-7-rh-gluster-3-samba channel if you use Samba.
# rhn-channel --list
rhel-x86_64-server-7
rhel-x86_64-server-7-rhs-3
rhel-x86_64-server-sfs-7
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Upgrade to Red Hat Gluster Storage 3.3.
yum update
# yum update
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Reboot the server and run volume and data integrity checks.
- When all nodes have been upgraded, run the following command to update the
op-version
of the cluster. This helps to prevent any compatibility issues within the cluster.gluster volume set all cluster.op-version 31102
# gluster volume set all cluster.op-version 31102
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note
31102
is thecluster.op-version
value for the latest Red Hat Gluster Storage 3.3.1 glusterfs Async. See Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correctcluster.op-version
value for other versions. - Start all volumes.
for vol in `gluster volume list`; do gluster --mode=script volume start $vol; done
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; done
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of
ganesha.conf
to the new/etc/ganesha/ganesha.conf
file.The export information in the backed up file is similar to the following:%include "/etc/ganesha/exports/export.v1.conf" %include "/etc/ganesha/exports/export.v2.conf" %include "/etc/ganesha/exports/export.v3.conf"
%include "/etc/ganesha/exports/export.v1.conf" %include "/etc/ganesha/exports/export.v2.conf" %include "/etc/ganesha/exports/export.v3.conf"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Copy the backup volume export files from the backup directory to
/etc/ganesha/exports
by running the following command from the backup directory:cp export.* /etc/ganesha/exports/
# cp export.* /etc/ganesha/exports/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Enable firewall settings for new services and ports. See the Red Hat Gluster Storage 3.3 Administration Guide for details: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/chap-getting_started.
- Enable the shared volume.
gluster volume set all cluster.enable-shared-storage enable
# gluster volume set all cluster.enable-shared-storage enable
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Ensure that the
/var/run/gluster/shared_storage/nfs-ganesha
directory is created.cd /var/run/gluster/shared_storage/ mkdir nfs-ganesha
# cd /var/run/gluster/shared_storage/ # mkdir nfs-ganesha
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If you use NFS-Ganesha:
- Copy the
ganesha.conf
andganesha-ha.conf
files, and the/etc/ganesha/exports
directory to the/var/run/gluster/shared_storage/nfs-ganesha
directory.cd /etc/ganesha/ cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/ cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
# cd /etc/ganesha/ # cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/ # cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Update the path of any export entries in the
ganesha.conf
file.sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Run the following to clean up any existing cluster configuration:
/usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
/usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Booleans:
setsebool -P ganesha_use_fusefs on setsebool -P gluster_use_execmem on
# setsebool -P ganesha_use_fusefs on # setsebool -P gluster_use_execmem on
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Start the ctdb service (and nfs-ganesha service, if used) and verify that all nodes are functional.
systemctl start ctdb gluster nfs-ganesha enable
# systemctl start ctdb # gluster nfs-ganesha enable
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If this deployment uses NFS-Ganesha, enable NFS-Ganesha on all volumes.
gluster volume set volname ganesha.enable on
# gluster volume set volname ganesha.enable on
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
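After restarting the volumes, it is worth confirming that every volume reports a Started status. The sketch below shows one way to parse `gluster volume info` output; since it may be read where gluster is not installed, it runs against captured sample text, and on a real node `$info` would instead be set to `"$(gluster volume info)"`. The volume names are hypothetical.

```shell
# Hedged sketch: count volumes whose Status line is not "Started".
# Replace the sample text with: info="$(gluster volume info)" on a real node.
info='Volume Name: vol1
Status: Started
Volume Name: vol2
Status: Started'

# Select the Status lines, then count those that do not say Started.
not_started=$(printf '%s\n' "$info" | grep '^Status:' | grep -vc 'Started')

if [ "$not_started" -eq 0 ]; then
    echo "all volumes are started"
else
    echo "$not_started volume(s) are not started" >&2
fi
```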