Chapter 5. Upgrading to Red Hat Gluster Storage 3.5
Important
Upgrade support limitations
- When upgrading Red Hat Gluster Storage from a version lower than 3.5.4 to version 3.5.4 or higher, both servers and clients must be upgraded to 3.5.4 or higher before the operating version (op-version) of the cluster is bumped up.
- Virtual Data Optimizer (VDO) volumes, which are supported in Red Hat Enterprise Linux 7.5, are not currently supported in Red Hat Gluster Storage. VDO is supported only when used as part of Red Hat Hyperconverged Infrastructure for Virtualization 2.0. See Understanding VDO for more information.
- Servers must be upgraded prior to upgrading clients.
- If you are upgrading from Red Hat Gluster Storage 3.1 Update 2 or earlier, you must upgrade servers and clients simultaneously.
- If you use NFS-Ganesha, your supported upgrade path to Red Hat Gluster Storage 3.5 depends on the version from which you are upgrading. If you are upgrading from version 3.3 or earlier, first perform an offline upgrade to Red Hat Gluster Storage 3.3, then an in-service upgrade from version 3.3 to 3.4, and then upgrade from version 3.4 to 3.5 using Section 5.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5”. If you are upgrading from version 3.4 to 3.5, use Section 5.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5” directly.
5.1. Offline Upgrade to Red Hat Gluster Storage 3.5
Warning
Important
5.1.1. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Subscription Manager
Procedure 5.1. Before you upgrade
- Back up the following configuration directories and files in a location that is not on the operating system partition (an example backup command follows the note below).
/var/lib/glusterd
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
If you use NFS-Ganesha, back up the following files from all nodes:
/run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
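The following is a minimal backup sketch, not part of the original procedure. It assumes a destination such as /mnt/backup that exists outside the operating system partition; omit any listed paths that do not exist on your deployment.
# tar -czf /mnt/backup/rhgs-config-backup-$(hostname).tar.gz /var/lib/glusterd /etc/samba /etc/ctdb /etc/glusterfs /var/lib/samba /var/lib/ctdb /var/run/gluster/shared_storage/nfs-ganesha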
If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or subsequent releases, back up all xattr by executing the following command individually on the brick root(s) for all nodes:
# find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Stop the glusterd services on all servers using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
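As an optional sanity check that is not part of the original procedure, you can confirm that no gluster processes remain running on each server before upgrading; this sketch should return no output:
# pgrep -fl gluster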
Procedure 5.2. Upgrade using yum
Note
Check whether the system is registered with RHN Classic and needs to be migrated to Red Hat Subscription Manager:
# migrate-rhs-classic-to-rhsm --status
If required, migrate the system from RHN Classic to Red Hat Subscription Manager:
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
Verify that the migration completed:
# migrate-rhs-classic-to-rhsm --status
- If you use Samba:
- For Red Hat Enterprise Linux 6.7 or higher, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
For Red Hat Enterprise Linux 7, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
For Red Hat Enterprise Linux 8, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
- Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster will lead to data corruption.
Stop the CTDB and SMB services.
On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl stop ctdb
On Red Hat Enterprise Linux 6:
# service ctdb stop
To verify that services are stopped, run:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Upgrade the server to Red Hat Gluster Storage 3.5.
# yum update
Wait for the update to complete.
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
# dnf install glusterfs-ganesha
- If you use Samba/CTDB, update the following files to replace META="all" with META="<ctdb_volume_name>", for example, META="ctdb" (a sed sketch follows the note below):
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh - This script ensures the file system and its lock volume are mounted on all Red Hat Gluster Storage servers that use Samba, and ensures that CTDB starts at system boot.
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh - This script ensures that the file system and its lock volume are unmounted when the CTDB volume is stopped.
Note
For RHEL-based Red Hat Gluster Storage upgrading to 3.5 Batch Update 4 with Samba, the write-behind translator has to be manually disabled for all existing Samba volumes:
# gluster volume set <volname> performance.write-behind off
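The following sed command is a minimal sketch of that META replacement, assuming your CTDB volume is named ctdb; adjust the volume name for your deployment and consider backing up both hook scripts before editing them.
# sed -i 's/META="all"/META="ctdb"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh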
- Reboot the server to ensure that kernel updates are applied.
- Ensure that glusterd and pcsd services are started.
# systemctl start glusterd
# systemctl start pcsd
Note
During the upgrade of servers, the glustershd.log file reports "Invalid argument" errors during every index crawl (every 10 minutes by default) on the upgraded nodes. These errors are expected and can be ignored until the op-version is bumped up, after which they are no longer triggered. If you are on op-version '70000' or lower, do not bump up the op-version to '70100' or higher until all the servers and clients are upgraded to the newer version. Sample error message:
[2021-05-25 17:58:38.007134] E [MSGID: 114031] [client-rpc-fops_v2.c:216:client4_0_mkdir_cbk] 0-spvol-client-40: remote operation failed. Path: (null) [Invalid argument]
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 70200
Note
70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions. After upgrading the cluster op-version, enable granular-entry-heal for each volume with the following command. The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it only takes effect after the op-version is bumped up.
# gluster volume heal $VOLNAME granular-entry-heal enable
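As an optional check that is not part of the original procedure, you can confirm the cluster's current operating version before and after the bump; a sketch:
# gluster volume get all cluster.op-version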
Important
If the op-version is bumped up to '70100' after upgrading the servers and before upgrading the clients, some internal metadata files named '.glusterfs-anonymous-inode-(gfid)' under the root of the mount point are exposed to the older clients. The clients must not perform any I/O on, remove, or touch the contents of this directory. Once the clients are upgraded to version 3.5.4 or higher, this directory becomes invisible to them.
- If you want to migrate from Gluster NFS to NFS-Ganesha as part of this upgrade, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and configure the NFS-Ganesha cluster using the information in the NFS-Ganesha section of the Red Hat Gluster Storage 3.5 Administration Guide.
- Start all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
- If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of
ganesha.conf to the new /etc/ganesha/ganesha.conf file. The export information in the backed up file is similar to the following:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Copy the backup volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
# cp export.* /etc/ganesha/exports/
- Enable the shared volume.
# gluster volume set all cluster.enable-shared-storage enable
- Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
- Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
# cd /var/run/gluster/shared_storage/
# mkdir nfs-ganesha
- Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide. A firewall-cmd sketch follows this step.
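The exact services and ports depend on your deployment (for example, the samba service for SMB access). The following firewall-cmd sketch assumes firewalld with the default public zone and an NFS-Ganesha deployment, and is illustrative rather than complete:
# firewall-cmd --zone=public --add-service=glusterfs --permanent
# firewall-cmd --zone=public --add-service=nfs --add-service=rpc-bind --permanent
# firewall-cmd --reload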
- If you use Samba/CTDB:
- Mount
/gluster/lock before starting CTDB by executing the following commands:
# mount <ctdb_volume_name>
# mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
- Verify that the lock volume mounted correctly by checking for lock in the output of the mount command on any Samba server.
# mount | grep 'lock'
... <hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
- If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl start ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
- To verify that the CTDB and SMB services have started, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you use NFS-Ganesha:
- Copy the
ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory to the /var/run/gluster/shared_storage/nfs-ganesha directory.
# cd /etc/ganesha/
# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- Update the path of any export entries in the ganesha.conf file.
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
- Run the following to clean up any existing cluster configuration:
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
# setsebool -P ganesha_use_fusefs on
- Start the nfs-ganesha service and verify that all nodes are functional.
# gluster nfs-ganesha enable
- Enable NFS-Ganesha on all volumes.
# gluster volume set volname ganesha.enable on
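As an optional check (a sketch, assuming the nfs-utils package is installed on the node), you can verify that the volumes are exported by NFS-Ganesha:
# showmount -e localhost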
5.1.2. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Network Satellite Server
Procedure 5.3. Before you upgrade
- Back up the following configuration directories and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
If you use NFS-Ganesha, back up the following files from all nodes:
/run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or subsequent releases, back up all xattr by executing the following command individually on the brick root(s) for all nodes:
# find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Stop the glusterd services on all servers using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
Procedure 5.4. Upgrade using Satellite
- Create an Activation Key at the Red Hat Network Satellite Server, and associate it with the following channels. For more information, see Section 2.6, “Installing from Red Hat Satellite Server”
- For Red Hat Enterprise Linux 6.7 or higher:
Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
Child channels:
RHEL Server Scalable File System (v. 6 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
If you use Samba, add the following channel:
Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
- For Red Hat Enterprise Linux 7:
Base Channel: Red Hat Enterprise Linux Server (v.7 for 64-bit x86_64)
Child channels:
RHEL Server Scalable File System (v. 7 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 7 for x86_64)
If you use Samba, add the following channel:
Red Hat Gluster 3 Samba (RHEL 7 for x86_64)
- Unregister your system from Red Hat Network Satellite by following these steps:
- Log in to the Red Hat Network Satellite server.
- Click on the Systems tab in the top navigation bar and then the name of the old or duplicated system in the System List.
- Click the delete system link in the top-right corner of the page.
- Confirm the system profile deletion by clicking the Delete System button.
- Run the following command on your Red Hat Gluster Storage server, using your credentials and the Activation Key you prepared earlier. This re-registers the system to the Red Hat Gluster Storage 3.5 channels on the Red Hat Network Satellite Server.
# rhnreg_ks --username username --password password --force --activationkey Activation Key ID
- Verify that the channel subscriptions have been updated.
On Red Hat Enterprise Linux 6.7 and higher, look for the following channels, as well as the rh-gluster-3-samba-for-rhel-6-server-rpms channel if you use Samba.
# rhn-channel --list
rhel-6-server-rpms
rhel-scalefs-for-rhel-6-server-rpms
rhs-3-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7, look for the following channels, as well as the rh-gluster-3-samba-for-rhel-7-server-rpms channel if you use Samba.
# rhn-channel --list
rhel-7-server-rpms
rh-gluster-3-for-rhel-7-server-rpms
- Upgrade to Red Hat Gluster Storage 3.5.
# yum update
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
# dnf install glusterfs-ganesha
- Reboot the server and run volume and data integrity checks.
- When all nodes have been upgraded, run the following command to update the
op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 70200
Note
70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions. After upgrading the cluster op-version, enable granular-entry-heal for each volume with the following command. The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it only takes effect after the op-version is bumped up.
# gluster volume heal $VOLNAME granular-entry-heal enable
- Start all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
- If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of
ganesha.conf to the new /etc/ganesha/ganesha.conf file. The export information in the backed up file is similar to the following:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Copy the backup volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
# cp export.* /etc/ganesha/exports/
- Enable the shared volume.
# gluster volume set all cluster.enable-shared-storage enable
- Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
- Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
# cd /var/run/gluster/shared_storage/
# mkdir nfs-ganesha
- Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide.
- If you use Samba/CTDB:
- Mount
/gluster/lock before starting CTDB by executing the following commands:
# mount <ctdb_volume_name>
# mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
- Verify that the lock volume mounted correctly by checking for lock in the output of the mount command on any Samba server.
# mount | grep 'lock'
... <hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
- If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl start ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
- To verify that the CTDB and SMB services have started, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you use NFS-Ganesha:
- Copy the
ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory to the /var/run/gluster/shared_storage/nfs-ganesha directory.
# cd /etc/ganesha/
# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- Update the path of any export entries in the ganesha.conf file.
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
- Run the following to clean up any existing cluster configuration:
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
# setsebool -P ganesha_use_fusefs on
- Start the ctdb service (and nfs-ganesha service, if used) and verify that all nodes are functional.
# systemctl start ctdb
# gluster nfs-ganesha enable
- If this deployment uses NFS-Ganesha, enable NFS-Ganesha on all volumes.
# gluster volume set volname ganesha.enable on
5.1.3. Special consideration for Offline Software Upgrade
5.1.3.1. Migrate CTDB configuration files
- Make a temporary directory to migrate configuration files.
# mkdir /tmp/ctdb-migration
- Run the CTDB configuration migration script.
# /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb
The script assumes that the CTDB configuration directory is /etc/ctdb. If this is not correct for your setup, specify an alternative configuration directory with the -d option, for example:
# /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb -d ctdb-config-dir
- Verify that the /tmp/ctdb-migration directory now contains the following files:
commands.sh
ctdb.conf
script.options
ctdb.tunables (if additional changes are required)
ctdb.sysconfig (if additional changes are required)
README.warn (if additional changes are required)
- Back up the current configuration files.
# mv /etc/ctdb/ctdb.conf /etc/ctdb/ctdb.conf.default
- Install the new configuration files.
# mv /tmp/ctdb-migration/ctdb.conf /etc/ctdb/ctdb.conf
# mv /tmp/ctdb-migration/script.options /etc/ctdb/
- Make the commands.sh file executable, and run it.
# chmod +x /tmp/ctdb-migration/commands.sh
# /tmp/ctdb-migration/commands.sh
- If /tmp/ctdb-migration/ctdb.tunables exists, copy it to the /etc/ctdb directory.
# cp /tmp/ctdb-migration/ctdb.tunables /etc/ctdb
- If /tmp/ctdb-migration/ctdb.sysconfig exists, back up the old /etc/sysconfig/ctdb file and replace it with /tmp/ctdb-migration/ctdb.sysconfig.
# mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old
# mv /tmp/ctdb-migration/ctdb.sysconfig /etc/sysconfig/ctdb
Otherwise, back up the old /etc/sysconfig/ctdb file and replace it with /etc/sysconfig/ctdb.rpmnew.
# mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old
# mv /etc/sysconfig/ctdb.rpmnew /etc/sysconfig/ctdb