7.2. In-Service Software Upgrade from Red Hat Gluster Storage 3.3 to Red Hat Gluster Storage 3.4
Note
ganesha.conf.rpmnew" in /etc/ganesha folder. The old configuration file is not overwritten during the inservice upgrade process. However, post upgradation, you have to manually copy any new configuration changes from "ganesha.conf.rpmnew" to the existing ganesha.conf file in /etc/ganesha folder.
7.2.1. Pre-upgrade Tasks
- In-service software upgrade is supported only for nodes with replicate, distributed replicate, or erasure coded (dispersed) volumes.
- If you want to use snapshots for your existing environment, each brick must be an independent thin provisioned logical volume (LV). If you do not plan to use snapshots, thickly provisioned volumes remain supported.
- A Logical Volume that contains a brick must not be used for any other purpose.
- Only linear LVM is supported with Red Hat Gluster Storage 3.4. For more information, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/lv_overview.html#linear_volumes
- When server-side quorum is enabled, ensure that bringing one node down does not violate server-side quorum. Add dummy peers to ensure the server-side quorum is not violated until the completion of rolling upgrade using the following command:
# gluster peer probe dummynode
Note
If you have a geo-replication session, then to add a node follow the steps mentioned in the section Starting Geo-replication for a New Brick or New Node in the Red Hat Gluster Storage Administration Guide.
For example, when the server-side quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node that does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above.
In a three node cluster, if the server-side quorum percentage is set to 77%, bringing down one node would violate the server-side quorum. In this scenario, you have to add two dummy nodes to meet the server-side quorum.
- For replica 2 volumes, disable client-side quorum. Disabling client-side quorum is not recommended for replica 3 volumes, as it increases the risk of split-brain conditions developing.
# gluster volume reset <vol-name> cluster.quorum-type
- Stop any geo-replication sessions running between the master and slave.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
# gluster volume heal volname info
- Ensure the Red Hat Gluster Storage server is registered to the required channels.
On Red Hat Enterprise Linux 6:
rhel-x86_64-server-6
rhel-x86_64-server-6-rhs-3
rhel-x86_64-server-sfs-6
On Red Hat Enterprise Linux 7:
rhel-x86_64-server-7
rhel-x86_64-server-7-rhs-3
rhel-x86_64-server-sfs-7
To subscribe to the channels, run the following command:
# subscription-manager repos --enable=repo-name
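To confirm the result, you can list the repositories currently enabled on the system; this is a generic verification step rather than one specific to these channels:
# subscription-manager repos --list-enabled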
7.2.1.2. Restrictions for In-Service Software Upgrade
- In-service upgrade for NFS-Ganesha clusters is supported only from Red Hat Gluster Storage 3.3 to Red Hat Gluster Storage 3.4. If you are upgrading from Red Hat Gluster Storage 3.1 and you use NFS-Ganesha, use the offline upgrade method instead.
- Erasure coded (dispersed) volumes can be upgraded while in-service only if the disperse.optimistic-change-log and disperse.eager-lock options are set to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
- Ensure that the system workload is low before performing the in-service software upgrade, so that the self-heal process does not have to heal too many entries during the upgrade. With a high system workload, healing is time-consuming.
- Do not perform any volume operations on the Red Hat Gluster Storage server.
- Do not change hardware configurations.
- Do not run mixed versions of Red Hat Gluster Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Gluster Storage 3.3 and Red Hat Gluster Storage 3.4 for a prolonged time.
- Do not combine different upgrade methods.
- Migrating to thin provisioned volumes using in-service software upgrade is not recommended; use the offline upgrade method instead. For more information, see Section 7.1, “Offline Upgrade to Red Hat Gluster Storage 3.4”
7.2.1.3. Configuring repo for Upgrading using ISO
- Mount the ISO image file under any directory using the following command:
# mount -o loop <ISO image file> <mount-point>
For example:
# mount -o loop RHGS-3.4-RHEL-7-x86_64-dvd-1.iso /mnt
- Set the repo options in a file in the following location:
/etc/yum.repos.d/<file_name.repo>
- Add the following information to the repo file:
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
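To verify that the local ISO repository is usable before upgrading, a quick metadata refresh and listing can help; the repo id local matches the example file above:
# yum clean all
# yum repolist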
7.2.1.4. Preparing and Monitoring the Upgrade Activity
- Check the peer and volume status to ensure that all peers are connected and there are no active volume tasks.
# gluster peer status
# gluster volume status
- Check the rebalance status using the following command:
# gluster volume rebalance r2 status
        Node  Rebalanced-files  size      scanned  failures  skipped  status     run time in secs
   ---------  ----------------  --------  -------  --------  -------  ---------  ----------------
10.70.43.198  0                 0Bytes    99       0         0        completed  1.00
10.70.43.148  49                196Bytes  100      0         0        completed  3.00
- If you need to upgrade an erasure coded (dispersed) volume, set the disperse.optimistic-change-log and disperse.eager-lock options to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
# gluster volume set volname disperse.optimistic-change-log off
# gluster volume set volname disperse.eager-lock off
- Ensure that there are no pending self-heals by using the following command:
# gluster volume heal volname info
The following example shows no pending self-heals.
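An illustrative sketch follows; host names and brick paths vary by environment. Healthy output reports zero entries for each brick:
# gluster volume heal volname info
Brick server1:/rhgs/brick1
Status: Connected
Number of entries: 0

Brick server2:/rhgs/brick1
Status: Connected
Number of entries: 0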
7.2.2. Service Impact of In-Service Upgrade
ReST requests that are in transit will fail during the in-service software upgrade. It is therefore recommended to stop all Swift services before the in-service software upgrade, using the following commands:
# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
When you use Gluster NFS to mount a volume, any new or outstanding file operations on that file system hang during the in-service software upgrade until the server has been upgraded.
Ongoing I/O on Samba shares will fail because the shares are temporarily unavailable during the in-service software upgrade. It is therefore recommended to stop the Samba service using the following command:
# service ctdb stop
Stopping CTDB also stops the SMB service.
In-service software upgrade is not supported for distributed volumes. If you have a distributed volume in the cluster, stop that volume for the duration of the upgrade.
# gluster volume stop <VOLNAME>
Virtual machine images are likely to be modified constantly. A virtual machine image listed in the output of the volume heal command does not necessarily mean that self-heal of that image is incomplete; it may simply mean that the image is being modified continuously.
7.2.3. In-Service Software Upgrade
- Back up the following configuration directories and files to a location that is not on the operating system partition.
/var/lib/glusterd
/etc/swift
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
If you use NFS-Ganesha, back up the following files from all nodes:
/etc/ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
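A minimal backup sketch, assuming /backup is a mount point on a separate partition; the archive name and destination are illustrative:
# mkdir -p /backup/rhgs-upgrade
# tar czf /backup/rhgs-upgrade/$(hostname)-rhgs-config.tar.gz \
    /var/lib/glusterd /etc/swift /etc/samba /etc/ctdb /etc/glusterfs \
    /var/lib/samba /var/lib/ctdb /var/run/gluster/shared_storage/nfs-ganesha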
- If the node is part of an NFS-Ganesha cluster, place the node in standby mode.
# pcs cluster standby
- Ensure that there are no pending self-heal operations.
# gluster volume heal volname info
- If this node is part of an NFS-Ganesha cluster:
- Disable the PCS cluster and verify that it has stopped.
# pcs cluster disable
# pcs status
- Stop the nfs-ganesha service.
# systemctl stop nfs-ganesha
- Stop all gluster services on the node and verify that they have stopped.
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
# pgrep gluster
# migrate-rhs-classic-to-rhsm --status
If your system uses this legacy software, migrate to Red Hat Subscription Manager and verify that your status has changed when migration is complete.
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
# migrate-rhs-classic-to-rhsm --status
# yum update
Note
Migrating from a thick provisioned volume to a thin provisioned volume during in-service software upgrade takes a significant amount of time, depending on the amount of data in the bricks. If you do not plan to use snapshots, you can skip this step. However, if you plan to use snapshots on your existing environment, the offline upgrade method is recommended. For more information regarding offline upgrade, see Section 7.1, “Offline Upgrade to Red Hat Gluster Storage 3.4”.
Contact a Red Hat Support representative before migrating from thick provisioned volumes to thin provisioned volumes using in-service software upgrade.
- Unmount all the bricks associated with the volume by executing the following command:
# umount mount_point
- Remove the LVM associated with the brick by executing the following command:
# lvremove logical_volume_name
For example:
# lvremove /dev/RHS_vg/brick1
- Remove the volume group by executing the following command:
# vgremove -ff volume_group_name
For example:
# vgremove -ff RHS_vg
- Remove the physical volume by executing the following command:
# pvremove -ff physical_volume
- If the physical volume (PV) is not created, then create the PV for a RAID 6 volume by executing the following command; otherwise, proceed to the next step:
# pvcreate --dataalignment 2560K /dev/vdb
For more information, see the Red Hat Gluster Storage 3.4 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_Storage/3.4/html-single/administration_guide/#Formatting_and_Mounting_Bricks.
- Create a single volume group from the PV by executing the following command:
# vgcreate volume_group_name disk
For example:
# vgcreate RHS_vg /dev/vdb
- Create a thin pool using the following command:
# lvcreate -L size --poolmetadatasize md_size --chunksize chunk_size -T pool_device
For example:
# lvcreate -L 2T --poolmetadatasize 16G --chunksize 256 -T /dev/RHS_vg/thin_pool
- Create a thin volume from the pool by executing the following command:
# lvcreate -V size -T pool_device -n thinvol_name
For example:
# lvcreate -V 1.5T -T /dev/RHS_vg/thin_pool -n thin_vol
- Create a filesystem in the new volume by executing the following command:
# mkfs.xfs -i size=512 thin_vol
For example:
# mkfs.xfs -i size=512 /dev/RHS_vg/thin_vol
The back-end is now converted to a thin provisioned volume.
- Mount the thin provisioned volume to the brick directory and set up the extended attributes on the bricks. For example:
# setfattr -n trusted.glusterfs.volume-id \
  -v 0x$(grep volume-id /var/lib/glusterd/vols/volname/info \
  | cut -d= -f2 | sed 's/-//g') $brick
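For reference, the mount in this step might look like the following; the brick directory /rhgs/brick1 is illustrative and should match your original brick path:
# mount /dev/RHS_vg/thin_vol /rhgs/brick1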
- Disable glusterd.
# systemctl disable glusterd
This prevents glusterd from starting at boot time, so that you can ensure the node is healthy before it rejoins the cluster.
- Reboot the server.
# shutdown -r now "Shutting down for upgrade to Red Hat Gluster Storage 3.4"
Important
Perform this step only for each thick provisioned volume that was migrated to a thin provisioned volume in the previous step.
Change the Automatic File Replication extended attributes from another node, so that the heal process is executed from a brick in the replica subvolume to the thin provisioned brick.
- Create a FUSE mount point to edit the extended attributes.
# mount -t glusterfs HOSTNAME_or_IPADDRESS:/VOLNAME /MOUNTDIR
- Create a new directory on the mount point, and ensure that a directory with such a name is not already present.
# mkdir /MOUNTDIR/name-of-nonexistent-dir
- Delete the directory and set the extended attributes.
# rmdir /MOUNTDIR/name-of-nonexistent-dir
# setfattr -n trusted.non-existent-key -v abc /MOUNTDIR
# setfattr -x trusted.non-existent-key /MOUNTDIR
- Ensure that the extended attributes of the brick in the replica subvolume are not set to zero.
# getfattr -d -m. -e hex brick_path
In the following example, the extended attribute trusted.afr.repl3-client-1 for /dev/RHS_vg/brick2 is not set to zero:
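An illustrative reconstruction of such output (all attribute values here are made up); the nonzero trusted.afr.repl3-client-1 value indicates pending heals toward that brick:
# getfattr -d -m. -e hex /rhgs/brick2
getfattr: Removing leading '/' from absolute path names
# file: rhgs/brick2
trusted.afr.repl3-client-0=0x000000000000000000000000
trusted.afr.repl3-client-1=0x000000000000000300000002
trusted.gfid=0xaf1e1e0f0ce648dd8bcaa8ae4e0f1cde
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440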
- Start the glusterd service.
# systemctl start glusterd
- Verify that you have upgraded to the latest version of Red Hat Gluster Storage.
# gluster --version
- Ensure that all bricks are online.
# gluster volume status
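For example, output resembling the following indicates that all bricks are online; host names, ports, and PIDs are illustrative, and every brick should show Y in the Online column:
# gluster volume status
Status of volume: r2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.198:/rhgs/brick1/r2          49152     0          Y       17447
Brick 10.70.43.148:/rhgs/brick1/r2          49152     0          Y       17457
Self-heal Daemon on localhost               N/A       N/A        Y       17883

Task Status of Volume r2
------------------------------------------------------------------------------
There are no active volume tasks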
# gluster volume heal volname
- Ensure that self-heal on the volume is complete.
# gluster volume heal volname info
The following example shows a completed self-heal operation.
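As before, this sketch is illustrative; zero entries on every brick indicates the heal is complete:
# gluster volume heal volname info
Brick server1:/rhgs/brick1
Status: Connected
Number of entries: 0

Brick server2:/rhgs/brick1
Status: Connected
Number of entries: 0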
# mount | grep /run/gluster/shared_storage
- If this node is part of an NFS-Ganesha cluster:
- If the system is managed by SELinux, set the ganesha_use_fusefs Boolean to on.
# setsebool -P ganesha_use_fusefs on
- Start the NFS-Ganesha service.
# systemctl start nfs-ganesha
- Enable and start the cluster.
# pcs cluster enable
# pcs cluster start
- Release the node from standby mode.
# pcs cluster unstandby
- Verify that the pcs cluster is running, and that the volume is being exported correctly after upgrade.
# pcs status
# showmount -e
NFS-Ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you see NFS Server Now NOT IN GRACE in the ganesha.log file before continuing.
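One way to watch for the message, assuming the default log location; adjust the path if ganesha.log is stored elsewhere on your system:
# grep "NFS Server Now NOT IN GRACE" /var/log/ganesha.log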
- Optionally, enable the glusterd service to start at boot time.
# systemctl enable glusterd
- Repeat the above steps on the other node of the replica pair. In the case of a distributed-replicate setup, repeat the above steps on all the replica pairs.
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31306
Note
31306 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Async Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
Note
If you want to enable snapshots, see the Red Hat Gluster Storage 3.4 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_Storage/3.4/html-single/administration_guide/#Troubleshooting1.
- If the client-side quorum was disabled before upgrade, re-enable it by executing the following command:
# gluster volume set volname cluster.quorum-type auto
- If a dummy node was created earlier, detach it by executing the following command:
# gluster peer detach <dummy_node name>
- If the geo-replication session between master and slave was disabled before upgrade, configure the meta volume and restart the session:
# gluster volume set all cluster.enable-shared-storage enable
# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
disperse.optimistic-change-loganddisperse.eager-lockoptions in order to upgrade an erasure-coded (dispersed) volume, re-enable these settings.gluster volume set volname disperse.optimistic-change-log on gluster volume set volname disperse.eager-lock on
# gluster volume set volname disperse.optimistic-change-log on # gluster volume set volname disperse.eager-lock onCopy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.4.1. In-Service Software Upgrade for a CTDB Setup
- To ensure that CTDB does not start automatically after a reboot, run the following command on each node of the CTDB cluster:
# systemctl disable ctdb
- Stop the CTDB service on the Red Hat Gluster Storage node using the following command on each node of the CTDB cluster:
# systemctl stop ctdb
- To verify that the CTDB and SMB services are stopped, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Stop the gluster services on the storage server using the following commands:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
- In /etc/fstab, comment out the line containing the volume used for the CTDB service, as shown in the following example:
# HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- Update the server using the following command:
# yum update
- If SELinux support is required, enable SELinux by following the steps mentioned in Chapter 10, Enabling SELinux.
- After SELinux is enabled, set the following booleans:
For Samba:
# setsebool -P samba_load_libgfapi 1
For CTDB:
# setsebool -P use_fusefs_home_dirs 1
- To ensure the glusterd service does not start automatically after reboot, execute the following command:
# systemctl disable glusterd
- Reboot the server.
- Update the META=all with the gluster volume information in the following scripts:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
- In /etc/fstab, uncomment the line containing the volume used for the CTDB service, as shown in the following example:
HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- To automatically start the glusterd daemon every time the system boots, run the following command:
# systemctl enable glusterd
- To automatically start the ctdb daemon every time the system boots, run the following command:
# systemctl enable ctdb
- Start the glusterd service using the following command:
# systemctl start glusterd
- If you use NFS to access volumes, enable gluster-NFS using the following command:
# gluster volume set <volname> nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
- Mount the CTDB volume by running the following command:
# mount -a
- Start the CTDB service using the following command:
# systemctl start ctdb
- To verify that CTDB is running successfully, execute the following commands:
# ctdb status
# ctdb ip
# ctdb ping -n all
After upgrading the Red Hat Gluster Storage server, upgrade the CTDB package by executing the following steps:
Note
- Upgrading CTDB on all the nodes must be done simultaneously to avoid any data corruption.
- The following steps must be performed only when upgrading CTDB from CTDB 1.x to CTDB 4.x.
- Stop the CTDB service on all the nodes of the CTDB cluster by executing the following command. Ensure it is performed on all the nodes simultaneously as two different versions of CTDB cannot run at the same time in the CTDB cluster:
# systemctl stop ctdb
- Perform the following operations on all the nodes used as Samba servers:
- Remove the following soft links:
/etc/sysconfig/ctdb
/etc/ctdb/nodes
/etc/ctdb/public_addresses
- Copy the following files from the CTDB volume to the corresponding location by executing the following command on each node of the CTDB cluster:
# cp /gluster/lock/nodes /etc/ctdb/nodes
# cp /gluster/lock/public_addresses /etc/ctdb/public_addresses
- Stop and delete the CTDB volume by executing the following commands on one of the nodes of the CTDB cluster:
# gluster volume stop volname
# gluster volume delete volname
- To update CTDB, execute the following command:
# yum update
7.2.4.2. Verifying In-Service Software Upgrade
# gluster --version
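On an upgraded node the check resembles the following; the packaged glusterfs version shown here is illustrative and depends on the installed errata level:
# gluster --version
glusterfs 3.12.2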