8.2. In-Service Software Upgrade from Red Hat Gluster Storage 3.1 to Red Hat Gluster Storage 3.2
8.2.1. Pre-upgrade Tasks
8.2.1.1. Upgrade Requirements for Red Hat Gluster Storage 3.2
- In-service software upgrade is supported only for nodes with replicate and distributed replicate volumes.
- If you want to use snapshots for your existing environment, each brick must be an independent thinly provisioned logical volume (LV). If you do not plan to use snapshots, thickly provisioned volumes remain supported.
- A Logical Volume that contains a brick must not be used for any other purpose.
- Only linear LVM is supported with Red Hat Gluster Storage 3.2. For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/4/html-single/Cluster_Logical_Volume_Manager/#lv_overview
- When server-side quorum is enabled, ensure that bringing one node down does not violate server-side quorum. Add dummy peers to ensure that server-side quorum is not violated until the rolling upgrade is complete, using the following command:
# gluster peer probe dummynode
Note
If you have a geo-replication session, then to add a node follow the steps in the section Starting Geo-replication for a New Brick or New Node in the Red Hat Gluster Storage Administration Guide.
For example, when the server-side quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node that does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above.
In a three-node cluster, if the server-side quorum percentage is set to 77%, bringing down one node violates the server-side quorum. In this scenario, you have to add two dummy nodes to meet the server-side quorum.
- For replica 2 volumes, disable client-side quorum. Do not disable client-side quorum for replica 3 volumes, as doing so increases the risk of split-brain conditions developing.
# gluster volume reset <vol-name> cluster.quorum-type
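Optionally, verify that the reset took effect. This is a minimal check, assuming the gluster volume get command is available in your installed release (it was introduced in the glusterfs 3.7 code base on which Red Hat Gluster Storage 3.1 is built):
# gluster volume get <vol-name> cluster.quorum-type
After the reset, the option should report its default value (typically none).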
- Stop any geo-replication sessions running between the master and slave.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
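Before and after stopping, the session state can be checked with the status command; MASTER_VOL and SLAVE_HOST::SLAVE_VOL are the same placeholders used above:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
A stopped session is reported with the status Stopped.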
- Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
# gluster volume heal volname info
- Ensure the Red Hat Gluster Storage server is registered to the required channels.
On Red Hat Enterprise Linux 6:
rhel-x86_64-server-6 rhel-x86_64-server-6-rhs-3 rhel-x86_64-server-sfs-6
On Red Hat Enterprise Linux 7:
rhel-x86_64-server-7 rhel-x86_64-server-7-rhs-3 rhel-x86_64-server-sfs-7
To subscribe to the channels, run the following command:
# rhn-channel --add --channel=<channel>
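To confirm the registration, list the channels the system is subscribed to; this sketch assumes the system is registered through RHN Classic, as the rhn-channel utility implies:
# rhn-channel --list
The output should include the channels listed above for your Red Hat Enterprise Linux version.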
8.2.1.2. Restrictions for In-Service Software Upgrade
- Do not perform in-service software upgrade when the I/O or load is high on the Red Hat Gluster Storage server.
- Do not perform any volume operations on the Red Hat Gluster Storage server.
- Do not change hardware configurations.
- Do not run mixed versions of Red Hat Gluster Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Gluster Storage 3.1 and Red Hat Gluster Storage 3.2 for a prolonged time.
- Do not combine different upgrade methods.
- In-service software upgrade is not recommended for migrating to thinly provisioned volumes; use the offline upgrade method instead. For more information, see Section 8.1, “Offline Upgrade from Red Hat Gluster Storage 3.1 to 3.2”.
8.2.1.3. Configuring repo for Upgrading using ISO
- Mount the ISO image file under any directory using the following command:
# mount -o loop <ISO image file> <mount-point>
For example:
# mount -o loop RHGS-3.2-20170209.0-RHS-x86_64-dvd1.iso /mnt
- Set the repo options in a file in the following location:
/etc/yum.repos.d/<file_name.repo>
- Add the following information to the repo file:
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
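As a quick check that the repository is visible to yum before upgrading, you can list the enabled repositories; this assumes the file above was saved with the [local] section shown:
# yum repolist
The local repository should appear in the output with a non-zero package count.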
8.2.1.4. Preparing and Monitoring the Upgrade Activity
- Check the peer and volume status to ensure that all peers are connected and there are no active volume tasks.
# gluster peer status
# gluster volume status
- Check the rebalance status using the following command:
# gluster volume rebalance r2 status
    Node     Rebalanced-files    size        scanned    failures    skipped    status       run time in secs
---------    ----------------    --------    -------    --------    -------    ---------    ----------------
10.70.43.198            0        0Bytes           99          0          0     completed               1.00
10.70.43.148           49        196Bytes        100          0          0     completed               3.00
- Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
# gluster volume heal volname info
The following example shows no pending self-heals:
# gluster volume heal drvol info
Gathering list of entries to be healed on volume drvol has been successful

Brick 10.70.37.51:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.78:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.51:/rhs/brick2/dir2
Number of entries: 0

Brick 10.70.37.78:/rhs/brick2/dir2
Number of entries: 0
8.2.2. Service Impact of In-Service Upgrade
ReST requests that are in transit will fail during in-service software upgrade. Hence, it is recommended to stop all Swift services before the in-service software upgrade, using the following commands:
# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
When you NFS mount a volume, any new or outstanding file operations on that file system hang during the in-service software upgrade until the server upgrade is complete.
Ongoing I/O on Samba shares will fail because the shares are temporarily unavailable during the in-service software upgrade; it is therefore recommended to stop the Samba service using the following command:
# service ctdb stop
Stopping CTDB also stops the SMB service.
In-service software upgrade is not supported for distributed volumes. If you have a distributed volume in the cluster, stop that volume for the duration of the upgrade.
# gluster volume stop <VOLNAME>
Virtual machine images are likely to be modified constantly. A virtual machine listed in the output of the volume heal command does not necessarily mean that its self-heal is incomplete; it can simply mean that the image is being modified continuously.
8.2.3. In-Service Software Upgrade
- Back up the following configuration directory and files in a location that is not on the operating system partition.
/var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb, /etc/glusterfs, /var/lib/samba, /var/lib/ctdb
# cp -a /var/lib/glusterd /backup-disk/
# cp -a /etc/swift /backup-disk/
# cp -a /etc/samba /backup-disk/
# cp -a /etc/ctdb /backup-disk/
# cp -a /etc/glusterfs /backup-disk/
# cp -a /var/lib/samba /backup-disk/
# cp -a /var/lib/ctdb /backup-disk/
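Because the backup must not be on the operating system partition, it is worth confirming that the backup target resides on a separate device; /backup-disk is a placeholder for your actual backup mount point:
# df -h /backup-disk
The device shown in the Filesystem column should differ from the one hosting the root file system.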
Note
- If you have a CTDB environment, see Section 8.2.4.1, “In-Service Software Upgrade for a CTDB Setup”.
- Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
# gluster volume heal volname info
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
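To confirm that no gluster processes remain before updating the packages, a simple check such as the following can be used; it should print nothing once all processes have stopped:
# pgrep -l gluster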
- Verify that your system is not using the legacy Red Hat Classic update software.
# migrate-rhs-classic-to-rhsm --status
If your system uses this legacy software, migrate to Red Hat Subscription Manager and verify that your status has changed when migration is complete:
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
# migrate-rhs-classic-to-rhsm --status
- Update the server using the following command:
# yum update
- If the volumes are thickly provisioned, and you plan to use snapshots, perform the following steps to migrate to thinly provisioned volumes:
Note
Migrating from a thickly provisioned volume to a thinly provisioned volume during in-service software upgrade takes a significant amount of time, depending on the data in the bricks. If you do not plan to use snapshots, you can skip this step. However, if you plan to use snapshots on your existing environment, the offline upgrade method is recommended. For more information regarding offline upgrade, see Section 8.1, “Offline Upgrade from Red Hat Gluster Storage 3.1 to 3.2”.
Contact a Red Hat Support representative before migrating from thickly provisioned volumes to thinly provisioned volumes using in-service software upgrade.
- Unmount all the bricks associated with the volume by executing the following command:
# umount mount_point
- Remove the LVM associated with the brick by executing the following command:
# lvremove logical_volume_name
For example:
# lvremove /dev/RHS_vg/brick1
- Remove the volume group by executing the following command:
# vgremove -ff volume_group_name
For example:
# vgremove -ff RHS_vg
- Remove the physical volume by executing the following command:
# pvremove -ff physical_volume
- If the physical volume (PV) is not created, create the PV. For a RAID 6 volume, execute the following command; otherwise, proceed to the next step:
# pvcreate --dataalignment 2560K /dev/vdb
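The 2560K alignment value is derived from the RAID layout rather than chosen arbitrarily: for RAID 6, the dataalignment value should equal the stripe unit size multiplied by the number of data disks. The example assumes a 12-disk RAID 6 array (10 data disks) with a 256 KB stripe unit, giving 256 KB × 10 = 2560 KB; adjust the value to match your own array geometry.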
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Formatting_and_Mounting_Bricks.
- Create a single volume group from the PV by executing the following command:
# vgcreate volume_group_name disk
For example:
# vgcreate RHS_vg /dev/vdb
- Create a thin pool using the following command:
# lvcreate -L size --poolmetadatasize metadata_size --chunksize chunk_size -T pool_device
For example:
# lvcreate -L 2T --poolmetadatasize 16G --chunksize 256 -T /dev/RHS_vg/thin_pool
- Create a thin volume from the pool by executing the following command:
# lvcreate -V size -T pool_device -n thin_volume_name
For example:
# lvcreate -V 1.5T -T /dev/RHS_vg/thin_pool -n thin_vol
- Create a file system on the new volume by executing the following command:
# mkfs.xfs -i size=512 thin_volume_device
For example:
# mkfs.xfs -i size=512 /dev/RHS_vg/thin_vol
The back-end is now converted to a thinly provisioned volume.
- Mount the thinly provisioned volume to the brick directory and set up the extended attributes on the bricks. For example:
# setfattr -n trusted.glusterfs.volume-id \
  -v 0x$(grep volume-id /var/lib/glusterd/vols/volname/info \
  | cut -d= -f2 | sed 's/-//g') $brick
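The following is a concrete sketch of the mount and attribute setup, assuming a hypothetical volume named r2 with a brick directory /rhs/brick1 backed by the thin LV created above; substitute your own volume name and brick path:
# mount /dev/RHS_vg/thin_vol /rhs/brick1
# setfattr -n trusted.glusterfs.volume-id \
  -v 0x$(grep volume-id /var/lib/glusterd/vols/r2/info \
  | cut -d= -f2 | sed 's/-//g') /rhs/brick1
# getfattr -n trusted.glusterfs.volume-id -e hex /rhs/brick1
The getfattr call verifies that the volume ID stored in /var/lib/glusterd matches the attribute now set on the brick.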
- To ensure that the Red Hat Gluster Storage Server node is healthy after reboot and can then be joined back to the cluster, it is recommended that you disable glusterd from starting automatically during boot using the following command:
# chkconfig glusterd off
- Reboot the server.
- Perform the following operations to change the Automatic File Replication (AFR) extended attributes so that the heal process happens from a brick in the replica subvolume to the thinly provisioned brick.
- Create a FUSE mount point from any server to edit the extended attributes. The extended attributes cannot be edited using NFS or CIFS mount points. Note that /mnt/r2 is the FUSE mount path used in the following examples.
- Create a new directory on the mount point and ensure that a directory with such a name is not already present.
# mkdir /mnt/r2/name-of-nonexistent-dir
- Delete the directory and set the extended attributes.
# rmdir /mnt/r2/name-of-nonexistent-dir
# setfattr -n trusted.non-existent-key -v abc /mnt/r2
# setfattr -x trusted.non-existent-key /mnt/r2
- Ensure that the extended attributes of the brick in the replica subvolume (in this example, brick /dev/RHS_vg/brick2, extended attribute trusted.afr.r2-client-1) are not set to zero:
# getfattr -d -m. -e hex /dev/RHS_vg/brick2
# file: /dev/RHS_vg/brick2
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000300000002
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
- Start the glusterd service using the following command:
# service glusterd start
- To automatically start the glusterd daemon every time the system boots, run the following command:
# chkconfig glusterd on
- To verify that you have upgraded to the latest version of the Red Hat Gluster Storage server, execute the following command:
# gluster --version
- Ensure that all the bricks are online. To check the status, execute the following command:
# gluster volume status
For example:
# gluster volume status
Status of volume: r2
Gluster process                                  Port    Online    Pid
------------------------------------------------------------------------------
Brick 10.70.43.198:/brick/r2_0                   49152   Y         32259
Brick 10.70.42.237:/brick/r2_1                   49152   Y         25266
Brick 10.70.43.148:/brick/r2_2                   49154   Y         2857
Brick 10.70.43.198:/brick/r2_3                   49153   Y         32270
NFS Server on localhost                          2049    Y         25280
Self-heal Daemon on localhost                    N/A     Y         25284
NFS Server on 10.70.43.148                       2049    Y         2871
Self-heal Daemon on 10.70.43.148                 N/A     Y         2875
NFS Server on 10.70.43.198                       2049    Y         32284
Self-heal Daemon on 10.70.43.198                 N/A     Y         32288

Task Status of Volume r2
------------------------------------------------------------------------------
There are no active volume tasks
- Start self-heal on the volume.
# gluster volume heal vol-name
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
The following example shows self-heal completion:
# gluster volume heal drvol info
Gathering list of entries to be healed on volume drvol has been successful

Brick 10.70.37.51:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.78:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.51:/rhs/brick2/dir2
Number of entries: 0

Brick 10.70.37.78:/rhs/brick2/dir2
Number of entries: 0
- Repeat the above steps on the other node of the replica pair.
Note
In the case of a distributed-replicate setup, repeat the above steps on all the replica pairs.
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
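To confirm that the operating version was raised on a node, you can inspect the glusterd state file, which records it as operating-version; this check assumes the default state file location:
# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=31001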
Note
31001 is the cluster.op-version value for Red Hat Gluster Storage 3.2.0. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
Note
If you want to enable snapshots, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Troubleshooting1.
- If the client-side quorum was disabled before the upgrade, re-enable it by executing the following command:
# gluster volume set volname cluster.quorum-type auto
- If a dummy node was created earlier, detach it by executing the following command:
# gluster peer detach <dummy_node_name>
- If the geo-replication session between master and slave was disabled before upgrade, then configure the meta volume and restart the session:
# gluster volume set all cluster.enable-shared-storage enable
# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
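After restarting, verify that the session is running again using the status command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
The session should report an Active or Passive status for each brick.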
8.2.4. Special Consideration for In-Service Software Upgrade
8.2.4.1. In-Service Software Upgrade for a CTDB Setup
- To ensure that CTDB does not start automatically after a reboot, run the following command on each node of the CTDB cluster:
# chkconfig ctdb off
- Stop the CTDB service on the Red Hat Gluster Storage node using the following command on each node of the CTDB cluster:
# service ctdb stop
- To verify that the CTDB and SMB services are stopped, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- In /etc/fstab, comment out the line containing the volume used for the CTDB service, as shown in the following example:
# HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- Update the server using the following command:
# yum update
- If SELinux support is required, enable SELinux by following the steps mentioned in Chapter 10, Enabling SELinux.
- After SELinux is enabled, set the following booleans:
For Samba:
# setsebool -P samba_load_libgfapi 1
For CTDB:
# setsebool -P use_fusefs_home_dirs 1
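To confirm that both booleans are set, read them back with getsebool:
# getsebool samba_load_libgfapi
samba_load_libgfapi --> on
# getsebool use_fusefs_home_dirs
use_fusefs_home_dirs --> on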
- To ensure the glusterd service does not start automatically after reboot, execute the following command:
# chkconfig glusterd off
- Reboot the server.
- In the following scripts, update the META=all value with the name of the gluster volume used for CTDB:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
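For example, assuming the volume used for the CTDB lock is named ctdb_vol (a placeholder), change the following line in both scripts:
META="all"
to:
META="ctdb_vol"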
- In /etc/fstab, uncomment the line containing the volume used for the CTDB service, as shown in the following example:
HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- To automatically start the glusterd daemon every time the system boots, run the following command:
# chkconfig glusterd on
- To automatically start the ctdb daemon every time the system boots, run the following command:
# chkconfig ctdb on
- Start the glusterd service using the following command:
# service glusterd start
- If you use NFS to access volumes, enable gluster-NFS using the following command:
# gluster volume set <volname> nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
- Mount the CTDB volume by running the following command:
# mount -a
- Start the CTDB service using the following command:
# service ctdb start
- To verify that CTDB is running successfully, execute the following commands:
# ctdb status
# ctdb ip
# ctdb ping -n all
After upgrading the Red Hat Gluster Storage server, upgrade the CTDB package by executing the following steps:
Note
- Upgrading CTDB on all the nodes must be done simultaneously to avoid any data corruption.
- The following steps have to be performed only when upgrading from CTDB 1.x to CTDB 4.x.
- Stop the CTDB service on all the nodes of the CTDB cluster by executing the following command. Ensure it is performed on all the nodes simultaneously as two different versions of CTDB cannot run at the same time in the CTDB cluster:
# service ctdb stop
- Perform the following operations on all the nodes used as Samba servers:
- Remove the following soft links:
/etc/sysconfig/ctdb
/etc/ctdb/nodes
/etc/ctdb/public_addresses
- Copy the following files from the CTDB volume to the corresponding location by executing the following command on each node of the CTDB cluster:
# cp /gluster/lock/nodes /etc/ctdb/nodes
# cp /gluster/lock/public_addresses /etc/ctdb/public_addresses
- Stop and delete the CTDB volume by executing the following commands on one of the nodes of the CTDB cluster:
# gluster volume stop volname
# gluster volume delete volname
- To update CTDB, execute the following command:
# yum update
8.2.4.2. Verifying In-Service Software Upgrade
To verify that a node has been upgraded to the latest version of the Red Hat Gluster Storage server, execute the following command:
# gluster --version
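For reference, Red Hat Gluster Storage 3.2 is based on glusterfs 3.8.4, so the command should report a version string similar to the following; the exact build suffix varies by errata level:
# gluster --version
glusterfs 3.8.4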