Chapter 7. Updating Red Hat Gluster Storage from 3.2.x to 3.2.y
A software update is a minor release that includes bug fixes, feature enhancements, and security patches. Red Hat strongly recommends that you update your Red Hat Gluster Storage software regularly with the latest security patches and upgrades.
To keep your Red Hat Gluster Storage system up-to-date, associate the system with Red Hat Network (RHN) or your locally-managed content service. This ensures that your system automatically stays up-to-date with security patches and bug fixes.
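For example, on a system registered with Red Hat Subscription Management, the server repository can be enabled and pending updates listed as follows. This is a sketch: the repository name shown is the Red Hat Gluster Storage 3 server repository for Red Hat Enterprise Linux 7; confirm the correct repository name for your platform before enabling it.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum check-update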
Be aware of the following when updating your Red Hat Gluster Storage 3.2 installation:
- Updating from Red Hat Enterprise Linux 6 based Red Hat Gluster Storage to Red Hat Enterprise Linux 7 based Red Hat Gluster Storage is not supported.
- Asynchronous errata update releases of Red Hat Gluster Storage include all fixes that were released asynchronously since the last release as a cumulative update.
- When there are a large number of snapshots, deactivate the snapshots before performing an update, as shown in the example below. The snapshots can be activated after the update is complete. For more information, see Chapter 4.1 Starting and Stopping the glusterd service in the Red Hat Gluster Storage 3 Administration Guide.
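For example, snapshots can be listed and deactivated before the update, then reactivated once the update is complete. In this sketch, snap1 is a placeholder snapshot name:
# gluster snapshot list
# gluster snapshot deactivate snap1
After the update completes:
# gluster snapshot activate snap1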
7.1. Updating Red Hat Gluster Storage in the Offline Mode
Warning
Before you update, be aware of changed requirements that exist after Red Hat Gluster Storage 3.1.3. If you want to access a volume being provided by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing volumes from other client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 contained a number of changes that affect how the Distributed Hash Table works in order to improve directory consistency and remove the effects seen in BZ#1115367 and BZ#1118762.
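One way to check which client version is in use before accessing a volume on an updated server is to query the installed packages on the client. This is a sketch assuming the native FUSE client is installed:
# rpm -q glusterfs-fuse
# glusterfs --version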
Before you update, be aware:
- Offline updates result in server downtime, as volumes are offline during upgrade.
- You must update all Red Hat Gluster Storage servers before updating any clients.
- This process assumes that you are updating to a thinly provisioned volume.
Updating Red Hat Gluster Storage 3.2 in the offline mode
- Make a complete backup using a reliable backup solution. This Knowledge Base solution covers one possible approach: https://access.redhat.com/solutions/1484053. If you use an alternative backup solution:
- Ensure that you have sufficient space available for a complete backup.
- Copy the .glusterfs directory before copying any data files, as illustrated in the sketch after this list.
- Ensure that no new files are created on Red Hat Gluster Storage file systems during the backup.
- Ensure that all extended attributes, ACLs, owners, groups, and symbolic and hard links are backed up.
- Check that the backup restores correctly before you continue with the migration.
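As one illustration of these requirements, the following sketch copies a brick with rsync, preserving ACLs, extended attributes, owners, groups, and hard links, and copying the .glusterfs directory first. The brick and backup paths are placeholders, and this does not replace the complete approach described in the Knowledge Base solution above.
# rsync -aAXH /rhgs/brick1/.glusterfs /backup/brick1/
# rsync -aAXH /rhgs/brick1/ /backup/brick1/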
- Delete the existing Logical Volume (LV) and create a new thinly provisioned LV (see the sketch below). For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html
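A minimal sketch of creating a thin pool and a thinly provisioned LV, then formatting it with the XFS inode size recommended for bricks. The volume group, LV names, and sizes are placeholders; follow the sizing and chunk-size guidance in the LVM documentation linked above.
# lvcreate -L 1T -T rhgs_vg/rhgs_pool
# lvcreate -V 1T -T rhgs_vg/rhgs_pool -n rhgs_lv
# mkfs.xfs -i size=512 /dev/rhgs_vg/rhgs_lv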
- Restore backed up content to the newly created thinly provisioned LV.
- When you are certain that your backup works, stop all volumes.
# gluster volume stop volname
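If several volumes are configured, a loop such as the following stops each of them in turn. This is a sketch; gluster's --mode=script option suppresses the interactive confirmation prompt.
# for vol in $(gluster volume list); do gluster --mode=script volume stop $vol; done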
- Run the following commands to stop gluster services and update Red Hat Gluster Storage in the offline mode:
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
# yum update
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
# yum update
Wait for the update to complete.
- Start glusterd.
On Red Hat Enterprise Linux 6:
# service glusterd start
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
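To confirm that glusterd is running and that the node has rejoined its peers after the update, the following checks can be used (shown for Red Hat Enterprise Linux 7):
# systemctl status glusterd
# gluster peer status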
- When all nodes have been updated, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
Note
31001 is the cluster.op-version value for Red Hat Gluster Storage 3.2.0. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
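To verify the operating version on a node, you can inspect glusterd's state file. This is a quick check assuming the default glusterd working directory:
# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=31001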
- Start your volumes with the following command:
# gluster volume start volname
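For example, after starting a volume, its bricks and self-heal daemons can be confirmed as online with the status command. Here testvol is a placeholder volume name:
# gluster volume start testvol
volume start: testvol: success
# gluster volume status testvol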
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly, then re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
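To confirm that the meta volume is mounted, a quick check:
# grep shared_storage /proc/mounts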
- If using NFS to access volumes, enable gluster-NFS using the following command:
# gluster volume set volname nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
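To verify that the gluster-NFS server is running for the volume, check the volume's NFS status (testvol is the placeholder volume from the example above):
# gluster volume status testvol nfs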
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
Important
In Red Hat Gluster Storage 3.1 and higher, a meta volume is recommended when geo-replication is configured. However, when upgrading geo-replicated Red Hat Gluster Storage from version 3.0.x to 3.1.y, the older geo-replication configuration that did not use shared volumes is persisted to the upgraded installation. Red Hat recommends reconfiguring geo-replication after upgrading to Red Hat Gluster Storage 3.2 to ensure that shared volumes are used and a meta volume is configured.
To enable shared volumes, set the cluster.enable-shared-storage parameter to enable from the master node:
# gluster volume set all cluster.enable-shared-storage enable
Then configure geo-replication to use shared volumes as a meta volume by setting use_meta_volume to true:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For further information, see the Red Hat Gluster Storage 3.2 Administration Guide.
Restart the geo-replication session:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Note
As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
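After restarting, the session state can be verified with the status command; healthy sessions report an Active or Passive status for each brick:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status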