7.2. Updating Red Hat Gluster Storage in the Offline Mode
Important
- Offline updates result in server downtime, as volumes are offline during the update process.
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
Updating Red Hat Gluster Storage 3.3 in the offline mode
- Ensure that you have a working backup, as described in Section 7.1, “Before you update”.
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Run the following commands on one server at a time.
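Before updating each server, it can be useful to confirm that all volumes really did stop. A minimal sketch that scans `gluster volume info` output; the helper name `any_volume_started` is illustrative and not part of the gluster CLI:

```shell
# Illustrative helper (not part of gluster): succeeds (exit 0) if any
# "Status: Started" line appears in the supplied `gluster volume info` output.
any_volume_started() {
    printf '%s\n' "$1" | grep -q '^Status: Started'
}

# On a live server (assumes the gluster CLI is installed):
# any_volume_started "$(gluster volume info)" && echo "Some volumes are still running"
```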
- Stop all gluster services.
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, perform the following additional steps.
- Stop and disable CTDB. This ensures that multiple versions of Samba do not run in the cluster during the update process, and avoids data corruption.
# systemctl stop ctdb
# systemctl disable ctdb
- Verify that the CTDB and NFS services are stopped:
# ps axf | grep -E '(ctdb|nfs)[d]'
The bracketed [d] keeps the grep process itself out of the results, so the command prints nothing when both services are stopped.
- Delete the CTDB volume by executing the following command:
# gluster vol delete <ctdb_vol_name>
- Update the system.
# yum update
Review the packages to be updated, and enter y to proceed with the update when prompted. Wait for the update to complete.
- If updates to the kernel package occurred, or if you are migrating from Gluster NFS to NFS Ganesha as part of this update, reboot the system.
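Whether a kernel update occurred can be checked by comparing the running kernel with the newest installed one. A minimal sketch; the helper name `needs_reboot` is illustrative and not part of any Red Hat tooling:

```shell
# Illustrative helper: succeeds (exit 0) when the running kernel version
# differs from the newest installed one, i.e. a reboot is required.
needs_reboot() {
    running="$1"    # e.g. from: uname -r
    latest="$2"     # e.g. newest installed kernel, from: rpm -q kernel --last
    [ "$running" != "$latest" ]
}

# On a live server (assumes rpm is available):
# latest=$(rpm -q kernel --last | head -n1 | awk '{sub(/^kernel-/, "", $1); print $1}')
# needs_reboot "$(uname -r)" "$latest" && echo "Kernel updated: reboot before continuing"
```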
- Start glusterd.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
- When all servers have been updated, run the following command to update the cluster operating version. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31102
Note
31102 is the cluster.op-version value for the latest Red Hat Gluster Storage 3.3.1 glusterfs Async update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and use the information in the NFS Ganesha section of the Red Hat Gluster Storage 3.3 Administration Guide to configure the NFS Ganesha cluster.
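Separately, the operating version a server currently records can be read from glusterd's state file, which is useful for confirming the cluster.op-version change took effect. A minimal sketch, assuming the default state file location /var/lib/glusterd/glusterd.info; the helper name `current_op_version` is illustrative:

```shell
# Illustrative helper: prints the operating-version recorded in a
# glusterd.info-style "key=value" file.
current_op_version() {
    awk -F= '$1 == "operating-version" { print $2 }' "$1"
}

# On a live server:
# current_op_version /var/lib/glusterd/glusterd.info
```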
- Start all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; done
- If you did not reboot as part of the update process, run the following command to remount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
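The fstab entry for the shared storage can also be checked mechanically before re-running mount. A minimal sketch; the helper name `has_shared_storage_entry` is illustrative:

```shell
# Illustrative helper: succeeds when an uncommented glusterfs entry for the
# shared storage mount point exists in the given fstab-style file.
has_shared_storage_entry() {
    grep -Eq '^[^#].*[[:space:]]/var/run/gluster/shared_storage/?[[:space:]]+glusterfs[[:space:]]' "$1"
}

# On a live server:
# has_shared_storage_entry /etc/fstab || echo "shared storage entry missing from /etc/fstab"
```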
- If you use Gluster NFS to access volumes, enable Gluster NFS using the following command:
# gluster volume set volname nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
You may need to append the force parameter to successfully restart in some circumstances. See BZ#1347625 for details.