6.3. In-service Software Update from Red Hat Gluster Storage
Important
In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost.
Important
The SMB and CTDB services do not support in-service updates. The procedure outlined in this section involves service interruptions to the SMB and CTDB services.
Before you update, be aware:
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
- If geo-replication is in use, complete updates to all slave nodes before updating master nodes.
- Erasure coded (dispersed) volumes can be updated while in-service only if the disperse.optimistic-change-log and disperse.eager-lock options are set to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
- If updating Samba, ensure that Samba is upgraded on all nodes simultaneously, as running different versions of Samba in the same cluster results in data corruption.
- Your system must be registered to Red Hat Network in order to receive updates. For more information, see Section 2.6, “Subscribing to the Red Hat Gluster Storage Server Channels”.
- Do not perform any volume operations while the cluster is being updated.
Updating Red Hat Gluster Storage 3.4 in in-service mode
- Ensure that you have a working backup, as described in Section 6.1, “Before you update”.
- If you have a replicated configuration, perform these steps on all nodes of a replica set. If you have a distributed-replicated configuration, perform these steps on one replica set at a time, for all replica sets.
- Stop any geo-replication sessions.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- If this node is part of an NFS-Ganesha cluster, place the node in standby mode.
# pcs cluster standby
- Verify that there are no pending self-heals:
# gluster volume heal volname info
Wait for any self-heal operations to complete before continuing.
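If you want to wait unattended, a minimal shell loop can poll until the heals drain (a sketch, assuming the heal info output reports a "Number of entries:" count per brick, as in this release line; volname is a placeholder):
# while gluster volume heal volname info | grep -q 'Number of entries: [1-9]'; do sleep 10; done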
- If this node is part of an NFS-Ganesha cluster:
- Disable the PCS cluster and verify that it has stopped.
# pcs cluster disable
# pcs status
- Stop the nfs-ganesha service.
# systemctl stop nfs-ganesha
- If you need to update an erasure coded (dispersed) volume, set the disperse.optimistic-change-log and disperse.eager-lock options to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
# gluster volume set volname disperse.optimistic-change-log off
# gluster volume set volname disperse.eager-lock off
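To confirm the options took effect and enforce the two-minute wait, you can read them back before continuing (a sketch; gluster volume get is available in this release line, and volname is a placeholder):
# gluster volume get volname disperse.optimistic-change-log
# gluster volume get volname disperse.eager-lock
# sleep 120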
- Stop the gluster services on the storage server using the following commands:
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
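Before updating packages, it can help to confirm that no gluster processes survived (a sketch using standard pgrep; it prints nothing and returns nonzero when all processes are gone):
# pgrep -l 'gluster(d|fs|fsd)'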
- If you use Samba:
- Enable the required repository.
On Red Hat Enterprise Linux 6.7 or later:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Stop the CTDB and SMB services across all nodes in the Samba cluster using the following commands. Stopping the CTDB service also stops the SMB service.
On Red Hat Enterprise Linux 7:
# systemctl stop ctdb
# systemctl disable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb stop
# chkconfig ctdb off
This ensures different versions of Samba do not run in the same Samba cluster until all Samba nodes are updated.
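Because the stop must happen on every node, you can fan the commands out over SSH (a sketch, assuming passwordless root SSH and a hypothetical nodes.txt listing one Samba node per line):
# for node in $(cat nodes.txt); do ssh root@"$node" 'systemctl stop ctdb && systemctl disable ctdb'; done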
- Verify that the CTDB and SMB services are stopped by running the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Update the server using the following command:
# yum update
Take note of the packages being updated, and wait for the update to complete.
- If a kernel update was included as part of the update process in the previous step, reboot the server.
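One way to tell whether a newer kernel was installed than the one currently running (a sketch; compares the running kernel against the most recently installed kernel package):
# uname -r
# rpm -q --last kernel | head -1
If the versions differ, reboot before continuing.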
- If a reboot of the server was not required, then start the gluster services on the storage server using the following command.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
- Verify that you have updated to the latest version of the Red Hat Gluster Storage server.
# gluster --version
Compare output with the desired version in Section 1.5, “Supported Versions of Red Hat Gluster Storage”.
- Ensure that all bricks are online. To check the status, execute the following command:
# gluster volume status
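On large clusters, offline bricks can be easier to spot by filtering the status output (a sketch, assuming the tabular layout where the Online column is second to last and shows N for an offline brick):
# gluster volume status | awk '$(NF-1) == "N"'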
- Start self-heal on the volume.
# gluster volume heal volname
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
- Verify that shared storage is mounted.
# mount | grep /run/gluster/shared_storage
- When all nodes in the volume have been updated, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31306
Note
31306 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Async Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
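To confirm the value before and after the change, the current operating version can be read back (a sketch; gluster volume get is available in this release line):
# gluster volume get all cluster.op-version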
- If you use Samba:
- Mount /gluster/lock before starting CTDB by executing the following command:
# mount -a
- If all servers that host volumes accessed via SMB have been updated, then start and re-enable the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7:
# systemctl start ctdb
# systemctl enable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
# chkconfig ctdb on
- To verify that the CTDB and SMB services have started, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
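Once CTDB is running, you can also check cluster health from any node (a sketch; the ctdb utility ships with the ctdb package and reports the state of each node):
# ctdb status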
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, then re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
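A quick check for that entry before re-running the mount (a sketch; greps /etc/fstab for the shared storage volume name):
# grep gluster_shared_storage /etc/fstab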
- If this node is part of an NFS-Ganesha cluster:
- If SELinux is in use, set the ganesha_use_fusefs Boolean to on.
# setsebool -P ganesha_use_fusefs on
- Start the nfs-ganesha service:
# systemctl start nfs-ganesha
- Enable and start the cluster.
# pcs cluster enable
# pcs cluster start
- Release the node from standby mode.
# pcs cluster unstandby
- Verify that the PCS cluster is running and that the volume is exporting correctly.
# pcs status
# showmount -e
NFS-Ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you see NFS Server Now NOT IN GRACE in the ganesha.log file before continuing.
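To block until the grace period ends, you can watch the log for that message (a sketch; the log path /var/log/ganesha/ganesha.log is an assumption and may differ on your system):
# tail -f /var/log/ganesha/ganesha.log | grep -m1 'NFS Server Now NOT IN GRACE'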
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Note
As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
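After restarting, you can confirm that the session is back up (a sketch using the standard geo-replication status command):
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status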
- If you disabled the disperse.optimistic-change-log and disperse.eager-lock options in order to update an erasure-coded (dispersed) volume, re-enable these settings.
# gluster volume set volname disperse.optimistic-change-log on
# gluster volume set volname disperse.eager-lock on
Note
After performing an in-service upgrade of NFS-Ganesha, the new configuration file is saved as "ganesha.conf.rpmnew" in the /etc/ganesha folder. The old configuration file is not overwritten during the in-service upgrade process. After the upgrade, you must manually copy any new configuration changes from "ganesha.conf.rpmnew" to the existing ganesha.conf file in the /etc/ganesha folder.
Note
If you are updating your Web Administration environment, after executing the required steps, navigate to the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section and perform the steps identified under On Web Administration Server and On Red Hat Gluster Storage Servers (Part II) to complete the Red Hat Gluster Storage and Web Administration update process.