7.3. In-service Software Update from Red Hat Gluster Storage
Warning
Before you update, be aware of changed requirements that exist after Red Hat Gluster Storage 3.1.3. If you want to access a volume being provided by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing volumes from other client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 contained a number of changes that affect how the Distributed Hash Table works in order to improve directory consistency and remove the effects seen in BZ#1115367 and BZ#1118762.
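As a quick check before updating servers, you can confirm the client version installed on each native (FUSE) client; this sketch assumes the glusterfs client packages are installed on the client machine:
  # glusterfs --version
Compare the reported version against Section 1.5, “Supported Versions of Red Hat Gluster Storage” to confirm the client corresponds to Red Hat Gluster Storage 3.1.3 or higher.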
Important
In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost.
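If you have runtime-only firewall changes that you need to keep, one option is to make them permanent before starting the update. The following is a minimal sketch for a Red Hat Enterprise Linux 7 node using firewalld; the glusterfs service shown is an example, not a complete Red Hat Gluster Storage rule set:
  # firewall-cmd --permanent --add-service=glusterfs
  # firewall-cmd --reload
Rules added only at runtime (without --permanent) are the ones lost when the firewall rules reload.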
Before you update, be aware:
- You must update all Red Hat Gluster Storage servers before updating any clients.
- If geo-replication is in use, slave nodes must be updated before master nodes.
- NFS-Ganesha does not support in-service updates. All running services and I/O operations must be stopped before starting the update process. For more information, see Section 7.2.2, “Updating NFS-Ganesha in the Offline Mode”.
- Dispersed volumes (volumes that use erasure coding) do not support in-service updates and cannot be updated in a non-disruptive manner.
- The SMB and CTDB services do not support in-service updates. The procedure outlined in this section involves service interruptions to the SMB and CTDB services.
- If updating Samba, ensure that Samba is upgraded on all nodes simultaneously, as running different versions of Samba in the same cluster results in data corruption.
- Your system must be registered to Red Hat Network. For more information, refer to Section 2.6, “Subscribing to the Red Hat Gluster Storage Server Channels”. A quick way to confirm registration is shown after this list.
- Do not perform any volume operations while the cluster is being updated.
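Before you begin, you can confirm that a node is registered and has the expected repositories enabled. This is only a quick check and does not replace the steps in Section 2.6:
  # subscription-manager status
  # subscription-manager repos --list-enabled
The list of enabled repositories should include the Red Hat Gluster Storage server channels for your Red Hat Enterprise Linux release.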
To update your system to Red Hat Gluster Storage 3.2.x, perform the following steps on each node of the replica pair.
Updating Red Hat Gluster Storage 3.2 in in-service mode
- If you have a replicated configuration, perform these steps on all nodes of a replica set. If you have a distributed-replicated setup, perform these steps on one replica set at a time, for all replica sets.
- Stop any geo-replication sessions.
  # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Verify that there are no pending self-heals:
  # gluster volume heal volname info
  Wait for any self-heal operations to complete before continuing; a sketch of a script that waits for heal completion is shown after this procedure.
- Stop the gluster services on the storage server using the following commands:
  # service glusterd stop
  # pkill glusterfs
  # pkill glusterfsd
- If you use Samba:
  - Enable the required repository.
    On Red Hat Enterprise Linux 6.7 or later:
    # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
    On Red Hat Enterprise Linux 7:
    # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
  - Stop the CTDB and SMB services across all nodes in the Samba cluster using the following command. Stopping the CTDB service also stops the SMB service.
    # service ctdb stop
    This ensures different versions of Samba do not run in the same Samba cluster.
  - Verify that the CTDB and SMB services are stopped by running the following command:
    # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Update the server using the following command:
  # yum update
  Wait for the update to complete.
- If a kernel update was included as part of the update process in the previous step, reboot the server.
- If a reboot of the server was not required, start the gluster services on the storage server using the following command:
  # service glusterd start
  Additionally, if you use Samba:
  - Mount /gluster/lock before starting CTDB by executing the following command:
    # mount -a
  - If the CTDB and SMB services were stopped earlier, start the services by executing the following command:
    # service ctdb start
  - To verify that the CTDB and SMB services have started, execute the following command:
    # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- To verify that you have updated to the latest version of the Red Hat Gluster Storage server, execute the following command and compare the output with the desired version in Section 1.5, “Supported Versions of Red Hat Gluster Storage”.
  # gluster --version
- Ensure that all bricks are online. To check the status, execute the following command:
  # gluster volume status
- Start self-heal on the volume:
  # gluster volume heal volname
- Ensure self-heal is complete on the replica using the following command:
  # gluster volume heal volname info
- When all nodes in the volume have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
  # gluster volume set all cluster.op-version 31001
  Note
  31001 is the cluster.op-version value for Red Hat Gluster Storage 3.2.0. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
  # mount /var/run/gluster/shared_storage/
  If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
  hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete:
  # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
  Note
  As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances:
  # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
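As noted in the self-heal verification step above, you can wait for pending heals to drain instead of re-running the heal info command manually. The following is a minimal sketch; it assumes a volume named volname and that each brick's pending count appears on a "Number of entries:" line in the heal info output (the exact output format can vary between releases):

  #!/bin/bash
  # Poll heal info until every brick reports zero pending entries.
  VOLNAME=volname   # replace with your volume name
  while true; do
      # Sum the per-brick pending entry counts from the heal info output.
      pending=$(gluster volume heal "$VOLNAME" info \
          | awk '/^Number of entries:/ {sum += $NF} END {print sum+0}')
      if [ "$pending" -eq 0 ]; then
          echo "Self-heal complete on $VOLNAME"
          break
      fi
      echo "$pending entries still pending on $VOLNAME; waiting..."
      sleep 30
  done

Adjust the polling interval to suit the size of your volume, and verify the result with gluster volume heal volname info before moving to the next replica set.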