6.3. In-service Software Update from Red Hat Gluster Storage
Important
In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost.
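If you want to keep runtime-only firewall changes across the update, one option is to write them to the permanent configuration beforehand. This is a minimal sketch assuming firewalld is in use and that your firewalld version supports the --runtime-to-permanent option; adapt it to your environment.
# firewall-cmd --runtime-to-permanent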
Important
The SMB and CTDB services do not support in-service updates. The procedure outlined in this section involves service interruptions to the SMB and CTDB services.
Before you update, be aware:
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
- If geo-replication is in use, complete updates to all slave nodes before updating master nodes.
- Erasure coded (dispersed) volumes can be updated while in-service only if the disperse.optimistic-change-log and disperse.eager-lock options are set to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations. A sketch for checking the current values follows this list.
- If updating Samba, ensure that Samba is upgraded on all nodes simultaneously, as running different versions of Samba in the same cluster results in data corruption.
- Your system must be registered to Red Hat Network in order to receive updates. For more information, see Section 2.6, “Subscribing to the Red Hat Gluster Storage Server Channels”.
- Do not perform any volume operations while the cluster is being updated.
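For dispersed volumes, the following sketch shows one way to confirm the current values of the disperse.optimistic-change-log and disperse.eager-lock options before you begin. It assumes the gluster volume get command is available in your release; volname is a placeholder for your volume name.
# gluster volume get volname disperse.optimistic-change-log
# gluster volume get volname disperse.eager-lock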
Updating Red Hat Gluster Storage 3.4 in in-service mode
- Ensure that you have a working backup, as described in Section 6.1, “Before you update”.
- If you have a replicated configuration, perform these steps on all nodes of a replica set. If you have a distributed-replicated configuration, perform these steps on one replica set at a time, for all replica sets.
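To see how bricks are grouped into replica sets before you begin, you can inspect the volume layout, for example with the following command (volname is a placeholder); in the output, consecutive bricks form each replica set.
# gluster volume info volname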
- Stop any geo-replication sessions.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- If this node is part of an NFS-Ganesha cluster, place the node in standby mode.
# pcs cluster standby
- Verify that there are no pending self-heals:
# gluster volume heal volname info
Wait for any self-heal operations to complete before continuing.
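One quick way to confirm that no heals are pending is to check that every brick in the heal info output reports zero entries, for example:
# gluster volume heal volname info | grep "Number of entries"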
- If this node is part of an NFS-Ganesha cluster:
- Disable the PCS cluster and verify that it has stopped.
# pcs cluster disable
# pcs status
- Stop the nfs-ganesha service.
# systemctl stop nfs-ganesha
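To confirm that the service has stopped before continuing, you can query its state; this assumes a Red Hat Enterprise Linux 7 based node with systemd.
# systemctl is-active nfs-ganesha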
- If you need to update an erasure coded (dispersed) volume, set the disperse.optimistic-change-log and disperse.eager-lock options to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
# gluster volume set volname disperse.optimistic-change-log off
# gluster volume set volname disperse.eager-lock off
- Stop the gluster services on the storage server using the following commands:
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
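You can verify that no gluster processes remain before continuing, for example with pgrep (assumed to be installed as part of procps):
# pgrep -fl gluster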
- If you use Samba:
- Enable the required repository.
On Red Hat Enterprise Linux 6.7 or later:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Stop the CTDB and SMB services across all nodes in the Samba cluster using the following commands. Stopping the CTDB service also stops the SMB service.
On Red Hat Enterprise Linux 7:
# systemctl stop ctdb
# systemctl disable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb stop
# chkconfig ctdb off
This ensures that different versions of Samba do not run in the same Samba cluster until all Samba nodes are updated.
- Verify that the CTDB and SMB services are stopped by running the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Update the server using the following command:
# yum update
Take note of the packages being updated, and wait for the update to complete.
- If a kernel update was included as part of the update process in the previous step, reboot the server.
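One way to check whether a newer kernel was installed than the one currently running, and therefore whether a reboot is needed, is to compare the running kernel with the most recently installed kernel package (a minimal sketch; the needs-restarting utility from yum-utils, if installed, can perform a similar check):
# uname -r
# rpm -q --last kernel | head -1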
- If a reboot of the server was not required, start the gluster services on the storage server using the following command.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
- Verify that you have updated to the latest version of the Red Hat Gluster Storage server.
# gluster --version
Compare the output with the desired version in Section 1.5, “Supported Versions of Red Hat Gluster Storage”.
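You can also confirm the installed package version directly; the package name shown here (glusterfs-server) is an assumption about your installation.
# rpm -q glusterfs-server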
- Ensure that all bricks are online. To check the status, execute the following command:
# gluster volume status
- Start self-heal on the volume.
# gluster volume heal volname
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
- Verify that shared storage is mounted.
# mount | grep /run/gluster/shared_storage
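Before raising the cluster op-version in the next step, you can optionally check the currently active value. This assumes your release supports querying the option with gluster volume get.
# gluster volume get all cluster.op-version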
- When all nodes in the volume have been updated, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31306
Note
31306 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Async Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you use Samba:
- Mount /gluster/lock before starting CTDB by executing the following command:
# mount -a
- If all servers that host volumes accessed via SMB have been updated, start and re-enable the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7:
# systemctl start ctdb
# systemctl enable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
# chkconfig ctdb on
- To verify that the CTDB and SMB services have started, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
- If this node is part of an NFS-Ganesha cluster:
- If SELinux is in use, set the ganesha_use_fusefs Boolean to on.
# setsebool -P ganesha_use_fusefs on
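To confirm that the Boolean is now enabled, you can query it with getsebool:
# getsebool ganesha_use_fusefs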
- Start the nfs-ganesha service:
# systemctl start nfs-ganesha
- Enable and start the cluster.
# pcs cluster enable
# pcs cluster start
- Release the node from standby mode.
# pcs cluster unstandby
- Verify that the PCS cluster is running and that the volume is exporting correctly.
# pcs status
# showmount -e
NFS-Ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you see NFS Server Now NOT IN GRACE in the ganesha.log file before continuing.
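One way to watch for this message is to search the log until it appears, for example with the command below. The log path /var/log/ganesha.log is an assumption and may differ on your system.
# grep "NFS Server Now NOT IN GRACE" /var/log/ganesha.log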
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Note
As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
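To confirm that the sessions are active again, you can check their status using the same placeholders as above:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status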
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If you disabled the
disperse.optimistic-change-log
anddisperse.eager-lock
options in order to update an erasure-coded (dispersed) volume, re-enable these settings.gluster volume set volname disperse.optimistic-change-log on gluster volume set volname disperse.eager-lock on
# gluster volume set volname disperse.optimistic-change-log on # gluster volume set volname disperse.eager-lock on
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Note
After performing an in-service upgrade of NFS-Ganesha, the new configuration file is saved as "ganesha.conf.rpmnew" in the /etc/ganesha directory. The old configuration file is not overwritten during the in-service upgrade process. However, after the upgrade, you must manually copy any new configuration changes from "ganesha.conf.rpmnew" to the existing ganesha.conf file in the /etc/ganesha directory.
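To review what changed between the two files before merging, a simple comparison such as the following can help; review the differences manually rather than replacing the existing file wholesale.
# diff -u /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.rpmnew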
Note
If you are updating your Web Administration environment, after executing the required steps, navigate to the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section and perform the steps identified under On Web Administration Server and On Red Hat Gluster Storage Servers (Part II) to complete the Red Hat Gluster Storage and Web Administration update process.