6.4. Upgrading to Red Hat Gluster Storage 3.5
Disable all repositories
# subscription-manager repos --disable='*'
Subscribe to the Red Hat Enterprise Linux 7 channel
# subscription-manager repos --enable=rhel-7-server-rpms
Check for stale Red Hat Enterprise Linux 6 packages
Check for any stale Red Hat Enterprise Linux 6 packages after the upgrade:
# rpm -qa | grep el6
Important
If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support for further guidance on those packages.
Update and reboot
Update the Red Hat Enterprise Linux 7 packages and reboot:
# yum update
# reboot
Verify the version number
Ensure that the latest version of Red Hat Enterprise Linux 7 is shown when you view the `redhat-release` file:
# cat /etc/redhat-release
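For example, on an upgraded node the file resembles the following; the exact minor release and codename depend on the updates applied:
Red Hat Enterprise Linux Server release 7.9 (Maipo)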
Subscribe to the required channels
- Subscribe to the Gluster channel:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you use Samba, enable its repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- If you use NFS-Ganesha, enable its repositories:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
- If you use gdeploy, enable the Ansible repository:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
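To confirm that the required repositories are now enabled, you can list them; the exact output depends on which of the optional channels you enabled:
# yum repolist enabled | grep -E 'gluster|ansible'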
Install and update Gluster
- If you used a Red Hat Enterprise Linux 7 ISO, install Red Hat Gluster Storage 3.5 using the following command:
# yum install redhat-storage-server
This is already installed if you used a Red Hat Gluster Storage 3.5 ISO based on Red Hat Enterprise Linux 7.
- Update Red Hat Gluster Storage to the latest packages using the following command:
# yum update
Verify the installation and update
- Check the current version number of the updated Red Hat Gluster Storage system:
# cat /etc/redhat-storage-release
Important
The version number should be 3.5.
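For example, the file contents on an upgraded node resemble the following; the exact string depends on the installed batch update:
Red Hat Gluster Storage Server 3.5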
- Ensure that no Red Hat Enterprise Linux 6 packages are present:
# rpm -qa | grep el6
Important
If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support for further guidance on those packages.
Install and configure firewalld
- Install and start the firewall daemon using the following commands:
# yum install firewalld
# systemctl start firewalld
- Add the Gluster service to the firewall:
# firewall-cmd --zone=public --add-service=glusterfs --permanent
- Add the required services and ports to firewalld, as in the sketch below. For more information, see Considerations for Red Hat Gluster Storage.
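As an illustration, assuming a typical deployment, the following commands open the glusterd management ports and the brick port range; these port numbers are assumptions, so verify them against Considerations for Red Hat Gluster Storage for your deployment:
# firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
# firewall-cmd --zone=public --add-port=49152-49664/tcp --permanent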
- Reload the firewall using the following command:
# firewall-cmd --reload
Start the Gluster processes
- Start the glusterd process:
# systemctl start glusterd
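As an optional check, you can confirm that the daemon is running and that the peers have rejoined the cluster:
# systemctl status glusterd
# gluster peer status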
Update Gluster op-version
Update the Gluster op-version to the required maximum version using the following commands:
# gluster volume get all cluster.max-op-version
# gluster volume set all cluster.op-version op_version
Note
70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
After upgrading the cluster.op-version, enable granular-entry-heal for the volume using the following command:
# gluster volume heal $VOLNAME granular-entry-heal enable
The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version is bumped up.
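For example, on a cluster where every node has been upgraded to 3.5, the exchange might look like the following; the output shown is illustrative:
# gluster volume get all cluster.max-op-version
Option                                  Value
------                                  -----
cluster.max-op-version                  70200
# gluster volume set all cluster.op-version 70200
volume set: success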
Set up Samba and CTDB
If the Gluster setup on Red Hat Enterprise Linux 6 had Samba and CTDB configured, you should have the following available on the updated Red Hat Enterprise Linux 7 system:
- CTDB volume
- /etc/ctdb/nodes file
- /etc/ctdb/public_addresses file
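For reference, the nodes file lists one internal IP address per CTDB node, and the public_addresses file lists the floating addresses with their interface; the addresses below are placeholders:
# cat /etc/ctdb/nodes
192.168.1.10
192.168.1.11
# cat /etc/ctdb/public_addresses
192.168.2.20/24 eth0
192.168.2.21/24 eth0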
Perform the following steps to reconfigure Samba and CTDB:
- Configure the firewall for Samba:
# firewall-cmd --zone=public --add-service=samba --permanent
# firewall-cmd --zone=public --add-port=4379/tcp --permanent
- Subscribe to the Samba channel:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Update Samba to the latest packages:
# yum update
- Configure CTDB for Samba. For more information, see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip the volume creation step, because the volumes that existed before the upgrade persist after it.
- In the following files, replace all in the statement META="all" with the volume name, as in the sketch below:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example, if the volume name is ctdb_volname, META="all" in these files should be changed to META="ctdb_volname".
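One way to make this change, assuming the CTDB volume is named ctdb_volname (a placeholder), is with sed:
# sed -i 's/META="all"/META="ctdb_volname"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# sed -i 's/META="all"/META="ctdb_volname"/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh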
- Restart the CTDB volume using the following commands:
# gluster volume stop volume_name
# gluster volume start volume_name
- Start the CTDB process:
# systemctl start ctdb
- Share the volume over Samba if required. See Sharing Volumes over SMB.
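Once CTDB is running on all nodes, an optional way to verify cluster health is the ctdb status command; healthy nodes eventually report the OK state:
# ctdb status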
Start the volumes and geo-replication
- Start the required volumes using the following command:
# gluster volume start volume_name
- Mount the meta-volume:
# mount /var/run/gluster/shared_storage/
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
If this command does not work, review the content of the /etc/fstab file, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
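To confirm that the meta-volume is mounted, you can check the mount table; hostname stands for one of the cluster nodes:
# mount | grep gluster_shared_storage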
- Restore the geo-replication session:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
For more information on geo-replication, see Preparing to Deploy Geo-replication.
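To verify that the session came back up, you can check its status using the same MASTER_VOL and SLAVE_HOST::SLAVE_VOL placeholders:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status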