8.4. Upgrading to Red Hat Gluster Storage 3.4
Disable all repositories
# subscription-manager repos --disable='*'
Subscribe to RHEL 7 channel
# subscription-manager repos --enable=rhel-7-server-rpms
Check for stale RHEL 6 packages
Make a note of any stale RHEL 6 packages post upgrade:
# rpm -qa | grep el6
Update and reboot
Update the RHEL 7 packages and reboot once the update is complete.
# yum update
# reboot
Verify the version number
Check the current version number of the updated RHEL 7 system:
# cat /etc/redhat-release
Important
The version number should be 7.5.
Subscribe to required channels
- Subscribe to the Gluster channel:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you require Samba, enable its repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- If you require NFS-Ganesha, enable its repository:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
- If you require gdeploy, enable the Ansible repository:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
- If you require Nagios, enable its repository:
# subscription-manager repos --enable=rh-gluster-3-nagios-for-rhel-7-server-rpms
Install and update Gluster
- Install Red Hat Gluster Storage 3.4 using the following command:
# yum install redhat-storage-server
- Update Red Hat Gluster Storage to the latest packages using the following command:
# yum update
Verify the installation and update
- Check the current version number of the updated Red Hat Gluster Storage system:
# cat /etc/redhat-storage-release
Important
The version number should be 3.4.
- Check if any RHEL 6 packages are present:
# rpm -qa | grep el6
Important
The output of the command should not list any RHEL 6 packages. If the output does list RHEL 6 packages, contact Red Hat Support for guidance on handling them.
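The el6 check above works because RHEL 6 builds carry an "el6" tag in the package release string. A minimal sketch of the same filter, run against a hypothetical package list rather than a live system (the package names below are illustrative stand-ins for `rpm -qa` output):

```shell
# Illustrative sketch: filter package names for RHEL 6 ("el6") builds.
# The package names below are hypothetical stand-ins for `rpm -qa` output.
sample_packages='glusterfs-3.12.2-18.el7.x86_64
openssl-1.0.1e-57.el6.x86_64
bash-4.2.46-31.el7.x86_64'

# Equivalent of `rpm -qa | grep el6`, run against the sample list:
printf '%s\n' "$sample_packages" | grep el6
```

On a correctly upgraded node the real `rpm -qa | grep el6` should print nothing at all.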
Firewalld installation and configuration
- Install and start the firewall daemon using the following commands:
# yum install firewalld
# systemctl start firewalld
- Add the Gluster process to the firewall:
# firewall-cmd --zone=public --add-service=glusterfs --permanent
- Add the required services and ports to firewalld; see Considerations for Red Hat Gluster Storage.
- Reload the firewall using the following command:
# firewall-cmd --reload
Start Gluster processes
- Start the glusterd process:
# systemctl start glusterd
- For a system with Nagios, start the following process:
# systemctl start glusterpmd
- Start the Nagios process:
# systemctl start nrpe
- If the Nagios process fails to start, execute the following commands:
# restorecon -Rv /etc/nagios/nrpe.cfg
# systemctl start nrpe
Update Gluster op-version
Update the Gluster op-version to the required maximum version using the following commands:
# gluster volume get all cluster.max-op-version
# gluster volume set all cluster.op-version op_version
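The first command reads the maximum op-version the cluster supports, and the second applies it. Extracting the number can be sketched against sample output; the sample text below stands in for what `gluster volume get` prints and may differ slightly between versions:

```shell
# Sketch: extract the maximum op-version from sample output. The sample text
# below is illustrative of `gluster volume get all cluster.max-op-version`
# output; real output may differ between versions.
sample_output='Option                                  Value
------                                  -----
cluster.max-op-version                  31306'

# Pick the value out of the matching row:
max_op=$(printf '%s\n' "$sample_output" | awk '$1 == "cluster.max-op-version" {print $2}')
echo "$max_op"

# On a live cluster, the value would then be applied with:
#   gluster volume set all cluster.op-version "$max_op"
```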
Note
31306 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Async Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
Set up Samba and CTDB
If the Gluster setup on RHEL 6 had Samba and CTDB configured, the following should be available on the updated RHEL 7 system:
- CTDB volume
- /etc/ctdb/nodes file
- /etc/ctdb/public_addresses file
Perform the following steps to reconfigure Samba and CTDB:
- Configure the firewall for Samba:
# firewall-cmd --zone=public --add-service=samba --permanent
# firewall-cmd --zone=public --add-port=4379/tcp --permanent
- Subscribe to the Samba channel:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Update Samba to the latest packages:
# yum update
- Set up CTDB for Samba; see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip creating the volume, because volumes present before the upgrade persist after the upgrade.
- In the following files, replace all in the statement META="all" with the volume name:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example, if the volume name is ctdb_volname, META="all" in the files should be changed to META="ctdb_volname".
- Restart the CTDB volume using the following commands:
# gluster volume stop volume_name
# gluster volume start volume_name
- Start the CTDB process:
# systemctl start ctdb
- Share the volume over Samba as required, see Sharing Volumes over SMB.
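The META="all" substitution in the hook scripts above can also be scripted. A sketch, demonstrated on a temporary file rather than the real hook scripts, using the example volume name ctdb_volname from the text:

```shell
# Sketch: substitute META="all" with the CTDB volume name in a hook script.
# Demonstrated on a temporary file; on a real node the targets would be the
# S29CTDBsetup.sh and S29CTDB-teardown.sh hook scripts listed above.
hook=$(mktemp)
echo 'META="all"' > "$hook"

volname=ctdb_volname   # example volume name from the text above
sed -i "s/META=\"all\"/META=\"$volname\"/" "$hook"

# Show the rewritten line, then clean up the temporary file:
result=$(cat "$hook")
echo "$result"
rm -f "$hook"
```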
Start volumes and geo-replication
- Start the required volumes using the following command:
# gluster volume start volume_name
- Mount the meta-volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
- Restore the geo-replication session:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
For more information on geo-replication, see Preparing to Deploy Geo-replication.
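The /etc/fstab entry for the shared storage, from the meta-volume step above, can be sanity-checked with a short field test before re-running mount. A sketch using the sample line from the text, where hostname is a placeholder for a real server name:

```shell
# Sketch: check that a shared-storage /etc/fstab line has the expected mount
# point and filesystem type. The line is the sample from the text above;
# "hostname" is a placeholder for a real server name.
fstab_line='hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0'

# Field 2 is the mount point, field 3 the filesystem type:
echo "$fstab_line" | awk '$2 == "/var/run/gluster/shared_storage/" && $3 == "glusterfs" {print "shared storage entry looks correct"}'
```

On a real node the same awk test could be run against /etc/fstab itself instead of a sample line.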