7.2. Updating the NFS Server
Depending on the environment, the NFS server can be updated in the following ways:
- Updating Gluster NFS
- Updating NFS-Ganesha in the Offline Mode
- Migrating from Gluster NFS to NFS-Ganesha in the Offline Mode
More detailed information about each is provided in the following sections.
7.2.1. Updating Gluster NFS
To update Gluster NFS, refer to Section 7.1, “Updating Red Hat Gluster Storage in the Offline Mode”.
If you have a CTDB setup, refer to Section 8.2.4.1, “In-Service Software Upgrade for a CTDB Setup”.
7.2.2. Updating NFS-Ganesha in the Offline Mode
Note
NFS-Ganesha does not support in-service updates. This means all running services and I/O operations must be stopped before starting the update process.
Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.2:
- Back up all the volume export files under /etc/ganesha/exports and ganesha.conf under /etc/ganesha to a backup directory on all the nodes.
When updating from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.2, for example:
# cp /etc/ganesha/exports/export.v1.conf backup/
# cp /etc/ganesha/exports/export.v2.conf backup/
# cp /etc/ganesha/exports/export.v3.conf backup/
# cp /etc/ganesha/exports/export.v4.conf backup/
# cp /etc/ganesha/exports/export.v5.conf backup/
# cp /etc/ganesha/ganesha.conf backup/
# cp /etc/ganesha/ganesha-ha.conf backup/
When updating from Red Hat Gluster Storage 3.2 to Red Hat Gluster Storage 3.2 Async, for example:
# cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v4.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v5.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf backup/
# cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf backup/
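Note that these example commands assume the backup directory already exists on each node. As a minimal sketch, a directory can be created first (/root/backup is an assumed location, not a mandated path) and then used as the destination in the cp commands above:
# mkdir -p /root/backup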
- Disable nfs-ganesha on the cluster by executing the following command:
# gluster nfs-ganesha disable
For example:
# gluster nfs-ganesha disable
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
- Disable the shared volume in the cluster by executing the following command:
# gluster volume set all cluster.enable-shared-storage disable
For example:
# gluster volume set all cluster.enable-shared-storage disable
Disabling cluster.enable-shared-storage will delete the shared storage volume (gluster_shared_storage), which is used by snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume set: success
- Stop the glusterd service and kill any running gluster processes on all the nodes.
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Ensure that all gluster processes are stopped on all the nodes by executing the following command. If any gluster processes are still running, terminate them using the kill command:
# pgrep gluster
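For example, any leftover process that pgrep still reports can be terminated by its process ID (the placeholder below is illustrative):
# kill -9 <pid_reported_by_pgrep>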
- Stop the pcsd service on all nodes of the cluster.
On Red Hat Enterprise Linux 7:
# systemctl stop pcsd
On Red Hat Enterprise Linux 6:
# service pcsd stop
- Update the packages on all the nodes by executing the following command:
# yum update
This updates the required packages and any dependencies of those packages.
Important
- From Red Hat Gluster Storage 3.2, NFS-Ganesha packages must be installed on all the nodes of the trusted storage pool.
- Verify on all the nodes that the required packages are updated, the nodes are fully functional and are using the correct versions. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed.
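For example, the installed versions can be listed on each node (a sketch; the exact package set varies with your installation):
# rpm -qa | grep -E 'glusterfs|ganesha'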
- Start the glusterd and pcsd services on all the nodes by executing the following commands.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
# systemctl start pcsd
On Red Hat Enterprise Linux 6:
# service glusterd start
# service pcsd start
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
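To confirm the new op-version afterwards, it can be queried on any node (a sketch; this query is available on recent gluster releases):
# gluster volume get all cluster.op-version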
- Copy the volume's export information from the backup copy of ganesha.conf to the newly renamed ganesha.conf under /etc/ganesha.
Export entries in the backup copy of ganesha.conf look like the following:
%include "/etc/ganesha/exports/export.v1.conf"
%include "/etc/ganesha/exports/export.v2.conf"
%include "/etc/ganesha/exports/export.v3.conf"
%include "/etc/ganesha/exports/export.v4.conf"
%include "/etc/ganesha/exports/export.v5.conf"
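Each included export file defines one volume export. The block below is only a representative sketch, with testvol as a hypothetical volume name; your actual files keep the values created when the volumes were first exported:
EXPORT{
    Export_Id = 1;                 # unique export identifier
    Path = "/testvol";             # hypothetical export path
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "testvol";        # hypothetical gluster volume name
    }
    Access_type = RW;
    Pseudo = "/testvol";
    Protocols = "3","4";
    SecType = "sys";
}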
- Copy the backed-up volume export files from the backup directory to /etc/ganesha/exports:
# cp export.* /etc/ganesha/exports/
- Enable the firewall settings for the new services and ports. Information on how to enable the services is available in the Red Hat Gluster Storage Administration Guide.
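The Administration Guide is authoritative here; as a sketch only, on Red Hat Enterprise Linux 7 the firewalld services commonly associated with NFS-Ganesha can be opened as follows (the service names are assumptions based on standard firewalld definitions):
# firewall-cmd --zone=public --add-service=nfs --add-service=rpc-bind --add-service=mountd --add-service=high-availability
# firewall-cmd --zone=public --add-service=nfs --add-service=rpc-bind --add-service=mountd --add-service=high-availability --permanent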
- Enable the shared volume in the cluster:
# gluster volume set all cluster.enable-shared-storage enable
For example:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success
- Ensure that the shared storage volume mount exists on the server after node reboot/shutdown. If it does not, then mount the shared storage volume manually using the following command:
# mount -t glusterfs <local_node's_hostname>:gluster_shared_storage /var/run/gluster/shared_storage
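Whether the mount is present can be checked with, for example:
# mount | grep shared_storage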
- Once the shared volume is created, create a folder named “nfs-ganesha” inside /var/run/gluster/shared_storage:
# cd /var/run/gluster/shared_storage/
# mkdir nfs-ganesha
- Copy the ganesha.conf, ganesha-ha.conf, and the exports folder from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha:
# cd /etc/ganesha/
# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- If there are any export entries in the ganesha.conf file, then update the path in the file using the following command:
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
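After the substitution, the export entries in the relocated ganesha.conf point at the shared storage location, for example:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"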
- Execute the following command to clean up any existing cluster-related configuration:
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4, enable the ganesha_use_fusefs and gluster_use_execmem booleans before enabling NFS-Ganesha by executing the following commands:
# setsebool -P ganesha_use_fusefs on
# setsebool -P gluster_use_execmem on
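The boolean state can be verified afterwards, for example:
# getsebool ganesha_use_fusefs gluster_use_execmem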
- Enable nfs-ganesha on the cluster:
# gluster nfs-ganesha enable
For example:
# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue? (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
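Once enabled, the state of the HA cluster can be inspected as a quick check (a sketch using the same ganesha-ha.sh script invoked earlier):
# /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha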
Important
Verify that all the nodes are functional. If anything does not seem correct, then do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if required.
7.2.3. Migrating from Gluster NFS to NFS-Ganesha in the Offline Mode
Perform the following steps on each node of the replica pair to migrate from Gluster NFS to NFS-Ganesha:
- To ensure that CTDB does not start automatically after a reboot, run the following command on each node of the CTDB cluster:
# chkconfig ctdb off
- Stop the CTDB service on the Red Hat Gluster Storage node using the following command on each node of the CTDB cluster:
# service ctdb stop
- To verify that the CTDB and NFS services are stopped, execute the following command (the [d] in the pattern prevents the grep process itself from matching):
# ps axf | grep -E '(ctdb|nfs)[d]'
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Delete the CTDB volume by executing the following command:
# gluster vol delete <ctdb_vol_name>
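If you are unsure of the CTDB volume name, the configured volumes can be listed first, for example:
# gluster volume list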
- Update the server using the following command:
# yum update
- Reboot the server.
- Start the glusterd service using the following command:
# service glusterd start
On Red Hat Enterprise Linux 7, execute the following command:
# systemctl start glusterd
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
- To install the NFS-Ganesha packages, refer to Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage.
- To configure the NFS-Ganesha cluster, refer to the NFS-Ganesha section in the Red Hat Gluster Storage Administration Guide.