Installation Guide
Installing Red Hat Gluster Storage 3.5
Abstract
Chapter 1. Planning Red Hat Gluster Storage Installation
1.1. About Red Hat Gluster Storage
Red Hat Gluster Storage Server for On-Premise enables enterprises to treat physical storage as a virtualized, scalable, and centrally managed pool of storage by using commodity server and storage hardware.
Red Hat Gluster Storage Server for Public Cloud packages GlusterFS as an Amazon Machine Image (AMI) for deploying scalable NAS in the AWS public cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for Amazon users.
A storage node can be a physical server, a virtual machine, or a public cloud machine image running one instance of Red Hat Gluster Storage.
1.2. Prerequisites
Important
1.2.1. Multi-site Cluster Latency
# ping -c100 -q site_ip_address/site_hostname
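For example, assuming a hypothetical remote site host named site2.example.com, the summary statistics printed by the quiet ping run show the latency to evaluate; the hostname and timings below are illustrative only:
# ping -c100 -q site2.example.com
rtt min/avg/max/mdev = 0.459/0.764/1.228/0.145 ms
The avg field of the rtt summary line is the value to compare against your latency requirement.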
Note
1.2.2. File System Requirements
Note
Deployments that use ext3 or ext4 must be migrated in order to upgrade to a supported version of Red Hat Gluster Storage using the XFS back-end file system.
1.2.3. Logical Volume Manager
1.2.4. Network Time Configuration
1.2.4.1. Configuring time synchronization using Chrony
Note
# clockdiff node-hostname
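As a minimal sketch, chrony can be installed and enabled as follows on Red Hat Enterprise Linux 7 or 8, assuming the default pool servers shipped in /etc/chrony.conf are acceptable:
# yum install chrony
# systemctl enable --now chronyd
# chronyc sources -v
Adjust the server entries in /etc/chrony.conf if your environment requires internal time servers.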
1.2.4.2. Configuring time synchronization using Network Time Protocol
Configure the ntpd daemon to automatically synchronize the time during the boot process as follows:
- Edit the NTP configuration file /etc/ntp.conf using a text editor such as vim or nano:
# nano /etc/ntp.conf
- Add or edit the list of public NTP servers in the ntp.conf file as follows:
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
The Red Hat Enterprise Linux 6 version of this file already contains the required information. Edit the contents of this file if customization is required. For more information regarding the supported Red Hat Enterprise Linux version for a particular Red Hat Gluster Storage release, see Section 1.7, “Red Hat Gluster Storage Support Matrix”.
- Optionally, increase the initial synchronization speed by appending the iburst directive to each line:
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
- After the list of servers is complete, set the required permissions in the same file. Ensure that only localhost has unrestricted access:
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
- Save all changes, exit the editor, and restart the NTP daemon:
# service ntpd restart
- Ensure that the ntpd daemon starts at boot time:
# chkconfig ntpd on
You can also use the ntpdate command for a one-time synchronization of NTP. For more information about this feature, see the Red Hat Enterprise Linux Deployment Guide.
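To confirm that ntpd is synchronizing after the restart, the peers it is polling can be queried; this verification step is an addition to the procedure above:
# ntpq -p
An asterisk (*) in the first column of the output marks the server currently selected for synchronization.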
1.3. Hardware Compatibility
1.4. Port Information
On Red Hat Enterprise Linux 6, run the following iptables command to open a port:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT
# service iptables save
# firewall-cmd --zone=zone_name --add-service=glusterfs
# firewall-cmd --zone=zone_name --add-service=glusterfs --permanent
# firewall-cmd --zone=zone_name --add-port=port/protocol
# firewall-cmd --zone=zone_name --add-port=port/protocol --permanent
# firewall-cmd --zone=public --add-port=5667/tcp
# firewall-cmd --zone=public --add-port=5667/tcp --permanent
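As a further illustration of the port-based syntax, the brick communication range listed in the table below could be opened in the public zone as follows; adjust the zone and port range to match your deployment:
# firewall-cmd --zone=public --add-port=49152-49664/tcp
# firewall-cmd --zone=public --add-port=49152-49664/tcp --permanent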
Connection source | TCP Ports | UDP Ports | Recommended for | Used for |
---|---|---|---|---|
Any authorized network entity with a valid SSH key | 22 | - | All configurations | Remote backup using geo-replication |
Any authorized network entity; be cautious not to clash with other RPC services. | 111 | 111 | All configurations | RPC port mapper and RPC bind |
Any authorized SMB/CIFS client | 139 and 445 | 137 and 138 | Sharing storage using SMB/CIFS | SMB/CIFS protocol |
Any authorized NFS clients | 2049 | 2049 | Sharing storage using Gluster NFS (Deprecated) or NFS-Ganesha | Exports using NFS protocol |
All servers in the Samba-CTDB cluster | 4379 | - | Sharing storage using SMB and Gluster NFS (Deprecated) | CTDB |
Any authorized network entity | 24007 | - | All configurations | Management processes using glusterd |
Any authorized network entity | 24009 | - | All configurations | Gluster events daemon |
NFSv3 clients | 662 | 662 | Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) | statd |
NFSv3 clients | 32803 | 32803 | Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) | NLM protocol |
NFSv3 clients sending mount requests | - | 32769 | Sharing storage using Gluster NFS (Deprecated) | Gluster NFS MOUNT protocol |
NFSv3 clients sending mount requests | 20048 | 20048 | Sharing storage using NFS-Ganesha | NFS-Ganesha MOUNT protocol |
NFS clients | 875 | 875 | Sharing storage using NFS-Ganesha | NFS-Ganesha RQUOTA protocol (fetching quota information) |
Servers in pacemaker/corosync cluster | 2224 | - | Sharing storage using NFS-Ganesha | pcsd |
Servers in pacemaker/corosync cluster | 3121 | - | Sharing storage using NFS-Ganesha | pacemaker_remote |
Servers in pacemaker/corosync cluster | - | 5404 and 5405 | Sharing storage using NFS-Ganesha | corosync |
Servers in pacemaker/corosync cluster | 21064 | - | Sharing storage using NFS-Ganesha | dlm |
Any authorized network entity | 49152 - 49664 | - | All configurations | Brick communication ports. The total number of ports required depends on the number of bricks on the node. One port is required for each brick on the machine. |
Connection source | TCP Ports | UDP Ports | Recommended for | Used for |
---|---|---|---|---|
NFSv3 servers | 662 | 662 | Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) | statd |
NFSv3 servers | 32803 | 32803 | Sharing storage using NFS-Ganesha and Gluster NFS (Deprecated) | NLM protocol |
1.5. Red Hat Gluster Storage Software Components and Versions
RHGS version | glusterfs and glusterfs-fuse | RHGS op-version | SMB | NFS | gDeploy | Ansible |
---|---|---|---|---|---|---|
3.4 | 3.12.2-18 | 31302 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-27 | - |
3.4 Batch 1 Update | 3.12.2-25 | 31303 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-30 | - |
3.4 Batch 2 Update | 3.12.2-32 | 31304 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-31 | gluster-ansible-infra-1.0.2-2 |
3.4 Batch 3 Update | 3.12.2-40 | 31305 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-31 | gluster-ansible-infra-1.0.2-2 |
3.4 Batch 4 Update | 3.12.2-47 | 31305 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-32 | gluster-ansible-infra-1.0.3-3 |
3.4.4 Async Update | 3.12.2-47.5 | 31306 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0 | gdeploy-2.0.2-32 | gluster-ansible-infra-1.0.3-3 |
3.5 (RHEL 6) | 6.0-22 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | - | - | - |
3.5 (RHEL 7) | 6.0-21 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-35 | gluster-ansible-infra-1.0.4-3 |
3.5 Batch Update 1 (RHEL 7) | 6.0-29 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-35 | gluster-ansible-infra-1.0.4-5 |
3.5 Async Update (RHEL 7) | 6.0-30.1 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-35 | gluster-ansible-infra-1.0.4-5 |
3.5 Batch 2 Update (RHEL 7) | 6.0-37 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-35 | gluster-ansible-infra-1.0.4-5 |
3.5 Batch 2 Update (RHEL 8) | 6.0-37 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-7 | gluster-ansible-infra-1.0.4-10 |
3.5.2 Async Update (RHEL 7) | 6.0-37.1 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-35 | gluster-ansible-infra-1.0.4-5 |
3.5.2 Async Update (RHEL 8) | 6.0-37.1 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-7 | gluster-ansible-infra-1.0.4-17 |
3.5 Batch 3 Update (RHEL 7) | 6.0-49.2 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-5 |
3.5 Batch 3 Update (RHEL 8) | 6.0-49 | 70000 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-8 | gluster-ansible-infra-1.0.4-11 |
3.5 Batch 4 Update (RHEL 7) | 6.0-56 | 70100 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-5 |
3.5 Batch 4 Update (RHEL 8) | 6.0-56 | 70100 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-8 | gluster-ansible-infra-1.0.4-19 |
3.5.4 Async Update (RHEL 7) | 6.0-56.2 | 70100 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-19 |
3.5.4 Async Update (RHEL 8) | 6.0-56.2 | 70100 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 5 Update (RHEL 7) | 6.0-59 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 5 Update (RHEL 8) | 6.0-59 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-8 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 6 Update (RHEL 7) | 6.0-61 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 6 Update (RHEL 8) | 6.0-61 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-8 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 7 Update (RHEL 7) | 6.0-63 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-2.0.2-36 | gluster-ansible-infra-1.0.4-19 |
3.5 Batch 7 Update (RHEL 8) | 6.0-63 | 70200 | SMB 1, 2.0, 2.1, 3.0, 3.1.1 | NFSv3, NFSv4.0, NFSv4.1 | gdeploy-3.0.0-11 | gluster-ansible-infra-1.0.4-21 |
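When planning an update, it can be useful to check the op-version the cluster currently runs and the highest op-version it supports. A quick check, run on any server in the trusted storage pool:
# gluster volume get all cluster.op-version
# gluster volume get all cluster.max-op-version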
Note
1.6. Feature Compatibility Support
Important
Feature | Version |
---|---|
Arbiter bricks | 3.2 |
Bitrot detection | 3.1 |
Erasure coding | 3.1 |
Google Compute Engine | 3.1.3 |
Metadata caching | 3.2 |
Microsoft Azure | 3.1.3 |
NFS version 4 | 3.1 |
SELinux | 3.1 |
Sharding | 3.2.0 |
Snapshots | 3.0 |
Snapshots, cloning | 3.1.3 |
Snapshots, user-serviceable | 3.0.3 |
Tiering (Deprecated) | 3.1.2 |
Volume Shadow Copy (VSS) | 3.1.3 |
Volume Type | Sharding | Tiering (Deprecated) | Quota | Snapshots | Geo-Rep | Bitrot |
---|---|---|---|---|---|---|
Arbitrated-Replicated | Yes | No | Yes | Yes | Yes | Yes |
Distributed | No | Yes | Yes | Yes | Yes | Yes |
Distributed-Dispersed | No | Yes | Yes | Yes | Yes | Yes |
Distributed-Replicated | Yes | Yes | Yes | Yes | Yes | Yes |
Replicated | Yes | Yes | Yes | Yes | Yes | Yes |
Sharded | N/A | No | No | No | Yes | No |
Tiered (Deprecated) | No | N/A | Limited[a] | Limited[a] | Limited[a] | Limited[a] |
Note
Feature | FUSE | Gluster-NFS | NFS-Ganesha | SMB |
---|---|---|---|---|
Arbiter | Yes | Yes | Yes | Yes |
Bitrot detection | Yes | Yes | No | Yes |
dm-cache | Yes | Yes | Yes | Yes |
Encryption (TLS-SSL) | Yes | Yes | Yes | Yes |
Erasure coding | Yes | Yes | Yes | Yes |
Export subdirectory | Yes | Yes | Yes | N/A |
Geo-replication | Yes | Yes | Yes | Yes |
Quota (Deprecated)
Warning
The quota feature is considered deprecated as of Red Hat Gluster Storage 3.5.3. Red Hat no longer recommends the use of this feature and does not support it on new deployments or on existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
| Yes | Yes | Yes | Yes |
RDMA (Deprecated)
Warning
Using RDMA as a transport protocol is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of this feature, and does not support it on new deployments or on existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
| Yes | No | No | No |
Snapshots | Yes | Yes | Yes | Yes |
Snapshot cloning | Yes | Yes | Yes | Yes |
Snapshot mount | Yes | No | No | No |
Tiering (Deprecated)
Warning
The tiering feature is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of this feature, and does not support it on new deployments or on existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
| Yes | Yes | N/A | N/A |
1.7. Red Hat Gluster Storage Support Matrix
Red Hat Enterprise Linux version | Red Hat Gluster Storage version |
---|---|
6.5 | 3.0 |
6.6 | 3.0.2, 3.0.3, 3.0.4 |
6.7 | 3.1, 3.1.1, 3.1.2 |
6.8 | 3.1.3 |
6.9 | 3.2 |
6.9 | 3.3 |
6.9 | 3.3.1 |
6.10 | 3.4, 3.5 |
7.1 | 3.1, 3.1.1 |
7.2 | 3.1.2 |
7.2 | 3.1.3 |
7.3 | 3.2 |
7.4 | 3.2 |
7.4 | 3.3 |
7.4 | 3.3.1 |
7.5 | 3.3.1, 3.4 |
7.6 | 3.3.1, 3.4 |
7.7 | 3.5, 3.5.1 |
7.8 | 3.5.1, 3.5.2 |
7.9 | 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 |
8.2 | 3.5.2, 3.5.3 |
8.3 | 3.5.3 |
8.4 | 3.5.4 |
8.5 | 3.5.5, 3.5.6 |
8.6 | 3.5.7 |
Chapter 2. Installing Red Hat Gluster Storage
Warning
Important
- Technology preview packages will also be installed with this installation of Red Hat Gluster Storage Server. For more information about the list of technology preview features, see chapter Technology Previews in the Red Hat Gluster Storage 3.5 Release Notes.
- When you clone a virtual machine that has Red Hat Gluster Storage Server installed, you need to remove the /var/lib/glusterd/glusterd.info file (if present) before you clone. If you do not remove this file, all cloned machines will have the same UUID. The file will be automatically recreated with a UUID on initial start-up of the glusterd daemon on the cloned virtual machines.
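A minimal way to do this on the template virtual machine before cloning, assuming the file exists, is:
# rm -f /var/lib/glusterd/glusterd.info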
2.1. Obtaining Red Hat Gluster Storage
2.1.1. Obtaining Red Hat Gluster Storage Server for On-Premise
- Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in.
- Click Downloads to visit the Software & Download Center.
- In the Red Hat Gluster Storage Server area, click the download link to download the latest version of the software.
2.1.2. Obtaining Red Hat Gluster Storage Server for Public Cloud
2.2. Installing from an ISO Image
Important
2.2.1. Installing Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux
Important
- Download the ISO image file for Red Hat Gluster Storage Server as described in Section 2.1, “Obtaining Red Hat Gluster Storage”
- In the Welcome to Red Hat Gluster Storage 3.5 screen, select the language that will be used for the rest of the installation and click Continue. This selection will also become the default for the installed system, unless changed later.
Note
One language is pre-selected by default at the top of the list. If network access is configured at this point (for example, if you booted from a network server instead of local media), the pre-selected language will be determined based on automatic location detection using the GeoIP module.
- The Installation Summary screen is the central location for setting up an installation.
Figure 2.1. Installation Summary For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 8
Figure 2.2. Installation Summary for Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7
Instead of directing you through consecutive screens, the Red Hat Gluster Storage 3.5 installation program on Red Hat Enterprise Linux 7.7 and later allows you to configure the installation in the order you choose.
Select a menu item to configure a section of the installation. When you have completed configuring a section, or if you would like to complete that section later, click the Done button located in the upper left corner of the screen.
Only sections marked with a warning symbol are mandatory. A note at the bottom of the screen warns you that these sections must be completed before the installation can begin. The remaining sections are optional. Beneath each section's title, the current configuration is summarized. Using this you can determine whether you need to visit the section to configure it further.
The following list provides brief information about each menu item on the Installation Summary screen:
- Date & Time
To configure NTP, perform the following steps:
Important
Setting up NTP is mandatory for Gluster installation.
- Click Date & Time and specify a time zone to maintain the accuracy of the system clock.
- Toggle the Network Time switch to ON.
Note
By default, the Network Time switch is enabled if you are connected to the network.
- Click the configuration icon to add new NTP servers or select existing servers.
- Once you have made your addition or selection, click Done to return to the Installation Summary screen.
Note
NTP servers might be unavailable at the time of installation. In such a case, enabling them will not set the time automatically. When the servers are available, the date and time will be updated.
- Language Support
To install support for additional locales and language dialects, select Language Support.
- Keyboard Configuration
To add multiple keyboard layouts to your system, select Keyboard.
- Installation Source
To specify a file or a location to install Red Hat Enterprise Linux from, select Installation Source. On this screen, you can choose between locally available installation media, such as a DVD or an ISO file, or a network location.
- Network & Hostname
To configure essential networking features for your system, select Network & Hostname.
Important
When the installation of Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 and later finishes and the system boots for the first time, any network interfaces which you configured during the installation will be activated. However, the installation does not prompt you to configure network interfaces on some common installation paths - for example, when you install Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.5 from a DVD to a local hard drive.
When you install Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 and later from a local installation source to a local storage device, be sure to configure at least one network interface manually if you require network access when the system boots for the first time. You will also need to set the connection to connect automatically after boot when editing the configuration.
- Software Selection
To specify which packages will be installed, select Software Selection. If you require the following optional Add-Ons, then select the required Add-Ons and click Done:
- RH-Gluster-AD-Integration
- RH-Gluster-NFS-Ganesha
- RH-Gluster-Samba-Server
- Installation Destination
To select the disks and partition the storage space on which you will install Red Hat Gluster Storage, select Installation Destination. For more information, see the Installation Destination section in the Red Hat Enterprise Linux 7 Installation Guide.
- Kdump
Kdump is a kernel crash dumping mechanism which, in the event of a system crash, captures information that can be invaluable in determining the cause of the crash. Use this option to select whether or not to use Kdump on the system.
- After making the necessary configurations, click Begin Installation on the Installation Summary screen.
Warning
Up to this point in the installation process, no lasting changes have been made on your computer. When you click Begin Installation, the installation program will allocate space on your hard drive and start to transfer Red Hat Gluster Storage into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer.
To revise any of the choices that you made up to this point, return to the relevant section of the Installation Summary screen. To cancel installation completely, click Quit or switch off your computer. If you have finished customizing the installation and are certain that you want to proceed, click Begin Installation.
After you click Begin Installation, allow the installation process to complete. If the process is interrupted, for example, by you switching off or resetting the computer, or by a power outage, you will probably not be able to use your computer until you restart and complete the Red Hat Gluster Storage installation process.
- Once you click Begin Installation, the progress screen appears. Red Hat Gluster Storage reports the installation progress on the screen as it writes the selected packages to your system. Following is a brief description of the options on this screen:
- Root Password
The Root Password menu item is used to set the password for the root account. The root account is used to perform critical system management and administration tasks. The password can be configured either while the packages are being installed or afterwards, but you will not be able to complete the installation process until it has been configured.
- User Creation
Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. A user account is used for normal work and to access the system. Best practice suggests that you always access the system via a user account and not the root account.
- After the installation is completed, click Reboot to reboot your system and begin using Red Hat Gluster Storage.
2.3. Subscribing to the Red Hat Gluster Storage Server Channels
Note
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network user name and password to register the system with Subscription Manager:
# subscription-manager register
Identify Available Entitlement Pools
Run the following commands to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"
# subscription-manager list --available | grep -A8 "Red Hat Storage"
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels for Red Hat Gluster Storage on Red Hat Enterprise Linux
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 6.7 and later
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-6-server-rpms # subscription-manager repos --enable=rhel-scalefs-for-rhel-6-server-rpms # subscription-manager repos --enable=rhs-3-for-rhel-6-server-rpms
- If you require Samba, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
- NFS-Ganesha is not supported on Red Hat Enterprise Linux 6 based installations.
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 and later
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you require Samba, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- For Red Hat Gluster Storage 3.5, if NFS-Ganesha is required, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
- For Red Hat Gluster Storage 3.5, if you require CTDB, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 8.2 and later
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms # subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms # subscription-manager repos --enable=rh-gluster-3-for-rhel-8-x86_64-rpms
- If you require Samba, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
- For Red Hat Gluster Storage 3.5, if NFS-Ganesha is required, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-highavailability-rpms
- For Red Hat Gluster Storage 3.5, if you require CTDB, then enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Configure the Client System to Access Red Hat Satellite
Configure the client system to access Red Hat Satellite. Refer to the section Registering Clients with Red Hat Satellite Server in the Red Hat Satellite 5.6 Client Configuration Guide.
Register to the Red Hat Satellite Server
Run the following command to register the system to the Red Hat Satellite Server:
# rhn_register
Register to the Standard Base Channel
In the select operating system release page, select All available updates and follow the prompts to register the system to the standard base channel for Red Hat Enterprise Linux 6 - rhel-6-server-rpms. The standard base channel for Red Hat Enterprise Linux 7 is rhel-7-server-rpms. The standard base channel for Red Hat Enterprise Linux 8 is rhel-8-for-x86_64-baseos-rpms.
Subscribe to the Required Red Hat Gluster Storage Server Channels
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 6.7 and later
- Run the following command to subscribe the system to the required Red Hat Gluster Storage server channel:
# rhn-channel --add \ --channel rhel-6-server-rpms \ --channel rhel-scalefs-for-rhel-6-server-rpms \ --channel rhs-3-for-rhel-6-server-rpms
- If you require Samba, then execute the following command to enable the required channel:
# rhn-channel --add --channel rh-gluster-3-samba-for-rhel-6-server-rpms
- NFS-Ganesha is not supported on Red Hat Enterprise Linux 6 based installations.
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 and later
- Run the following command to subscribe the system to the required Red Hat Gluster Storage server channels for Red Hat Enterprise Linux 7:
# rhn-channel --add \ --channel rhel-7-server-rpms \ --channel rh-gluster-3-for-rhel-7-server-rpms
- If you require Samba, then execute the following command to enable the required channel:
# rhn-channel --add --channel rh-gluster-3-samba-for-rhel-7-server-rpms
- For Red Hat Gluster Storage 3.5, for NFS-Ganesha enable the following channel:
# rhn-channel --add --channel rhel-x86_64-server-7-rh-gluster-3-nfs --channel rhel-x86_64-server-ha-7
- For Red Hat Gluster Storage 3.5, if CTDB is required, then enable the following channel:
# rhn-channel --add --channel rh-gluster-3-samba-for-rhel-7-server-rpms
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 8.2 and later
- Run the following command to subscribe the system to the required Red Hat Gluster Storage server channels for Red Hat Enterprise Linux 8:
# rhn-channel --add \ --channel rh-gluster-3-for-rhel-8-x86_64-rpms
- If you require Samba, then execute the following command to enable the required channel:
# rhn-channel --add --channel rh-gluster-3-samba-for-rhel-8-x86_64-rpms
- For Red Hat Gluster Storage 3.5, for NFS-Ganesha enable the following channel:
# rhn-channel --add --channel rh-gluster-3-nfs-for-rhel-8-x86_64-rpms --channel rhel-8-for-x86_64-highavailability-rpms
- For Red Hat Gluster Storage 3.5, if CTDB is required, then enable the following channel:
# rhn-channel --add --channel rh-gluster-3-samba-for-rhel-8-x86_64-rpms
Verify if the System is Registered Successfully
Run the following command to verify if the system is registered successfully:
# rhn-channel --list
rhel-x86_64-server-7
rhel-x86_64-server-7-rh-gluster-3
2.4. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
Important
Ensure that the system has a /var partition that is large enough (50GB - 100GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux Server
Red Hat Gluster Storage is supported on Red Hat Enterprise Linux 7 (RHEL 7) and Red Hat Enterprise Linux 8 (RHEL 8). It is highly recommended that you choose the latest RHEL version.
Important
Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Table 1.3, “Version Details”.
Register the System with Subscription Manager
To register the system with Subscription Manager, refer to Section 2.3, “Subscribing to the Red Hat Gluster Storage Server Channels”.
Kernel Version Requirement
Red Hat Gluster Storage requires the kernel-2.6.32-431.17.1.el6 version or higher to be used on the system. Verify the installed and running kernel versions by running the following command:
# rpm -q kernel
kernel-2.6.32-431.el6.x86_64
kernel-2.6.32-431.17.1.el6.x86_64
# uname -r
2.6.32-431.17.1.el6.x86_64
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
- To install Samba, see Chapter 3, Deploying Samba on Red Hat Gluster Storage
- To install NFS-Ganesha, see Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage
Reboot
Reboot the system.
2.5. Installing from a PXE Server
Configure the computer to boot from the network interface; this option may be labeled Network Boot or Boot Services in the BIOS. Once you properly configure PXE booting, the computer can boot the Red Hat Gluster Storage Server installation system without any other media.
- Ensure that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on.
- Switch on the computer.
- A menu screen appears. Press the number key that corresponds to the preferred option.
2.6. Installing from Red Hat Satellite Server
2.6.1. Using Red Hat Satellite Server 6.x
Note
- Create a new manifest file and upload the manifest in the Satellite 6 server.
- Search for the required Red Hat Gluster Storage repositories and enable them.
- Synchronize all repositories enabled for Red Hat Gluster Storage.
- Create a new content view and add all the required products.
- Publish the content view and create an activation key.
- Register the required clients
# rpm -Uvh satellite-server-host-address/pub/katello-ca-consumer-latest.noarch.rpm
# subscription-manager register --org="Organization_Name" --activationkey="Activation_Key"
- Identify available entitlement pools
# subscription-manager list --available
- Attach entitlement pools to the system
# subscription-manager attach --pool=Pool_ID
- Subscribe to required channels
- Enable the RHEL and Gluster channels.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-8-x86_64-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you require Samba, enable its repository.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- If you require NFS-Ganesha, enable its repository.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-8-x86_64-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms
- If you require HA, enable its repository.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
- If you require gdeploy, enable the Ansible repository.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=ansible-2-for-rhel-8-x86_64-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
- Install Red Hat Gluster Storage
# yum install redhat-storage-server
2.6.2. Using Red Hat Satellite Server 5.x
For more information on how to create an activation key, see Activation Keys in the Reference Guide.
- In the Details tab of the Activation Keys screen, select Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64) from the Base Channels drop-down list.
Figure 2.3. Base Channels
- In the Child Channels tab of the Activation Keys screen, select the following child channels:
RHEL Server Scalable File System (v. 6 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
If you require the Samba package, then select the following child channel:
Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
Figure 2.4. Child Channels
- In the Packages tab of the Activation Keys screen, enter the following package name:
redhat-storage-server
Figure 2.5. Package
- If you require the Samba package, then enter the following package name:
samba
For more information on creating a kickstart profile, see Kickstart in the Reference Guide.
- When creating a kickstart profile, the following Base Channel and Tree must be selected:
Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
Tree: ks-rhel-x86_64-server-6-6.5
- Do not associate any child channels with the kickstart profile.
- Associate the previously created activation key with the kickstart profile.
Important
- By default, the kickstart profile chooses md5 as the hash algorithm for user passwords. You must change this algorithm to sha512 by providing the following settings in the auth field of the Kickstart Details, Advanced Options page of the kickstart profile:
--enableshadow --passalgo=sha512
- After creating the kickstart profile, you must change the root password in the Kickstart Details, Advanced Options page of the kickstart profile and add a root password based on the prepared sha512 hash algorithm.
For more information on installing Red Hat Gluster Storage Server using a kickstart profile, see Kickstart in the Reference Guide.
2.7. Managing the glusterd Service
The glusterd service automatically starts on all the servers in the trusted storage pool. The service can be manually started and stopped by using the glusterd service commands. For more information on creating trusted storage pools, see the Red Hat Gluster Storage 3.5 Administration Guide.
glusterd also offers elastic volume management.
Use the gluster CLI commands to decouple logical storage volumes from physical hardware. This allows the user to grow, shrink, and migrate storage volumes without any application downtime. As storage is added to the cluster, the volumes are distributed across the cluster. This distribution ensures that the cluster is always available despite changes to the underlying hardware.
2.7.1. Manually Starting and Stopping glusterd
Use the following commands to manually start and stop the glusterd service.
- Manually start glusterd as follows:
# /etc/init.d/glusterd start
or
# service glusterd start
- Manually stop glusterd as follows:
# /etc/init.d/glusterd stop
or
# service glusterd stop
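On Red Hat Enterprise Linux 7 and 8, which use systemd, the same operations can be performed with systemctl; a brief sketch:
# systemctl start glusterd
# systemctl stop glusterd
# systemctl status glusterd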
2.8. Installing Ansible to Support Gdeploy
Note
- Execute the following command to enable the repository required to install Ansible:
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=ansible-2-for-rhel-8-x86_64-rpms
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
- Install ansible by executing the following command:
# yum install ansible
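To confirm the installation, the installed version can be checked; this verification step is an addition to the documented procedure:
# ansible --version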
2.9. Installing Native Client
Note
- Red Hat Gluster Storage server supports the Native Client version that is the same as the server version, as well as the immediately preceding version of Native Client. For a list of releases, see: https://access.redhat.com/solutions/543123.
- From Red Hat Gluster Storage 3.5 Batch Update 7 onwards, glusterfs-6.0-62 and higher versions of the glusterFS Native Client are only available via rh-gluster-3-client-for-rhel-8-x86_64-rpms for Red Hat Gluster Storage based on Red Hat Enterprise Linux 8 (RHEL 8) and rh-gluster-3-client-for-rhel-7-server-rpms for Red Hat Gluster Storage based on RHEL 7.
Use the Command Line to Register and Subscribe a System to Red Hat Subscription Management
Prerequisites
- Know the user name and password of the Red Hat Subscription Manager account with Red Hat Gluster Storage entitlements.
- Run the subscription-manager register command to list the available pools. Select the appropriate pool and enter your Red Hat Subscription Manager user name and password to register the system with Red Hat Subscription Manager.
# subscription-manager register
- Depending on your client, run one of the following commands to subscribe to the correct repositories.
- For Red Hat Enterprise Linux 8 clients:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-8-x86_64-rpms
- For Red Hat Enterprise Linux 7.x clients:
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rh-gluster-3-client-for-rhel-7-server-rpms
Note
The following command can also be used, but Red Hat Gluster Storage may deprecate support for this repository in future releases.
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
- For Red Hat Enterprise Linux 6.1 and later clients:
# subscription-manager repos --enable=rhel-6-server-rpms --enable=rhel-6-server-rhs-client-1-rpms
For more information, see Section 3.1 Registering and attaching a system using the Command Line in Using and Configuring Red Hat Subscription Management.
- Verify that the system is subscribed to the required repositories.
# yum repolist
Use the Web Interface to Register and Subscribe a System
Prerequisites
- Know the user name and password of the Red Hat Subscription Management (RHSM) account with Red Hat Gluster Storage entitlements.
- Log on to Red Hat Subscription Management (https://access.redhat.com/management).
- Click the Systems link at the top of the screen.
- Click the name of the system to which the Red Hat Gluster Storage Native Client channel must be appended.
- Click the button in the Subscribed Channels section of the screen to alter the channel subscriptions.
- Expand the node for Additional Services Channels for Red Hat Enterprise Linux 8 for x86_64, Red Hat Enterprise Linux 7 for x86_64, Red Hat Enterprise Linux 6 for x86_64, or Red Hat Enterprise Linux 5 for x86_64, depending on the client platform.
- Click the button to finalize the changes. When the page refreshes, select the Details tab to verify the system is subscribed to the appropriate channels.
Install Native Client Packages
Prerequisites
- Run the yum install command to install the native client RPM packages.
# yum install glusterfs glusterfs-fuse
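Once the packages are installed, a volume can be mounted with the native client. A minimal example, assuming a hypothetical server named server1 exporting a volume named testvol:
# mkdir -p /mnt/glusterfs
# mount -t glusterfs server1:/testvol /mnt/glusterfs
For the full set of native client mount options, see the Red Hat Gluster Storage 3.5 Administration Guide.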
Chapter 3. Deploying Samba on Red Hat Gluster Storage
3.1. Prerequisites
- You must install Red Hat Gluster Storage Server on the target server.
Warning
- For layered installation of Red Hat Gluster Storage, ensure that you have only the default Red Hat Enterprise Linux server installation, without the Samba or CTDB packages installed from Red Hat Enterprise Linux.
- Samba version 3 is deprecated from Red Hat Gluster Storage 3.0 Update 4 onwards. Further updates will not be provided for samba-3.x. It is recommended that you upgrade to Samba-4.x, which is provided in a separate channel or repository, for all updates including security updates.
- CTDB version 2.5 is not supported from Red Hat Gluster Storage 3.1 Update 2. To use CTDB in Red Hat Gluster Storage 3.1.2 and later, you must upgrade the system to CTDB 4.x, which is provided in the Samba channel of Red Hat Gluster Storage.
- Downgrade of Samba from Samba 4.x to Samba 3.x is not supported.
- Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster will lead to data corruption.
- Enable the channel where the Samba packages are available:
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 6.x
- If you have registered your machine using Red Hat Subscription Manager or Satellite server-6.x, enable the repository by running the following command:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7 and later
- If you have registered your machine using Red Hat Subscription Manager or Satellite server-7.x, enable the repository by running the following command:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 8.2 and later
- If you have registered your machine using Red Hat Subscription Manager or Satellite server-8.x, enable the repository by running the following command:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
3.2. Installing Samba Using ISO
Figure 3.1. Customize Packages
3.3. Installing Samba Using yum
# yum groupinstall RH-Gluster-Samba-Server
# yum groupinstall RH-Gluster-AD-Integration
- To install the basic Samba packages, execute the following command:
# yum install samba
- If you require the smbclient on the server, then execute the following command:
# yum install samba-client
- If you require an Active directory setup, then execute the following commands:
# yum install samba-winbind # yum install samba-winbind-clients # yum install samba-winbind-krb5-locator
- Verify if the following packages are installed.
samba-libs samba-winbind-krb5-locator samba-winbind-modules samba-vfs-glusterfs samba-winbind samba-client samba-common samba-winbind-clients samba
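One way to verify is to query the RPM database for the packages listed above; any package that is not installed is reported as such:
# rpm -q samba samba-client samba-common samba-libs samba-vfs-glusterfs samba-winbind samba-winbind-clients samba-winbind-krb5-locator samba-winbind-modules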
Note
Chapter 4. Deploying NFS-Ganesha on Red Hat Gluster Storage
- Installing NFS-Ganesha using yum
- Installing NFS-Ganesha during an ISO Installation
Warning
4.1. Prerequisites
Enable the channel where the NFS-Ganesha packages are available:
- If you have registered your machine using Red Hat Subscription Manager, enable the repository by running the following command:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-8-x86_64-rpms
To add the HA repository, execute the following command:
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
- If you have registered your machine using Satellite server, enable the channel by running the following command:
# rhn-channel --add --channel rh-gluster-3-nfs-for-rhel-8-x86_64-rpms
To subscribe to the HA channel, execute the following command:
# rhn-channel --add --channel rhel-8-for-x86_64-highavailability-rpms
Enable the channel where the NFS-Ganesha packages are available:
- If you have registered your machine using Red Hat Subscription Manager, enable the repository by running the following command:
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms
To add the HA repository, execute the following command:
# subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
- If you have registered your machine using Satellite server, enable the channel by running the following command:
# rhn-channel --add --channel rhel-x86_64-server-7-rh-gluster-3-nfs
To subscribe to the HA channel, execute the following command:
# rhn-channel --add --channel rhel-x86_64-server-ha-7
4.2. Installing NFS-Ganesha during an ISO Installation
- While installing Red Hat Storage using an ISO, in the Customizing the Software Selection screen, select RH-Gluster-NFS-Ganesha and click Next.
- Proceed with the remaining installation steps for installing Red Hat Gluster Storage. For more information on how to install Red Hat Storage using an ISO, see Installing from an ISO Image.
4.3. Installing NFS-Ganesha using yum
- The glusterfs-ganesha package can be installed using the following command:
# yum install glusterfs-ganesha
NFS-Ganesha is installed along with the above package. nfs-ganesha-gluster and HA packages are also installed.
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
# dnf install glusterfs-ganesha
Note
Chapter 5. Upgrading to Red Hat Gluster Storage 3.5
Important
Upgrade support limitations
- When upgrading RHGS from a version lower than RHGS-3.5.4 to RHGS-3.5.4 or higher, both servers and clients must be upgraded to RHGS-3.5.4 or higher before bumping up the op-version of the cluster.
- Virtual Data Optimizer (VDO) volumes, which are supported in Red Hat Enterprise Linux 7.5, are not currently supported in Red Hat Gluster Storage. VDO is supported only when used as part of Red Hat Hyperconverged Infrastructure for Virtualization 2.0. See Understanding VDO for more information.
- Servers must be upgraded prior to upgrading clients.
- If you are upgrading from Red Hat Gluster Storage 3.1 Update 2 or earlier, you must upgrade servers and clients simultaneously.
- If you use NFS-Ganesha, your supported upgrade path to Red Hat Gluster Storage 3.5 depends on the version from which you are upgrading. If you are upgrading from version 3.3 or earlier, use Offline Upgrade to Red Hat Gluster Storage 3.3 and then perform an in-service upgrade from version 3.3 to 3.4. Then perform the upgrade from version 3.4 to 3.5 using Section 5.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5”. If you are upgrading from version 3.4 to 3.5, directly use Section 5.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5”.
5.1. Offline Upgrade to Red Hat Gluster Storage 3.5
Warning
Important
5.1.1. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Subscription Manager
Procedure 5.1. Before you upgrade
- Back up the following configuration directory and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
If you use NFS-Ganesha, back up the following files from all nodes:
/run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or subsequent releases, back up all xattr by executing the following command individually on the brick root(s) for all nodes:
# find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Stop the glusterd services on all servers using the following command:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
Procedure 5.2. Upgrade using yum
Note
# migrate-rhs-classic-to-rhsm --status
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
# migrate-rhs-classic-to-rhsm --status
- If you use Samba:
- For Red Hat Enterprise Linux 6.7 or higher, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
For Red Hat Enterprise Linux 7, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
For Red Hat Enterprise Linux 8, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
- Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster will lead to data corruption.
Stop the CTDB and SMB services.
On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl stop ctdb
On Red Hat Enterprise Linux 6:
# service ctdb stop
To verify that services are stopped, run:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Upgrade the server to Red Hat Gluster Storage 3.5.
# yum update
Wait for the update to complete.
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
# dnf install glusterfs-ganesha
- If you use Samba/CTDB, update the following files to replace META="all" with META="<ctdb_volume_name>", for example, META="ctdb":
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh - This script ensures the file system and its lock volume are mounted on all Red Hat Gluster Storage servers that use Samba, and ensures that CTDB starts at system boot.
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh - This script ensures that the file system and its lock volume are unmounted when the CTDB volume is stopped.
Note
For RHEL based Red Hat Gluster Storage upgrading to 3.5 Batch Update 4 with Samba, the write-behind translator has to be manually disabled for all existing Samba volumes:
# gluster volume set <volname> performance.write-behind off
- Reboot the server to ensure that kernel updates are applied.
- Ensure that glusterd and pcsd services are started.
# systemctl start glusterd # systemctl start pcsd
Note
During upgrade of servers, the glustershd.log file throws some “Invalid argument” errors during every index crawl (10 mins by default) on the upgraded nodes. It is *EXPECTED* and can be *IGNORED* until the op-version bump up, after which these errors are not triggered. Sample error message:
[2021-05-25 17:58:38.007134] E [MSGID: 114031] [client-rpc-fops_v2.c:216:client4_0_mkdir_cbk] 0-spvol-client-40: remote operation failed. Path: (null) [Invalid argument]
If you are in op-version '70000' or lower, do not bump up the op-version to '70100' or higher until all the servers and clients are upgraded to the newer version.
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 70200
Note
70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After upgrading the cluster op-version, enable granular-entry-heal for the volume with the following command:
gluster volume heal $VOLNAME granular-entry-heal enable
The feature is now enabled by default post upgrade to Red Hat Gluster Storage 3.5, but this comes into effect only after bumping up the op-version. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
Important
If the op-version is bumped up to '70100' after upgrading the servers and before upgrading the clients, some internal metadata files named '.glusterfs-anonymous-inode-(gfid)' under the root of the mount point are exposed to the older clients. The clients must not perform any I/O on, remove, or touch the contents of this directory. Once the clients are upgraded to version 3.5.4 or higher, this directory becomes invisible to them.
- Start all volumes.
#
for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
- If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of ganesha.conf to the new /etc/ganesha/ganesha.conf file.
The export information in the backed up file is similar to the following:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf"
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Copy the backup volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
# cp export.* /etc/ganesha/exports/
- Enable the shared volume.
# gluster volume set all cluster.enable-shared-storage enable
- Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
- Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
# cd /var/run/gluster/shared_storage/
# mkdir nfs-ganesha
- Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide.
- If you use Samba/CTDB:
- Mount /gluster/lock before starting CTDB by executing the following commands:
# mount <ctdb_volume_name>
# mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
- Verify that the lock volume mounted correctly by checking for lock in the output of the mount command on any Samba server.
# mount | grep 'lock'
...
<hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
- If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl start ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
- To verify that the CTDB and SMB services have started, execute the following command:
ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you use NFS-Ganesha:
- Copy the ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory to the /var/run/gluster/shared_storage/nfs-ganesha directory.
# cd /etc/ganesha/
# cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- Update the path of any export entries in the ganesha.conf file.
# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
- Run the following to clean up any existing cluster configuration:
/usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
# setsebool -P ganesha_use_fusefs on
- Start the nfs-ganesha service and verify that all nodes are functional.
# gluster nfs-ganesha enable
- Enable NFS-Ganesha on all volumes.
# gluster volume set volname ganesha.enable on
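As an optional check that is not part of the original procedure, the exported volumes can be listed from one of the servers once ganesha.enable is on:
# showmount -e localhost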
5.1.2. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Network Satellite Server
Procedure 5.3. Before you upgrade
- Back up the following configuration directory and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
If you use NFS-Ganesha, back up the following files from all nodes:
/run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or subsequent releases, back up all xattr by executing the following command individually on the brick root(s) for all nodes:
# find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
- Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
# umount mount-point
- If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
# gluster nfs-ganesha disable
- On a gluster server, disable the shared volume.
# gluster volume set all cluster.enable-shared-storage disable
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Verify that all volumes are stopped.
# gluster volume info
- Stop the glusterd services on all servers using the following command:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Stop the pcsd service.
# systemctl stop pcsd
Procedure 5.4. Upgrade using Satellite
- Create an Activation Key at the Red Hat Network Satellite Server, and associate it with the following channels. For more information, see Section 2.6, “Installing from Red Hat Satellite Server”
- For Red Hat Enterprise Linux 6.7 or higher:
Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64) Child channels: RHEL Server Scalable File System (v. 6 for x86_64) Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
If you use Samba, add the following channel:Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
- For Red Hat Enterprise Linux 7:
Base Channel: Red Hat Enterprise Linux Server (v.7 for 64-bit x86_64) Child channels: RHEL Server Scalable File System (v. 7 for x86_64) Red Hat Gluster Storage Server 3 (RHEL 7 for x86_64)
If you use Samba, add the following channel:Red Hat Gluster 3 Samba (RHEL 7 for x86_64)
- Unregister your system from Red Hat Network Satellite by following these steps:
- Log in to the Red Hat Network Satellite server.
- Click on the Systems tab in the top navigation bar and then the name of the old or duplicated system in the System List.
- Click the delete system link in the top-right corner of the page.
- Confirm the system profile deletion by clicking the Delete System button.
- Run the following command on your Red Hat Gluster Storage server, using your credentials and the Activation Key you prepared earlier. This re-registers the system to the Red Hat Gluster Storage 3.5 channels on the Red Hat Network Satellite Server.
# rhnreg_ks --username username --password password --force --activationkey Activation Key ID
- Verify that the channel subscriptions have been updated.On Red Hat Enterprise Linux 6.7 and higher, look for the following channels, as well as the
rh-gluster-3-samba-for-rhel-6-server-rpms
channel if you use Samba.# rhn-channel --list rhel-6-server-rpms rhel-scalefs-for-rhel-6-server-rpms rhs-3-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7, look for the following channels, as well as therh-gluster-3-samba-for-rhel-7-server-rpms
channel if you use Samba.# rhn-channel --list rhel-7-server-rpms rh-gluster-3-for-rhel-7-server-rpms
- Upgrade to Red Hat Gluster Storage 3.5.
# yum update
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:# dnf install glusterfs-ganesha
- Reboot the server and run volume and data integrity checks.
- When all nodes have been upgraded, run the following command to update the
op-version
of the cluster. This helps to prevent any compatibility issues within the cluster.# gluster volume set all cluster.op-version 70200
Note
70200
is thecluster.op-version
value for Red Hat Gluster Storage 3.5. After upgrading the cluster op-version, enable granular-entry-heal for the volume by running the following command:gluster volume heal $VOLNAME granular-entry-heal enable
This feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version has been raised. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version
value for other versions. - Start all volumes.
#
for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
- If you are using NFS-Ganesha:
- Copy the volume's export information from your backup copy of
ganesha.conf
to the new/etc/ganesha/ganesha.conf
file.The export information in the backed up file is similar to the following:%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf" %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf" %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . - Copy the backup volume export files from the backup directory to
/etc/ganesha/exports
by running the following command from the backup directory:# cp export.* /etc/ganesha/exports/
- Enable the shared volume.
# gluster volume set all cluster.enable-shared-storage enable
- Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
# mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
- Ensure that the
/var/run/gluster/shared_storage/nfs-ganesha
directory is created.# cd /var/run/gluster/shared_storage/ # mkdir nfs-ganesha
- Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide.
- If you use Samba/CTDB:
- Mount
/gluster/lock
before starting CTDB by executing the following commands:# mount <ctdb_volume_name> # mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
- Verify that the lock volume mounted correctly by checking for
lock
in the output of themount
command on any Samba server.# mount | grep 'lock' ... <hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
- If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
# systemctl start ctdb
On Red Hat Enterprise Linux 6:# service ctdb start
- To verify that the CTDB and SMB services have started, execute the following command:
# ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you use NFS-Ganesha:
- Copy the
ganesha.conf
andganesha-ha.conf
files, and the/etc/ganesha/exports
directory to the/var/run/gluster/shared_storage/nfs-ganesha
directory.# cd /etc/ganesha/ # cp ganesha.conf ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/ # cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
- Update the path of any export entries in the
ganesha.conf
file.# sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
- Run the following to clean up any existing cluster configuration:
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
- If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
# setsebool -P ganesha_use_fusefs on
- Start the ctdb service (and nfs-ganesha service, if used) and verify that all nodes are functional.
# systemctl start ctdb # gluster nfs-ganesha enable
- If this deployment uses NFS-Ganesha, enable NFS-Ganesha on all volumes.
# gluster volume set volname ganesha.enable on
5.1.3. Special consideration for Offline Software Upgrade
5.1.3.1. Migrate CTDB configuration files
- Make a temporary directory to migrate configuration files.
# mkdir /tmp/ctdb-migration
- Run the CTDB configuration migration script.
# /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb
The script assumes that the CTDB configuration directory is/etc/ctdb
. If this is not correct for your setup, specify an alternative configuration directory with the-d
option, for example:# /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb -d ctdb-config-dir
- Verify that the
/tmp/ctdb-migration
directory now contains the following files:commands.sh
ctdb.conf
script.options
ctdb.tunables
(if additional changes are required)ctdb.sysconfig
(if additional changes are required)README.warn
(if additional changes are required)
- Back up the current configuration files.
# mv /etc/ctdb/ctdb.conf /etc/ctdb/ctdb.conf.default
- Install the new configuration files.
# mv /tmp/ctdb-migration/ctdb.conf /etc/ctdb/ctdb.conf # mv /tmp/ctdb-migration/script.options /etc/ctdb/
- Make the
commands.sh
file executable, and run it.# chmod +x /tmp/ctdb-migration/commands.sh # /tmp/ctdb-migration/commands.sh
- If
/tmp/ctdb-migration/ctdb.tunables
exists, move it to the/etc/ctdb
directory.# cp /tmp/ctdb-migration/ctdb.tunables /etc/ctdb
- If
/tmp/ctdb-migration/ctdb.sysconfig
exists, back up the old/etc/sysconfig/ctdb
file and replace it with/tmp/ctdb-migration/ctdb.sysconfig
.# mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old # mv /tmp/ctdb-migration/ctdb.sysconfig /etc/sysconfig/ctdb
Otherwise, back up the old/etc/sysconfig/ctdb
file and replace it with/etc/sysconfig/ctdb.rpmnew
.# mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old # mv /etc/sysconfig/ctdb.rpmnew /etc/sysconfig/ctdb
5.2. In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5
Important
Note
ganesha.conf.rpmnew
" in /etc/ganesha
folder. The old configuration file is not overwritten during the in-service upgrade process. However, after the upgrade, you must manually copy any new configuration changes from "ganesha.conf.rpmnew
" to the existing ganesha.conf
file in /etc/ganesha
folder.
5.2.1. Pre-upgrade Tasks
5.2.1.1. Upgrade Requirements for Red Hat Gluster Storage 3.5
- In-service software upgrade is supported for pure and distributed versions of arbiter, erasure-coded (dispersed), and three-way replicated volume types. It is not supported for a pure distributed volume.
- If you want to use snapshots for your existing environment, each brick must be an independent thin provisioned logical volume (LV). If you do not plan to use snapshots, thickly provisioned volumes remain supported.
- A Logical Volume that contains a brick must not be used for any other purpose.
- Linear LVM and thin LV are supported with Red Hat Gluster Storage 3.4 and later. For more information, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/logical_volume_manager_administration/index#LVM_components
- When server-side quorum is enabled, ensure that bringing one node down does not violate server-side quorum. Add dummy peers to ensure the server-side quorum is not violated until the completion of rolling upgrade using the following command:
# gluster peer probe dummynode
Note
If you have a geo-replication session, then to add a node follow the steps mentioned in the section Starting Geo-replication for a New Brick or New Node in the Red Hat Gluster Storage Administration Guide. For example, when the server-side quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node that does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above. In a three node cluster, if the server-side quorum percentage is set to 77%, bringing down one node would violate the server-side quorum. In this scenario, you have to add two dummy nodes to meet server-side quorum. - Stop any geo-replication sessions running between the master and slave.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
# gluster volume heal volname info
- Ensure the Red Hat Gluster Storage server is registered to the required channels.On Red Hat Enterprise Linux 6:
rhel-6-server-rpms rhel-scalefs-for-rhel-6-server-rpms rhs-3-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7:rhel-7-server-rpms rh-gluster-3-for-rhel-7-server-rpms
On Red Hat Enterprise Linux 8:rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms rh-gluster-3-for-rhel-8-x86_64-rpms
To subscribe to the channels, run the following command:# subscription-manager repos --enable=repo-name
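For example, on a Red Hat Enterprise Linux 7 based node the repositories listed above can be enabled in a single command; add the Samba or NFS-Ganesha repositories as well if you use those features.
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rh-gluster-3-for-rhel-7-server-rpms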
5.2.1.2. Restrictions for In-Service Software Upgrade
- In-service upgrade for NFS-Ganesha clusters is supported only from Red Hat Gluster Storage 3.4 onwards. If you are upgrading from Red Hat Gluster Storage 3.1 and you use NFS-Ganesha, first perform an offline upgrade to Red Hat Gluster Storage 3.4, and then use the in-service upgrade method to upgrade to Red Hat Gluster Storage 3.5.
- Erasure coded (dispersed) volumes can be upgraded while in-service only if the
disperse.optimistic-change-log
,disperse.eager-lock
anddisperse.other-eager-lock
options are set tooff
. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations. - After the upgrade, these values can be restored to their pre-upgrade settings on erasure coded volumes, but ensure that
disperse.optimistic-change-log
anddisperse.other-eager-lock
options are set toon
- Ensure that the system workload is low before performing the in-service software upgrade, so that the self-heal process does not have to heal too many entries during the upgrade. Healing is also time-consuming under a high system workload.
- Do not perform any volume operations on the Red Hat Gluster Storage server.
- Do not change hardware configurations.
- Do not run mixed versions of Red Hat Gluster Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Gluster Storage 3.3, Red Hat Gluster Storage 3.4, and Red Hat Gluster Storage 3.5 for a prolonged time.
- Do not combine different upgrade methods.
- In-service software upgrade is not recommended for migrating to thin provisioned volumes; use the offline upgrade method instead. For more information, see Section 5.1, “Offline Upgrade to Red Hat Gluster Storage 3.5”
5.2.1.3. Configuring repo for Upgrading using ISO
Note
- Mount the ISO image file under any directory using the following command:
# mount -o loop <ISO image file> <mount-point>
For example:# mount -o loop rhgs-3.5-rhel-7-x86_64-dvd-1.iso /mnt
- Set the repo options in a file in the following location:
/etc/yum.repos.d/<file_name.repo>
- Add the following information to the repo file:
[local] name=local baseurl=file:///mnt enabled=1 gpgcheck=0
5.2.1.4. Preparing and Monitoring the Upgrade Activity
- Check the peer and volume status to ensure that all peers are connected and there are no active volume tasks.
# gluster peer status
# gluster volume status
- Check the rebalance status using the following command:
# gluster volume rebalance r2 status Node Rebalanced-files size scanned failures skipped status run time in secs --------- ----------- --------- -------- --------- ------ -------- -------------- 10.70.43.198 0 0Bytes 99 0 0 completed 1.00 10.70.43.148 49 196Bytes 100 0 0 completed 3.00
- If you need to upgrade an erasure coded (dispersed) volume, set the
disperse.optimistic-change-log
,disperse.eager-lock
anddisperse.other-eager-lock
options tooff
. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.# gluster volume set volname disperse.optimistic-change-log off # gluster volume set volname disperse.eager-lock off # gluster volume set volname disperse.other-eager-lock off
- Ensure that there are no pending self-heals by using the following command:
# gluster volume heal volname info
The following example shows no pending self-heals.# gluster volume heal drvol info Gathering list of entries to be healed on volume drvol has been successful Brick 10.70.37.51:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.78:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.51:/rhs/brick2/dir2 Number of entries: 0 Brick 10.70.37.78:/rhs/brick2/dir2 Number of entries: 0
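To check all volumes in one pass, a small loop such as the following can be used. This is only a convenience sketch; it prints the per-brick entry counts for every volume reported by gluster volume list.
# for vol in `gluster volume list`; do echo "== $vol =="; gluster volume heal $vol info | grep 'Number of entries:'; done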
5.2.2. Service Impact of In-Service Upgrade
Warning
Ongoing I/O on Samba shares fails because the shares are temporarily unavailable during the in-service software upgrade. It is therefore recommended to stop the Samba service before the upgrade by using the following command. Note that stopping CTDB also stops the SMB service.
# service ctdb stop
In-service software upgrade is not supported for distributed volumes. If you have a distributed volume in the cluster, stop that volume for the duration of the upgrade.
# gluster volume stop <VOLNAME>
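If the cluster hosts many volumes, a loop like the following can identify and stop only the pure distributed ones. This is a sketch; it assumes that such volumes report a line reading exactly "Type: Distribute" in the gluster volume info output, so verify the match against your own volumes before using it.
# for vol in `gluster volume list`; do if gluster volume info $vol | grep -qx 'Type: Distribute'; then gluster --mode=script volume stop $vol; fi; done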
Virtual machine images are likely to be modified constantly, so a virtual machine image listed in the output of the volume heal command does not necessarily mean that its self-heal is incomplete; it may simply mean that the image is being modified continuously.
Important
5.2.3. In-Service Software Upgrade
Note
- Back up the following configuration directories and files in a location that is not on the operating system partition.
/var/lib/glusterd
/etc/samba
/etc/ctdb
/etc/glusterfs
/var/lib/samba
/var/lib/ctdb
/var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . If you use NFS-Ganesha, back up the following files from all nodes:/run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
/etc/ganesha/ganesha.conf
/etc/ganesha/ganesha-ha.conf
Important
If you are updating from RHEL 8.3 to RHEL 8.4, then follow these additional steps.- On one node in the cluster, edit /etc/corosync/corosync.conf. Add a line "token: 3000" to the totem stanza, for example:
totem { version: 2 secauth: off cluster_name: rhel8-cluster transport: knet token: 3000 }
- Run `pcs cluster sync`. Optionally, verify that /etc/corosync/corosync.conf on all nodes contains the new token: 3000 line.
- Run `pcs cluster reload corosync`.
- Run `corosync-cmapctl | grep totem.token` and confirm that the output shows "runtime.config.totem.token (u32) = 3000".
- If the node is part of an NFS-Ganesha cluster, place the node in standby mode.
# pcs node standby
- Ensure that there are no pending self-heal operations.
# gluster volume heal volname info
- If this node is part of an NFS-Ganesha cluster:
- Disable the PCS cluster and verify that it has stopped.
# pcs cluster disable # pcs status
- Stop the nfs-ganesha service.
# systemctl stop nfs-ganesha
- Stop all gluster services on the node and verify that they have stopped.
# systemctl stop glusterd # pkill glusterfs # pkill glusterfsd # pgrep gluster
- Verify that your system is not using the legacy Red Hat Classic update software.
# migrate-rhs-classic-to-rhsm --status
If your system uses this legacy software, migrate to Red Hat Subscription Manager and verify that your status has changed when migration is complete.# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
# migrate-rhs-classic-to-rhsm --status
- Update the server using the following command:
# yum update
Important
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:# yum install nfs-ganesha-selinux
Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:# dnf install glusterfs-ganesha
- If the volumes are thick provisioned, and you plan to use snapshots, perform the following steps to migrate to thin provisioned volumes:
Note
Migrating from thick provisioned volume to thin provisioned volume during in-service software upgrade takes a significant amount of time based on the data you have in the bricks. If you do not plan to use snapshots, you can skip this step. However, if you plan to use snapshots on your existing environment, the offline method to upgrade is recommended. For more information regarding offline upgrade, see Section 5.1, “Offline Upgrade to Red Hat Gluster Storage 3.5”Contact a Red Hat Support representative before migrating from thick provisioned volumes to thin provisioned volumes using in-service software upgrade.- Unmount all the bricks associated with the volume by executing the following command:
# umount mount_point
- Remove the LVM associated with the brick by executing the following command:
# lvremove logical_volume_name
For example:# lvremove /dev/RHS_vg/brick1
- Remove the volume group by executing the following command:
# vgremove -ff volume_group_name
For example:# vgremove -ff RHS_vg
- Remove the physical volume by executing the following command:
# pvremove -ff physical_volume
- If the physical volume (PV) is not created, then create the PV for a RAID 6 volume by executing the following command, else proceed with the next step:
# pvcreate --dataalignment 2560K /dev/vdb
For more information, see Formatting and Mounting Bricks in the Red Hat Gluster Storage 3.5 Administration Guide. - Create a single volume group from the PV by executing the following command:
# vgcreate volume_group_name disk
For example:# vgcreate RHS_vg /dev/vdb
- Create a thinpool using the following command:
# lvcreate -L size --poolmetadatasize md_size --chunksize chunk_size -T pool_device
For example:# lvcreate -L 2T --poolmetadatasize 16G --chunksize 256 -T /dev/RHS_vg/thin_pool
- Create a thin volume from the pool by executing the following command:
# lvcreate -V size -T pool_device -n thinvol_name
For example:# lvcreate -V 1.5T -T /dev/RHS_vg/thin_pool -n thin_vol
- Create filesystem in the new volume by executing the following command:
# mkfs.xfs -i size=512 thin_vol
For example:# mkfs.xfs -i size=512 /dev/RHS_vg/thin_vol
The back-end is now converted to a thin provisioned volume. - Mount the thin provisioned volume to the brick directory and setup the extended attributes on the bricks. For example:
# setfattr -n trusted.glusterfs.volume-id \ -v 0x$(grep volume-id /var/lib/glusterd/vols/volname/info \ | cut -d= -f2 | sed 's/-//g') $brick
- Disable glusterd.
# systemctl disable glusterd
This prevents glusterd from starting at boot time, so that you can ensure the node is healthy before it rejoins the cluster. - Reboot the server.
# shutdown -r now "Shutting down for upgrade to Red Hat Gluster Storage 3.5"
Important
Perform this step only for each thick provisioned volume that has been migrated to thin provisioned volume in the previous step.Change the Automatic File Replication extended attributes from another node, so that the heal process is executed from a brick in the replica subvolume to the thin provisioned brick.- Create a FUSE mount point to edit the extended attributes.
# mount -t glusterfs HOSTNAME_or_IPADDRESS:/VOLNAME /MOUNTDIR
- Create a new directory on the mount point, and ensure that a directory with such a name is not already present.
# mkdir /MOUNTDIR/name-of-nonexistent-dir
- Delete the directory and set the extended attributes.
# rmdir /MOUNTDIR/name-of-nonexistent-dir
# setfattr -n trusted.non-existent-key -v abc /MOUNTDIR # setfattr -x trusted.non-existent-key /MOUNTDIR
- Ensure that the extended attributes of the brick in the replica subvolume are not set to zero.
# getfattr -d -m. -e hex brick_path
In the following example, the extended attributetrusted.afr.repl3-client-1
for/dev/RHS_vg/brick2
is not set to zero:# getfattr -d -m. -e hex /dev/RHS_vg/brick2 getfattr: Removing leading '/' from absolute path names # file: /dev/RHS_vg/brick2 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.repl3-client-1=0x000000000000000400000002 trusted.gfid=0x00000000000000000000000000000001 trusted.glusterfs.dht=0x000000010000000000000000ffffffff trusted.glusterfs.volume-id=0x924c2e2640d044a687e2c370d58abec9
- Start the
glusterd
service.# systemctl start glusterd
- Verify that you have upgraded to the latest version of Red Hat Gluster Storage.
# gluster --version
- Ensure that all bricks are online.
# gluster volume status
For example:# gluster volume status Status of volume: r2 Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick 10.70.43.198:/brick/r2_0 49152 Y 32259 Brick 10.70.42.237:/brick/r2_1 49152 Y 25266 Brick 10.70.43.148:/brick/r2_2 49154 Y 2857 Brick 10.70.43.198:/brick/r2_3 49153 Y 32270 NFS Server on localhost 2049 Y 25280 Self-heal Daemon on localhost N/A Y 25284 NFS Server on 10.70.43.148 2049 Y 2871 Self-heal Daemon on 10.70.43.148 N/A Y 2875 NFS Server on 10.70.43.198 2049 Y 32284 Self-heal Daemon on 10.70.43.198 N/A Y 32288 Task Status of Volume r2 ------------------------------------------------------------------------------ There are no active volume tasks
- Start self-heal on the volume.
# gluster volume heal volname
- Ensure that self-heal on the volume is complete.
# gluster volume heal volname info
The following example shows a completed self heal operation.# gluster volume heal drvol info Gathering list of entries to be healed on volume drvol has been successful Brick 10.70.37.51:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.78:/rhs/brick1/dir1 Number of entries: 0 Brick 10.70.37.51:/rhs/brick2/dir2 Number of entries: 0 Brick 10.70.37.78:/rhs/brick2/dir2 Number of entries: 0
- Verify that shared storage is mounted.
# mount | grep /run/gluster/shared_storage
- If this node is part of an NFS-Ganesha cluster:
- If the system is managed by SELinux, set the
ganesha_use_fusefs
Boolean toon
.# setsebool -P ganesha_use_fusefs on
- Start the NFS-Ganesha service.
# systemctl start nfs-ganesha
- Enable and start the cluster.
# pcs cluster enable # pcs cluster start
- Release the node from standby mode.
# pcs node unstandby
- Verify that the pcs cluster is running, and that the volume is being exported correctly after upgrade.
# pcs status # showmount -e
NFS-ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you seeNFS Server Now NOT IN GRACE
in theganesha.log
file before continuing.
- Optionally, enable the glusterd service to start at boot time.
# systemctl enable glusterd
- Repeat the above steps on the other node of the replica pair. In the case of a distributed-replicate setup, repeat the above steps on all the replica pairs.
Important
If you are updating from RHEL 8.3 to RHEL 8.4, then follow these additional steps.- Restore the totem-token timeout to its original value, for example, by deleting the token: 3000 line from /etc/corosync/corosync.conf.
- Run `pcs cluster sync`. Optionally, verify that /etc/corosync/corosync.conf on all nodes no longer contains the token: 3000 line.
- Run `pcs cluster reload corosync`.
- Run `corosync-cmapctl | grep totem.token` to verify the changes.
- When all nodes have been upgraded, run the following command to update the
op-version
of the cluster. This helps to prevent any compatibility issues within the cluster.# gluster volume set all cluster.op-version 70200
Note
70200
is thecluster.op-version
value for Red Hat Gluster Storage 3.5. After upgrading the cluster op-version, enable granular-entry-heal for the volume by running the following command:gluster volume heal $VOLNAME granular-entry-heal enable
This feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version has been raised. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version
value for other versions.Note
If you want to enable snapshots, see Managing Snapshots in the Red Hat Gluster Storage 3.5 Administration Guide.Important
If you updated Red Hat Gluster Storage nodes that use network encryption (TLS/SSL), remount the Red Hat Gluster Storage volume on the client side. For more information, refer to Configuring Network Encryption in Red Hat Gluster Storage. - If the client-side quorum was disabled before the upgrade, then enable it again by executing the following command:
# gluster volume set volname cluster.quorum-type auto
- If a dummy node was created earlier, then detach it by executing the following command:
# gluster peer detach <dummy_node name>
- If the geo-replication session between master and slave was disabled before upgrade, then configure the meta volume and restart the session:
- Run the following command:
# gluster volume set all cluster.enable-shared-storage enable
- If you use a non-root user to perform geo-replication, run this command on the primary slave to set permissions for the non-root group.
# gluster-mountbroker setup <mount_root> <group>
- Run the following command:
# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
- Run the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
- If you disabled the
disperse.optimistic-change-log
,disperse.eager-lock
anddisperse.other-eager-lock
options in order to upgrade an erasure-coded (dispersed) volume, re-enable these settings.# gluster volume set volname disperse.optimistic-change-log on # gluster volume set volname disperse.eager-lock on # gluster volume set volname disperse.other-eager-lock on
5.2.4. Special Consideration for In-Service Software Upgrade
Note
5.2.4.1. Upgrading the Native Client
Warning
Unmount gluster volumes
Unmount any gluster volumes prior to upgrading the native client.# umount /mnt/glusterfs
Upgrade the client
Run theyum update
command to upgrade the native client:#
yum update glusterfs glusterfs-fuse
Remount gluster volumes
Remount the volumes as discussed in Section 6.2.3, “Mounting Red Hat Gluster Storage Volumes”.
5.3. Minor version updates of RHGS 3.5.z
Note
Chapter 6. Upgrading Red Hat Gluster Storage to Red Hat Enterprise Linux 7
Important
- RHEL6 channel subscriptions are required.
- Upgrade servers before upgrading clients.
6.1. Preparing System for Upgrade
Important
Migrate from Red Hat Network Classic to Red Hat Subscription Manager
Verify that your system is not on the legacy Red Hat Network Classic update system:# migrate-rhs-classic-to-rhsm --status
If the system is on Red Hat Network Classic, migrate to Red Hat Subscription Manager by following the instructions in Migrating from RHN to RHSM in Red Hat Enterprise Linux.Register the system with Red Hat Subscription Manager
To register the system with Red Hat Network, execute the following command. Enter your Red Hat Network username and password that have the Red Hat Enterprise Linux entitlements:# subscription-manager register --username=user_name --password=password
Identify the available entitlement pools
Find the entitlement pools containing the Red Hat Enterprise Linux 6 repositories:# subscription-manager list --available
Attach the entitlement pool to the system
Use the pool identifier to attach the Red Hat Enterprise Linux 6 entitlements.# subscription-manager attach --pool=pool_ID
Enable the repositories
Enable the Red Hat Enterprise Linux 6, scalable file system, and Red Hat Gluster Storage repositories:# subscription-manager repos --enable=rhel-6-server-rpms --enable=rhel-scalefs-for-rhel-6-server-rpms --enable=rhs-3-for-rhel-6-server-rpms
Back up the Gluster configuration files
Note
It is recommended to make a complete backup before you update your system. Refer to https://access.redhat.com/solutions/1484053 to know one possible approach.- Ensure that the following configuration directories and files are backed up:
/var/lib/glusterd
/etc/glusterfs
- For systems with samba-ctdb enabled cluster, create a new directory to store the backup:
# mkdir backup_folder_name # cd backup_folder_name
- Execute the following command to take a backup of samba-ctdb data:
for each in `ctdb getdbmap | grep PERSISTENT | cut -d" " -f2 | cut -d":" -f2`; do echo $each ; ctdb backupdb $each ${each}.bak; done
Stop all Gluster services, volumes, and processes
- Stop any geo-replication sessions:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Stop all volumes:
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
- Stop the Gluster processes:
# service glusterd stop # pkill glusterfs # pkill glusterfsd
- For systems with a samba-ctdb enabled cluster, stop the CTDB services:
# service ctdb stop
Update the system
Update the system to the latest minor version of Red Hat Enterprise Linux 6:# yum update
Reboot
Reboot the system:# reboot
Verify the version number
Check the current version number of the updated Red Hat Enterprise Linux 6 system:# cat /etc/redhat-release
Ensure the version number is6.10
.
6.2. Performing System Assessment
Install the preupgrade-assistant
The preupgrade-assistant is a tool that scans the existing Red Hat Enterprise Linux 6 system and checks whether it is ready to be upgraded.Subscribe to the required repositories and install the preupgrade-assistant tool using the following commands:# subscription-manager repos --enable rhel-6-server-extras-rpms --enable rhel-6-server-optional-rpms # yum install preupgrade-assistant preupgrade-assistant-el6toel7
Run the preupgrade tool
Launch the preupgrade tool:# preupg -v
Assess the results
- View the following result of the preupgrade tool in a browser:
/root/preupgrade/result.html
- Make a note of all the components that are marked as
Failed
andNeeds Action
. - Share the findings with the Red Hat executive assisting with the upgrade process.
Check the SELinux policy in Red Hat Enterprise Linux 7
SELinux policies have changed between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. For a working SELinux policy in Red Hat Enterprise Linux 7, execute the following command:# semodule -r sandbox
Uninstall Cluster and HA related packages
Migration is not possible with Cluster and HA related packages installed on the system.Uninstall Cluster and HA related packages by executing the following command:# yum remove modcluster ricci openais corosync
6.3. Upgrading from Red Hat Enterprise Linux 6.X to Red Hat Enterprise Linux 7.X
Install migration tool
# yum install redhat-upgrade-tool # yum install yum-utils
Disable all repositories
Disable all the enabled repositories:# yum-config-manager --disable \*
Download the operating system as an ISO file
Follow these steps to download the latest ISO file for one of the following operating systems.Red Hat Enterprise Linux 7 based Red Hat Gluster Storage 3.5
- Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in.
- Click Downloads to go to the Software & Download Center.
- Click Red Hat Gluster Storage.
- Select
3.5 for RHEL 7 (latest)
from the Version dropdown menu.Important
If you are upgrading to Red Hat Gluster Storage 3.5, select version3.4 for RHEL 7
. Bug 1762637 means that the Preupgrade Assistant only allows upgrading to Red Hat Enterprise Linux 7.6 at this time. - Click Download Now beside the Red Hat Gluster Storage Server 3.5 on RHEL 7 Installation DVD.
Red Hat Enterprise Linux 7
- Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in.
- Click Downloads to go to the Software & Download Center.
- Click Versions 7 and below, beside Red Hat Enterprise Linux.
- Select
7.x (latest)
from the Version dropdown menu.Important
If you are upgrading to Red Hat Gluster Storage 3.5, select version7.6
. Bug 1762637 means that the Preupgrade Assistant only allows upgrading to Red Hat Enterprise Linux 7.6 at this time. - Click Download Now beside the Red Hat Enterprise Linux 7.x Binary DVD.
Upgrade to Red Hat Enterprise Linux 7 using ISO
Upgrade to Red Hat Enterprise Linux 7 using the Red Hat upgrade tool and reboot after the upgrade process is completed:# redhat-upgrade-tool --iso ISO_filepath --cleanup-post # reboot
Important
The upgrade process is time-consuming depending on your system's configuration and amount of data.
6.4. Upgrading to Red Hat Gluster Storage 3.5
Disable all repositories
# subscription-manager repos --disable='*'
Subscribe to the Red Hat Enterprise Linux 7 channel
# subscription-manager repos --enable=rhel-7-server-rpms
Check for stale Red Hat Enterprise Linux 6 packages
Check for any stale Red Hat Enterprise Linux 6 packages post upgrade:# rpm -qa | grep el6
Important
If the output lists packages of Red Hat Enterprise Linux 6 variant, contact Red Hat Support for further course of action on these packages.Update and reboot
Update the Red Hat Enterprise Linux 7 packages and reboot.# yum update # reboot
Verify the version number
Ensure that the latest version of Red Hat Enterprise Linux 7 is shown when you view the `redhat-release` file:# cat /etc/redhat-release
Subscribe to the required channels
- Subscribe to the Gluster channel:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you use Samba, enable its repository.
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- If you use NFS-Ganesha, enable its repository.
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
- If you use gdeploy, enable the Ansible repository:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
Install and update Gluster
- If you used a Red Hat Enterprise Linux 7 ISO, install Red Hat Gluster Storage 3.5 using the following command:
# yum install redhat-storage-server
This is already installed if you used a Red Hat Gluster Storage 3.5 ISO based on Red Hat Enterprise Linux 7. - Update Red Hat Gluster Storage to the latest packages using the following command:
# yum update
Verify the installation and update
- Check the current version number of the updated Red Hat Gluster Storage system:
# cat /etc/redhat-storage-release
Important
The version number should be3.5
. - Ensure that no Red Hat Enterprise Linux 6 packages are present:
# rpm -qa | grep el6
Important
If the output lists packages of Red Hat Enterprise Linux 6 variant, contact Red Hat Support for further course of action on these packages.
Install and configure Firewalld
- Install and start the firewall daemon using the following commands:
# yum install firewalld # systemctl start firewalld
- Add the Gluster process to the firewall:
# firewall-cmd --zone=public --add-service=glusterfs --permanent
- Add the required services and ports to firewalld. For more information see Considerations for Red Hat Gluster Storage
- Reload the firewall using the following commands:
# firewall-cmd --reload
Start the Gluster processes
- Start the
glusterd
process:# systemctl start glusterd
Update Gluster op-version
Update the Gluster op-version to the required maximum version using the following commands:# gluster volume get all cluster.max-op-version # gluster volume set all cluster.op-version op_version
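For example, the maximum supported value reported by the first command can be fed directly into the second. This is a sketch that assumes the value appears in the second column of the gluster volume get output; confirm the output format on your system before relying on it.
# maxop=$(gluster volume get all cluster.max-op-version | awk '/cluster.max-op-version/ {print $2}')
# gluster volume set all cluster.op-version $maxop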
Note
70200
is thecluster.op-version
value for Red Hat Gluster Storage 3.5. After upgrading the cluster op-version, enable granular-entry-heal for the volume by running the following command:gluster volume heal $VOLNAME granular-entry-heal enable
This feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version has been raised. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version
value for other versions.Set up Samba and CTDB
If the Gluster setup on Red Hat Enterprise Linux 6 had Samba and CTDB configured, you should have the following available on the updated Red Hat Enterprise Linux 7 system:- CTDB volume
/etc/ctdb/nodes
file/etc/ctdb/public_addresses
file
Perform the following steps to reconfigure Samba and CTDB:- Configure the firewall for Samba:
# firewall-cmd --zone=public --add-service=samba --permanent # firewall-cmd --zone=public --add-port=4379/tcp --permanent
- Subscribe to the Samba channel:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Update Samba to the latest packages:
# yum update
- Configure CTDB for Samba. For more information, see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip creating the volume, because volumes that existed before the upgrade persist after the upgrade.
- In the following files, replace
all
in the statementMETA="all"
with the volume name:/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example, if the volume name is ctdb_volname, then META="all" in these files should be changed to META="ctdb_volname" (a sed sketch follows this procedure).
. - Restart the CTDB volume using the following commands:
# gluster volume stop volume_name # gluster volume start volume_name
- Start the CTDB process:
# systemctl start ctdb
- Share the volume over Samba if required. See Sharing Volumes over SMB.
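The META value replacement described earlier in this procedure can also be scripted. The following is a sketch that uses the ctdb_volname example name; substitute your actual CTDB volume name before running it.
# sed -i 's/META="all"/META="ctdb_volname"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh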
Start the volumes and geo-replication
- Start the required volumes using the following command:
# gluster volume start volume_name
- Mount the meta-volume:
# mount /var/run/gluster/shared_storage/
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . If this command does not work, review the content of the/etc/fstab
file and ensure that the entry for the shared storage is configured correctly, then re-run the mount
command (a sketch that adds a missing entry and remounts follows this procedure). The line for the meta volume in the/etc/fstab
file should look like the following:hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
- Restore the geo-replication session:
#
gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
For more information on geo-replication, see Preparing to Deploy Geo-replication.
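If the shared storage entry is missing from /etc/fstab, a sketch such as the following adds it and mounts the meta-volume. The hostname value is a placeholder for one of the storage nodes, and the path should be adjusted to /run/gluster/shared_storage if you are on 3.5 Batch Update 3 or later.
# grep -q gluster_shared_storage /etc/fstab || echo 'hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0' >> /etc/fstab
# mount /var/run/gluster/shared_storage/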
Chapter 7. Upgrading from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5 in a Red Hat Enterprise Virtualization-Red Hat Gluster Storage Environment
yum
.
Warning
Important
Important
7.1. Prerequisites
- Verify that no self-heal operations are in progress.
# gluster volume heal volname info
- Ensure that the gluster volume corresponding to Glusterfs Storage Domain does not have any pending self heal by executing the following command:
# gluster volume heal volname info summary
7.2. Upgrading an in-service integrated environment
Upgrade the Red Hat Gluster Storage nodes
Perform the following steps for each Red Hat Gluster Storage node, one node at a time. If you have multiple replica sets, upgrade each node in a replica set before moving on to another replica set.- Stop all gluster services on the storage node that you want to upgrade by running the following commands as the root user on that node.
# systemctl stop glusterd # pkill glusterfs # pkill glusterfsd # pgrep gluster
- In the Administration Portal, click Storage → Hosts and select the storage node to upgrade.
- Click Management → Maintenance and click OK.
- Upgrade the storage node using the procedure at Section 5.2, “In-Service Software Upgrade from Red Hat Gluster Storage 3.4 to Red Hat Gluster Storage 3.5”.
- Reboot the storage node.
- In the Administration Portal, click Storage → Hosts and select the storage node.
- Click Management → Activate.
- Click the name of the storage node → Bricks, and verify that the Self-Heal Info column of all bricks is listed as
OK
before upgrading the next storage node.
Upgrade Gluster client software on virtualization hosts
After upgrading all storage nodes, update the client software on all virtualization hosts.For Red Hat Virtualization hosts
- In the Administration Portal, click Compute → Hosts and select the node to update.
- On Red Hat Virtualization versions earlier than 4.2, click Management → Maintenance and click
OK
. - Click Installation → Upgrade and click
OK
.The node is automatically updated with the latest packages and reactivated when the update is complete.
For Red Hat Enterprise Linux hosts
- In the Administration Portal, click Compute → Hosts and select the node to update.
- Click Management → Maintenance and click
OK
. - Subscribe the host to the correct repositories to receive updates.
- Update the host.
# yum update
- In the Administration Portal, click Management → Activate.
7.3. Upgrading an offline integrated environment
7.3.1. Upgrading using an ISO
- Using Red Hat Enterprise Virtualization Manager, stop all the virtual machine instances.The Red Hat Gluster Storage volume on the instances will be stopped during the upgrade.
- Using Red Hat Enterprise Virtualization Manager, move the data domain of the data center to Maintenance mode.
- Using Red Hat Enterprise Virtualization Manager, stop the volume (the volume used for data domain) containing Red Hat Gluster Storage nodes in the data center.
- Using Red Hat Enterprise Virtualization Manager, move all Red Hat Gluster Storage nodes to Maintenance mode.
- Perform the ISO Upgrade as mentioned in section Section 5.2.1.3, “Configuring repo for Upgrading using ISO”.
- Re-install the Red Hat Gluster Storage nodes from Red Hat Enterprise Virtualization Manager.
Note
- Re-installation for the Red Hat Gluster Storage nodes should be done from Red Hat Enterprise Virtualization Manager. The newly upgraded Red Hat Gluster Storage 3.5 nodes lose their network configuration and other configuration details, such as firewall configuration, which was done while adding the nodes to Red Hat Enterprise Virtualization Manager. Re-install the Red Hat Gluster Storage nodes to have the bootstrapping done.
- You can re-configure the Red Hat Gluster Storage nodes using the option provided under Action Items, as shown in Figure 7.1, “Red Hat Gluster Storage Node before Upgrade ”, and perform bootstrapping.
Figure 7.1. Red Hat Gluster Storage Node before Upgrade
- Perform the steps above in all Red Hat Gluster Storage nodes.
- Start the volume once all the nodes are shown in Up status in Red Hat Enterprise Virtualization Manager.
- Upgrade the native client bits for Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7, depending on the hypervisor you are using.
Note
If Red Hat Enterprise Virtualization Hypervisor is used as hypervisor, then install a suitable build of Red Hat Enterprise Virtualization Hypervisor containing the latest native client.Figure 7.2. Red Hat Gluster Storage Node after Upgrade
- Using Red Hat Enterprise Virtualization Manager, activate the data domain and start all the virtual machine instances in the data center.
7.3.2. Upgrading using yum
- Using Red Hat Enterprise Virtualization Manager, stop all virtual machine instances in the data center.
- Using Red Hat Enterprise Virtualization Manager, move the data domain backed by gluster volume to Maintenance mode.
- Using Red Hat Enterprise Virtualization Manager, move all Red Hat Gluster Storage nodes to Maintenance mode.
- Perform
yum
update as mentioned in Section 5.1.1, “Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Subscription Manager”. - Once the Red Hat Gluster Storage nodes are rebooted and up, activate them using Red Hat Enterprise Virtualization Manager.
Note
Re-installation of the Red Hat Gluster Storage nodes is not required, because the network configurations and bootstrapping configurations done prior to the upgrade are preserved, unlike with an ISO upgrade. - Using Red Hat Enterprise Virtualization Manager, start the volume.
- Upgrade the native client bits for Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7, based on the Red Hat Enterprise Linux server hypervisor used.
Note
If Red Hat Enterprise Virtualization Hypervisor is used as hypervisor, reinstall Red Hat Enterprise Virtualization Hypervisor containing the latest version of Red Hat Gluster Storage native client. - Activate the data domain and start all the virtual machine instances.
Chapter 8. Enabling SELinux
# rpm -q package_name
Important
dracut
utility has to be run to put SELinux awareness into the initramfs
file system. Failing to do so prevents SELinux from starting during system startup.
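For example, the initramfs for the currently running kernel can be regenerated as follows. This is a minimal example; consult the dracut documentation for options appropriate to your system.
# dracut -f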
- Before SELinux is enabled, each file on the file system must be labeled with an SELinux context. Before this happens, confined domains may be denied access, preventing your system from booting correctly. To prevent this, configure
SELINUX=permissive
in/etc/selinux/config
:# This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=permissive # SELINUXTYPE= can take one of these two values: # targeted - Targeted processes are protected, # mls - Multi Level Security protection SELINUXTYPE=targeted
- As the Linux root user, reboot the system. During the next boot, file systems are labeled. The label process labels each file with an SELinux context:
*** Warning -- SELinux targeted policy relabel is required. *** Relabeling could take a very long time, depending on file *** system size and speed of hard drives. ****
Each * (asterisk) character on the bottom line represents 1000 files that have been labeled. In the above example, the four * characters represent 4000 files that have been labeled. The time it takes to label all files depends on the number of files on the system and the speed of the hard drives. On modern systems, this process can take as little as 10 minutes. - In permissive mode, the SELinux policy is not enforced, but denial messages are still logged for actions that would have been denied in enforcing mode. Before changing to enforcing mode, as the Linux root user, run the following command to confirm that SELinux did not deny actions during the last boot:
# grep "SELinux is preventing" /var/log/messages
If SELinux did not deny any actions during the last boot, this command returns no output. - If there were no denial messages in /var/log/messages, configure SELINUX=enforcing in /etc/selinux/config:
# This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=enforcing # SELINUXTYPE= can take one of these two values: # targeted - Targeted processes are protected, # mls - Multi Level Security protection. SELINUXTYPE=targeted
- Reboot your system. After reboot, confirm that getenforce returns Enforcing
~]$ getenforce Enforcing
Chapter 9. Using the Gluster Command Line Interface
Note
--mode=script
appended to any CLI command ensures that the command executes without confirmation prompts.
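For example, the following command stops a volume without prompting for confirmation; testvol is a placeholder volume name.
# gluster --mode=script volume stop testvol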
Run the Gluster CLI on any Red Hat Gluster Storage Server by either invoking the commands directly or by running the Gluster CLI in interactive mode. The gluster command can also be used remotely via SSH.
# gluster COMMAND
peer status
command:
# gluster peer status
Alternatively, run the Gluster CLI in interactive mode using the following command:
# gluster
gluster>
gluster> COMMAND
peer status
to view the status of the peer server:
- Start Gluster CLI's interactive mode:
# gluster
- Request the peer server status:
gluster> peer status
- The peer server status displays.
help
to view the gluster help options.
- Start Gluster CLI's interactive mode:
# gluster
- Request the help options:
gluster> help
- A list of gluster commands and options displays.
Appendix A. Revision History
Revision | Date
---|---
Revision 3.5-0 | Wed Oct 30 2019