Chapter 5. Shared storage
The Red Hat Update Appliance (RHUA) and content delivery servers (CDSs) need a shared storage volume that both can access. Red Hat Gluster Storage is provided with Red Hat Enterprise Linux (RHEL) 7, but you can use any network file system (NFS) solution.
5.1. Gluster Storage
5.1.1. Create shared storage
glusterfs-server is available only with the appropriate subscription.
See the Red Hat Gluster Storage documentation for installation and administration details. In particular, see Section 11.15 of the Red Hat Gluster Storage 3.4 Administration Guide for split-brain management.
As of Red Hat Gluster Storage 3.4, two-way replication without arbiter bricks is considered deprecated. Existing volumes that use two-way replication without arbiter bricks remain supported for this release. New volumes with this configuration are not supported. Red Hat no longer recommends the use of two-way replication without arbiter bricks and plans to remove support entirely in future versions of Red Hat Gluster Storage. This change affects replicated and distributed-replicated volumes that do not use arbiter bricks.
Two-way replication without arbiter bricks is being deprecated because it does not provide adequate protection from split-brain conditions. Even in distributed-replicated configurations, two-way replication cannot ensure that the correct copy of a conflicting file is selected without the use of a tie-breaking node.
Red Hat recommends using three-node Gluster Storage volumes.
Information about three-way replication is available in Section 5.6.2, Creating Three-way Replicated Volumes and Section 5.7.2, Creating Three-way Distributed Replicated Volumes of the Red Hat Gluster Storage 3.4 Administration Guide.
One concern with shared storage is that block storage placed directly on a raw disk cannot be expanded when disk usage approaches 100%. Gluster Storage usually runs on physical servers, where the bricks reside on internal storage, and a brick that is assigned an entire physical internal disk cannot be extended. Following general storage practice, place each brick on a Logical Volume Manager (LVM) logical volume instead.
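For example, once a brick is on LVM as described in the procedure below, you can check the free capacity at each layer before deciding whether to extend it. This is a minimal check that assumes the volume group, logical volume, and mount point created in that procedure:
# pvs                      # physical volumes and their free space
# vgs vg_gluster           # free extents remaining in the volume group
# lvs vg_gluster           # current size of the brick logical volume
# df -h /export/xvdb       # file system usage on the mounted brick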
The following steps describe how to install the required packages and create a shared volume on LVM using a three-node Gluster Storage setup. Refer to the product documentation if you are using a different storage solution.
Procedure
Run the following steps on all CDS nodes. The example shows cds1:
[root@cds1 ~]# yum install glusterfs-server glusterfs-cli rh-rhua-selinux-policy
Initialize the physical volume on the new disk:
# pvcreate /dev/vdb
Create a volume group on /dev/vdb:
# vgcreate vg_gluster /dev/vdb
Create a logical volume that uses all the free space in the volume group:
# lvcreate -n lv_brick1 -l 100%FREE vg_gluster
Format the device:
# mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_brick1
Create the mount and brick directories, mount the disk, and enable and start the glusterd service:
# mkdir -p /export/xvdb
# mount /dev/mapper/vg_gluster-lv_brick1 /export/xvdb
# mkdir -p /export/xvdb/brick
# systemctl enable glusterd.service
# systemctl start glusterd.service
Add the following entry to /etc/fstab on each CDS node:
/dev/mapper/vg_gluster-lv_brick1 /export/xvdb xfs defaults 0 0
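To confirm that the new fstab entry is correct, you can ask the system to process it; because the brick is already mounted from the previous step, both commands should complete without errors:
# mount -a                 # mounts everything listed in /etc/fstab; silent on success
# df -h /export/xvdb       # the brick file system should be listed as mounted here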
Run the following steps on only one CDS node, for example, cds1:
[root@cds1 ~]# gluster peer probe cds2.example.com
peer probe: success.
[root@cds1 ~]# gluster peer probe cds3.example.com
peer probe: success.
Note
Make sure DNS resolution is working. A failed name resolution produces an error similar to the following:
[root@cds1 ~]# gluster peer probe <cds[23].example.com hostnames>
peer probe: failed: Probe returned with Transport endpoint is not connected
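To rule out name resolution problems before probing, you can verify that each peer hostname resolves on the node you are probing from, for example:
[root@cds1 ~]# getent hosts cds2.example.com
[root@cds1 ~]# getent hosts cds3.example.com
Each command should print the IP address and host name of the peer; no output means the name does not resolve and must be fixed in DNS or /etc/hosts before the probe can succeed.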
Important
The Gluster peer probe can also fail with
peer probe: failed: Probe returned with Transport endpoint is not connected
when there is a communication or port issue. A workaround is to disable the firewalld service. If you prefer not to disable the firewall, you can allow the correct ports as described in Section 3.1, Verifying Port Access, of the Red Hat Gluster Storage 3.4 Administration Guide; a sketch follows this note.
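The following is a minimal sketch of keeping firewalld enabled and opening the Gluster ports instead. It assumes your firewalld installation provides the glusterfs service definition, and the brick port range shown is only an example; confirm the exact port list for your Gluster Storage version in the guide referenced above:
# firewall-cmd --permanent --add-service=glusterfs     # preferred, if the service definition exists
# firewall-cmd --permanent --add-port=24007-24008/tcp  # otherwise, open the Gluster management ports
# firewall-cmd --permanent --add-port=49152-49251/tcp  # and a brick port range (example only)
# firewall-cmd --reload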
Before proceeding, verify that the peer connections were successful. You should see output similar to the following:
[root@cds1 ~]# gluster peer status
Number of Peers: 2

Hostname: cds2.v3.example.com
Uuid: 6cb9fdf9-1486-4db5-a438-24c64f47e63e
State: Peer in Cluster (Connected)

Hostname: cds3.v3.example.com
Uuid: 5e0eea6c-933d-48ff-8c2f-0228effa6b82
State: Peer in Cluster (Connected)
Create the replicated Gluster volume across the three CDS nodes:
[root@cds1 ~]# gluster volume create rhui_content_0 replica 3 \
cds1.example.com:/export/xvdb/brick cds2.example.com:/export/xvdb/brick \
cds3.example.com:/export/xvdb/brick
volume create: rhui_content_0: success: please start the volume to access data
Start the volume:
[root@cds1 ~]# gluster volume start rhui_content_0
volume start: rhui_content_0: success
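Optionally, before configuring RHUI to use the new volume, you can confirm that it is started and that all three bricks are online:
[root@cds1 ~]# gluster volume info rhui_content_0
[root@cds1 ~]# gluster volume status rhui_content_0
The volume info output should show the type as Replicate with a 1 x 3 = 3 brick layout, and the volume status output should list every brick as online.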
5.1.2. Extend the storage volume
You can extend the storage volume when it approaches its capacity by adding a new disk of the same size to each CDS node and running the following commands on each node. The name of the device file representing the disk depends on the technology you use; if the first disk was /dev/vdb, the second might be /dev/vdc. Replace the device file name in the following procedure with the actual device file name.
Procedure
Initialize the physical volume on the new disk:
# pvcreate /dev/vdc
Extend the volume group with the new physical volume:
# vgextend vg_gluster /dev/vdc
Extend the logical volume by the amount of free disk space on the new physical volume:
# lvextend vg_gluster/lv_brick1 /dev/vdc
Expand the file system:
# xfs_growfs /dev/mapper/vg_gluster-lv_brick1
Run df on the RHUA node to confirm that the mounted Gluster Storage volume has the expected new size.
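For example, assuming the RHUI installer mounted the Gluster volume at /var/lib/rhui/remote_share on the RHUA node (adjust the path if your deployment uses a different mount point):
# lvs vg_gluster                       # on each CDS node; LSize should reflect the added capacity
# df -h /var/lib/rhui/remote_share     # on the RHUA node; the mounted volume should show the new size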
5.2. Create NFS storage
You can set up an NFS server for the content managed by RHUI on the RHUA node or on a dedicated machine. Use the following procedure to set up storage using NFS.
Using a dedicated machine allows the CDS nodes, and more importantly your RHUI clients, to continue working if something happens to the RHUA node. Red Hat therefore recommends setting up the NFS server on a dedicated machine.
Procedure
Install the nfs-utils package on the node hosting the NFS server, on the RHUA node (if it differs), and also on all CDS nodes:
# yum install nfs-utils
Edit the /etc/exports file on the NFS server. Choose a suitable directory to hold the RHUI content and allow the RHUA node and all your CDS nodes to access it. For example, to use the /export directory and make it available to all systems in the example.com domain, put the following line in /etc/exports:
/export *.example.com(rw,no_root_squash)
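If you prefer not to export to every system in the domain, you can list the RHUA and CDS nodes explicitly instead; the host names below are placeholders for your actual node names:
/export rhua.example.com(rw,no_root_squash) cds1.example.com(rw,no_root_squash) cds2.example.com(rw,no_root_squash) cds3.example.com(rw,no_root_squash)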
Create the directory for the RHUI content as defined in /etc/exports:
# mkdir /export
Start and enable the NFS service:
# systemctl start nfs
# systemctl start rpcbind
# systemctl enable nfs-server
# systemctl enable rpcbind
Note
If you are using an existing NFS server and the NFS service is running, use the restart command instead of the start command.
Test your setup. On a CDS node, run the following commands, which assume that the NFS server has been set up on a machine named filer.example.com:
# mkdir /mnt/nfstest
# mount filer.example.com:/export /mnt/nfstest
# touch /mnt/nfstest/test
You should not get any error messages.
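If the mount fails, you can first confirm that the directory is actually being exported; run the first command on the NFS server and the second from a CDS node:
# exportfs -v                          # lists the active exports and their options
# showmount -e filer.example.com       # lists the exports visible to clients over the network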
To clean up after this test, remove the test file, unmount the remote share, and remove the /mnt/nfstest directory:
# rm /mnt/nfstest/test
# umount /mnt/nfstest
# rmdir /mnt/nfstest
Your NFS server is now set up. For more information about NFS server configuration on RHEL 7, see Section 8.7, NFS Server Configuration.