Chapter 10. Configuring shared storage
The RHUA and CDS nodes require a shared storage volume, which can be accessed by both, to store content managed by RHUI.
Currently, RHUI supports the following storage solutions:
- NFS
- CephFS
10.1. Configuring shared storage using NFS
When using Network File System (NFS) as your shared storage, you must set up an NFS server either on the RHUA node or on a dedicated machine.
The following instructions explain how to create, configure, and verify NFS to work with RHUI.
Set up your NFS server on a dedicated machine to allow the CDS nodes and your RHUI clients to continue working even if something happens to the RHUA node. Do not set it up on the RHUA node itself.
Prerequisites
- Ensure you have root access to the NFS server
Procedure
1. Install the nfs-utils package on the node hosting the NFS server. You do not need to install this package on the RHUA node or the CDS nodes because it is installed automatically, but you can install it on any of them if you want to test the NFS share beforehand.

   # dnf install nfs-utils

2. Create a suitable directory to hold all the RHUI content.

   # mkdir /export

3. Allow your RHUA and CDS nodes access to the directory by editing the /etc/exports file and adding the following line:

   /export rhua.example.com(rw,no_root_squash) cds01.example.com(rw,no_root_squash) cds02.example.com(rw,no_root_squash)

4. Start and enable the NFS service.

   # systemctl start nfs-server
   # systemctl start rpcbind
   # systemctl enable nfs-server
   # systemctl enable rpcbind

   Note: If the NFS service is already running, use the restart command instead of the start command.
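If all of your CDS nodes are on one trusted subnet, the /etc/exports entry can alternatively grant access by network range instead of listing each host. A sketch, assuming a placeholder subnet of 192.0.2.0/24 (substitute your own range):

```
/export 192.0.2.0/24(rw,no_root_squash)
```

After editing /etc/exports on a server that is already running, you can run exportfs -ra to re-export the updated list without restarting the service.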
Verification
To test whether an NFS server is set up on a machine named filer.example.com, run the following commands on a system that has access to the NFS server:

   # mkdir /mnt/nfstest
   # mount filer.example.com:/export /mnt/nfstest
   # touch /mnt/nfstest/test

Your setup is working properly if you do not get any error messages.
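The mount in the verification above does not persist across reboots. To mount the share permanently on the RHUA and CDS nodes, an /etc/fstab entry along these lines can be added; the mount point /mnt/rhui_share is a placeholder here, so use whichever directory your RHUI installation expects for shared storage:

```
filer.example.com:/export  /mnt/rhui_share  nfs  defaults  0 0
```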
10.2. Configuring shared storage using CephFS
When using Ceph File System (CephFS) as your shared storage, you must set up a file system and share it over the network. RHUI treats the shared file system as a simple mount point, which you can mount on the file systems of the RHUA and CDS nodes.
Do not set up the Ceph shared file storage on the RHUI nodes. You must configure CephFS on independent dedicated machines.
The following instructions explain how to verify whether an existing Ceph file system can work with RHUI.
This document does not provide instructions to set up Ceph shared file storage. For instructions on how to do so, consult your system administrator.
Prerequisites
- Ensure you have the following identification information:
  - The IP address and port of the host where the cluster monitor daemon for the Ceph distributed file system is running. As a CephFS system administrator, run the command ceph mon dump on the Ceph master node. You can find the IP address and port listed as <ceph_monip>:<ceph_port>.
  - The Ceph username, usually admin.
  - The Ceph file system name. As a CephFS system administrator, run the command ceph fs ls on the Ceph master node. You can find the file system name listed as <cephfs_name>.
  - The Ceph secret key. As a CephFS system administrator, run the command ceph auth get client.admin on the Ceph master node. You can find the secret key listed as <ceph_secretkey>.
- Ensure you have root access to the RHUA node and all the CDS nodes you plan to use.
- Enable the Ceph Tools repository on the RHUA and CDS nodes.
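The three identifiers listed in the prerequisites come from commands this section already names; gathered in one place for convenience (run each as a CephFS system administrator on the Ceph master node):

```
ceph mon dump               # monitor address: <ceph_monip>:<ceph_port>
ceph fs ls                  # file system name: <cephfs_name>
ceph auth get client.admin  # secret key: <ceph_secretkey>
```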
Procedure
On the RHUA and CDS nodes, install the ceph-common package:

   # dnf install ceph-common
Verification
To test whether a Ceph file share is available and whether RHUI can use it, run the following commands on the RHUA node or on one of the CDS nodes:

   # mkdir /mnt/mycephfs_test
   # mount -t ceph <ceph_monip>:<ceph_port>:/ /mnt/mycephfs_test -o name=admin,secret=<ceph_secretkey>,fs=<cephfs_name>
   # touch /mnt/mycephfs_test/testfile
   # ls /mnt/mycephfs_test

Your setup is working properly if you do not get any error messages.
Clean up the test mount point.

   # rm /mnt/mycephfs_test/testfile
   # umount /mnt/mycephfs_test
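As with NFS, the test mount above is temporary. For a persistent mount on the RHUA and CDS nodes, an /etc/fstab entry of the following shape can be used; the mount point /mnt/rhui_share is a placeholder, and keeping the key in a root-only file via the secretfile option is generally preferable to writing it into /etc/fstab directly:

```
<ceph_monip>:<ceph_port>:/  /mnt/rhui_share  ceph  name=admin,secretfile=/etc/ceph/admin.secret,fs=<cephfs_name>,_netdev  0 0
```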