Appendix A. Optional Deployment Method (with cns-deploy)
The following sections provide an optional method to deploy Red Hat Openshift Container Storage using cns-deploy.
A.1. Setting up Converged mode
The converged mode environment addresses the use-case where applications require both shared storage and the flexibility of a converged infrastructure with compute and storage instances being scheduled and run from the same set of hardware.
A.1.1. Configuring Port Access
- On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add the following rules to /etc/sysconfig/iptables in order to open the required ports:
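A minimal sketch of such rules, assuming the OS_FIREWALL_ALLOW chain that OpenShift manages in /etc/sysconfig/iptables and the ports discussed in the note below (2222 for sshd inside the Gluster pods, 24007 and 24008 for Gluster daemon and management traffic, 3260 for iSCSI targets, 24010 for gluster-blockd, and a 512-port brick range starting at 49664); adjust the chain name and brick range to your deployment:
# illustrative rules - chain name and brick-port range are assumptions
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49664:50175 -j ACCEPT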
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets, respectively.
- The port range starting at 49664 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/chap-getting_started.
- Execute the following command to reload iptables:
# systemctl reload iptables
- Execute the following command on each node to verify that the iptables rules have been updated:
# iptables -L
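To spot-check that the Gluster-related rules are present without reading the full listing, the output can be filtered; a quick check, assuming the OS_FIREWALL_ALLOW chain used in the sketch above:
# iptables -L OS_FIREWALL_ALLOW -n | grep -E '2222|24007|24008|3260|24010'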
A.1.2. Enabling Kernel Modules
Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded on the OpenShift Container Platform nodes. Execute the following commands only on the Gluster nodes to verify that the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user
If the modules are not loaded, execute the following commands to load them:
# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
Note
To ensure these operations persist across reboots, create the following files and update each with the content shown:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
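If preferred, the load and persist steps can be combined; a minimal shell sketch (not part of the original procedure) that loads each module and writes its modules-load.d entry in one pass:
# for module in dm_thin_pool dm_multipath target_core_user; do modprobe "$module" && echo "$module" > /etc/modules-load.d/"$module".conf; done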
A.1.3. Starting and Enabling Services
Execute the following commands to enable and run rpcbind on all the nodes hosting the Gluster pods:
# systemctl add-wants multi-user rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service
Execute the following command to check the status of rpcbind:
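On a systemd-managed node this would typically be the standard status query; the service should report active (running):
# systemctl status rpcbind.service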
Next Step: Proceed to Section A.3, “Setting up the Environment” to prepare the environment for Red Hat Gluster Storage Container Converged in OpenShift.
Note
To remove an installation of Red Hat Openshift Container Storage deployed using cns-deploy, run the cns-deploy --abort command. Use the -g option if Gluster is containerized.
When the pods are deleted, not all of the Gluster state is removed from the node. Therefore, you must also run the rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs command on every node that was running a Gluster pod, and run wipefs -a <device> for every storage device that was consumed by Heketi. This erases all the remaining Gluster state from each node. You must be an administrator to run the device wiping command.
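Taken together, the cleanup might look like the following sketch: run the abort from the host where cns-deploy was originally invoked (typically with the same namespace and topology arguments used for the deployment), then clear the leftover state and wipe the consumed devices on every node that hosted a Gluster pod. The device /dev/sdb below is a hypothetical placeholder; repeat wipefs for each device Heketi consumed.
# cns-deploy --abort -g
# rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs
# wipefs -a /dev/sdb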