Chapter 7. The Ceph iSCSI Gateway (Limited Availability)
As a storage administrator, you can install and configure an iSCSI gateway for the Red Hat Ceph Storage cluster. With Ceph’s iSCSI gateway you can effectively run a fully integrated block-storage infrastructure with all features and benefits of a conventional Storage Area Network (SAN).
This technology is Limited Availability. See the Deprecated functionality chapter for additional information.
SCSI persistent reservations are not supported. Mapping multiple iSCSI initiators to an RBD image is supported if you use a cluster-aware file system or clustering software that does not rely on SCSI persistent reservations. For example, VMware vSphere environments using ATS are supported, but Microsoft Cluster Server (MSCS) is not supported.
7.1. Introduction to the Ceph iSCSI gateway
Traditionally, block-level access to a Ceph storage cluster has been limited to QEMU and librbd, which is a key enabler for adoption within OpenStack environments. Block-level access to the Ceph storage cluster can now take advantage of the iSCSI standard to provide data storage.
The iSCSI gateway integrates Red Hat Ceph Storage with the iSCSI standard to provide a highly available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. The iSCSI protocol allows clients, known as initiators, to send SCSI commands to SCSI storage devices, known as targets, over a TCP/IP network. This allows for heterogeneous clients, such as Microsoft Windows, to access the Red Hat Ceph Storage cluster.
Figure 7.1. Ceph iSCSI Gateway HA Design
7.2. Requirements for the iSCSI target
The Red Hat Ceph Storage Highly Available (HA) iSCSI gateway solution has requirements for the number of gateway nodes, memory capacity, and timer settings to detect down OSDs.
Required Number of Nodes
Install a minimum of two iSCSI gateway nodes. To increase resiliency and I/O handling, install up to four iSCSI gateway nodes.
Memory Requirements
The memory footprint of the iSCSI gateway grows with the number of mapped RBD images and can become large. Each RBD image mapped on the iSCSI gateway nodes uses roughly 90 MB of memory. Ensure the iSCSI gateway nodes have enough memory to support each mapped RBD image.
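For example, a gateway node that maps 100 RBD images needs roughly 100 × 90 MB ≈ 9 GB of memory for the mapped images alone, in addition to the memory used by the operating system and other daemons on the node.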
Detecting Down OSDs
There are no iSCSI-gateway-specific options for the Ceph Monitors or OSDs, but it is important to lower the default timers for detecting down OSDs to reduce the possibility of initiator timeouts. Follow the instructions in Lowering timer settings for detecting down OSDs.
Additional Resources
- See the Red Hat Ceph Storage Hardware Selection Guide for more information.
7.3. Installing the iSCSI gateway
As a storage administrator, before you can utilize the benefits of the Ceph iSCSI gateway, you must install the required software packages. You can install the Ceph iSCSI gateway by using the Ansible deployment tool, or by using the command-line interface.
Each iSCSI gateway runs the Linux I/O target kernel subsystem (LIO) to provide iSCSI protocol support. LIO utilizes a user-space passthrough (TCMU) to interact with the Ceph librbd library to expose RBD images to iSCSI clients. With the Ceph iSCSI gateway you can effectively run a fully integrated block-storage infrastructure with all the features and benefits of a conventional Storage Area Network (SAN).
7.3.1. Prerequisites
- Red Hat Enterprise Linux 8 or 7.7 or higher.
- A running Red Hat Ceph Storage 4 or higher cluster.
7.3.2. Installing the Ceph iSCSI gateway using Ansible
Use the Ansible utility to install packages and set up the daemons for the Ceph iSCSI gateway.
Prerequisites
- The Ansible administration node with the ceph-ansible package installed.
Procedure
- On the iSCSI gateway nodes, enable the Red Hat Ceph Storage 4 Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage Installation Guide.
On the Ansible administration node, add an entry in the /etc/ansible/hosts file for the gateway group. If you colocate the iSCSI gateway with an OSD node, add the OSD node to the [iscsigws] section.
[iscsigws]
ceph-igw-1
ceph-igw-2
Ansible places a file named iscsigws.yml.sample in the /usr/share/ceph-ansible/group_vars/ directory. Create a copy of the iscsigws.yml.sample file and name it iscsigws.yml.
Open the iscsigws.yml file for editing. Uncomment the trusted_ip_list option and update the values accordingly, using IPv4 or IPv6 addresses.
Example
To add two gateways with the IPv4 addresses 10.172.19.21 and 10.172.19.22, configure trusted_ip_list like this:
trusted_ip_list: 10.172.19.21,10.172.19.22
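If the gateways use IPv6, the option takes the same comma-separated form, for example (the addresses below are placeholders from the IPv6 documentation prefix):
trusted_ip_list: 2001:db8::10,2001:db8::11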
Optionally, review the Ansible variables and descriptions in the iSCSI Gateway Variables section and update iscsigws.yml as needed.
Warning
Gateway configuration changes are only supported from one gateway at a time. Attempting to run changes concurrently through multiple gateways might lead to configuration instability and inconsistency.
Warning
Ansible installs the ceph-iscsi package, and creates and updates the /etc/ceph/iscsi-gateway.cfg file based on settings in the group_vars/iscsigws.yml file when the ansible-playbook command is used. If you have previously installed the ceph-iscsi package using the command-line interface described in Installing the iSCSI gateway using the command-line interface, copy the existing settings from the iscsi-gateway.cfg file to the group_vars/iscsigws.yml file.
On the Ansible administration node, execute the Ansible playbook.
Bare-metal deployments:
[admin@ansible ~]$ cd /usr/share/ceph-ansible
[admin@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[admin@ansible ~]$ cd /usr/share/ceph-ansible
[admin@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Warning
On stand-alone iSCSI gateway nodes, verify that the correct Red Hat Ceph Storage 4 software repositories are enabled. If they are unavailable, Ansible might install incorrect packages.
To create targets, LUNs, and clients, use the gwcli utility or the Red Hat Ceph Storage Dashboard.
Important
Do not use the targetcli utility to change the configuration; doing so results in ALUA misconfiguration and path failover problems. There is also the potential to corrupt data, to have mismatched configuration across iSCSI gateways, and to have mismatched WWN information, which will lead to client pathing problems.
Additional Resources
- See the Sample iscsigws.yml file to view the full sample file.
- Configuring the iSCSI target using the command-line interface
- Creating iSCSI targets
7.3.3. Installing the Ceph iSCSI gateway using the command-line interface
The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client node. The Ceph iSCSI gateway can be a standalone node or be colocated on a Ceph Object Storage Device (OSD) node. Complete the following steps to install the Ceph iSCSI gateway.
Prerequisites
- Red Hat Enterprise Linux 8 or 7.7 or later
- A Red Hat Ceph Storage 4 cluster or later
On all Ceph Monitor nodes in the storage cluster, restart the ceph-mon service, as the root user:
Syntax
systemctl restart ceph-mon@MONITOR_HOST_NAME
Example
[root@mon ~]# systemctl restart ceph-mon@monitor1
- If the Ceph iSCSI gateway is not colocated on an OSD node, copy the Ceph configuration files, located in the /etc/ceph/ directory, from a running Ceph node in the storage cluster to all the iSCSI gateway nodes. The Ceph configuration files must exist on the iSCSI gateway nodes under /etc/ceph/.
- On all Ceph iSCSI gateway nodes, enable the Ceph Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide.
- On all Ceph iSCSI gateway nodes, install and configure the Ceph command-line interface. For details, see the Installing the Ceph Command Line Interface chapter in the Red Hat Ceph Storage 4 Installation Guide.
- If needed, open TCP ports 3260 and 5000 on the firewall on all Ceph iSCSI gateway nodes.
- Create a new RADOS Block Device (RBD) or use an existing one. A sketch covering both of these prerequisites follows this list.
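A minimal sketch of both prerequisites, assuming firewalld is in use and reusing the disk_1 image name from the gwcli examples later in this chapter:
[root@iscsigw ~]# firewall-cmd --permanent --add-port=3260/tcp --add-port=5000/tcp
[root@iscsigw ~]# firewall-cmd --reload
[root@iscsigw ~]# rbd create rbd/disk_1 --size 50G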
Procedure
On all Ceph iSCSI gateway nodes, install the ceph-iscsi and tcmu-runner packages:
[root@iscsigw ~]# yum install ceph-iscsi tcmu-runner
Important
If previous versions of these packages exist, remove them before installing the newer versions. You must install these newer versions from a Red Hat Ceph Storage repository.
Optionally, on all Ceph iSCSI gateway nodes, install and configure the OpenSSL utility.
Install the openssl package:
[root@iscsigw ~]# yum install openssl
On the primary iSCSI gateway node, create a directory to hold the SSL keys:
[root@iscsigw ~]# mkdir ~/ssl-keys
[root@iscsigw ~]# cd ~/ssl-keys
On the primary iSCSI gateway node, create the certificate and key files. Enter the environmental information when prompted.
[root@iscsigw ~]# openssl req -newkey rsa:2048 -nodes -keyout iscsi-gateway.key -x509 -days 365 -out iscsi-gateway.crt
On the primary iSCSI gateway node, create a PEM file:
[root@iscsigw ~]# cat iscsi-gateway.crt iscsi-gateway.key > iscsi-gateway.pem
On the primary iSCSI gateway node, create a public key:
[root@iscsigw ~]# openssl x509 -inform pem -in iscsi-gateway.pem -pubkey -noout > iscsi-gateway-pub.key
From the primary iSCSI gateway node, copy the iscsi-gateway.crt, iscsi-gateway.pem, iscsi-gateway-pub.key, and iscsi-gateway.key files to the /etc/ceph/ directory on the other iSCSI gateway nodes.
Create a configuration file on a Ceph iSCSI gateway node, and then copy it to all iSCSI gateway nodes.
Create a file named iscsi-gateway.cfg in the /etc/ceph/ directory:
[root@iscsigw ~]# touch /etc/ceph/iscsi-gateway.cfg
Edit the iscsi-gateway.cfg file and add the following lines:
Syntax
[config]
cluster_name = CLUSTER_NAME
gateway_keyring = CLIENT_KEYRING
api_secure = false
trusted_ip_list = IP_ADDR,IP_ADDR
Example
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
trusted_ip_list = 192.168.0.10,192.168.0.11
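If the SSL certificate and key files created in the earlier steps are in place, api_secure can be set to true. A sketch, assuming the api_user, api_password, and api_port settings supported by the ceph-iscsi package:
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = true
api_user = admin
api_password = admin
api_port = 5000
trusted_ip_list = 192.168.0.10,192.168.0.11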
Copy the iscsi-gateway.cfg file to all iSCSI gateway nodes. Note that the file must be identical on all iSCSI gateway nodes.
On all Ceph iSCSI gateway nodes, enable and start the API services:
[root@iscsigw ~]# systemctl enable rbd-target-api
[root@iscsigw ~]# systemctl start rbd-target-api
[root@iscsigw ~]# systemctl enable rbd-target-gw
[root@iscsigw ~]# systemctl start rbd-target-gw
- Next, configure targets, LUNs, and clients. See the Configuring the iSCSI target using the command-line interface section for details.
Additional Resources
- See the iSCSI Gateway variables section for more details on the options.
- Creating iSCSI targets
7.3.4. Additional Resources
- See Appendix B, iSCSI Gateway Variables for more information on Ceph iSCSI gateway Ansible variables.
7.4. Configuring the iSCSI target
As a storage administrator, you can configure targets, LUNs, and clients by using the gwcli command-line utility. To optimize the performance of the iSCSI target, use the gwcli reconfigure subcommand.
Red Hat does not support managing Ceph block device images exported by the Ceph iSCSI gateway with tools other than the Ceph iSCSI gateway tools, such as gwcli and ceph-ansible. Also, using the rbd command to rename or remove RBD images exported by the Ceph iSCSI gateway can result in an unstable storage cluster.
Before removing RBD images from the iSCSI gateway configuration, follow the standard procedures for removing a storage device from the operating system. For details, see the Removing a storage device chapter in the Storage Administration Guide for Red Hat Enterprise Linux 7 or the System Design Guide for Red Hat Enterprise Linux 8.
7.4.1. Prerequisites
- Installation of the Ceph iSCSI gateway software.
7.4.2. Configuring the iSCSI target using the command-line interface
The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client node. Configure the Ceph iSCSI gateway either on a standalone node, or colocate it with a Ceph Object Storage Device (OSD) node.
Do not adjust other options using the gwcli reconfigure subcommand unless specified in this document or Red Hat Support has instructed you to do so.
Prerequisites
- Installation of the Ceph iSCSI gateway software.
Procedure
Start the iSCSI gateway command-line interface:
[root@iscsigw ~]# gwcli
Create the iSCSI gateways using either IPv4 or IPv6 addresses:
Syntax
>/iscsi-targets create iqn.2003-01.com.redhat.iscsi-gw:TARGET_NAME
> goto gateways
> create ISCSI_GW_NAME IP_ADDR_OF_GW
> create ISCSI_GW_NAME IP_ADDR_OF_GW
Example
>/iscsi-targets create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> goto gateways
> create ceph-gw-1 10.172.19.21
> create ceph-gw-2 10.172.19.22
Note
You cannot use a mix of IPv4 and IPv6 addresses.
Add a Ceph block device:
Syntax
> cd /disks
>/disks/ create POOL_NAME image=IMAGE_NAME size=IMAGE_SIZE_m|g|t
Example
> cd /disks
>/disks/ create rbd image=disk_1 size=50g
Note
Do not use any periods (.) in the pool or image name.
Create a client:
Syntax
> goto hosts
> create iqn.1994-05.com.redhat:CLIENT_NAME
> auth username=USER_NAME password=PASSWORD
Example
> goto hosts
> create iqn.1994-05.com.redhat:rh7-client
> auth username=iscsiuser1 password=temp12345678
Important
Red Hat does not support mixing clients, some with Challenge Handshake Authentication Protocol (CHAP) enabled and some with CHAP disabled. All clients must have either CHAP enabled or CHAP disabled. The default behavior is to authenticate an initiator only by its initiator name.
If initiators are failing to log in to the target, the CHAP authentication might not be configured correctly for some initiators, for example:
o- hosts ................................ [Hosts: 2: Auth: MISCONFIG]
Use the following command at the hosts level to reset all the CHAP authentication:
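A sketch of the reset, assuming the auth nochap subcommand available at the hosts level in gwcli:
/> goto hosts
/iscsi-target...csi-igw/hosts> auth nochap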
Add disks to a client:
Syntax
>/iscsi-target..eph-igw/hosts
> cd iqn.1994-05.com.redhat:CLIENT_NAME
> disk add POOL_NAME/IMAGE_NAME
Example
>/iscsi-target..eph-igw/hosts
> cd iqn.1994-05.com.redhat:rh7-client
> disk add rbd/disk_1
To confirm that the API is using SSL correctly, search the rbd-target-api log file, located at /var/log/rbd-target-api.log or /var/log/rbd-target/rbd-target-api.log, for https, for example:
Aug 01 17:27:42 test-node.example.com python[1879]: * Running on https://0.0.0.0:5000/
Verify that the Ceph iSCSI gateways are working:
/> goto gateways
/iscsi-target...-igw/gateways> ls
o- gateways ............................ [Up: 2/2, Portals: 2]
  o- ceph-gw-1 ........................ [ 10.172.19.21 (UP)]
  o- ceph-gw-2 ........................ [ 10.172.19.22 (UP)]
If the status is UNKNOWN, check for network issues and any misconfigurations. If using a firewall, verify that the appropriate TCP port is open. Verify that the iSCSI gateway is listed in the trusted_ip_list option. Verify that the rbd-target-api service is running on the iSCSI gateway node.
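A quick sketch of those checks on a gateway node, assuming firewalld is the firewall in use:
[root@iscsigw ~]# systemctl status rbd-target-api
[root@iscsigw ~]# firewall-cmd --list-ports
[root@iscsigw ~]# grep trusted_ip_list /etc/ceph/iscsi-gateway.cfg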
Optionally, reconfigure the max_data_area_mb option:
Syntax
>/disks/ reconfigure POOL_NAME/IMAGE_NAME max_data_area_mb NEW_BUFFER_SIZE
Example
>/disks/ reconfigure rbd/disk_1 max_data_area_mb 64
Note
The max_data_area_mb option controls the amount of memory in megabytes that each image can use to pass SCSI command data between the iSCSI target and the Ceph cluster. If this value is too small, it can result in excessive queue full retries which will affect performance. If the value is too large, it can result in one disk using too much of the system memory, which can cause allocation failures for other subsystems. The default value for the max_data_area_mb option is 8.
- Configure an iSCSI initiator.
Additional Resources
- See Installing the iSCSI gateway for details.
- See Configuring the iSCSI initiator section for more information.
7.4.3. Optimize the performance of the iSCSI Target
There are many settings that control how the iSCSI Target transfers data over the network. These settings can be used to optimize the performance of the iSCSI gateway.
Only change these settings if instructed to by Red Hat Support or as specified in this document.
The gwcli reconfigure
subcommand controls the settings that are used to optimize the performance of the iSCSI gateway.
Settings that affect the performance of the iSCSI target
- max_data_area_mb
- cmdsn_depth
- immediate_data
- initial_r2t
- max_outstanding_r2t
- first_burst_length
- max_burst_length
- max_recv_data_segment_length
- max_xmit_data_segment_length
Additional Resources
- Information about max_data_area_mb, including an example showing how to adjust it using gwcli reconfigure, is in the section Configuring the iSCSI Target using the Command Line Interface.
7.4.4. Lowering timer settings for detecting down OSDs
Sometimes it is necessary to lower the timer settings for detecting down OSDs. For example, when using Red Hat Ceph Storage as an iSCSI gateway, you can reduce the possibility of initiator timeouts by lowering the timer settings for detecting down OSDs.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ansible administration node.
Procedure
Configure Ansible to use the new timer settings.
On the Ansible administration node, add a ceph_conf_overrides section in the group_vars/all.yml file that looks like this, or edit any existing ceph_conf_overrides section as follows:
ceph_conf_overrides:
  osd:
    osd_client_watch_timeout: 15
    osd_heartbeat_grace: 20
    osd_heartbeat_interval: 5
The above settings will be added to the ceph.conf configuration files on the OSD nodes when the Ansible playbook runs.
Change to the ceph-ansible directory:
[admin@ansible ~]$ cd /usr/share/ceph-ansible
Use Ansible to update the ceph.conf file and restart the OSD daemons on all the OSD nodes. On the Ansible admin node, run the following command:
Bare-metal Deployments
[admin@ansible ceph-ansible]$ ansible-playbook site.yml --limit osds
Container Deployments
[admin@ansible ceph-ansible]$ ansible-playbook site-container.yml --limit osds -i hosts
Verify the timer settings are the same as set in ceph_conf_overrides:
Syntax
ceph daemon osd.OSD_ID config get osd_client_watch_timeout
ceph daemon osd.OSD_ID config get osd_heartbeat_grace
ceph daemon osd.OSD_ID config get osd_heartbeat_interval
Example
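Assuming an OSD with ID 0, each command returns JSON similar to the following:
[root@osd ~]# ceph daemon osd.0 config get osd_client_watch_timeout
{
    "osd_client_watch_timeout": "15"
}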
Optional: If you cannot restart the OSD daemons immediately, you can do online updates from Ceph Monitor nodes, or update all Ceph OSD nodes directly. Once you are able to restart the OSD daemons, use Ansible as described above to add the new timer settings into ceph.conf so that the settings persist across reboots.
To do an online update of OSD timer settings from a Ceph Monitor node:
Syntax
ceph tell osd.OSD_ID injectargs '--osd_client_watch_timeout 15'
ceph tell osd.OSD_ID injectargs '--osd_heartbeat_grace 20'
ceph tell osd.OSD_ID injectargs '--osd_heartbeat_interval 5'
Example
[root@mon ~]# ceph tell osd.0 injectargs '--osd_client_watch_timeout 15'
[root@mon ~]# ceph tell osd.0 injectargs '--osd_heartbeat_grace 20'
[root@mon ~]# ceph tell osd.0 injectargs '--osd_heartbeat_interval 5'
To do an online update of OSD timer settings from a Ceph OSD node:
Syntax
ceph daemon osd.OSD_ID config set osd_client_watch_timeout 15
ceph daemon osd.OSD_ID config set osd_heartbeat_grace 20
ceph daemon osd.OSD_ID config set osd_heartbeat_interval 5
Example
[root@osd ~]# ceph daemon osd.0 config set osd_client_watch_timeout 15
[root@osd ~]# ceph daemon osd.0 config set osd_heartbeat_grace 20
[root@osd ~]# ceph daemon osd.0 config set osd_heartbeat_interval 5
Additional Resources
- For more information about using Red Hat Ceph Storage as an iSCSI gateway, see The Ceph iSCSI gateway in the Red Hat Ceph Storage Block Device Guide.
7.4.5. Configuring iSCSI host groups using the command-line interface
The Ceph iSCSI gateway can configure host groups for managing multiple servers that share the same disk configuration. iSCSI host groups create a logical grouping of the hosts and the disks that each host in the group has access to.
Sharing disk devices with multiple hosts requires a cluster-aware file system.
Prerequisites
- Installation of the Ceph iSCSI gateway software.
- Root-level access to the Ceph iSCSI gateway node.
Procedure
Start the iSCSI gateway command-line interface:
[root@iscsigw ~]# gwcli
Create a new host group:
Syntax
cd iscsi-targets/
cd IQN/host-groups
create group_name=GROUP_NAME
Example
/> cd iscsi-targets/
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/host-groups/
/iscsi-target.../host-groups> create group_name=igw_grp01
Add a host to the host group:
Syntax
cd GROUP_NAME
host add client_iqn=CLIENT_IQN
Example
> cd igw_grp01
/iscsi-target.../host-groups/igw_grp01> host add client_iqn=iqn.1994-05.com.redhat:rh8-client
Repeat this step to add additional hosts to the group.
Add a disk to the host group:
Syntax
cd /disks/
/disks> create pool=POOL image=IMAGE_NAME size=SIZE
cd /IQN/host-groups/GROUP_NAME
disk add POOL/IMAGE_NAME
Example
> cd /disks/
/disks> create pool=rbd image=rbdimage size=1G
/> cd iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/host-groups/igw_grp01/
/iscsi-target...s/igw_grp01> disk add rbd/rbdimage
Repeat this step to add additional disks to the group.
7.4.6. Additional Resources
- For details on configuring iSCSI targets using the Red Hat Ceph Storage Dashboard, see the Creating iSCSI targets section in the Red Hat Ceph Storage Dashboard Guide.
7.5. Configuring the iSCSI initiator
You can configure the iSCSI initiator to connect to the Ceph iSCSI gateway on the following platforms.
7.5.1. Configuring the iSCSI initiator for Red Hat Enterprise Linux
Prerequisites
- Red Hat Enterprise Linux 7.7 or higher.
- Package iscsi-initiator-utils-6.2.0.873-35 or newer must be installed.
- Package device-mapper-multipath-0.4.9-99 or newer must be installed.
Procedure
Install the iSCSI initiator and multipath tools:
[root@rhel ~]# yum install iscsi-initiator-utils
[root@rhel ~]# yum install device-mapper-multipath
Set the initiator name by editing the /etc/iscsi/initiatorname.iscsi file. Note that the initiator name must match the initiator name that was used during the initial setup using the gwcli command.
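For example, to match the client created in the earlier gwcli example, the /etc/iscsi/initiatorname.iscsi file would contain a single line such as:
InitiatorName=iqn.1994-05.com.redhat:rh7-client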
Configure multipath I/O.
Create the default /etc/multipath.conf file and enable the multipathd service:
[root@rhel ~]# mpathconf --enable --with_multipathd y
Update the /etc/multipath.conf file as follows:
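A sketch of a typical device section for ceph-iscsi (LIO-ORG) targets, based on the ALUA failover settings described in the upstream ceph-iscsi initiator documentation:
devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}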
Restart the multipathd service:
[root@rhel ~]# systemctl reload multipathd
Set up CHAP and iSCSI discovery and login.
Provide a CHAP user name and password by updating the /etc/iscsi/iscsid.conf file accordingly, for example:
node.session.auth.authmethod = CHAP
node.session.auth.username = user
node.session.auth.password = password
Discover the target portals:
Syntax
iscsiadm -m discovery -t st -p IP_ADDR
Log in to target:
Syntax
iscsiadm -m node -T TARGET -l
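For example, using the gateway IP address and target IQN from the gwcli examples earlier in this chapter:
[root@rhel ~]# iscsiadm -m discovery -t st -p 10.172.19.21
[root@rhel ~]# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -l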
View the multipath I/O configuration. The multipathd daemon sets up devices automatically based on the settings in the multipath.conf file.
Use the multipath command to show devices set up in a failover configuration with a priority group for each path, for example:
Example
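A hypothetical output for such a failover configuration, with illustrative WWID, device names, and bus numbers:
mpathbt (360014059ca317516a69465c883a29603) dm-1 LIO-ORG ,TCMU device
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 28:0:0:1 sde  8:64  active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 29:0:0:1 sdc  8:32  active ready running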
The multipath -ll output prio value indicates the ALUA state, where prio=50 indicates it is the path to the owning iSCSI gateway in the ALUA Active-Optimized state and prio=10 indicates it is an Active-non-Optimized path. The status field indicates which path is being used, where active indicates the currently used path, and enabled indicates the failover path, if the active path fails.
To match the device name, for example, sde in the multipath -ll output, to the iSCSI gateway:
Example
[root@rhel ~]# iscsiadm -m session -P 3
The Persistent Portal value is the IP address assigned to the iSCSI gateway listed in the gwcli utility.
7.5.2. Configuring the iSCSI initiator for Red Hat Virtualization
Prerequisites
- Red Hat Virtualization 4.1
- Configured MPIO devices on all Red Hat Virtualization nodes
- The iscsi-initiator-utils-6.2.0.873-35 package or newer
- The device-mapper-multipath-0.4.9-99 package or newer
Procedure
Configure multipath I/O.
Update the /etc/multipath/conf.d/DEVICE_NAME.conf file as follows:
Restart the multipathd service:
[root@rhv ~]# systemctl reload multipathd
- Click the Storage resource tab to list the existing storage domains.
- Click the New Domain button to open the New Domain window.
- Enter the Name of the new storage domain.
- Use the Data Center drop-down menu to select a data center.
- Use the drop-down menus to select the Domain Function and the Storage Type. The storage domain types that are not compatible with the chosen domain function are not available.
- Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed, you can use target discovery to find it; otherwise, proceed to the next step.
- Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note that LUNs external to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
- Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
- Enter the port to connect to the host on when browsing for targets in the Port field. The default is 3260.
. - If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
- Click the Discover button.
Select the target to use from the discovery results and click the Login button. Alternatively, click the Login All button to log in to all of the discovered targets.
Important
If more than one path access is required, ensure that you discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
- Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
- Select the check box for each LUN that you are using to create the storage domain.
Optionally, you can configure the advanced parameters.
- Click Advanced Parameters.
- Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. You can edit this option after creating the domain, but doing so does not change the wipe after delete property of disks that already exist.
property of disks that already exist. - Select the Discard After Delete check box to enable the discard after delete option. You can edit this option after creating the domain. This option is only available to block storage domains.
- Click OK to create the storage domain and close the window.
7.5.3. Configuring the iSCSI initiator for Microsoft Windows
Prerequisites
- Microsoft Windows Server 2016
Procedure
Install the iSCSI initiator and configure discovery and setup.
- Install the iSCSI initiator driver and MPIO tools.
- Launch the MPIO program, click the Discover Multi-Paths tab, check the Add support for iSCSI devices box, and click Add.
- Reboot the MPIO program.
On the iSCSI Initiator Properties window, on the Discovery tab, add a target portal. Enter the IP address or DNS name and Port of the Ceph iSCSI gateway.
On the Targets tab, select the target and click Connect.
On the Connect To Target window, select the Enable multi-path option, and click the Advanced button.
Under the Connect using section, select a Target portal IP. Select Enable CHAP login on and enter the Name and Target secret values from the Ceph iSCSI client credentials section, and click OK.
Important
Windows Server 2016 does not accept a CHAP secret less than 12 bytes.
- Repeat the previous two steps for each target portal defined when setting up the iSCSI gateway.
If the initiator name is different than the initiator name used during the initial setup, rename the initiator name. From the iSCSI Initiator Properties window, on the Configuration tab, click the Change button to rename the initiator name.
Set up multipath I/O. In PowerShell, use the PDORemovePeriod command to set the MPIO load balancing policy and the mpclaim command to set the load balancing policy. The iSCSI Initiator Tool configures the remaining options.
Note
Red Hat recommends increasing the PDORemovePeriod option to 120 seconds from PowerShell. You might need to adjust this value based on the application. When all paths are down, and 120 seconds expires, the operating system starts failing I/O requests.
Set-MPIOSetting -NewPDORemovePeriod 120
Set the failover policy
mpclaim.exe -l -m 1
Verify the failover policy
mpclaim -s -m
MSDSM-wide Load Balance Policy: Fail Over Only
Using the iSCSI Initiator tool, from the Targets tab click on the Devices… button.
From the Devices window, select a disk and click the MPIO… button.
The Device Details window displays the paths to each target portal. The Load Balancing Policy Fail Over Only must be selected.
View the multipath configuration from PowerShell:
mpclaim -s -d MPIO_DISK_ID
Replace MPIO_DISK_ID with the appropriate disk identifier.
Note
There is one Active/Optimized path, which is the path to the iSCSI gateway node that owns the LUN, and there is an Active/Unoptimized path for each of the other iSCSI gateway nodes.
Optionally, tune the settings. Consider using the following registry settings:
Windows Disk Timeout
Key
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk
Value
TimeOutValue = 65
Microsoft iSCSI Initiator Driver
Key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance_Number>\Parameters
Values
LinkDownTime = 25
SRBTimeoutDelta = 15
7.5.4. Configuring the iSCSI initiator for VMware ESXi
Prerequisites
- See the iSCSI Gateway (IGW) section in the Customer Portal Knowledgebase article for supported VMware ESXi versions.
- Access to the VMware Host Client.
- Root access to the VMware ESXi host to execute the esxcli command.
Procedure
Disable HardwareAcceleratedMove (XCOPY):
> esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
Enable the iSCSI software. From the Navigator pane, click Storage. Select the Adapters tab. Click on Configure iSCSI.
Verify the initiator name in the Name & alias section.
If the initiator name is different than the initiator name used when creating the client during the initial setup using gwcli, change the initiator name. From the VMware ESXi host, use these esxcli commands.
Get the adapter name for the iSCSI software:
> esxcli iscsi adapter list
> Adapter  Driver     State   UID            Description
> -------  ---------  ------  -------------  ----------------------
> vmhba64  iscsi_vmk  online  iscsi.vmhba64  iSCSI Software Adapter
Set the initiator name:
Syntax
> esxcli iscsi adapter set -A ADAPTOR_NAME -n INITIATOR_NAME
Example
> esxcli iscsi adapter set -A vmhba64 -n iqn.1994-05.com.redhat:rh7-client
Configure CHAP. Expand the CHAP authentication section. Select “Do not use CHAP unless required by target”. Enter the CHAP Name and Secret credentials that were used in the initial setup. Verify the Mutual CHAP authentication section has “Do not use CHAP” selected.
Warning
Due to a bug in the VMware Host Client, the CHAP settings are not used initially. On the Ceph iSCSI gateway node, the kernel logs include the following errors as an indication of this bug:
> kernel: CHAP user or password not set for Initiator ACL
> kernel: Security negotiation failed.
> kernel: iSCSI Login negotiation failed.
To work around this bug, configure the CHAP settings using the esxcli command. The authname argument is the Name in the vSphere Web Client:
> esxcli iscsi adapter auth chap set --direction=uni --authname=myiscsiusername --secret=myiscsipassword --level=discouraged -A vmhba64
. Set the RecoveryTimeout value to 25
.
Set the discovery address. In the Dynamic targets section
, click Add dynamic target
. Under Address
add an IP addresses for one of the Ceph iSCSI gateways. Only one IP address needs to be added. Finally, click the Save configuration button
. From the main interface, on the Devices tab, you will see the RBD image.
NoteLUN is configured automatically, using the ALUA SATP and MRU PSP. Do not use other SATPs and PSPs. You can verify this by the
esxcli
command:Syntax
esxcli storage nmp path list -d eui.DEVICE_ID
esxcli storage nmp path list -d eui.DEVICE_ID
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace DEVICE_ID with the appropriate device identifier.
Verify that multipathing has been set up correctly.
List the devices:
Example
> esxcli storage nmp device list | grep iSCSI
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405057360ba9b4c434daa3c6770c)
Get the multipath information for the Ceph iSCSI disk from the previous step:
Example
From the example output, each path has an iSCSI or SCSI name with the following parts:
Initiator name = iqn.2005-03.com.ceph:esx1
ISID = 00023d000002
Target name = iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
Target port group = 2
Device id = naa.6001405f8d087846e7b4f0e9e3acd44b
The Group State value of active indicates this is the Active-Optimized path to the iSCSI gateway. The gwcli command lists this gateway as the iSCSI gateway owner. The rest of the paths have the Group State value of unoptimized and are the failover paths, used if the active path goes into a dead state.
To match all paths to their respective iSCSI gateways:
Example
Match the path name with the ISID value; the RemoteAddress value is the IP address of the owning iSCSI gateway.
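A sketch of a command that produces such a listing, assuming the standard esxcli iSCSI namespace:
> esxcli iscsi session connection list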
7.6. Managing iSCSI services
The ceph-iscsi package installs the configuration management logic, and the rbd-target-gw and rbd-target-api systemd services.
The rbd-target-api service restores the Linux iSCSI target state at startup, and responds to ceph-iscsi REST API calls from tools like gwcli and the Red Hat Ceph Storage Dashboard. The rbd-target-gw service provides metrics using the Prometheus plug-in.
The rbd-target-api service assumes it is the only user of the Linux kernel’s target layer. Do not use the target service installed with the targetcli package when using rbd-target-api. Ansible automatically disables the targetcli target service during the Ceph iSCSI gateway installation.
Procedure
To start the services:
# systemctl start rbd-target-api
# systemctl start rbd-target-gw
To restart the services:
# systemctl restart rbd-target-api
# systemctl restart rbd-target-gw
To reload the services:
# systemctl reload rbd-target-api
# systemctl reload rbd-target-gw
reload
request forcesrbd-target-api
to reread the configuration and apply it to the current running environment. This is normally not required, because changes are deployed in parallel from Ansible to all iSCSI gateway nodes.To stop the services:
systemctl stop rbd-target-api systemctl stop rbd-target-gw
# systemctl stop rbd-target-api # systemctl stop rbd-target-gw
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
stop
request closes the gateway’s portal interfaces, dropping connections to clients and wipes the current Linux iSCSI target configuration from the kernel. This returns the iSCSI gateway to a clean state. When clients are disconnected, active I/O is rescheduled to the other iSCSI gateways by the client side multipathing layer.
7.7. Adding more iSCSI gateways
As a storage administrator, you can expand the initial two iSCSI gateways to four iSCSI gateways by using the gwcli command-line tool or the Red Hat Ceph Storage Dashboard. Adding more iSCSI gateways gives you more flexibility when using load-balancing and failover options, and provides more redundancy.
7.7.1. Prerequisites
- A running Red Hat Ceph Storage 4 cluster
- Spare nodes or existing OSD nodes
- root permissions
7.7.2. Using Ansible to add more iSCSI gateways
You can use the Ansible automation utility to add more iSCSI gateways. This procedure expands the default installation of two iSCSI gateways to four iSCSI gateways. You can configure the iSCSI gateway on a standalone node, or it can be colocated with existing OSD nodes.
Prerequisites
- Red Hat Enterprise Linux 7.7 or later.
- A running Red Hat Ceph Storage cluster.
- Installation of the iSCSI gateway software.
- Having admin user access on the Ansible administration node.
- Having root user access on the new nodes.
Procedure
On the new iSCSI gateway nodes, enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 7
[root@iscsigw ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
Red Hat Enterprise Linux 8
[root@iscsigw ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-iscsi-config package:
[root@iscsigw ~]# yum install ceph-iscsi-config
Append to the list in the /etc/ansible/hosts file for the gateway group:
Example
[iscsigws]
...
ceph-igw-3
ceph-igw-4
Note
If colocating the iSCSI gateway with an OSD node, add the OSD node to the [iscsigws] section.
Change to the ceph-ansible directory:
[admin@ansible ~]$ cd /usr/share/ceph-ansible
On the Ansible administration node, run the appropriate Ansible playbook:
Bare-metal deployments:
[admin@ansible ceph-ansible]$ ansible-playbook site.yml -i hosts
Container deployments:
[admin@ansible ceph-ansible]$ ansible-playbook site-container.yml -i hosts
Important
Providing IP addresses for the gateway_ip_list option is required. You cannot use a mix of IPv4 and IPv6 addresses.
- From the iSCSI initiators, re-login to use the newly added iSCSI gateways.
Additional Resources
- See the Configure the iSCSI Initiator for more details on using an iSCSI Initiator.
- See the Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage Installation Guide for more details.
7.7.3. Using gwcli to add more iSCSI gateways
You can use the gwcli command-line tool to add more iSCSI gateways. This procedure expands the default of two iSCSI gateways to four iSCSI gateways.
Prerequisites
- Red Hat Enterprise Linux 7.7 or later.
- A running Red Hat Ceph Storage cluster.
- Installation of the iSCSI gateway software.
- Having root user access to the new nodes or OSD nodes.
Procedure
- If the Ceph iSCSI gateway is not colocated on an OSD node, copy the Ceph configuration files, located in the /etc/ceph/ directory, from a running Ceph node in the storage cluster to the new iSCSI gateway node. The Ceph configuration files must exist on the iSCSI gateway node under the /etc/ceph/ directory.
- Install and configure the Ceph command-line interface.
On the new iSCSI gateway nodes, enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 7
[root@iscsigw ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
Red Hat Enterprise Linux 8
[root@iscsigw ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-iscsi and tcmu-runner packages:
Red Hat Enterprise Linux 7
[root@iscsigw ~]# yum install ceph-iscsi tcmu-runner
Red Hat Enterprise Linux 8
[root@iscsigw ~]# dnf install ceph-iscsi tcmu-runner
If needed, install the openssl package:
Red Hat Enterprise Linux 7
[root@iscsigw ~]# yum install openssl
Red Hat Enterprise Linux 8
[root@iscsigw ~]# dnf install openssl
On one of the existing iSCSI gateway nodes, edit the /etc/ceph/iscsi-gateway.cfg file and append the trusted_ip_list option with the new IP addresses for the new iSCSI gateway nodes. For example:
[config]
...
trusted_ip_list = 10.172.19.21,10.172.19.22,10.172.19.23,10.172.19.24
Copy the updated /etc/ceph/iscsi-gateway.cfg file to all the iSCSI gateway nodes.
Important
The iscsi-gateway.cfg file must be identical on all iSCSI gateway nodes.
Optionally, if using SSL, also copy the ~/ssl-keys/iscsi-gateway.crt, ~/ssl-keys/iscsi-gateway.pem, ~/ssl-keys/iscsi-gateway-pub.key, and ~/ssl-keys/iscsi-gateway.key files from one of the existing iSCSI gateway nodes to the /etc/ceph/ directory on the new iSCSI gateway nodes.
Enable and start the API service on the new iSCSI gateway nodes:
[root@iscsigw ~]# systemctl enable rbd-target-api
[root@iscsigw ~]# systemctl start rbd-target-api
Start the iSCSI gateway command-line interface:
[root@iscsigw ~]# gwcli
Create the iSCSI gateways using either IPv4 or IPv6 addresses:
Syntax
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:TARGET_NAME
> goto gateways
> create ISCSI_GW_NAME IP_ADDR_OF_GW
> create ISCSI_GW_NAME IP_ADDR_OF_GW
Example
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> goto gateways
> create ceph-gw-3 10.172.19.23
> create ceph-gw-4 10.172.19.24
Important
You cannot use a mix of IPv4 and IPv6 addresses.
- From the iSCSI initiators, re-login to use the newly added iSCSI gateways.
Additional Resources
- See Configure the iSCSI Initiator for more details on using an iSCSI Initiator.
- For details, see the Installing the Ceph Command Line Interface chapter in the Red Hat Ceph Storage Installation Guide.
7.8. Verifying that the initiator is connected to the iSCSI target
After installing the iSCSI gateway and configuring the iSCSI target and an initiator, verify that the initiator is properly connected to the iSCSI target.
Prerequisites
- Installation of the Ceph iSCSI gateway software.
- Configured the iSCSI target.
- Configured the iSCSI initiator.
Procedure
Start the iSCSI gateway command-line interface:
[root@iscsigw ~]# gwcli
Verify that the initiator is connected to the iSCSI target:
/> goto hosts
/iscsi-target...csi-igw/hosts> ls
o- hosts .............................. [Hosts: 1: Auth: None]
  o- iqn.1994-05.com.redhat:rh7-client  [LOGGED-IN, Auth: None, Disks: 0(0.00Y)]
The initiator status is LOGGED-IN if it is connected.
Verify that LUNs are balanced across iSCSI gateways:
When a disk is created, it is assigned an iSCSI gateway as its Owner based on which gateways have the lowest number of mapped LUNs. If this number is balanced, gateways are assigned on a round-robin basis. Currently, the balancing of LUNs is not dynamic and cannot be selected by the user.
When the initiator is logged into the target and the multipath layer is in an optimized state, the initiator’s operating system multipath utilities report the path to the Owner gateway as being in the ALUA Active-Optimized (AO) state. The multipath utilities report the other paths as being in the ALUA Active-non-Optimized (ANO) state.
If the AO path fails, one of the other iSCSI gateways is used. The ordering of the failover gateways depends on the initiator’s multipath layer, where normally, the order is based on which path was discovered first.
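On a Red Hat Enterprise Linux initiator, you can inspect the session and path states with the standard open-iscsi and device-mapper-multipath tools. This is a minimal sketch; the device names reported on your system will differ:
# List the active iSCSI sessions on the initiator
iscsiadm -m session
# Show the multipath topology; the path group for the Owner gateway
# is reported with the ALUA Active-Optimized (AO) priority
multipath -ll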
7.9. Upgrading the Ceph iSCSI gateway using Ansible
Upgrading the Red Hat Ceph Storage iSCSI gateways can be done by using an Ansible playbook designed for rolling upgrades.
Prerequisites
- A running Ceph iSCSI gateway.
- A running Red Hat Ceph Storage cluster.
- Admin-level access to all nodes in the storage cluster.
You can run the upgrade procedure as an administrative user or as root. If you want to run it as root, make sure that you have ssh set up for use with Ansible.
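For reference, the iSCSI gateway nodes are listed under the [iscsigws] group in the Ansible inventory file. The following is a minimal sketch; the host names are illustrative:
[iscsigws]
ceph-igw-1
ceph-igw-2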
Procedure
Verify that the correct iSCSI gateway nodes are listed in the Ansible inventory file (/etc/ansible/hosts).
Run the rolling upgrade playbook:
[admin@ansible ceph-ansible]$ ansible-playbook rolling_update.yml
Run the appropriate playbook to finish the upgrade:
Bare-metal deployments
[admin@ansible ceph-ansible]$ ansible-playbook site.yml --limit iscsigws -i hosts
Container deployments
[admin@ansible ceph-ansible]$ ansible-playbook site-container.yml --limit iscsigws -i hosts
7.10. Upgrading the Ceph iSCSI gateway using the command-line interface
Upgrading the Red Hat Ceph Storage iSCSI gateways can be done in a rolling fashion, by upgrading one bare-metal iSCSI gateway node at a time.
Do not upgrade the iSCSI gateway while upgrading and restarting Ceph OSDs. Wait until the OSD upgrades are finished and the storage cluster is in an active+clean state.
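You can confirm the cluster state from a Ceph Monitor or client node before you begin; this is a minimal sketch using the standard Ceph status commands:
# Report overall cluster health; proceed only when the cluster is healthy
# and all placement groups are active+clean
ceph health
ceph -s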
Prerequisites
- A running Ceph iSCSI gateway.
- A running Red Hat Ceph Storage cluster.
- Root-level access to the iSCSI gateway node.
Procedure
Update the iSCSI gateway packages:
[root@iscsigw ~]# yum update ceph-iscsi
Stop the iSCSI gateway daemons:
[root@iscsigw ~]# systemctl stop rbd-target-api
[root@iscsigw ~]# systemctl stop rbd-target-gw
Verify that the iSCSI gateway daemons stopped cleanly:
[root@iscsigw ~]# systemctl status rbd-target-gw
- If the rbd-target-gw service successfully stops, then skip to step 4 (updating the tcmu-runner package).
- If the rbd-target-gw service fails to stop, then do the following steps:
If the targetcli package is not installed, install it:
[root@iscsigw ~]# yum install targetcli
Check for existing target objects:
[root@iscsigw ~]# targetcli ls
Example
o- / ............................................................. [...]
  o- backstores .................................................... [...]
  | o- user:rbd ..................................... [Storage Objects: 0]
  o- iscsi .................................................. [Targets: 0]
If the backstores and Storage Objects are empty, then the iSCSI target has been shut down cleanly and you can skip to step 4 (updating the tcmu-runner package).
If you still have target objects, use the following command to force remove all target objects:
[root@iscsigw ~]# targetcli clearconfig confirm=True
Warning: If multiple services are using the iSCSI target, use targetcli in interactive mode to delete those specific objects.
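For example, the following is a minimal sketch of removing only the Ceph iSCSI target in interactive mode, leaving objects that belong to other services untouched. The IQN is taken from the examples in this chapter, and the exact object paths depend on your configuration:
[root@iscsigw ~]# targetcli
/> cd /iscsi
/iscsi> delete iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
/iscsi> exit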
Update the tcmu-runner package:
[root@iscsigw ~]# yum update tcmu-runner
Stop the tcmu-runner service:
[root@iscsigw ~]# systemctl stop tcmu-runner
Restart the iSCSI gateway services in the following order:
[root@iscsigw ~]# systemctl start tcmu-runner
[root@iscsigw ~]# systemctl start rbd-target-gw
[root@iscsigw ~]# systemctl start rbd-target-api
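Optionally, confirm that all three services are active again before moving on to the next gateway node; a minimal sketch using standard systemd commands:
# Verify that each gateway service reports "active" after the restart
systemctl is-active tcmu-runner rbd-target-gw rbd-target-api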
7.11. Monitoring the iSCSI gateways
The Red Hat Ceph Storage cluster now incorporates a generic metric-gathering framework within the OSDs and MGRs to provide built-in monitoring. The metrics are generated within the Red Hat Ceph Storage cluster, so there is no need to access client nodes to scrape metrics.
To monitor the performance of RBD images, Ceph has a built-in MGR Prometheus exporter module that translates individual RADOS object metrics into aggregated RBD image metrics for Input/Output (I/O) operations per second, throughput, and latency. The Ceph iSCSI gateway also provides a Prometheus exporter for Linux-IO (LIO) level performance metrics, supporting monitoring and visualization tools like Grafana. These metrics include information about the defined Target Portal Groups (TPGs) and mapped Logical Unit Numbers (LUNs), the state of each LUN, and the number of Input/Output operations per second (IOPS), read bytes, and write bytes per LUN per client.
By default, the Prometheus exporter is enabled. You can change the default settings by using the following options in the iscsi-gateway.cfg file:
Example
[config]
prometheus_exporter = True
prometheus_port = 9287
prometheus_host = xx.xx.xx.xxx
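With the exporter enabled, a Prometheus server, or a quick manual check, can scrape the gateway on the configured port. The following is a minimal sketch, assuming the exporter serves the conventional /metrics endpoint and using the address and port from the example above:
# Fetch the LIO-level metrics exposed by the iSCSI gateway's Prometheus exporter
curl http://xx.xx.xx.xxx:9287/metrics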
The gwtop tool, previously used in Ceph iSCSI gateway environments to monitor the performance of exported Ceph block device (RBD) images, is deprecated.
7.12. Removing the iSCSI configuration
To remove the iSCSI configuration, use the gwcli utility to remove hosts and disks, and the Ansible purge-iscsi-gateways.yml playbook to remove the iSCSI target configuration.
Using the purge-iscsi-gateways.yml playbook is a destructive action against the iSCSI gateway environment.
An attempt to use purge-iscsi-gateways.yml fails if RBD images have snapshots or clones and are exported through the Ceph iSCSI gateway.
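Before running the purge playbook, you can check whether an exported RBD image has snapshots that would cause the purge to fail. This is a minimal sketch, reusing the rbd pool and disk_1 image names from the examples in this chapter:
# List snapshots of the exported image; the purge fails if any exist
rbd snap ls rbd/disk_1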
Prerequisites
Disconnect all iSCSI initiators:
Red Hat Enterprise Linux initiators:
Syntax
iscsiadm -m node -T TARGET_NAME --logout
Replace TARGET_NAME with the configured iSCSI target name, for example:
Example
# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw --logout
Logging out of session [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260]
Logging out of session [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260]
Logout of [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260] successful.
Logout of [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260] successful.
Windows initiators:
See the Microsoft documentation for more details.
VMware ESXi initiators:
See the VMware documentation for more details.
Procedure
Run the iSCSI gateway command line utility:
[root@iscsigw ~]# gwcli
Remove the hosts:
Syntax
/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:TARGET_NAME/hosts
/> /iscsi-target...TARGET_NAME/hosts> delete CLIENT_NAME
Replace TARGET_NAME with the configured iSCSI target name, and replace CLIENT_NAME with the iSCSI initiator name, for example:
Example
/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
/> /iscsi-target...eph-igw/hosts> delete iqn.1994-05.com.redhat:rh7-client
Remove the disks:
Syntax
/> cd /disks/
/disks> delete POOL_NAME.IMAGE_NAME
Replace POOL_NAME with the name of the pool and IMAGE_NAME with the name of the image.
Example
/> cd /disks/
/disks> delete rbd.disk_1
As the root user, for the containerized deployment, ensure that all the Red Hat Ceph Storage tools and repositories are enabled on the iSCSI gateway nodes:
Red Hat Enterprise Linux 7
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-rpms
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-extras-rpms
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms
Red Hat Enterprise Linux 8
[root@admin ~]# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
[root@admin ~]# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
[root@admin ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
Note: For bare-metal deployments, the Ceph tools are enabled with the client installation.
On each of the iSCSI gateway nodes, install the ceph-common and ceph-iscsi packages:
Red Hat Enterprise Linux 7
[root@admin ~]# yum install -y ceph-common
[root@admin ~]# yum install -y ceph-iscsi
Red Hat Enterprise Linux 8
[root@admin ~]# dnf install -y ceph-common
[root@admin ~]# dnf install -y ceph-iscsi
Run the yum history list command and get the transaction ID of the ceph-iscsi installation.
Switch to the Ansible user:
Example
[root@admin ~]# su ansible
Navigate to the /usr/share/ceph-ansible/ directory:
Example
[ansible@admin ~]$ cd /usr/share/ceph-ansible
As the ansible user, run the iSCSI gateway purge Ansible playbook:
[ansible@admin ceph-ansible]$ ansible-playbook purge-iscsi-gateways.yml
Enter the type of purge when prompted:
lio - In this mode, the Linux iSCSI target configuration is purged on all iSCSI gateways that are defined. Disks that were created are left untouched within the Ceph storage cluster.
all - When all is chosen, the Linux iSCSI target configuration is removed together with all RBD images that were defined within the iSCSI gateway environment; other unrelated RBD images are not removed. Be sure to choose the correct mode, because this operation deletes data.
Check if the active containers are removed:
Red Hat Enterprise Linux 7
[root@admin ~]# docker ps
Red Hat Enterprise Linux 8
[root@admin ~]# podman ps
The Ceph iSCSI container IDs are removed.
Optional: Remove the ceph-iscsi package:
Syntax
yum history undo TRANSACTION_ID
Example
[root@admin ~]# yum history undo 4
Warning: Do not remove the ceph-common packages. This removes the contents of /etc/ceph and renders the daemons on that node unable to start.
7.13. Additional Resources
- For details on managing the iSCSI gateway by using the Red Hat Ceph Storage Dashboard, see the iSCSI functions section in the Dashboard Guide for Red Hat Ceph Storage 4.