Chapter 8. Using an iSCSI Gateway
The iSCSI gateway integrates Red Hat Ceph Storage with the iSCSI standard to provide a Highly Available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. The iSCSI protocol allows clients (initiators) to send SCSI commands to SCSI storage devices (targets) over a TCP/IP network. This allows heterogeneous clients, such as Microsoft Windows, to access the Red Hat Ceph Storage cluster.
Each iSCSI gateway runs the Linux IO target kernel subsystem (LIO) to provide iSCSI protocol support. LIO utilizes a user-space passthrough (TCMU) to interact with Ceph's librbd library to expose RBD images to iSCSI clients. With Ceph's iSCSI gateway you can effectively run a fully integrated block-storage infrastructure with all the features and benefits of a conventional Storage Area Network (SAN).
Figure 8.1. Ceph iSCSI Gateway HA Design
8.1. Requirements for the iSCSI target
The Red Hat Ceph Storage Highly Available (HA) iSCSI gateway solution has requirements for the number of gateway nodes, memory capacity, and timer settings to detect down OSDs.
Required Number of Nodes
Install a minimum of two iSCSI gateway nodes. To increase resiliency and I/O handling, install up to four iSCSI gateway nodes.
Memory Requirements
The memory footprint of the iSCSI gateway grows with the number of mapped RBD images and can become large. Each RBD image mapped on the iSCSI gateway nodes uses roughly 90 MB of memory. Ensure the iSCSI gateway nodes have enough memory to support each mapped RBD image.
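For example (an illustrative calculation, not a fixed rule): mapping 100 RBD images on a gateway node would consume roughly 100 x 90 MB, or about 9 GB of memory, in addition to the memory needed by the operating system and any colocated Ceph daemons.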
Detecting Down OSDs
There are no iSCSI gateway specific options for the Ceph Monitors or OSDs, but it is important to lower the default timers for detecting down OSDs to reduce the possibility of initiator timeouts. Follow the instructions in Lowering timer settings for detecting down OSDs.
Additional Resources
- See the Red Hat Ceph Storage Hardware Selection Guide for more information.
- See Lowering timer settings for detecting down OSDs in the Block Device Guide for more information.
8.2. Lowering timer settings for detecting down OSDs
Sometimes it is necessary to lower the timer settings for detecting down OSDs. For example, when using Red Hat Ceph Storage as an iSCSI gateway, you can reduce the possibility of initiator timeouts by lowering the timer settings for detecting down OSDs.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Configure Ansible to use the new timer settings. Add a ceph_conf_overrides section to the group_vars/all.yml file that looks like this, or edit any existing ceph_conf_overrides section so it includes all the lines starting with osd:

ceph_conf_overrides:
  osd:
    osd_client_watch_timeout: 15
    osd_heartbeat_grace: 20
    osd_heartbeat_interval: 5
When the site.yml Ansible playbook is run against OSD nodes, the above settings will be added to their ceph.conf configuration files.
Use Ansible to update the ceph.conf file and restart the OSD daemons on all the OSD nodes. On the Ansible admin node, run the following command:

[user@admin ceph-ansible]$ ansible-playbook --limit osds site.yml
Verify the timer settings are the same as set in ceph_conf_overrides. On one or more OSDs, use the ceph daemon command to view the settings:

# ceph daemon osd.OSD_ID config get osd_client_watch_timeout
# ceph daemon osd.OSD_ID config get osd_heartbeat_grace
# ceph daemon osd.OSD_ID config get osd_heartbeat_interval
Example:
[root@osd1 ~]# ceph daemon osd.0 config get osd_client_watch_timeout
{
    "osd_client_watch_timeout": "15"
}
[root@osd1 ~]# ceph daemon osd.0 config get osd_heartbeat_grace
{
    "osd_heartbeat_grace": "20"
}
[root@osd1 ~]# ceph daemon osd.0 config get osd_heartbeat_interval
{
    "osd_heartbeat_interval": "5"
}
Optional: If you cannot restart the OSD daemons immediately, do online updates from a Ceph Monitor node, or on all OSD nodes directly. Once you are able to restart the OSD daemons, use Ansible as described above to add the new timer settings to ceph.conf so the settings persist across reboots.
To do an online update of OSD timer settings from a Monitor node:

# ceph tell osd.OSD_ID injectargs '--osd_client_watch_timeout 15'
# ceph tell osd.OSD_ID injectargs '--osd_heartbeat_grace 20'
# ceph tell osd.OSD_ID injectargs '--osd_heartbeat_interval 5'
Example:
[root@mon ~]# ceph tell osd.0 injectargs '--osd_client_watch_timeout 15'
[root@mon ~]# ceph tell osd.0 injectargs '--osd_heartbeat_grace 20'
[root@mon ~]# ceph tell osd.0 injectargs '--osd_heartbeat_interval 5'
To do an online update of OSD timer settings from an OSD node:
# ceph daemon osd.OSD_ID config set osd_client_watch_timeout 15
# ceph daemon osd.OSD_ID config set osd_heartbeat_grace 20
# ceph daemon osd.OSD_ID config set osd_heartbeat_interval 5
Example:
[root@osd1 ~]# ceph daemon osd.0 config set osd_client_watch_timeout 15
[root@osd1 ~]# ceph daemon osd.0 config set osd_heartbeat_grace 20
[root@osd1 ~]# ceph daemon osd.0 config set osd_heartbeat_interval 5
Additional Resources
- For more information about using Red Hat Ceph Storage as an iSCSI gateway, see Introduction to the Ceph iSCSI gateway in the Block Device Guide.
8.3. Configuring the iSCSI Target
Traditionally, block-level access to a Ceph storage cluster has been limited to QEMU and librbd, which is a key enabler for adoption within OpenStack environments. Block-level access to the Ceph storage cluster can now take advantage of the iSCSI standard to provide data storage.
Prerequisites:
- Red Hat Enterprise Linux 7.5 or later.
- A running Red Hat Ceph Storage cluster, version 3.1 or later.
- iSCSI gateway nodes, which can either be colocated with OSD nodes or on dedicated nodes.
- Valid Red Hat Enterprise Linux 7 and Red Hat Ceph Storage 3.3 entitlements/subscriptions on the iSCSI gateway nodes.
- Separate network subnets for iSCSI front-end traffic and Ceph back-end traffic.
Deploying the Ceph iSCSI gateway can be done using Ansible or the command-line interface.
8.3.1. Configuring the iSCSI Target using Ansible
Requirements:
- Red Hat Enterprise Linux 7.5 or later.
- A running Red Hat Ceph Storage cluster, version 3 or later.
Installing:
On the iSCSI gateway nodes, enable the Red Hat Ceph Storage 3 Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux.
Install the ceph-iscsi-config package:

# yum install ceph-iscsi-config
On the Ansible administration node, do the following steps, as the root user:
- Enable the Red Hat Ceph Storage 3 Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux.
Install the ceph-ansible package:

# yum install ceph-ansible
Add an entry in the /etc/ansible/hosts file for the gateway group:

[iscsigws]
ceph-igw-1
ceph-igw-2
Note: If colocating the iSCSI gateway with an OSD node, add the OSD node to the [iscsigws] section.
Configuring:
The ceph-ansible package places a file in the /usr/share/ceph-ansible/group_vars/ directory called iscsigws.yml.sample.
Create a copy of the iscsigws.yml.sample file and name it iscsigws.yml.
Important: The new file name (iscsigws.yml) and the new section heading ([iscsigws]) are only applicable to Red Hat Ceph Storage 3.1 or higher. Upgrading from previous versions of Red Hat Ceph Storage to 3.1 will still use the old file name (iscsi-gws.yml) and the old section heading ([iscsi-gws]).
Open the iscsigws.yml file for editing.
Uncomment the gateway_ip_list option and update the values accordingly, using IPv4 or IPv6 addresses.
For example, to add two gateways with the IPv4 addresses 10.172.19.21 and 10.172.19.22, configure gateway_ip_list like this:

gateway_ip_list: 10.172.19.21,10.172.19.22
Important: Providing IP addresses for the gateway_ip_list option is required. You cannot use a mix of IPv4 and IPv6 addresses.
Uncomment the rbd_devices variable and update the values accordingly, for example:

rbd_devices:
  - { pool: 'rbd', image: 'ansible1', size: '30G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible2', size: '15G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible3', size: '30G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible4', size: '50G', host: 'ceph-1', state: 'present' }
Uncomment the client_connections variable and update the values accordingly, for example:
Example with enabling CHAP authentication

client_connections:
  - { client: 'iqn.1994-05.com.redhat:rh7-iscsi-client', image_list: 'rbd.ansible1,rbd.ansible2', chap: 'rh7-iscsi-client/redhat', status: 'present' }
  - { client: 'iqn.1991-05.com.microsoft:w2k12r2', image_list: 'rbd.ansible4', chap: 'w2k12r2/microsoft_w2k12', status: 'absent' }

Example with disabling CHAP authentication

client_connections:
  - { client: 'iqn.1991-05.com.microsoft:w2k12r2', image_list: 'rbd.ansible4', chap: '', status: 'present' }
  - { client: 'iqn.1991-05.com.microsoft:w2k16r2', image_list: 'rbd.ansible2', chap: '', status: 'present' }
Important: Disabling CHAP is only supported on Red Hat Ceph Storage 3.1 or higher. Red Hat does not support mixing clients, some with CHAP enabled and some with CHAP disabled. All clients marked as present must either all have CHAP enabled or all have CHAP disabled.
Review the following Ansible variables and descriptions, and update them accordingly, if needed.
Table 8.1. iSCSI Gateway General Variables

| Variable | Meaning/Purpose |
|---|---|
| seed_monitor | Each gateway needs access to the Ceph cluster for rados and rbd calls. This means the iSCSI gateway must have an appropriate /etc/ceph/ directory defined. The seed_monitor host is used to populate the iSCSI gateway's /etc/ceph/ directory. |
| cluster_name | Define a custom storage cluster name. |
| gateway_keyring | Define a custom keyring name. |
| deploy_settings | If set to true, then deploy the settings when the playbook is run. |
| perform_system_checks | This is a boolean value that checks for multipath and LVM configuration settings on each gateway. It must be set to true for at least the first run to ensure multipathd and LVM are configured properly. |
| gateway_iqn | This is the iSCSI IQN that all the gateways will expose to clients. This means each client will see the gateway group as a single subsystem. |
| gateway_ip_list | The comma-separated IP list defines the IPv4 or IPv6 addresses that will be used on the front-end network for iSCSI traffic. This IP will be bound to the active target portal group on each node, and is the access point for iSCSI traffic. Each IP should correspond to an IP available on the hosts defined in the iscsigws.yml host group in /etc/ansible/hosts. |
| rbd_devices | This section defines the RBD images that will be controlled and managed within the iSCSI gateway configuration. Parameters like pool and image are self explanatory. Here are the other parameters: size = This defines the size of the RBD. You may increase the size later by simply changing this value, but shrinking the size of an RBD is not supported and is ignored. host = This is the iSCSI gateway host name that will be responsible for the RBD allocation/resize. Every defined rbd_device entry must have a host assigned. state = This is typical Ansible syntax for whether the resource should be defined or removed. A request with a state of absent will first be checked to ensure the RBD is not mapped to any client. If the RBD is unallocated, it will be removed from the iSCSI gateway and deleted from the configuration. |
| client_connections | This section defines the iSCSI client connection details together with the LUN (RBD image) masking. Currently only CHAP is supported as an authentication mechanism. Each connection defines an image_list, which is a comma-separated list of the form pool.rbd_image[,pool.rbd_image,…]. RBD images can be added and removed from this list to change the client masking. Note that there are no checks done to limit RBD sharing across client connections. |

Table 8.2. iSCSI Gateway RBD-TARGET-API Variables

| Variable | Meaning/Purpose |
|---|---|
| api_user | The user name for the API. The default is admin. |
| api_password | The password for using the API. The default is admin. |
| api_port | The TCP port number for using the API. The default is 5000. |
| api_secure | Value can be true or false. The default is false. |
| loop_delay | Controls the sleeping interval in seconds for polling the iSCSI management object. The default value is 1. |
| trusted_ip_list | A list of IPv4 or IPv6 addresses who have access to the API. By default, only the iSCSI gateway nodes have access. |
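As an illustration only, the API-related variables might look like this in the group_vars/iscsigws.yml file. The values are arbitrary examples, and the iscsigws.yml.sample file remains the authoritative reference for the exact format:

api_user: admin
api_password: admin
api_port: 5000
api_secure: true
loop_delay: 1
trusted_ip_list: 192.168.0.10,192.168.0.11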
Important: For rbd_devices, there cannot be any periods (.) in the pool name or in the image name.
Warning: Gateway configuration changes are only supported from one gateway at a time. Attempting to run changes concurrently through multiple gateways may lead to configuration instability and inconsistency.
Warning: Ansible installs the ceph-iscsi-cli package, and creates and then updates the /etc/ceph/iscsi-gateway.cfg file based on the settings in the group_vars/iscsigws.yml file when the ansible-playbook command is run. If you have previously installed the ceph-iscsi-cli package using the command-line installation procedure, then the existing settings from the iscsi-gateway.cfg file must be copied to the group_vars/iscsigws.yml file.
See Appendix A, Sample iscsigws.yml File to view the full iscsigws.yml.sample file.
Deploying:
On the Ansible administration node, do the following steps, as the root user.
Execute the Ansible playbook:
# cd /usr/share/ceph-ansible
# ansible-playbook site.yml
Note: The Ansible playbook will handle the RPM dependencies, RBD creation, and Linux iSCSI target configuration.
Warning: On stand-alone iSCSI gateway nodes, verify that the correct Red Hat Ceph Storage 3.3 software repositories are enabled. If they are unavailable, then the wrong packages will be installed.
Verify the configuration by running the following command:
# gwcli ls
Important: Do not use the targetcli utility to change the configuration; doing so results in ALUA misconfiguration and path failover problems. There is also the potential to corrupt data, to have mismatched configuration across iSCSI gateways, and to have mismatched WWN information, which will lead to client pathing problems.
Service Management:
The ceph-iscsi-config package installs the configuration management logic and a Systemd service called rbd-target-gw. When the Systemd service is enabled, rbd-target-gw will start at boot time and will restore the Linux iSCSI target state. Deploying the iSCSI gateways with the Ansible playbook disables the target service.
# systemctl start rbd-target-gw
Below are the outcomes of interacting with the rbd-target-gw Systemd service.
# systemctl <start|stop|restart|reload> rbd-target-gw
reload - A reload request will force rbd-target-gw to reread the configuration and apply it to the current running environment. This is normally not required, since changes are deployed in parallel from Ansible to all iSCSI gateway nodes.
stop - A stop request will close the gateway's portal interfaces, dropping connections to clients, and wipe the current Linux iSCSI target configuration from the kernel. This returns the iSCSI gateway to a clean state. When clients are disconnected, active I/O is rescheduled to the other iSCSI gateways by the client-side multipathing layer.
Administration:
Within the /usr/share/ceph-ansible/group_vars/iscsigws.yml file there are a number of operational workflows that the Ansible playbook supports.
Red Hat does not support managing RBD images exported by the Ceph iSCSI gateway with tools other than the Ceph iSCSI gateway tools, such as gwcli and ceph-ansible. In particular, using the rbd command to rename or remove RBD images exported by the Ceph iSCSI gateway can result in an unstable storage cluster.
Before removing RBD images from the iSCSI gateway configuration, follow the standard procedures for removing a storage device from the operating system.
For clients and systems using Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux 7 Storage Administration Guide for more details on removing devices.
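As a rough sketch of what that involves on a Red Hat Enterprise Linux 7 initiator (the mount point, multipath map, and sd device names below are hypothetical; follow the Storage Administration Guide for the authoritative steps):

# umount /mnt/rbd_disk                     # stop all use of the device first
# multipath -ll                            # note the multipath map name and its sdX path devices
# multipath -f mpatha                      # flush the multipath map
# echo 1 > /sys/block/sdc/device/delete    # delete each underlying SCSI path device
# echo 1 > /sys/block/sde/device/delete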
| I want to… | Update the iscsigws.yml file by… |
|---|---|
| Add more RBD images | Adding another entry to the rbd_devices section. |
| Resize an existing RBD image | Updating the size parameter within the rbd_devices section. |
| Add a client | Adding an entry to the client_connections section. |
| Add another RBD to a client | Adding the relevant RBD pool.image name to the client's image_list. |
| Remove an RBD from a client | Removing the RBD pool.image name from the client's image_list. |
| Remove an RBD from the system | Changing the RBD entry state variable to absent. |
| Change the client's CHAP credentials | Updating the relevant CHAP details in client_connections. |
| Remove a client | Updating the relevant client_connections entry, setting its status to absent. |
Once a change has been made, rerun the Ansible playbook to apply the change across the iSCSI gateway nodes.
# ansible-playbook site.yml
Removing the Configuration:
Disconnect all iSCSI initiators before purging the iSCSI gateway configuration. Follow the procedures below for the appropriate operating system:
Red Hat Enterprise Linux initiators:
Syntax
iscsiadm -m node -T $TARGET_NAME --logout
Replace $TARGET_NAME with the configured iSCSI target name.
Example

# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw --logout
Logging out of session [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260]
Logging out of session [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260]
Logout of [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260] successful.
Logout of [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260] successful.
Windows initiators:
See the Microsoft documentation for more details.
VMware ESXi initiators:
See the VMware documentation for more details.
On the Ansible administration node, as the Ansible user, change to the /usr/share/ceph-ansible/ directory:

[user@admin ~]$ cd /usr/share/ceph-ansible/
Run the Ansible playbook to remove the iSCSI gateway configuration:
[user@admin ceph-ansible]$ ansible-playbook purge-cluster.yml --limit iscsigws
On a Ceph Monitor or Client node, as the root user, remove the iSCSI gateway configuration object (gateway.conf):

[root@mon ~]# rados rm -p pool gateway.conf
Optional: If the exported Ceph RADOS Block Device (RBD) is no longer needed, then remove the RBD image. Run the following command on a Ceph Monitor or Client node, as the root user:
Syntax
rbd rm $IMAGE_NAME
Replace $IMAGE_NAME with the name of the RBD image.
Example
[root@mon ~]# rbd rm rbd01
8.3.2. Configuring the iSCSI Target using the Command Line Interface
The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client node. The Ceph iSCSI gateway can be a standalone node or be colocated on a Ceph Object Storage Device (OSD) node. Completing the following steps will install and configure the Ceph iSCSI gateway for basic operation.
Requirements:
- Red Hat Enterprise Linux 7.5 or later
- A running Red Hat Ceph Storage cluster, version 3.3 or later
The following packages must be installed:
- targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7cp or newer
- python-rtslib-2.1.fb64-0.1.20170815.gitec364f3.el7cp or newer
- tcmu-runner-1.4.0-0.2.el7cp or newer
- openssl-1.0.2k-8.el7 or newer
Important: If previous versions of these packages exist, then they must be removed first before installing the newer versions. These newer versions must be installed from a Red Hat Ceph Storage repository.
Do the following steps on all Ceph Monitor nodes in the storage cluster, before using the gwcli utility:
Restart the ceph-mon service, as the root user:

# systemctl restart ceph-mon@$MONITOR_HOST_NAME
For example:
# systemctl restart ceph-mon@monitor1
Do the following steps on the Ceph iSCSI gateway node, as the root user, before proceeding to the Installing section:
- If the Ceph iSCSI gateway is not colocated on an OSD node, then copy the Ceph configuration files, located in /etc/ceph/, from a running Ceph node in the storage cluster to the iSCSI gateway node. The Ceph configuration files must exist on the iSCSI gateway node under /etc/ceph/.
- Install and configure the Ceph command-line interface. For details, see the Installing the Ceph Command Line Interface chapter in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux.
Enable the Ceph tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
- If needed, open TCP ports 3260 and 5000 on the firewall (see the example after this list).
- Create a new RADOS Block Device (RBD) or use an existing one. See Section 2.1, “Prerequisites” for more details.
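For example, if the gateway node uses firewalld, the ports might be opened like this (an illustrative sketch; adapt it to the firewall you actually use):

# firewall-cmd --permanent --add-port=3260/tcp
# firewall-cmd --permanent --add-port=5000/tcp
# firewall-cmd --reload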
If you already installed the Ceph iSCSI gateway using Ansible, then do not use this procedure.
Ansible installs the ceph-iscsi-cli package, and creates and then updates the /etc/ceph/iscsi-gateway.cfg file based on the settings in the group_vars/iscsigws.yml file when the ansible-playbook command is run. See the Requirements section above for more information.
Installing:
Do the following steps on all iSCSI gateway nodes, as the root user, unless otherwise noted.
Install the ceph-iscsi-cli package:

# yum install ceph-iscsi-cli
Install the tcmu-runner package:

# yum install tcmu-runner
If needed, install the openssl package:

# yum install openssl
On the primary iSCSI gateway node, create a directory to hold the SSL keys:
# mkdir ~/ssl-keys
# cd ~/ssl-keys
On the primary iSCSI gateway node, create the certificate and key files:
# openssl req -newkey rsa:2048 -nodes -keyout iscsi-gateway.key -x509 -days 365 -out iscsi-gateway.crt
Note: You will be prompted to enter the environmental information.
On the primary iSCSI gateway node, create a PEM file:
# cat iscsi-gateway.crt iscsi-gateway.key > iscsi-gateway.pem
On the primary iSCSI gateway node, create a public key:
# openssl x509 -inform pem -in iscsi-gateway.pem -pubkey -noout > iscsi-gateway-pub.key
- From the primary iSCSI gateway node, copy the iscsi-gateway.crt, iscsi-gateway.pem, iscsi-gateway-pub.key, and iscsi-gateway.key files to the /etc/ceph/ directory on the other iSCSI gateway nodes.
Create a file named iscsi-gateway.cfg in the /etc/ceph/ directory:

# touch /etc/ceph/iscsi-gateway.cfg
Edit the iscsi-gateway.cfg file and add the following lines:
Syntax

[config]
cluster_name = <ceph_cluster_name>
gateway_keyring = <ceph_client_keyring>
api_secure = true
trusted_ip_list = <ip_addr>,<ip_addr>
Example
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = true
trusted_ip_list = 192.168.0.10,192.168.0.11
See Table 8.1 and Table 8.2 in Section 8.3.1, “Configuring the iSCSI Target using Ansible” for more details on these options.
Important: The iscsi-gateway.cfg file must be identical on all iSCSI gateway nodes.
- Copy the iscsi-gateway.cfg file to all iSCSI gateway nodes.
Enable and start the API service:
# systemctl enable rbd-target-api
# systemctl start rbd-target-api
Configuring:
Start the iSCSI gateway command-line interface:
# gwcli
Creating the iSCSI gateways using either IPv4 or IPv6 addresses:
Syntax
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:<target_name>
> goto gateways
> create <iscsi_gw_name> <IP_addr_of_gw>
> create <iscsi_gw_name> <IP_addr_of_gw>
Example
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> goto gateways
> create ceph-gw-1 10.172.19.21
> create ceph-gw-2 10.172.19.22
Important: You cannot use a mix of IPv4 and IPv6 addresses.
Adding a RADOS Block Device (RBD):
Syntax
> cd /disks
>/disks/ create <pool_name> image=<image_name> size=<image_size>m|g|t max_data_area_mb=<buffer_size>
Example
> cd /disks
>/disks/ create rbd image=disk_1 size=50g max_data_area_mb=32
Important: There cannot be any periods (.) in the pool name or in the image name.
Warning: The max_data_area_mb option controls the amount of memory in megabytes that each image can use to pass SCSI command data between the iSCSI target and the Ceph cluster. If this value is too small, it can result in excessive queue-full retries, which will affect performance. If the value is too large, it can result in one disk using too much of the system's memory, which can cause allocation failures for other subsystems. The default value is 8.
This value can be changed using the gwcli reconfigure subcommand. The image must not be in use by an iSCSI initiator for this command to take effect. Do not adjust other options using the gwcli reconfigure subcommand unless specified in this document or unless Red Hat Support has instructed you to do so.
Syntax
>/disks/ reconfigure max_data_area_mb <new_buffer_size>
Example
>/disks/ reconfigure max_data_area_mb 64
Creating a client:
Syntax
> goto hosts
> create iqn.1994-05.com.redhat:<client_name>
> auth chap=<user_name>/<password>
Example
> goto hosts
> create iqn.1994-05.com.redhat:rh7-client
> auth chap=iscsiuser1/temp12345678
Important: Disabling CHAP is only supported on Red Hat Ceph Storage 3.1 or higher. Red Hat does not support mixing clients, some with CHAP enabled and some with CHAP disabled. All clients must either have CHAP enabled or have CHAP disabled. The default behavior is to authenticate an initiator only by its initiator name.
If initiators are failing to log in to the target, then CHAP authentication might be misconfigured for some initiators.
Example
o- hosts ................................ [Hosts: 2: Auth: MISCONFIG]
Run the following command at the hosts level to reset all the CHAP authentication:

/> goto hosts
/iscsi-target...csi-igw/hosts> auth nochap
ok
ok
/iscsi-target...csi-igw/hosts> ls
o- hosts ................................ [Hosts: 2: Auth: None]
  o- iqn.2005-03.com.ceph:esx ........... [Auth: None, Disks: 4(310G)]
  o- iqn.1994-05.com.redhat:rh7-client .. [Auth: None, Disks: 0(0.00Y)]
Adding disks to a client:
Syntax
>/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:<client_name>
> disk add <pool_name>.<image_name>
Example
>/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client
> disk add rbd.disk_1
To confirm that the API is using SSL correctly, look in the /var/log/rbd-target-api.log file for https, for example:

Aug 01 17:27:42 test-node.example.com python[1879]: * Running on https://0.0.0.0:5000/
- The next step is to configure an iSCSI initiator. See Section 8.4, “Configuring the iSCSI Initiator” for more information on configuring an iSCSI initiator.
Verifying
To verify if the iSCSI gateways are working:
Example
/> goto gateways
/iscsi-target...-igw/gateways> ls
o- gateways ............................ [Up: 2/2, Portals: 2]
  o- ceph-gw-1 ........................ [ 10.172.19.21 (UP)]
  o- ceph-gw-2 ........................ [ 10.172.19.22 (UP)]
Note: If the status is UNKNOWN, then check for network issues and any misconfigurations. If using a firewall, then check if the appropriate TCP port is open. Check if the iSCSI gateway is listed in the trusted_ip_list option. Verify that the rbd-target-api service is running on the iSCSI gateway node.
To verify that the initiator is connecting to the iSCSI target, list the hosts; a connected initiator is shown as LOGGED-IN:
Example
/> goto hosts
/iscsi-target...csi-igw/hosts> ls
o- hosts .............................. [Hosts: 1: Auth: None]
  o- iqn.1994-05.com.redhat:rh7-client [LOGGED-IN, Auth: None, Disks: 0(0.00Y)]
To verify if LUNs are balanced across iSCSI gateways:
/> goto hosts
/iscsi-target...csi-igw/hosts> ls
o- hosts ................................. [Hosts: 2: Auth: None]
  o- iqn.2005-03.com.ceph:esx ............ [Auth: None, Disks: 4(310G)]
  | o- lun 0 ............................. [rbd.disk_1(100G), Owner: ceph-gw-1]
  | o- lun 1 ............................. [rbd.disk_2(10G), Owner: ceph-gw-2]
When a disk is created, it is assigned an iSCSI gateway as its Owner. The initiator's multipath layer reports the path to the owning iSCSI gateway as being in the ALUA Active-Optimized (AO) state. The other paths are reported as being in the ALUA Active-non-Optimized (ANO) state.
If the AO path fails, one of the other iSCSI gateways will be used. The ordering of the failover gateways depends on the initiator's multipath layer, where normally the order is based on which path was discovered first.
Currently, the balancing of LUNs is not dynamic. The owning iSCSI gateway is selected at disk creation time and is not changeable.
8.3.3. Optimizing the performance of the iSCSI Target
There are many settings that control how the iSCSI Target transfers data over the network. These settings can be used to optimize the performance of the iSCSI gateway.
Only change these settings if instructed to by Red Hat Support or as specified in this document.
The gwcli reconfigure subcommand
The gwcli reconfigure subcommand controls the settings that are used to optimize the performance of the iSCSI gateway.
Settings that affect the performance of the iSCSI target
- max_data_area_mb
- cmdsn_depth
- immediate_data
- initial_r2t
- max_outstanding_r2t
- first_burst_length
- max_burst_length
- max_recv_data_segment_length
- max_xmit_data_segment_length
Additional Resources
- Information about max_data_area_mb, including an example showing how to adjust it using gwcli reconfigure, is in the section Configuring the iSCSI Target using the Command Line Interface in the Block Device Guide, and in Configuring the Ceph iSCSI gateway in a container in the Container Guide.
8.3.4. Adding more iSCSI gateways
As a storage administrator, you can expand the initial two iSCSI gateways to four iSCSI gateways by using either Ansible or the gwcli
command-line tool. Adding more iSCSI gateways provides you more flexibility when using load-balancing and failover options, along with providing more redundancy.
8.3.4.1. Prerequisites
- A running Red Hat Ceph Storage 3 cluster.
- Installation of the iSCSI gateway software.
- Spare nodes or existing OSD nodes.
8.3.4.2. Using Ansible to add more iSCSI gateways
You can use the Ansible automation utility to add more iSCSI gateways. This procedure expands the default installation of two iSCSI gateways to four iSCSI gateways. You can configure the iSCSI gateway on a standalone node or it can be collocated with existing OSD nodes.
Prerequisites
- A running Red Hat Ceph Storage 3 cluster.
- Installation of the iSCSI gateway software.
- Having root user access on the Ansible administration node.
- Having root user access on the new nodes.
Procedure
On the new iSCSI gateway nodes, enable the Red Hat Ceph Storage 3 Tools repository.
[root@iscsigw ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
See the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux for more details.
Install the ceph-iscsi-config package:

[root@iscsigw ~]# yum install ceph-iscsi-config
Append the new nodes to the gateway group list in the /etc/ansible/hosts file:
Example

[iscsigws]
...
ceph-igw-3
ceph-igw-4
Note: If colocating the iSCSI gateway with an OSD node, add the OSD node to the [iscsigws] section.
Open the /usr/share/ceph-ansible/group_vars/iscsigws.yml file for editing and append the IPv4 addresses of the two additional iSCSI gateways to the gateway_ip_list option:
Example
gateway_ip_list: 10.172.19.21,10.172.19.22,10.172.19.23,10.172.19.24
Important: Providing IP addresses for the gateway_ip_list option is required. You cannot use a mix of IPv4 and IPv6 addresses.
On the Ansible administration node, as the root user, execute the Ansible playbook:

# cd /usr/share/ceph-ansible
# ansible-playbook site.yml
- From the iSCSI initiators, re-login to use the newly added iSCSI gateways.
Additional Resources
- See Configure the iSCSI Initiator for more details on using an iSCSI Initiator.
8.3.4.3. Using gwcli to add more iSCSI gateways
You can use the gwcli command-line tool to add more iSCSI gateways. This procedure expands the default of two iSCSI gateways to four iSCSI gateways.
Prerequisites
- A running Red Hat Ceph Storage 3 cluster.
- Installation of the iSCSI gateway software.
- Having root user access to the new nodes or OSD nodes.
Procedure
- If the Ceph iSCSI gateway is not collocated on an OSD node, then copy the Ceph configuration files, located in the /etc/ceph/ directory, from a running Ceph node in the storage cluster to the new iSCSI gateway node. The Ceph configuration files must exist on the iSCSI gateway node under the /etc/ceph/ directory.
- Install and configure the Ceph command-line interface. For details, see the Installing the Ceph Command Line Interface chapter in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux.
On the new iSCSI gateway nodes, enable the Red Hat Ceph Storage 3 Tools repository.
[root@iscsigw ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
See the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux for more details.
Install the ceph-iscsi-cli and tcmu-runner packages:

[root@iscsigw ~]# yum install ceph-iscsi-cli tcmu-runner
If needed, install the openssl package:

[root@iscsigw ~]# yum install openssl
On one of the existing iSCSI gateway nodes, edit the /etc/ceph/iscsi-gateway.cfg file and append the IP addresses of the new iSCSI gateway nodes to the trusted_ip_list option.
Example

[config]
...
trusted_ip_list = 10.172.19.21,10.172.19.22,10.172.19.23,10.172.19.24
See the Configuring the iSCSI Target using Ansible tables for more details on these options.
Copy the updated /etc/ceph/iscsi-gateway.cfg file to all the iSCSI gateway nodes.
Important: The iscsi-gateway.cfg file must be identical on all iSCSI gateway nodes.
- Optionally, if using SSL, also copy the ~/ssl-keys/iscsi-gateway.crt, ~/ssl-keys/iscsi-gateway.pem, ~/ssl-keys/iscsi-gateway-pub.key, and ~/ssl-keys/iscsi-gateway.key files from one of the existing iSCSI gateway nodes to the /etc/ceph/ directory on the new iSCSI gateway nodes.
Enable and start the API service on the new iSCSI gateway nodes:

[root@iscsigw ~]# systemctl enable rbd-target-api
[root@iscsigw ~]# systemctl start rbd-target-api
Start the iSCSI gateway command-line interface:
[root@iscsigw ~]# gwcli
Creating the iSCSI gateways using either IPv4 or IPv6 addresses:
Syntax
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:TARGET_NAME
> goto gateways
> create ISCSI_GW_NAME IP_ADDR_OF_GW
> create ISCSI_GW_NAME IP_ADDR_OF_GW
Example
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> goto gateways
> create ceph-gw-3 10.172.19.23
> create ceph-gw-4 10.172.19.24
Important: You cannot use a mix of IPv4 and IPv6 addresses.
- From the iSCSI initiators, re-login to use the newly added iSCSI gateways.
Additional Resources
- See Configure the iSCSI Initiator for more details on using an iSCSI Initiator.
8.4. Configuring the iSCSI Initiator
Red Hat Ceph Storage supports iSCSI initiators on the following platforms for connecting to the Ceph iSCSI gateway:
8.4.1. The iSCSI Initiator for Red Hat Enterprise Linux
Prerequisite:
- Package iscsi-initiator-utils-6.2.0.873-35 or newer must be installed
- Package device-mapper-multipath-0.4.9-99 or newer must be installed
Installing the Software:
Install the iSCSI initiator and multipath tools:
# yum install iscsi-initiator-utils
# yum install device-mapper-multipath
Setting the Initiator Name
Edit the /etc/iscsi/initiatorname.iscsi file.
Note: The initiator name must match the initiator name used in the Ansible client_connections option or what was used during the initial setup using gwcli.
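For example, the file contains a single InitiatorName line. The IQN shown here is illustrative and must match the client name that was created on the gateway:

InitiatorName=iqn.1994-05.com.redhat:rh7-client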
Configuring Multipath IO:
Create the default /etc/multipath.conf file and enable the multipathd service:

# mpathconf --enable --with_multipathd y

Add the following to the /etc/multipath.conf file:

devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}
Reload the multipathd service:

# systemctl reload multipathd
CHAP Setup and iSCSI Discovery/Login:
Provide a CHAP username and password by updating the /etc/iscsi/iscsid.conf file accordingly.
Example

node.session.auth.authmethod = CHAP
node.session.auth.username = user
node.session.auth.password = password
Note: If you update these options, then you must rerun the iscsiadm discovery command.
Discover the target portals:

# iscsiadm -m discovery -t st -p 192.168.56.101
192.168.56.101:3260,1 iqn.2003-01.org.linux-iscsi.rheln1
192.168.56.102:3260,2 iqn.2003-01.org.linux-iscsi.rheln1
Log in to the target:
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -l
Viewing the Multipath IO Configuration:
The multipath daemon (multipathd) sets up devices automatically based on the multipath.conf settings. Running the multipath command shows devices set up in a failover configuration with a priority group for each path, for example:
# multipath -ll
mpathbt (360014059ca317516a69465c883a29603) dm-1 LIO-ORG ,IBLOCK
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 28:0:0:1 sde 8:64 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 29:0:0:1 sdc 8:32 active ready running
The prio value in the multipath -ll output indicates the ALUA state, where prio=50 indicates the path to the owning iSCSI gateway in the ALUA Active-Optimized state, and prio=10 indicates an Active-non-Optimized path. The status field indicates which path is being used, where active indicates the currently used path, and enabled indicates the failover path that is used if the active path fails.
To match a device name, for example sde in the multipath -ll output, to its iSCSI gateway, run the following command:
# iscsiadm -m session -P 3
The Persistent Portal value is the IP address assigned to the iSCSI gateway listed in gwcli, or the IP address of one of the iSCSI gateways listed in gateway_ip_list, if Ansible was used.
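A convenient way to narrow that output down to the lines that matter (an illustrative one-liner, not part of the original procedure) is:

# iscsiadm -m session -P 3 | grep -E 'Target:|Persistent Portal|Attached scsi disk'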
8.4.2. The iSCSI Initiator for Red Hat Virtualization
Prerequisite:
- Red Hat Virtualization 4.1
- Configured MPIO devices on all Red Hat Virtualization nodes
- Package iscsi-initiator-utils-6.2.0.873-35 or newer must be installed
- Package device-mapper-multipath-0.4.9-99 or newer must be installed
Configuring Multipath IO:
Update the /etc/multipath/conf.d/DEVICE_NAME.conf file as follows:

devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}
Reload the multipathd service:

# systemctl reload multipathd
Adding iSCSI Storage
- Click the Storage resource tab to list the existing storage domains.
- Click the New Domain button to open the New Domain window.
- Enter the Name of the new storage domain.
- Use the Data Center drop-down menu to select a data center.
- Use the drop-down menus to select the Domain Function and the Storage Type. The storage domain types that are not compatible with the chosen domain function are not available.
- Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed, then you can use target discovery to find it; otherwise, proceed to the next step.
Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.
Note: LUNs external to the environment are also displayed.
You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
- Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
-
Enter the port to connect to the host on when browsing for targets in the Port field. The default is
3260
. - If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
- Click the Discover button.
Select the target to use from the discovery results and click the Login button. Alternatively, click the Login All to log in to all of the discovered targets.
Important: If access through more than one path is required, ensure you discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
- Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
- Select the check box for each LUN that you are using to create the storage domain.
Optionally, you can configure the advanced parameters.
- Click Advanced Parameters.
- Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
- Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
- Click OK to create the storage domain and close the window.
8.4.3. The iSCSI Initiator for Microsoft Windows
Prerequisite:
- Microsoft Windows Server 2016
iSCSI Initiator, Discovery and Setup:
- Install the iSCSI initiator driver and MPIO tools.
- Launch the MPIO program, click on the Discover Multi-Paths tab, check the Add support for iSCSI devices box, and click Add. This change will require a reboot.
On the iSCSI Initiator Properties window, on the Discovery tab, add a target portal. Enter the IP address or DNS name and port of the Ceph iSCSI gateway.
On the Targets tab, select the target and click Connect.
On the Connect To Target window, select the Enable multi-path option, and click the Advanced button.
Under the Connect using section, select a Target portal IP. Select Enable CHAP login and enter the Name and Target secret values from the Ceph iSCSI Ansible client credentials section, and click OK.
Important: Windows Server 2016 does not accept a CHAP secret shorter than 12 bytes.
- Repeat steps 5 and 6 for each target portal defined when setting up the iSCSI gateway.
If the initiator name is different than the initiator name used during the initial setup, then rename the initiator name. From the iSCSI Initiator Properties window, on the Configuration tab, click the Change button to rename the initiator name.
Multipath IO Setup:
Configuring the MPIO load balancing policy and setting the timeout and retry options is done from PowerShell with the mpclaim command. The iSCSI Initiator tool configures the remaining options.
Red Hat recommends increasing the PDORemovePeriod option to 120 seconds from PowerShell. This value might need to be adjusted based on the application. When all paths are down, and 120 seconds expires, the operating system will start failing I/O requests.
Set-MPIOSetting -NewPDORemovePeriod 120
- Set the failover policy:

mpclaim.exe -l -m 1

- Verify the failover policy:

mpclaim -s -m
MSDSM-wide Load Balance Policy: Fail Over Only
Using the iSCSI Initiator tool, from the Targets tab, click the Devices… button.
From the Devices window, select a disk and click the MPIO… button.
On the Device Details window, the paths to each target portal are displayed. If using the ceph-ansible setup method, the iSCSI gateway uses ALUA to tell the iSCSI initiator which path and iSCSI gateway should be used as the primary path. The Load Balancing Policy Fail Over Only must be selected.
- From PowerShell, view the multipath configuration:

mpclaim -s -d $MPIO_DISK_ID

Replace $MPIO_DISK_ID with the appropriate disk identifier.
There will be one Active/Optimized path which is the path to the iSCSI gateway node that owns the LUN, and there will be an Active/Unoptimized path for each other iSCSI gateway node.
Tuning:
Consider using the following registry settings:
Windows Disk Timeout
Key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk
Value:
TimeOutValue = 65

Microsoft iSCSI Initiator Driver
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance_Number>\Parameters
Values:
LinkDownTime = 25
SRBTimeoutDelta = 15
8.4.4. The iSCSI Initiator for VMware ESX vSphere Web Client
Prerequisite:
- VMware ESX 6.5 or later using Virtual Machine compatibility 6.5 with VMFS 6
- Access to the vSphere Web Client
- Root access to the VMware ESX host to execute the esxcli command
iSCSI Discovery and Multipath Device Setup:
Disable HardwareAcceleratedMove (XCOPY):

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
Enable the iSCSI software. From the Navigator pane, click Storage. Select the Adapters tab. Click Configure iSCSI.
Verify the initiator name in the Name & alias section.
Note: If the initiator name is different than the initiator name used when creating the client during the initial setup using gwcli, or if the initiator name used in the Ansible client_connections: client variable is different, then follow this procedure to change the initiator name. From the VMware ESX host, run these esxcli commands.
Get the adapter name for the iSCSI software:

> esxcli iscsi adapter list
> Adapter  Driver     State   UID            Description
> -------  ---------  ------  -------------  ----------------------
> vmhba64  iscsi_vmk  online  iscsi.vmhba64  iSCSI Software Adapter
Set the initiator name:
Syntax
> esxcli iscsi adapter set -A <adaptor_name> -n <initiator_name>
Example
> esxcli iscsi adapter set -A vmhba64 -n iqn.1994-05.com.redhat:rh7-client
Configure CHAP. Expand the CHAP authentication section. Select “Do not use CHAP unless required by target”. Enter the CHAP Name and Secret credentials that were used in the initial setup, whether using the gwcli auth command or the Ansible client_connections: credentials variable. Verify that the Mutual CHAP authentication section has “Do not use CHAP” selected.
Warning: There is a bug in the vSphere Web Client where the CHAP settings are not used initially. On the Ceph iSCSI gateway node, in the kernel logs, you will see the following errors as an indication of this bug:

> kernel: CHAP user or password not set for Initiator ACL
> kernel: Security negotiation failed.
> kernel: iSCSI Login negotiation failed.
To work around this bug, configure the CHAP settings using the esxcli command. The authname argument is the Name in the vSphere Web Client:

> esxcli iscsi adapter auth chap set --direction=uni --authname=myiscsiusername --secret=myiscsipassword --level=discouraged -A vmhba64
Configure the iSCSI settings. Expand Advanced settings. Set the RecoveryTimeout value to 25.
Set the discovery address. In the Dynamic targets section, click Add dynamic target. Under Address, add the IP address of one of the Ceph iSCSI gateways. Only one IP address needs to be added. Finally, click the Save configuration button. From the main interface, on the Devices tab, you will see the RBD image.
Note: Configuring the LUN is done automatically, using the ALUA SATP and MRU PSP. Other SATPs and PSPs must not be used. This can be verified with the esxcli command:

esxcli storage nmp path list -d eui.$DEVICE_ID
Replace $DEVICE_ID with the appropriate device identifier.
Verify that multipathing has been set up correctly.
List the devices:
Example
# esxcli storage nmp device list | grep iSCSI
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405057360ba9b4c434daa3c6770c)
Get the multipath information for the Ceph iSCSI disk from the previous step:
Example
# esxcli storage nmp path list -d naa.6001405f8d087846e7b4f0e9e3acd44b

iqn.2005-03.com.ceph:esx1-00023d000001,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,1-naa.6001405f8d087846e7b4f0e9e3acd44b
   Runtime Name: vmhba64:C0:T0:L0
   Device: naa.6001405f8d087846e7b4f0e9e3acd44b
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
   Group State: active
   Array Priority: 0
   Storage Array Type Path Config: {TPG_id=1,TPG_state=AO,RTP_id=1,RTP_health=UP}
   Path Selection Policy Path Config: {current path; rank: 0}

iqn.2005-03.com.ceph:esx1-00023d000002,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,2-naa.6001405f8d087846e7b4f0e9e3acd44b
   Runtime Name: vmhba64:C1:T0:L0
   Device: naa.6001405f8d087846e7b4f0e9e3acd44b
   Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
   Group State: active unoptimized
   Array Priority: 0
   Storage Array Type Path Config: {TPG_id=2,TPG_state=ANO,RTP_id=2,RTP_health=UP}
   Path Selection Policy Path Config: {non-current path; rank: 0}
From the example output, each path has an iSCSI/SCSI name with the following parts:
Initiator name = iqn.2005-03.com.ceph:esx1
ISID = 00023d000002
Target name = iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
Target port group = 2
Device id = naa.6001405f8d087846e7b4f0e9e3acd44b
The Group State value of active indicates this is the Active-Optimized path to the iSCSI gateway, which the gwcli command lists as the iSCSI gateway owner. The rest of the paths have a Group State value of unoptimized and will be the failover path, if the active path goes into a dead state.
To match all paths to their respective iSCSI gateways, run the following command:
# esxcli iscsi session connection list
Example output
vmhba64,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,00023d000001,0
   Adapter: vmhba64
   Target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
   ISID: 00023d000001
   CID: 0
   DataDigest: NONE
   HeaderDigest: NONE
   IFMarker: false
   IFMarkerInterval: 0
   MaxRecvDataSegmentLength: 131072
   MaxTransmitDataSegmentLength: 262144
   OFMarker: false
   OFMarkerInterval: 0
   ConnectionAddress: 10.172.19.21
   RemoteAddress: 10.172.19.21
   LocalAddress: 10.172.19.11
   SessionCreateTime: 08/16/18 04:20:06
   ConnectionCreateTime: 08/16/18 04:20:06
   ConnectionStartTime: 08/16/18 04:30:45
   State: logged_in
vmhba64,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,00023d000002,0
   Adapter: vmhba64
   Target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
   ISID: 00023d000002
   CID: 0
   DataDigest: NONE
   HeaderDigest: NONE
   IFMarker: false
   IFMarkerInterval: 0
   MaxRecvDataSegmentLength: 131072
   MaxTransmitDataSegmentLength: 262144
   OFMarker: false
   OFMarkerInterval: 0
   ConnectionAddress: 10.172.19.22
   RemoteAddress: 10.172.19.22
   LocalAddress: 10.172.19.12
   SessionCreateTime: 08/16/18 04:20:06
   ConnectionCreateTime: 08/16/18 04:20:06
   ConnectionStartTime: 08/16/18 04:30:41
   State: logged_in
Match the path name with the ISID value; the RemoteAddress value is the IP address of the owning iSCSI gateway.
8.5. Upgrading the Ceph iSCSI gateway using Ansible
Upgrading the Red Hat Ceph Storage iSCSI gateways can be done by using an Ansible playbook designed for rolling upgrades.
Prerequisites
- A running Ceph iSCSI gateway.
- A running Red Hat Ceph Storage cluster.
Procedure
- Verify the correct iSCSI gateway nodes are listed in the Ansible inventory file (/etc/ansible/hosts).
Run the rolling upgrade playbook:
[admin@ansible ~]$ ansible-playbook rolling_update.yml
Run the site playbook to finish the upgrade:
[admin@ansible ~]$ ansible-playbook site.yml --limit iscsigws
8.6. Upgrading the Ceph iSCSI gateway using the command-line interface
Upgrading the Red Hat Ceph Storage iSCSI gateways can be done in a rolling fashion, by upgrading one iSCSI gateway node at a time.
Do not upgrade the iSCSI gateway while upgrading and restarting Ceph OSDs. Wait until the OSD upgrades are finished and the storage cluster is in an active+clean state.
Prerequisites
- A running Ceph iSCSI gateway.
- A running Red Hat Ceph Storage cluster.
- Having root access to the iSCSI gateway node.
Procedure
Update the iSCSI gateway packages:
[root@igw ~]# yum update ceph-iscsi-config ceph-iscsi-cli
Stop the iSCSI gateway daemons:
[root@igw ~]# systemctl stop rbd-target-api
[root@igw ~]# systemctl stop rbd-target-gw
Verify that the iSCSI gateway daemons stopped cleanly:
[root@igw ~]# systemctl status rbd-target-gw
- If the rbd-target-gw service successfully stops, then skip to step 4.
If the rbd-target-gw service fails to stop, then do the following steps:
If the targetcli package is not installed, then install the targetcli package:

[root@igw ~]# yum install targetcli

Check for existing target objects:

[root@igw ~]# targetcli ls
Example output
o- / ............................................................. [...]
  o- backstores .................................................... [...]
  | o- user:rbd ..................................... [Storage Objects: 0]
  o- iscsi .................................................. [Targets: 0]
If the backstores and Storage Objects are empty, then the iSCSI target has been shut down cleanly and you can skip to step 4.
If you still have target objects, then run the following command to force remove all target objects:

[root@igw ~]# targetcli clearconfig confirm=True
Warning: If multiple services are using the iSCSI target, then run targetcli in interactive mode to delete those specific objects.
Update the tcmu-runner package:

[root@igw ~]# yum update tcmu-runner
Stop the tcmu-runner service:

[root@igw ~]# systemctl stop tcmu-runner
Restart all the iSCSI gateway services in this order:

[root@igw ~]# systemctl start tcmu-runner
[root@igw ~]# systemctl start rbd-target-gw
[root@igw ~]# systemctl start rbd-target-api
8.7. Monitoring the iSCSI gateways
Red Hat provides an additional tool for Ceph iSCSI gateway environments to monitor performance of exported RADOS Block Device (RBD) images.
The gwtop tool is a top-like tool that displays aggregated performance metrics of RBD images that are exported to clients over iSCSI. The metrics are sourced from a Performance Metrics Domain Agent (PMDA). Information from the Linux-IO target (LIO) PMDA is used to list each exported RBD image with the connected client and its associated I/O metrics.
Requirements:
- A running Ceph iSCSI gateway
Installing:
Do the following steps on the iSCSI gateway nodes, as the root user.
Enable the Ceph tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
Install the ceph-iscsi-tools package:

# yum install ceph-iscsi-tools
Install the performance co-pilot package:
# yum install pcp
Note: For more details on performance co-pilot, see the Red Hat Enterprise Linux Performance Tuning Guide.
Install the LIO PMDA package:
# yum install pcp-pmda-lio
Enable and start the performance co-pilot service:
# systemctl enable pmcd
# systemctl start pmcd
Register the pcp-pmda-lio agent:

cd /var/lib/pcp/pmdas/lio
./Install
By default, gwtop assumes the iSCSI gateway configuration object is stored in a RADOS object called gateway.conf in the rbd pool. This configuration defines the iSCSI gateways to contact for gathering the performance statistics. This can be overridden by using either the -g or -c flags. See gwtop --help for more details.
The LIO configuration determines which type of performance statistics to extract from performance co-pilot. When gwtop starts, it looks at the LIO configuration, and if it finds user-space disks, then gwtop selects the LIO collector automatically.
Example gwtop Outputs:
For user backed storage (TCMU) devices:
gwtop  2/2 Gateways   CPU% MIN:  4 MAX:  5    Network Total In:    2M  Out:    3M   10:20:00
Capacity:   8G    Disks:   8   IOPS:  503   Clients:  1   Ceph: HEALTH_OK          OSDs:   3
Pool.Image       Src    Size     iops     rMB/s     wMB/s   Client
iscsi.t1703             500M        0      0.00      0.00
iscsi.testme1           500M        0      0.00      0.00
iscsi.testme2           500M        0      0.00      0.00
iscsi.testme3           500M        0      0.00      0.00
iscsi.testme5           500M        0      0.00      0.00
rbd.myhost_1      T       4G      504      1.95      0.00   rh460p(CON)
rbd.test_2                1G        0      0.00      0.00
rbd.testme              500M        0      0.00      0.00
In the Client column, (CON) means the iSCSI initiator (client) is currently logged into the iSCSI gateway. If -multi- is displayed, then multiple clients are mapped to the single RBD image.
SCSI persistent reservations are not supported. Mapping multiple iSCSI initiators to an RBD image is supported, if using a cluster-aware file system or clustering software that does not rely on SCSI persistent reservations. For example, VMware vSphere environments using ATS are supported, but using Microsoft's clustering server (MSCS) is not supported.