Chapter 8. Configuring an iSCSI initiator

An iSCSI initiator forms a session to connect to the iSCSI target. By default, the iSCSI service is started lazily: it starts after you run the iscsiadm command. If root is not on an iSCSI device, or if no nodes are marked with node.startup = automatic, the iSCSI service does not start until you run an iscsiadm command that requires iscsid or the iscsi kernel modules to be started.

Execute the systemctl start iscsid.service command as root to force the iscsid daemon to run and iSCSI kernel modules to load.
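For example:

    # systemctl start iscsid.service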

8.1. Creating an iSCSI initiator

Create an iSCSI initiator to connect to the iSCSI target to access the storage devices on the server.

Prerequisites

  • You have an iSCSI target’s hostname and IP address:

    • If you are connecting to a storage target created by external software, obtain the target’s hostname and IP address from the storage administrator.
    • If you are creating an iSCSI target, see Creating an iSCSI target.

Procedure

  1. Install the iscsi-initiator-utils package on the client machine:

    # dnf install iscsi-initiator-utils
  2. Check the initiator name:

    # cat /etc/iscsi/initiatorname.iscsi
    
    InitiatorName=iqn.2006-04.com.example:888
  3. If the ACL was given a custom name in Creating an iSCSI ACL, update the initiator name to match the ACL:

    1. Open the /etc/iscsi/initiatorname.iscsi file and modify the initiator name:

      # vi /etc/iscsi/initiatorname.iscsi
      
      InitiatorName=custom-name
    2. Restart the iscsid service:

      # systemctl restart iscsid
  4. Discover the target and log in to it by using the displayed target IQN:

    # iscsiadm -m discovery -t st -p 10.64.24.179
        10.64.24.179:3260,1 iqn.2006-04.example:444
    
    # iscsiadm -m node -T iqn.2006-04.example:444 -l
        Logging in to [iface: default, target: iqn.2006-04.example:444, portal: 10.64.24.179,3260] (multiple)
        Login to [iface: default, target: iqn.2006-04.example:444, portal: 10.64.24.179,3260] successful.

    Replace 10.64.24.179 with the IP address of the target.

    You can use this procedure for any number of initiators connected to the same target, provided that their respective initiator names are added to the ACL, as described in Creating an iSCSI ACL.

  5. Find the iSCSI disk name and create a file system on this iSCSI disk:

    # grep "Attached SCSI" /var/log/messages
    
    # mkfs.ext4 /dev/disk_name

    Replace disk_name with the iSCSI disk name displayed in the /var/log/messages file. If kernel messages are not written to /var/log/messages on your system, see the optional journalctl example after this procedure.

  6. Mount the file system:

    # mkdir /mount/point
    
    # mount /dev/disk_name /mount/point

    Replace /mount/point with the mount point of the partition.

  7. Edit the /etc/fstab file to mount the file system automatically when the system boots:

    # vi /etc/fstab
    
    /dev/disk_name /mount/point ext4 _netdev 0 0

    Replace disk_name with the iSCSI disk name and /mount/point with the mount point of the partition.
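Optionally, because device names such as /dev/sdb can change between reboots, you can reference the file system in /etc/fstab by its UUID instead of by device name. This variation on step 7 is not part of the procedure above; the UUID value is a placeholder that you obtain from the blkid output:

    # blkid /dev/disk_name

    UUID=<uuid-from-blkid> /mount/point ext4 _netdev 0 0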
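If kernel messages are not written to /var/log/messages on your system, an alternative way to find the newly attached iSCSI disk in step 5 is to search the kernel messages in the systemd journal. This is an optional alternative, not part of the procedure above:

    # journalctl -k | grep "Attached SCSI"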

Additional resources

  • targetcli(8) and iscsiadm(8) man pages

8.2. Setting up the Challenge-Handshake Authentication Protocol for the initiator

By using the Challenge-Handshake Authentication Protocol (CHAP), users can protect the target with a password. The initiator must know this password to connect to the target.

Prerequisites

  • You have configured CHAP authentication on the iSCSI target and know the CHAP user name and password set on it.

Procedure

  1. Enable CHAP authentication in the iscsid.conf file:

    # vi /etc/iscsi/iscsid.conf
    
    node.session.auth.authmethod = CHAP

    By default, the node.session.auth.authmethod option is set to None.

  2. Add the target user name and password in the iscsid.conf file:

    node.session.auth.username = redhat
    node.session.auth.password = redhat_passwd
  3. Restart the iscsid daemon so that it uses the new settings:

    # systemctl restart iscsid.service
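    If a session to the target is already established, log out and log back in so that the new CHAP credentials take effect. The target IQN and portal shown here reuse the illustrative values from Creating an iSCSI initiator:

    # iscsiadm -m node -T iqn.2006-04.example:444 -p 10.64.24.179 -u

    # iscsiadm -m node -T iqn.2006-04.example:444 -p 10.64.24.179 -l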

Additional resources

  • iscsiadm(8) man page

8.3. Monitoring an iSCSI session using the iscsiadm utility

This procedure describes how to monitor an iSCSI session by using the iscsiadm utility.

By default, the iSCSI service is started lazily: it starts after you run the iscsiadm command. If root is not on an iSCSI device, or if no nodes are marked with node.startup = automatic, the iSCSI service does not start until you run an iscsiadm command that requires iscsid or the iscsi kernel modules to be started.

Execute the systemctl start iscsid.service command as root to force the iscsid daemon to run and iSCSI kernel modules to load.
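For example:

    # systemctl start iscsid.service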

Procedure

  1. Install the iscsi-initiator-utils package on the client machine:

    # dnf install iscsi-initiator-utils
  2. Find information about the running sessions:

    # iscsiadm -m session -P 3

    This command displays the session or device state, session ID (sid), some negotiated parameters, and the SCSI devices accessible through the session.

    • For shorter output, for example, to display only the sid-to-node mapping, run:

      # iscsiadm -m session -P 0
              or
      # iscsiadm -m session
      
      tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
      tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

      These commands print the list of running sessions in the following format: driver [sid] target_ip:port,target_portal_group_tag proper_target_name.
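    • To display details for a single session only, you can pass its session ID (sid) to the -r option; the sid value 2 below is an illustrative example:

      # iscsiadm -m session -r 2 -P 3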

Additional resources

  • /usr/share/doc/iscsi-initiator-utils-version/README file
  • iscsiadm(8) man page

8.4. DM Multipath overrides of the device timeout

The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override the recovery_tmo values:

  • The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices.
  • For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value.

    The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices.

The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout. Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads.
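For reference, the following sketch shows where each of these values is read or set; the timeout values are illustrative examples, not recommendations:

    # cat /sys/class/iscsi_session/session*/recovery_tmo

    /etc/iscsi/iscsid.conf:
    node.session.timeo.replacement_timeout = 120

    /etc/multipath.conf:
    defaults {
            fast_io_fail_tmo 5
    }

The first command reads the current per-session recovery_tmo value from sysfs, the iscsid.conf parameter sets replacement_timeout globally, and the multipath.conf defaults entry sets fast_io_fail_tmo for devices that DM Multipath manages.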
