Chapter 17. Configuring a Linux instance on 64-bit IBM Z


This section describes common tasks for configuring a Red Hat Enterprise Linux instance on 64-bit IBM Z.

17.1. Adding DASDs

Direct Access Storage Devices (DASDs) are a type of storage commonly used with 64-bit IBM Z. For more information, see Working with DASDs in the IBM Knowledge Center. The following sections show how to set a DASD online, format it, and make the change persistent.

If you are running under z/VM, verify that the device is attached or linked to the Linux system:

CP ATTACH EB1C TO *

To link a minidisk to which you have access, run the following command:

CP LINK RHEL7X 4B2E 4B2E MR
DASD 4B2E LINKED R/W
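
Optionally, you can confirm which DASD devices z/VM has attached or linked to the guest by issuing a CP command from the running Linux system with the vmcp utility (a quick check; it assumes the guest runs under z/VM and the s390utils vmcp tool is available):

# vmcp QUERY VIRTUAL DASD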

17.2. Dynamically setting DASDs online

This section contains information about setting a DASD online.

Procedure

  1. Use the cio_ignore utility to remove the DASD from the list of ignored devices and make it visible to Linux:

    # cio_ignore -r device_number

    Replace device_number with the device number of the DASD. For example:

    # cio_ignore -r 4b2e
  2. Set the device online. Use a command of the following form:

    # chccwdev -e device_number

    Replace device_number with the device number of the DASD. For example:

    # chccwdev -e 4b2e

    As an alternative, you can set the device online using sysfs attributes:

    1. Use the cd command to change to the /sys/ directory that represents that volume:

      # cd /sys/bus/ccw/drivers/dasd-eckd/0.0.4b2e/
      # ls -l
      total 0
      -r--r--r--  1 root root 4096 Aug 25 17:04 availability
      -rw-r--r--  1 root root 4096 Aug 25 17:04 cmb_enable
      -r--r--r--  1 root root 4096 Aug 25 17:04 cutype
      -rw-r--r--  1 root root 4096 Aug 25 17:04 detach_state
      -r--r--r--  1 root root 4096 Aug 25 17:04 devtype
      -r--r--r--  1 root root 4096 Aug 25 17:04 discipline
      -rw-r--r--  1 root root 4096 Aug 25 17:04 online
      -rw-r--r--  1 root root 4096 Aug 25 17:04 readonly
      -rw-r--r--  1 root root 4096 Aug 25 17:04 use_diag
    2. Check to see if the device is already online:

      # cat online
      0
    3. If it is not online, enter the following command to bring it online:

      # echo 1 > online
      # cat online
      1
  3. Verify which block device node the DASD is accessed as:

    # ls -l
    total 0
    -r--r--r--  1 root root 4096 Aug 25 17:04 availability
    lrwxrwxrwx  1 root root    0 Aug 25 17:07 block -> ../../../../block/dasdb
    -rw-r--r--  1 root root 4096 Aug 25 17:04 cmb_enable
    -r--r--r--  1 root root 4096 Aug 25 17:04 cutype
    -rw-r--r--  1 root root 4096 Aug 25 17:04 detach_state
    -r--r--r--  1 root root 4096 Aug 25 17:04 devtype
    -r--r--r--  1 root root 4096 Aug 25 17:04 discipline
    -rw-r--r--  1 root root    0 Aug 25 17:04 online
    -rw-r--r--  1 root root 4096 Aug 25 17:04 readonly
    -rw-r--r--  1 root root 4096 Aug 25 17:04 use_diag

    As shown in this example, device 4B2E is being accessed as /dev/dasdb.

These instructions set a DASD online for the current session, but this is not persistent across reboots.

For instructions on how to set a DASD online persistently, see Persistently setting DASDs online. When you work with DASDs, use the persistent device symbolic links under /dev/disk/by-path/.
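
For example, once a DASD is online, udev creates a by-path symbolic link for it that you can list (a minimal check; the exact entries depend on your configuration):

# ls -l /dev/disk/by-path/ | grep ccw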

17.3. Preparing a new DASD with low-level formatting

Once the disk is online, change back to the /root directory and low-level format the device. This is only required once for a DASD during its entire lifetime:

# cd /root
# dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e
Drive Geometry: 10017 Cylinders * 15 Heads =  150255 Tracks

I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way:
Device number of device : 0x4b2e
Labelling device        : yes
Disk label              : VOL1
Disk identifier         : 0X4B2E
Extent start (trk no)   : 0
Extent end (trk no)     : 150254
Compatible Disk Layout  : yes
Blocksize               : 4096

--->> ATTENTION! <<---
All data of that device will be lost.
Type "yes" to continue, no will leave the disk untouched: yes
cyl    97 of  3338 |#----------------------------------------------|   2%

When the progress bar reaches the end and the format is complete, dasdfmt prints the following output:

Rereading the partition table...
Exiting...

Now, use fdasd to partition the DASD. You can create up to three partitions on a DASD. In our example here, we create one partition spanning the whole disk:

# fdasd -a /dev/disk/by-path/ccw-0.0.4b2e
reading volume label ..: VOL1
reading vtoc ..........: ok

auto-creating one partition for the whole disk...
writing volume label...
writing VTOC...
rereading partition table...

After a low-level formatted DASD is online, it can be used like any other disk under Linux. For example, you can create file systems, LVM physical volumes, or swap space on its partitions, such as /dev/disk/by-path/ccw-0.0.4b2e-part1. Never use the full DASD device (/dev/dasdb) for anything other than the dasdfmt and fdasd commands. If you want to use the entire DASD, create one partition spanning the entire drive, as in the fdasd example above.
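
For example, to create an ext4 file system on the first partition (the file system type here is only an illustration; use whatever your setup requires), you could run:

# mkfs.ext4 /dev/disk/by-path/ccw-0.0.4b2e-part1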

To add additional disks later without breaking existing disk entries in, for example, /etc/fstab, use the persistent device symbolic links under /dev/disk/by-path/.
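
For example, an /etc/fstab entry that uses the persistent by-path link might look like the following sketch (the /data mount point and the ext4 file system type are placeholders):

/dev/disk/by-path/ccw-0.0.4b2e-part1    /data    ext4    defaults    0 0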

17.4. Persistently setting DASDs online

The above instructions described how to activate DASDs dynamically in a running system. However, such changes are not persistent and do not survive a reboot. Making changes to the DASD configuration persistent in your Linux system depends on whether the DASDs belong to the root file system. Those DASDs required for the root file system need to be activated very early during the boot process by the initramfs to be able to mount the root file system.

The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually.

17.5. DASDs that are part of the root file system

The file you have to modify to add DASDs that are part of the root file system has changed in Red Hat Enterprise Linux 9. Instead of editing the /etc/zipl.conf file, you now edit a boot loader entry file. You can find this file and its location by running the following commands:

# machine_id=$(cat /etc/machine-id)
# kernel_version=$(uname -r)
# ls /boot/loader/entries/$machine_id-$kernel_version.conf

There is one boot option to activate DASDs early in the boot process: rd.dasd=. This option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma-separated list of bus IDs. To specify a range of DASDs, specify the first and the last bus ID. Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf file for a system that uses physical volumes on partitions of two DASDs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system.

title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa)
version 4.18.0-80.el8.s390x
linux /boot/vmlinuz-4.18.0-80.el8.s390x
initrd /boot/initramfs-4.18.0-80.el8.s390x.img
options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0
id rhel-20181027190514-4.18.0-80.el8.s390x
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel

To add another physical volume on a partition of a third DASD with device bus ID 0.0.202b, add rd.dasd=0.0.202b to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf:

title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa)
version 4.18.0-80.el8.s390x
linux /boot/vmlinuz-4.18.0-80.el8.s390x
initrd /boot/initramfs-4.18.0-80.el8.s390x.img
options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0
id rhel-20181027190514-4.18.0-80.el8.s390x
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel
Warning

Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails.
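
One rough way to check the length is to count the bytes of the options line in the boot loader entry file before running zipl (a simple sketch; the count includes the leading "options " keyword and the trailing newline):

# grep ^options /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf | wc -c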

Run zipl to apply the changes of the configuration file for the next IPL:

# zipl -V
Using config file '/etc/zipl.conf'
Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf'
Target device information
  Device..........................: 5e:00
  Partition.......................: 5e:01
  Device name.....................: dasda
  Device driver name..............: dasd
  DASD device number..............: 0201
  Type............................: disk partition
  Disk layout.....................: ECKD/compatible disk layout
  Geometry - heads................: 15
  Geometry - sectors..............: 12
  Geometry - cylinders............: 13356
  Geometry - start................: 24
  File system block size..........: 4096
  Physical block size.............: 4096
  Device size in physical blocks..: 262152
Building bootmap in '/boot'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section '4.18.0-80.el8.s390x' (default)
  initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img
  kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x
  kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0'
  component address:
    kernel image....: 0x00010000-0x0049afff
    parmline........: 0x0049b000-0x0049bfff
    initial ramdisk.: 0x004a0000-0x01a26fff
    internal loader.: 0x0000a000-0x0000cfff
Preparing boot menu
  Interactive prompt......: enabled
  Menu timeout............: 5 seconds
  Default configuration...: '4.18.0-80.el8.s390x'
Preparing boot device: dasda (0201).
Syncing disks...
Done.

17.6. DASDs that are not part of the root file system

Direct Access Storage Devices (DASDs) that are not part of the root file system, that is, data disks, are persistently configured in the /etc/dasd.conf file. This file contains one DASD per line, where each line begins with the DASD’s bus ID.

When adding a DASD to the /etc/dasd.conf file, use key-value pairs to specify the options for each entry. Separate the key and its value with an equals sign (=). When adding multiple options, use a space or a tab to separate each option.

Example /etc/dasd.conf file

0.0.0207
0.0.0200 use_diag=1 readonly=1

Changes to the /etc/dasd.conf file take effect after a system reboot or after a new DASD is dynamically added by changing the system’s I/O configuration (that is, the DASD is attached under z/VM).

Alternatively, to activate a DASD that you have added to the /etc/dasd.conf file, complete the following steps:

  1. Remove the DASD from the list of ignored devices and make it visible using the cio_ignore utility:

    # cio_ignore -r device_number

    where device_number is the DASD device number.

    For example, if the device number is 021a, run:

    # cio_ignore -r 021a
  2. Activate the DASD by writing to the device’s uevent attribute:

    # echo add > /sys/bus/ccw/devices/dasd-bus-ID/uevent

    where dasd-bus-ID is the DASD’s bus ID.

    For example, if the bus ID is 0.0.021a, run:

    # echo add > /sys/bus/ccw/devices/0.0.021a/uevent
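
After the uevent is processed, you can verify that the DASD is online, for example, with the lsdasd utility from the s390utils package (the output depends on your system):

# lsdasd 0.0.021a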

17.7. FCP LUNs that are part of the root file system

The file you have to modify to add FCP LUNs that are part of the root file system has changed in Red Hat Enterprise Linux 9. Instead of editing the /etc/zipl.conf file, you now edit a boot loader entry file. You can find this file and its location by running the following commands:

# machine_id=$(cat /etc/machine-id)
# kernel_version=$(uname -r)
# ls /boot/loader/entries/$machine_id-$kernel_version.conf

Red Hat Enterprise Linux provides a parameter to activate FCP LUNs early in the boot process: rd.zfcp=. The value is a comma-separated list containing the FCP device bus ID, the target WWPN as a 16-digit hexadecimal number prefixed with 0x, and the FCP LUN prefixed with 0x and padded with zeroes to the right to 16 hexadecimal digits.

The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, if auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, or when installing RHEL 9.0 or older releases. Otherwise, they can be omitted, for example, rd.zfcp=0.0.4000. Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-5.14.0-55.el9.s390x.conf file for a system that uses a physical volume on a partition of an FCP-attached SCSI disk, with two paths, for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system.

title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow)
version 5.14.0-55.el9.s390x
linux /boot/vmlinuz-5.14.0-55.el9.s390x
initrd /boot/initramfs-5.14.0-55.el9.s390x.img
options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0
id rhel-20181027190514-5.14.0-55.el9.s390x
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel
To add another physical volume on a partition of a second FCP-attached SCSI disk with FCP LUN 0x401040a300000000, using the same two paths as the existing physical volume, add rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 and rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-5.14.0-55.el9.s390x.conf. For example:
title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow)
version 5.14.0-55.el9.s390x
linux /boot/vmlinuz-5.14.0-55.el9.s390x
initrd /boot/initramfs-5.14.0-55.el9.s390x.img
options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0
id rhel-20181027190514-5.14.0-55.el9.s390x
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel
Warning

Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails.

  • Run dracut -f to update the initial RAM disk of your target kernel.
  • Run zipl to apply the changes of the configuration file for the next IPL:
# zipl -V
Using config file '/etc/zipl.conf'
Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-5.14.0-55.el9.s390x.conf'
Run /lib/s390-tools/zipl_helper.device-mapper /boot
Target device information
Device..........................: fd:00
Partition.......................: fd:01
Device name.....................: dm-0
Device driver name..............: device-mapper
Type............................: disk partition
Disk layout.....................: SCSI disk layout
Geometry - start................: 2048
File system block size..........: 4096
Physical block size.............: 512
Device size in physical blocks..: 10074112
Building bootmap in '/boot/'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section '5.14.0-55.el9.s390x' (default)
kernel image......: /boot/vmlinuz-5.14.0-55.el9.s390x
kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0'
initial ramdisk...: /boot/initramfs-5.14.0-55.el9.s390x.img
component address:
kernel image....: 0x00010000-0x007a21ff
parmline........: 0x00001000-0x000011ff
initial ramdisk.: 0x02000000-0x028f63ff
internal loader.: 0x0000a000-0x0000a3ff
Preparing boot device: dm-0.
Detected SCSI PCBIOS disk layout.
Writing SCSI master boot record.
Syncing disks...
Done.

17.8. FCP LUNs that are not part of the root file system

FCP LUNs that are not part of the root file system, such as data disks, are persistently configured in the /etc/zfcp.conf file. It contains one FCP LUN per line. Each line contains the device bus ID of the FCP adapter, the target WWPN as a 16-digit hexadecimal number prefixed with 0x, and the FCP LUN prefixed with 0x and padded with zeroes to the right to 16 hexadecimal digits, separated by a space or tab.

The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, if auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, or when installing RHEL 9.0 or older releases. Otherwise, they can be omitted and only the device bus ID is mandatory.

Entries in /etc/zfcp.conf are activated and configured by udev when an FCP adapter is added to the system. At boot time, all FCP adapters visible to the system are added and trigger udev.

Example content of /etc/zfcp.conf:

0.0.fc00 0x5105074308c212e9 0x401040a000000000
0.0.fc00 0x5105074308c212e9 0x401040a100000000
0.0.fc00 0x5105074308c212e9 0x401040a300000000
0.0.fcd0 0x5105074308c2aee9 0x401040a000000000
0.0.fcd0 0x5105074308c2aee9 0x401040a100000000
0.0.fcd0 0x5105074308c2aee9 0x401040a300000000
0.0.4000
0.0.5000

Modifications of /etc/zfcp.conf only become effective after a reboot of the system or after the dynamic addition of a new FCP channel by changing the system’s I/O configuration (for example, a channel is attached under z/VM). Alternatively, you can trigger the activation of a new entry in /etc/zfcp.conf for an FCP adapter which was previously not active, by executing the following commands:

  1. Use the zfcp_cio_free utility to remove the FCP adapters from the list of ignored devices and make them visible to Linux:

    # zfcp_cio_free
  2. To apply the additions from /etc/zfcp.conf to the running system, issue:

    # zfcpconf.sh
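
You can then verify that the FCP LUNs are attached, for example, with the lszfcp utility from the s390utils package (a quick check; the output depends on your configuration):

# lszfcp -D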

17.9. Adding a qeth device

The qeth network device driver supports 64-bit IBM Z OSA-Express features in QDIO mode, HiperSockets, z/VM guest LAN, and z/VM VSWITCH.

For more information about the qeth device driver naming scheme, see Customizing boot parameters.

17.10. Dynamically adding a qeth device

This section contains information about how to add a qeth device dynamically.

Procedure

  1. Determine whether the qeth device driver modules are loaded. The following example shows loaded qeth modules:

    # lsmod | grep qeth
    qeth_l3                69632  0
    qeth_l2                49152  1
    qeth                  131072  2 qeth_l3,qeth_l2
    qdio                   65536  3 qeth,qeth_l3,qeth_l2
    ccwgroup               20480  1 qeth

    If the output of the lsmod command shows that the qeth modules are not loaded, run the modprobe command to load them:

    # modprobe qeth
  2. Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux:

    # cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id

    Replace read_device_bus_id,write_device_bus_id,data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.f500, the write_device_bus_id is 0.0.f501, and the data_device_bus_id is 0.0.f502:

    # cio_ignore -r 0.0.f500,0.0.f501,0.0.f502
  3. Use the znetconf utility to sense and list candidate configurations for network devices:

    # znetconf -u
    Scanning for network devices...
    Device IDs                 Type    Card Type      CHPID Drv.
    ------------------------------------------------------------
    0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO)        00 qeth
    0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO)        01 qeth
    0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets      02 qeth
  4. Select the configuration you want to work with and use znetconf to apply the configuration and to bring the configured group device online as a network device.

    # znetconf -a f500
    Scanning for network devices...
    Successfully configured device 0.0.f500 (encf500)
  5. Optional: You can also pass arguments that are configured on the group device before it is set online:

    # znetconf -a f500 -o portname=myname
    Scanning for network devices...
    Successfully configured device 0.0.f500 (encf500)

    Now you can continue to configure the encf500 network interface.
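
To review the group devices that znetconf has already configured, you can list them at any time (an optional check):

# znetconf -c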

Alternatively, you can use sysfs attributes to set the device online as follows:

  1. Create a qeth group device:

    # echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group

    For example:

    # echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group
  2. Next, verify that the qeth group device was created properly by looking for the read channel:

    # ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500

    You can optionally set additional parameters and features, depending on the way you are setting up your system and the features you require, such as:

    • portno
    • layer2
    • portname
  3. Bring the device online by writing 1 to the online sysfs attribute:

    # echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
  4. Then verify the state of the device:

    # cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
    1

    A return value of 1 indicates that the device is online, while a return value of 0 indicates that the device is offline.

  5. Find the interface name that was assigned to the device:

    # cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name
    encf500

    Now you can continue to configure the encf500 network interface.

    The following command from the s390utils package shows the most important settings of your qeth device:

    # lsqeth encf500
    Device name                     : encf500
    -------------------------------------------------
    card_type               : OSD_1000
    cdev0                   : 0.0.f500
    cdev1                   : 0.0.f501
    cdev2                   : 0.0.f502
    chpid                   : 76
    online                  : 1
    portname                : OSAPORT
    portno                  : 0
    state                   : UP (LAN ONLINE)
    priority_queueing       : always queue 0
    buffer_count            : 16
    layer2                  : 1
    isolation               : none

17.11. Persistently adding a qeth device

To make a new qeth device persistent, create a configuration file for the new interface. The network interface configuration files are placed in the /etc/NetworkManager/system-connections/ directory.

The network configuration files use the naming convention device.nmconnection, where device is the value found in the if_name file of the qeth group device that was created earlier, for example enc9a0. The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually.
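
For example, to confirm the interface name to use for a group device with read channel 0.0.0600 (assuming you created that group device as described in the previous section), you can read its if_name attribute:

# cat /sys/bus/ccwgroup/drivers/qeth/0.0.0600/if_name
enc600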

If a configuration file for another device of the same type already exists, copy it to the new name and edit it:

# cd /etc/NetworkManager/system-connections/
# cp enc9a0.nmconnection enc600.nmconnection

To learn the IDs of your network devices, use the lsqeth utility:

# lsqeth -p
devices                    CHPID interface        cardtype       port chksum prio-q'ing rtr4 rtr6 lay'2 cnt
-------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- -----
0.0.09a0/0.0.09a1/0.0.09a2 x00   enc9a0    Virt.NIC QDIO  0    sw     always_q_2 n/a  n/a  1     64
0.0.0600/0.0.0601/0.0.0602 x00   enc600    Virt.NIC QDIO  0    sw     always_q_2 n/a  n/a  1     64

If you do not have a similar device defined, create a new file. Use this example:

[connection]
type=ethernet
interface-name=enc600

[ipv4]
address1=10.12.20.136/24,10.12.20.1
dns=10.12.20.53;
method=manual

[ethernet]
mac-address=00:53:00:8f:fa:66

Edit the new enc600.nmconnection file as follows:

  1. Ensure the new connection file is owned by root:root:

    # chown root:root /etc/NetworkManager/system-connections/enc600.nmconnection
  2. Add more details in this file or modify these parameters based on your connection requirements.
  3. Save the file.
  4. Reload the connection profile:

    # nmcli connection reload
  5. To view the complete details of the newly added connection, enter:

    # nmcli connection show enc600

Changes to the enc600.nmconnection file become effective after you reboot the system, dynamically add new network device channels by changing the system’s I/O configuration (for example, attaching under z/VM), or reload network connections. Alternatively, you can trigger the activation of enc600.nmconnection for network channels that were previously not active by executing the following commands:

  1. Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux:

    # cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id

    Replace read_device_bus_id, write_device_bus_id, data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.0600, the write_device_bus_id is 0.0.0601, and the data_device_bus_id is 0.0.0602:

    #  cio_ignore -r 0.0.0600,0.0.0601,0.0.0602
  2. To trigger the uevent that activates the change, issue:

    # echo add > /sys/bus/ccw/devices/read-channel/uevent

    For example:

    # echo add > /sys/bus/ccw/devices/0.0.0600/uevent
  3. Check the status of the network device:

    # lsqeth
  4. If the default route information has changed, you must also update the address1 parameters in both the [ipv4] and [ipv6] sections of the /etc/NetworkManager/system-connections/<profile_name>.nmconnection file accordingly:

    [ipv4]
    address1=10.12.20.136/24,10.12.20.1
    [ipv6]
    address1=2001:db8:1::1,2001:db8:1::fffe
  5. Now start the new interface:

    # nmcli connection up enc600
  6. Check the status of the interface:

    # ip addr show enc600
    3: enc600:  <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff
    inet 10.12.20.136/24 brd 10.12.20.1 scope global dynamic enc600
    valid_lft 81487sec preferred_lft 81487sec
    inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic
    valid_lft 2591994sec preferred_lft 604794sec
    inet6 fe45::a455:eff:d078:3847/64 scope link
    valid_lft forever preferred_lft forever
  7. Check the routing for the new interface:

    # ip route
    default via 10.12.20.136 dev enc600 proto dhcp src
  8. Verify your changes by using the ping utility to ping the gateway or another host on the subnet of the new device:

    # ping -c 1 10.12.20.136
    PING 10.12.20.136 (10.12.20.136) 56(84) bytes of data.
    64 bytes from 10.12.20.136: icmp_seq=0 ttl=63 time=8.07 ms
  9. If the default route information has changed, you must also update /etc/sysconfig/network accordingly.

Additional resources

  • nm-settings-keyfile man page on your system

17.12. Configuring a 64-bit IBM Z network device for a network root file system

To add a network device that is required to access the root file system, you only have to change the boot options. The boot options can be in a parameter file; however, the /etc/zipl.conf file no longer contains the specifications of the boot records. The file that needs to be modified can be located by using the following commands:

# machine_id=$(cat /etc/machine-id)
# kernel_version=$(uname -r)
# ls /boot/loader/entries/$machine_id-$kernel_version.conf

Dracut, the successor to mkinitrd that provides the initramfs functionality (which in turn replaces the initrd), provides a boot parameter to activate network devices on 64-bit IBM Z early in the boot process: rd.znet=.

As input, this parameter takes a comma-separated list of the NETTYPE (qeth, lcs, ctc), two (lcs, ctc) or three (qeth) device bus IDs, and optional additional parameters consisting of key-value pairs corresponding to network device sysfs attributes. This parameter configures and activates the 64-bit IBM Z network hardware. The configuration of IP addresses and other network specifics works the same as for other platforms. See the dracut documentation for more details.

The cio_ignore commands for the network channels are handled transparently on boot.

Example boot options for a root file system accessed over the network through NFS:

root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs-server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us