27.4. Using libStorageMgmt


To use libStorageMgmt interactively, use the lsmcli tool.
The lsmcli tool requires two things to run:
  • A Uniform Resource Identifier (URI), which identifies the plug-in to use to connect to the array, along with any configurable options the array requires.
  • A valid user name and password for the array.
The URI has the following form:
plugin+optional-transport://user-name@host:port/?query-string-parameters
Each plug-in has different requirements.

Example 27.1. Examples of Different Plug-in Requirements

Simulator Plug-in That Requires No User Name or Password

sim://

NetApp Plug-in over SSL with User Name root

ontap+ssl://root@filer.company.com/

SMI-S Plug-in over SSL for EMC Array

smis+ssl://admin@provider.com:5989/?namespace=root/emc

There are three ways to supply the URI:
  1. Pass the URI as part of the command.
    $ lsmcli -u sim://
  2. Store the URI in an environment variable.
    $ export LSMCLI_URI=sim://
  3. Place the URI in the file ~/.lsmcli, which contains name-value pairs separated by "=". The only currently supported configuration option is 'uri'.
lsmcli checks for the URI in the order listed above: if more than one is supplied, only the URI given on the command line is used.
Provide the password by specifying the -P option on the command line or by placing it in the LSMCLI_PASSWORD environment variable.
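For example, to avoid entering the URI on every invocation, the URI can be exported in the environment and the password supplied through LSMCLI_PASSWORD. The following is a minimal sketch using the NetApp URI from Example 27.1; the password value is a placeholder:
$ export LSMCLI_URI='ontap+ssl://root@filer.company.com/'
$ export LSMCLI_PASSWORD=examplepassword
$ lsmcli list --type SYSTEMS
Alternatively, for the simulator plug-in, which needs no credentials, ~/.lsmcli could simply contain:
uri=sim://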

Example 27.2. Example of lsmcli

This example uses the command line to create a new volume and make it visible to an initiator.
List arrays that are serviced by this connection:
$ lsmcli list --type SYSTEMS
ID     | Name                          | Status
-------+-------------------------------+--------
sim-01 | LSM simulated storage plug-in | OK
List storage pools:
$ lsmcli list --type POOLS -H
ID   | Name          | Total space          | Free space           | System ID
-----+---------------+----------------------+----------------------+-----------
POO2 | Pool 2        | 18446744073709551616 | 18446744073709551616 | sim-01
POO3 | Pool 3        | 18446744073709551616 | 18446744073709551616 | sim-01
POO1 | Pool 1        | 18446744073709551616 | 18446744073709551616 | sim-01
POO4 | lsm_test_aggr | 18446744073709551616 | 18446744073709551616 | sim-01
Create a volume:
$ lsmcli volume-create --name volume_name --size 20G --pool POO1 -H
ID   | Name        | vpd83                            | bs  | #blocks  | status | ...
-----+-------------+----------------------------------+-----+----------+--------+----
Vol1 | volume_name | F7DDF7CA945C66238F593BC38137BD2F | 512 | 41943040 | OK     | ...
Create an access group with an iSCSI initiator in it:
$ lsmcli access-group-create --name example_ag --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01
ID                               | Name       | Initiator IDs                    | System ID
---------------------------------+------------+----------------------------------+-----------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 | sim-01
Allow the access group visibility to the newly created volume:
$ lsmcli access-group-grant --ag 782d00c8ac63819d6cca7069282e03a0 --vol Vol1 --access RW
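The steps in this example can also be combined into a short shell script. The following is a minimal sketch, not taken from the product documentation: it assumes the simulator URI and the pool POO1 used above, and it assumes that the -t# option applies to the output of the creation commands so that the first #-separated field (the ID, as in the tables above) can be captured.
#!/bin/bash
# Sketch: create a volume, create an access group for an iSCSI initiator,
# and grant the group read-write access to the volume.
set -e
export LSMCLI_URI=sim://

# Create the volume; with -t# the first field of the output is the volume ID.
VOL_ID=$(lsmcli volume-create --name scripted_vol --size 20G --pool POO1 -t# | cut -d'#' -f1)

# Create the access group; the first field of the terse output is its ID.
AG_ID=$(lsmcli access-group-create --name scripted_ag \
    --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01 -t# | cut -d'#' -f1)

# Make the new volume visible to the access group.
lsmcli access-group-grant --ag "$AG_ID" --vol "$VOL_ID" --access RW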
The design of the library provides for a process separation between the client and the plug-in by means of inter-process communication (IPC). This prevents bugs in the plug-in from crashing the client application. It also provides a means for plug-in writers to write plug-ins with a license of their own choosing. When a client opens the library passing a URI, the client library looks at the URI to determine which plug-in should be used.
The plug-ins are technically standalone applications, but they are designed to have a file descriptor passed to them on the command line. The client library then opens the appropriate Unix domain socket, which causes the daemon to fork and execute the plug-in. This gives the client library a point-to-point communication channel with the plug-in. The daemon can be restarted without affecting existing clients. While the client has the library open for that plug-in, the plug-in process is running. After one or more commands are sent and the plug-in is closed, the plug-in process cleans up and exits.
The default behavior of lsmcli is to wait until the operation is complete. Depending on the requested operation, this could take many hours. To allow the command to return immediately, pass the -b option on the command line. If the exit code is 0, the command is complete. If the exit code is 7, the command is in progress and a job identifier is written to standard output. The user or script can then take the job ID and query the status of the command as needed by using lsmcli job-status --job JobID. If the job is complete, the exit value is 0 and the results are printed to standard output. If the command is still in progress, the return value is 7 and the percentage complete is printed to standard output.

Example 27.3. An Asynchronous Example

Create a volume passing the -b option so that the command returns immediately.
$ lsmcli volume-create --name async_created --size 20G --pool POO1 -b
JOB_3
Check the exit value:
$ echo $?
7
7 indicates that the job is still in progress.
Check if the job is completed:
$ lsmcli job-status --job JOB_3
33
Check the exit value. An exit value of 7 indicates that the job is still in progress, and the standard output shows the percentage complete, 33% in this case.
$ echo $?
7
Wait for some time and check the job status again:
$ lsmcli job-status --job JOB_3
ID   | Name          | vpd83                            | Block Size  | ...
-----+---------------+----------------------------------+-------------+-----
Vol2 | async_created | 855C9BA51991B0CC122A3791996F6B15 | 512         | ...
An exit value of 0 means success, and standard output displays the new volume.
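In a script, the -b option can be combined with a loop that polls the job until the exit code changes from 7 to 0. The following is a minimal sketch based on the behavior described above; the volume name and sleep interval are arbitrary, and if the array completes the operation immediately the first command exits 0 and prints the volume rather than a job ID.
# Start the operation asynchronously; on exit code 7 the output is the job ID.
JOB=$(lsmcli volume-create --name scripted_async --size 20G --pool POO1 -b)

# Poll until the exit code is no longer 7 (job in progress).
while true; do
    OUT=$(lsmcli job-status --job "$JOB")
    RC=$?
    if [ "$RC" -eq 7 ]; then
        echo "Still in progress: ${OUT}% complete"
        sleep 10
    else
        # 0 means the job finished and the output describes the new volume;
        # any other exit code indicates an error.
        echo "$OUT"
        break
    fi
done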
For scripting, pass the -t SeparatorCharacters option. This will make it easier to parse the output.

Example 27.4. Scripting Examples

$ lsmcli list --type volumes -t#
Vol1#volume_name#049167B5D09EC0A173E92A63F6C3EA2A#512#41943040#21474836480#OK#sim-01#POO1
Vol2#async_created#3E771A2E807F68A32FA5E15C235B60CC#512#41943040#21474836480#OK#sim-01#POO1
$ lsmcli list --type volumes -t " | "
Vol1 | volume_name | 049167B5D09EC0A173E92A63F6C3EA2A | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1
Vol2 | async_created | 3E771A2E807F68A32FA5E15C235B60CC | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1
$ lsmcli list --type volumes -s
---------------------------------------------
ID         | Vol1
Name       | volume_name
VPD83      | 049167B5D09EC0A173E92A63F6C3EA2A
Block Size | 512
#blocks    | 41943040
Size       | 21474836480
Status     | OK
System ID  | sim-01
Pool ID    | POO1
---------------------------------------------
ID         | Vol2
Name       | async_created
VPD83      | 3E771A2E807F68A32FA5E15C235B60CC
Block Size | 512
#blocks    | 41943040
Size       | 21474836480
Status     | OK
System ID  | sim-01
Pool ID    | POO1
---------------------------------------------
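For instance, the #-separated output can be read field by field in a shell while loop. The following is a minimal sketch, assuming the field order shown in the -t# output above (ID, name, VPD83, block size, number of blocks, size, status, system ID, pool ID):
$ lsmcli list --type volumes -t# | \
  while IFS='#' read -r id name vpd bs blocks size status system pool; do
      echo "volume $id ($name): $size bytes, status $status, pool $pool"
  done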
It is recommended to use the Python library for non-trivial scripting.
For more information on lsmcli, see the lsmcli man page or lsmcli --help.