16.2. Common SSM Tasks
The following sections describe common SSM tasks.
16.2.1. Installing SSM
To install SSM, use the following command:
# yum install system-storage-manager
There are several back ends that are enabled only if the supporting packages are installed:
- The LVM back end requires the lvm2 package.
- The Btrfs back end requires the btrfs-progs package.
- The Crypt back end requires the device-mapper and cryptsetup packages.
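For example, to enable the Btrfs and Crypt back ends in addition to LVM, the supporting packages listed above can be installed in the same way as SSM itself (a sketch; package availability depends on the configured repositories):
# yum install lvm2 btrfs-progs device-mapper cryptsetup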
16.2.2. Displaying Information about All Detected Devices
Displaying information about all detected devices, pools, volumes, and snapshots is done with the list command. Running the ssm list command with no options displays the following output:
# ssm list
----------------------------------------------------------
Device     Free     Used       Total      Pool  Mount point
----------------------------------------------------------
/dev/sda                       2.00 GB          PARTITIONED
/dev/sda1                      47.83 MB         /test
/dev/vda                       15.00 GB         PARTITIONED
/dev/vda1                      500.00 MB        /boot
/dev/vda2  0.00 KB  14.51 GB   14.51 GB   rhel
----------------------------------------------------------
------------------------------------------------
Pool  Type  Devices  Free     Used      Total
------------------------------------------------
rhel  lvm   1        0.00 KB  14.51 GB  14.51 GB
------------------------------------------------
---------------------------------------------------------------------------------
Volume          Pool  Volume size  FS   FS size    Free       Type    Mount point
---------------------------------------------------------------------------------
/dev/rhel/root  rhel  13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap  rhel  1000.00 MB                              linear
/dev/sda1             47.83 MB     xfs  44.50 MB   44.41 MB   part    /test
/dev/vda1             500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
---------------------------------------------------------------------------------
This display can be further narrowed down by using arguments to specify what should be displayed. The list of available options can be found with the ssm list --help command.
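For example, to limit the output to a single section of the report, one of the filters listed by ssm list --help, such as pools or volumes, can be appended (a sketch based on the layout shown above):
# ssm list pools
# ssm list volumes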
Note
Depending on the argument given, SSM may not display everything.
- Running the devices or dev argument omits some devices. CD-ROMs and DM/MD devices, for example, are intentionally hidden as they are listed as volumes.
- Some back ends do not support snapshots and cannot distinguish between a snapshot and a regular volume. Running the snapshot argument on one of these back ends causes SSM to attempt to recognize the volume name in order to identify a snapshot. If the SSM regular expression does not match the snapshot pattern, then the snapshot is not recognized.
- With the exception of the main Btrfs volume (the file system itself), any unmounted Btrfs volumes are not shown.
16.2.3. Creating a New Pool, Logical Volume, and File System
In this section, a new pool with a default name is created. It contains the devices /dev/vdb and /dev/vdc, a 1 GB logical volume, and an XFS file system.
The command to create this scenario is ssm create --fs xfs -s 1G /dev/vdb /dev/vdc. The following options are used:
- The --fs option specifies the required file system type. Currently supported file system types are:
  - ext3
  - ext4
  - xfs
  - btrfs
- The -s option specifies the size of the logical volume. The following suffixes are supported to define units:
  - K or k for kilobytes
  - M or m for megabytes
  - G or g for gigabytes
  - T or t for terabytes
  - P or p for petabytes
  - E or e for exabytes
- Additionally, with the -s option, the new size can be specified as a percentage, as shown in the sketch after this list. For example:
  - 10% for 10 percent of the total pool size
  - 10%FREE for 10 percent of the free pool space
  - 10%USED for 10 percent of the used pool space
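For example, a volume sized as a percentage of a pool could be created as follows (a sketch; it assumes a pool named lvm_pool already exists and that half of its free space should be used):
# ssm create --fs xfs -s 50%FREE -p lvm_pool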
The two listed devices, /dev/vdb and /dev/vdc, are the two devices on which the pool and volume are created.
# ssm create --fs xfs -s 1G /dev/vdb /dev/vdc
  Physical volume "/dev/vdb" successfully created
  Physical volume "/dev/vdc" successfully created
  Volume group "lvm_pool" successfully created
  Logical volume "lvol001" created
There are two other options for the ssm create command that may be useful. The first is the -p pool option, which specifies the pool on which the volume is to be created. If the pool does not yet exist, SSM creates it. This option was omitted in the example above, which caused SSM to use the default name lvm_pool. To use a specific name that fits in with any existing naming conventions, use the -p option.
The second useful option is the -n name option, which names the newly created logical volume. As with -p, this is needed in order to use a specific name that fits in with any existing naming conventions.
An example of these two options being used follows:
# ssm create --fs xfs -p new_pool -n XFS_Volume /dev/vdd
  Volume group "new_pool" successfully created
  Logical volume "XFS_Volume" created
SSM has now created two physical volumes, a pool, and a logical volume with the ease of only one command.
16.2.4. Checking a File System's Consistency
The ssm check command checks the file system consistency on the volume. It is possible to specify multiple volumes to check. If there is no file system on the volume, then the volume is skipped.
To check all devices in the volume lvol001, run the command ssm check /dev/lvm_pool/lvol001.
# ssm check /dev/lvm_pool/lvol001
Checking xfs file system on '/dev/mapper/lvm_pool-lvol001'.
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
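Because multiple volumes can be passed to ssm check, several file systems can be checked in one invocation (a sketch; it assumes a second volume named lvol002 also exists in the pool):
# ssm check /dev/lvm_pool/lvol001 /dev/lvm_pool/lvol002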
16.2.5. Increasing a Volume's Size
The ssm resize command changes the size of the specified volume and file system. If there is no file system, then only the volume itself is resized.
For this example, we currently have one 900 MB logical volume on /dev/vdb called lvol001.
# ssm list
-----------------------------------------------------------------
Device     Free       Used       Total      Pool      Mount point
-----------------------------------------------------------------
/dev/vda                         15.00 GB             PARTITIONED
/dev/vda1                        500.00 MB            /boot
/dev/vda2  0.00 KB    14.51 GB   14.51 GB   rhel
/dev/vdb   120.00 MB  900.00 MB  1.00 GB    lvm_pool
/dev/vdc                         1.00 GB
-----------------------------------------------------------------
---------------------------------------------------------
Pool      Type  Devices  Free       Used       Total
---------------------------------------------------------
lvm_pool  lvm   1        120.00 MB  900.00 MB  1020.00 MB
rhel      lvm   1        0.00 KB    14.51 GB   14.51 GB
---------------------------------------------------------
--------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
--------------------------------------------------------------------------------------------
/dev/rhel/root         rhel      13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap         rhel      1000.00 MB                              linear
/dev/lvm_pool/lvol001  lvm_pool  900.00 MB    xfs  896.67 MB  896.54 MB  linear
/dev/vda1                        500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
--------------------------------------------------------------------------------------------
The logical volume needs to be increased by another 500 MB. To do so, we need to add an extra device to the pool:
# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc
  Physical volume "/dev/vdc" successfully created
  Volume group "lvm_pool" successfully extended
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
  Extending logical volume lvol001 to 1.37 GiB
  Logical volume lvol001 successfully resized
meta-data=/dev/mapper/lvm_pool-lvol001 isize=256    agcount=4, agsize=57600 blks
         =                             sectsz=512   attr=2, projid32bit=1
         =                             crc=0
data     =                             bsize=4096   blocks=230400, imaxpct=25
         =                             sunit=0      swidth=0 blks
naming   =version 2                    bsize=4096   ascii-ci=0 ftype=0
log      =internal                     bsize=4096   blocks=853, version=2
         =                             sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                         extsz=4096   blocks=0, rtextents=0
data blocks changed from 230400 to 358400
SSM runs a check on the device and then extends the volume by the specified amount. This can be verified with the ssm list command.
# ssm list
------------------------------------------------------------------
Device     Free       Used        Total      Pool      Mount point
------------------------------------------------------------------
/dev/vda                          15.00 GB             PARTITIONED
/dev/vda1                         500.00 MB            /boot
/dev/vda2  0.00 KB    14.51 GB    14.51 GB   rhel
/dev/vdb   0.00 KB    1020.00 MB  1.00 GB    lvm_pool
/dev/vdc   640.00 MB  380.00 MB   1.00 GB    lvm_pool
------------------------------------------------------------------
------------------------------------------------------
Pool      Type  Devices  Free       Used      Total
------------------------------------------------------
lvm_pool  lvm   2        640.00 MB  1.37 GB   1.99 GB
rhel      lvm   1        0.00 KB    14.51 GB  14.51 GB
------------------------------------------------------
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
----------------------------------------------------------------------------------------------
/dev/rhel/root         rhel      13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap         rhel      1000.00 MB                              linear
/dev/lvm_pool/lvol001  lvm_pool  1.37 GB      xfs  1.36 GB    1.36 GB    linear
/dev/vda1                        500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
----------------------------------------------------------------------------------------------
Note
Decreasing a volume's size is only possible for LVM volumes; it is not supported with other volume types. This is done by using a - instead of a +. For example, to decrease the size of an LVM volume by 50M, the command would be:
# ssm resize -s-50M /dev/lvm_pool/lvol002
  Rounding size to boundary between physical extents: 972.00 MiB
  WARNING: Reducing active logical volume to 972.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol002? [y/n]: y
  Reducing logical volume lvol002 to 972.00 MiB
  Logical volume lvol002 successfully resized
Without either the + or -, the value is taken as absolute.
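For example, to set lvol001 to an absolute size rather than grow or shrink it by a relative amount, a command of the following form could be used (a sketch; the 1300M target size is illustrative):
# ssm resize -s 1300M /dev/lvm_pool/lvol001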
16.2.6. Snapshot
To take a snapshot of an existing volume, use the ssm snapshot command.
Note
This operation fails if the back end that the volume belongs to does not support snapshotting.
To create a snapshot of the lvol001 volume, use the following command:
# ssm snapshot /dev/lvm_pool/lvol001
  Logical volume "snap20150519T130900" created
To verify this, use the ssm list command and note the extra snapshot section.
# ssm list
----------------------------------------------------------------
Device     Free     Used        Total      Pool      Mount point
----------------------------------------------------------------
/dev/vda                        15.00 GB             PARTITIONED
/dev/vda1                       500.00 MB            /boot
/dev/vda2  0.00 KB  14.51 GB    14.51 GB   rhel
/dev/vdb   0.00 KB  1020.00 MB  1.00 GB    lvm_pool
/dev/vdc                        1.00 GB
----------------------------------------------------------------
--------------------------------------------------------
Pool      Type  Devices  Free     Used        Total
--------------------------------------------------------
lvm_pool  lvm   1        0.00 KB  1020.00 MB  1020.00 MB
rhel      lvm   1        0.00 KB  14.51 GB    14.51 GB
--------------------------------------------------------
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
----------------------------------------------------------------------------------------------
/dev/rhel/root         rhel      13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap         rhel      1000.00 MB                              linear
/dev/lvm_pool/lvol001  lvm_pool  900.00 MB    xfs  896.67 MB  896.54 MB  linear
/dev/vda1                        500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Snapshot                           Origin   Pool      Volume size  Size     Type
----------------------------------------------------------------------------------
/dev/lvm_pool/snap20150519T130900  lvol001  lvm_pool  120.00 MB    0.00 KB  linear
----------------------------------------------------------------------------------
16.2.7. Removing an Item
The ssm remove command is used to remove an item: either a device, a pool, or a volume.
Note
If a device being removed is in use by a pool, the removal fails. This can be forced using the -f argument.
If a volume being removed is mounted, the removal fails. Unlike with a device, this cannot be forced with the -f argument.
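For example, forcing the removal of a device that is still part of a pool could look as follows (a sketch using the /dev/vdc device from the earlier examples; forcing removal of an in-use device can destroy data in the pool):
# ssm remove -f /dev/vdc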
To remove the lvm_pool and everything within it, use the following command:
# ssm remove lvm_pool
Do you really want to remove volume group "lvm_pool" containing 2 logical volumes? [y/n]: y
Do you really want to remove active logical volume snap20150519T130900? [y/n]: y
  Logical volume "snap20150519T130900" successfully removed
Do you really want to remove active logical volume lvol001? [y/n]: y
  Logical volume "lvol001" successfully removed
  Volume group "lvm_pool" successfully removed