Chapter 16. System Storage Manager (SSM)


System Storage Manager (SSM) provides a command line interface to manage storage across various technologies. Storage systems are becoming increasingly complicated through the use of Device Mapper (DM), Logical Volume Manager (LVM), and Multiple Devices (MD), which makes them less user friendly and more prone to errors and problems. SSM alleviates this by providing a unified user interface that allows users to perform complicated operations in a simple manner. For example, creating and mounting a new file system without SSM requires five commands; with SSM, only one is needed.
This chapter explains how SSM interacts with various back ends and some common use cases.
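As an illustrative sketch of that difference, the device name, size, volume group name, and mount point below are placeholders rather than values from this guide. Creating and mounting a new XFS file system on LVM without SSM typically requires a chain of commands:
# pvcreate /dev/vdb
# vgcreate my_vg /dev/vdb
# lvcreate -L 1G -n my_lv my_vg
# mkfs.xfs /dev/my_vg/my_lv
# mount /dev/my_vg/my_lv /mnt/test
With SSM, a single command creates the pool, the volume, and the file system, and mounts it:
# ssm create -s 1G --fstype xfs /dev/vdb /mnt/test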

16.1. SSM Back Ends

SSM uses a core abstraction layer in ssmlib/main.py that works with the device, pool, and volume abstractions while ignoring the specifics of the underlying technology. Back ends can be registered in ssmlib/main.py to handle specific storage technology methods, such as creating or snapshotting volumes, or removing volumes and pools.
There are already several back ends registered in SSM. The following sections provide basic information on them as well as definitions on how they handle pools, volumes, snapshots, and devices.
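For example, the ssm list command reports what the registered back ends currently provide; an optional argument limits the output to a single object type (the exact output columns depend on the SSM version):
# ssm list
# ssm list pools
# ssm list volumes
# ssm list devices
# ssm list snapshots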

16.1.1. Btrfs Back End

Note

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4 Release Notes.
Btrfs, a file system with many advanced features, is used as a volume management back end in SSM. Pools, volumes, and snapshots can be created with the Btrfs back end.

16.1.1.1. Btrfs Pool

The Btrfs file system itself is the pool. It can be extended by adding more devices or shrunk by removing devices. SSM creates a Btrfs file system when a Btrfs pool is created. This means that every new Btrfs pool has one volume of the same name as the pool which cannot be removed without removing the entire pool. The default Btrfs pool name is btrfs_pool.
The name of the pool is used as the file system label. If there is already an existing Btrfs file system in the system without a label, the Btrfs pool will generate a name for internal use in the format of btrfs_device_base_name.
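As a hedged example (the device names are placeholders), a Btrfs pool can be created by selecting the btrfs back end explicitly, and later extended with an additional device:
# ssm -b btrfs create /dev/vdb /dev/vdc
# ssm add -p btrfs_pool /dev/vdd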

16.1.1.2. Btrfs Volume

Volumes created after the first volume in a pool are the same as sub-volumes. SSM temporarily mounts the Btrfs file system if it is unmounted in order to create a sub-volume.
The name of a volume is used as the subvolume path in the Btrfs file system. For example, a subvolume displays as /dev/lvm_pool/lvol001. Every object in this path must exist in order for the volume to be created. Volumes can also be referenced by their mount points.
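A minimal sketch of creating an additional volume (sub-volume) in an existing Btrfs pool, assuming the default pool name and a placeholder volume name:
# ssm create -p btrfs_pool -n data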

16.1.1.3. Btrfs Snapshot

Snapshots can be taken of any Btrfs volume in the system with SSM. Be aware that Btrfs does not distinguish between subvolumes and snapshots. While this means that SSM cannot recognize the Btrfs snapshot destination, it tries to recognize special name formats. If the name specified when creating the snapshot does not match a specific pattern, the snapshot will not be recognized and will instead be listed as a regular Btrfs volume.
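A hedged sketch of taking a snapshot of a Btrfs volume referenced by its mount point; the mount point and snapshot name are placeholders:
# ssm snapshot -n my_snapshot /mnt/test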

16.1.1.4. Btrfs Device

Btrfs does not require any special device to be created on.

16.1.2. LVM Back End

Pools, volumes, and snapshots can be created with LVM. The following definitions are from an LVM point of view.

16.1.2.1. LVM Pool

An LVM pool is the same as an LVM volume group: it groups devices together, and new logical volumes can be created out of it. The default LVM pool name is lvm_pool.
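A hedged example of creating an LVM pool, a 1 GB volume with an XFS file system, and mounting it in one step (device names, pool name, and mount point are placeholders):
# ssm create -s 1G --fstype xfs -p my_lvm_pool /dev/vdb /dev/vdc /mnt/test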

16.1.2.2. LVM Volume

An LVM volume is the same as an ordinary logical volume.
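A sketch of creating an additional logical volume in an existing pool; the size and volume name are placeholders:
# ssm create -p lvm_pool -s 500M -n lvol002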

16.1.2.3. LVM Snapshot

When a snapshot is created from an LVM volume, a new snapshot volume is created, which can then be handled just like any other LVM volume. Unlike Btrfs, LVM is able to distinguish snapshots from regular volumes, so there is no need for a snapshot name to match a particular pattern.
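For example, assuming a volume named /dev/lvm_pool/lvol001 already exists, a snapshot can be taken with:
# ssm snapshot /dev/lvm_pool/lvol001
The -n option names the snapshot, and -s limits the space allocated for it.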

16.1.2.4. LVM Device

SSM makes the requirement that an LVM back end be created on a physical device transparent to the user.

16.1.3. Crypt Back End

The crypt back end in SSM uses cryptsetup and the dm-crypt target to manage encrypted volumes. The crypt back end can be used as a regular back end for creating encrypted volumes on top of regular block devices (or on other volumes, such as LVM or MD volumes), or to create encrypted LVM volumes in a single step.
Only volumes can be created with a crypt back end; pooling is not supported and it does not require special devices.
The following sections define volumes and snapshots from the crypt point of view.
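A hedged sketch of both uses; the device names and size are placeholders. The first command creates an encrypted volume directly on a block device with the crypt back end, and the second creates an encrypted LVM volume in a single step using the -e (--encrypt) option:
# ssm -b crypt create /dev/vdb
# ssm create -s 1G -e luks /dev/vdc /dev/vdd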

16.1.3.1. Crypt Volume

Crypt volumes are created by dm-crypt and represent the data on the original encrypted device in an unencrypted form. Crypt volumes do not support RAID or any device concatenation.
Two modes, or extensions, are supported: luks and plain. Luks is used by default. For more information on the extensions, see man cryptsetup.

16.1.3.2. Crypt Snapshot

While the crypt back end does not support snapshotting, if the encrypted volume is created on top of an LVM volume, the volume itself can be snapshotted. The snapshot can then be opened by using cryptsetup.
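A hedged sketch, assuming the encrypted volume was created on top of an LVM volume named /dev/lvm_pool/crypt_vol (a placeholder): the underlying LVM volume is snapshotted with SSM, and the snapshot is then opened with cryptsetup:
# ssm snapshot -n crypt_snap /dev/lvm_pool/crypt_vol
# cryptsetup open /dev/lvm_pool/crypt_snap crypt_snap_open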

16.1.4. Multiple Devices (MD) Back End

The MD back end is currently limited to gathering information about MD volumes in the system.