Chapter 4. Performing advanced operations with the Block Storage service (cinder)

Block Storage volumes provide persistent storage for Compute instances in your overcloud. You can configure advanced features of your volumes, for example, you can use volume snapshots, create multi-attach volumes, retype volumes, and migrate volumes.

4.1. Creating volume snapshots

You can preserve the state of a volume at a specific point in time by creating a volume snapshot. You can then use the snapshot to clone new volumes.

Note

Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. You cannot delete a volume if it has existing snapshots. Volume backups prevent data loss, whereas snapshots facilitate cloning.

For this reason, snapshot back ends are typically colocated with volume back ends to minimize latency during cloning. By contrast, a backup repository is usually located in a different location, for example, on a different node, physical storage, or even geographical location in a typical enterprise deployment. This is to protect the backup repository from any damage that might occur to the volume back end.

For more information about volume backups, see the Backing up Block Storage volumes guide.

Prerequisites

Procedure

  1. Log into the dashboard.
  2. Select Project > Compute > Volumes.
  3. Select the Create Snapshot action for the target volume.
  4. Provide a Snapshot Name for the snapshot and click Create a Volume Snapshot. The Volume Snapshots tab displays all snapshots.
Note

For RADOS block device (RBD) volumes that the Block Storage service (cinder) creates from snapshots, you can use the CinderRbdFlattenVolumeFromSnapshot heat parameter to flatten the volume and remove its dependency on the snapshot. When you set CinderRbdFlattenVolumeFromSnapshot to true, the Block Storage service flattens RBD volumes that are created from snapshots, which removes their dependency on the snapshot, and it also flattens all volumes that are subsequently created from snapshots. The default value is false, which is also the default value for the cinder RBD driver.

Be aware that flattening a snapshot removes any potential block sharing with the parent, results in larger snapshot sizes on the back end, and increases the time that snapshot creation takes.
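
If you prefer the command line, you can create the same snapshot with the openstack client. The volume and snapshot names in this sketch are placeholders:

    $ openstack volume snapshot create --volume <volume_name> <snapshot_name>
    • Replace <volume_name> with the name or ID of the volume, and <snapshot_name> with a name for the new snapshot. Add the --force option to snapshot a volume that is attached to an instance, if your back end supports snapshots of in-use volumes.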

Verification

  • Verify that the new snapshot is present in the Volume Snapshots tab, or use the CLI to list volume snapshots and verify that the snapshot is created:

    $ openstack volume snapshot list

4.2. Creating new volumes from snapshots

You can create new volumes as clones of volume snapshots. These snapshots preserve the state of a volume at a specific point in time.

Prerequisites

Procedure

  1. Log into the dashboard.
  2. Select Project > Compute > Volumes.
  3. In the Volume Snapshots table, select the Create Volume action for the snapshot that you want to create a new volume from. For more information about volume creation, see Creating Block Storage volumes.
Important

If you want to create a new volume from a snapshot of an encrypted volume, ensure that the new volume is at least 1GB larger than the old volume.
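
You can also create the volume from the CLI; the names in this sketch are placeholders:

    $ openstack volume create --snapshot <snapshot_name> --size <size_gb> <volume_name>
    • Replace <snapshot_name> with the name or ID of the snapshot, <size_gb> with the size of the new volume in GB (at least the size of the snapshot, or at least 1GB larger for snapshots of encrypted volumes), and <volume_name> with a name for the new volume.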

Verification

  • Verify that the new volume is present in the Volumes tab, or use the CLI to list volumes and verify that the new volume is created:

    $ openstack volume list

4.3. Deleting volume snapshots

Red Hat OpenStack Platform (RHOSP) 17.1 uses the RBD CloneV2 API, which means that you can delete volume snapshots even if they have dependencies. If your RHOSP deployment uses a director-deployed Ceph back end, the Ceph cluster is configured correctly by director.

If you use an external Ceph back end, you must configure the minimum client on the Ceph cluster. For more information about configuring an external Ceph cluster, see Configuring the existing Red Hat Ceph Storage cluster in Integrating the overcloud with an existing Red Hat Ceph Storage Cluster.

Prerequisites

Procedure

  1. Log into the dashboard.
  2. Select Project > Compute > Volumes.
  3. In the Volume Snapshots table, select the Delete Volume Snapshot action for the snapshot that you want to delete.

If your OpenStack deployment uses a Red Hat Ceph back end, for more information about snapshot security and troubleshooting, see Protected and unprotected snapshots in a Red Hat Ceph Storage back end.
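
Alternatively, you can delete the snapshot from the CLI; the snapshot name is a placeholder:

    $ openstack volume snapshot delete <snapshot_name>
    • Replace <snapshot_name> with the name or ID of the snapshot that you want to delete.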

Verification

  • Verify that the snapshot is no longer present in the Volume Snapshots tab, or use the CLI to list volume snapshots and verify that the snapshot is deleted:

    $ openstack volume snapshot list

4.4. Restoring a volume from a snapshot

You can restore a volume to the state that it was in when you took its most recent snapshot by reverting the volume data to that snapshot.

Warning

The ability to recover the most recent snapshot of a volume is supported but is driver dependent. For more information about support for this feature, contact your driver vendor.

Limitations

  • There might be limitations to using the revert-to-snapshot feature with multi-attach volumes. Check whether such limitations apply before you use this feature.
  • You cannot revert a volume that you resize (extend) after you take a snapshot.
  • You cannot use the revert-to-snapshot feature on an attached or in-use volume.
  • By default, you cannot use the revert-to-snapshot feature on a bootable root volume. To use this feature, you must boot the instance with the delete_on_termination=false property to preserve the boot volume if the instance is terminated. In this case, to revert to a snapshot you must:

    • delete the instance so that the volume is available, and then
    • revert the volume, and then
    • create a new instance from the volume.
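
As a sketch, these three steps for a boot volume on an instance that was created with delete_on_termination=false look like this; all IDs and names are placeholders:

    $ openstack server delete <instance_id>
    $ cinder --os-volume-api-version=3.40 revert-to-snapshot <snapshot_id>
    $ openstack server create --volume <volume_id> --flavor <flavor> <new_instance_name>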

Prerequisites

  • Block Storage (cinder) REST API microversion 3.40 or later.
  • You must have created at least one snapshot for the volume.

Procedure

  1. Source your credentials file.
  2. Detach your volume:

    $ openstack server remove volume <instance_id> <vol_id>
    • Replace <instance_id> and <vol_id> with the IDs of the instance and volume that you want to revert.
  3. Locate the ID or name of the snapshot that you want to revert. You can only revert the latest snapshot.

    $ cinder snapshot-list
  4. Revert the snapshot:

    $ cinder --os-volume-api-version=3.40 revert-to-snapshot <snapshot_id>
    • Replace <snapshot_id> with the ID of the snapshot.
  5. Optional: You can use the cinder snapshot-list command to check that the volume you are reverting is in a reverting state.

    $ cinder snapshot-list
  6. Reattach the volume:

    $ openstack server add volume <instance_id> <vol_id>
    • Replace <instance_id> and <vol_id> with the IDs of the instance and volume that you reverted.

Verification

  • To check that the procedure is successful, you can use the cinder list command to verify that the volume you reverted is now in the available state.

    $ cinder list

4.5. Uploading a volume to the Image service (glance)

You can upload an existing volume as an image to the Image service directly.

Prerequisites

Procedure

  1. Log into the dashboard.
  2. Select Project > Compute > Volumes.
  3. Select the target volume’s Upload to Image action.
  4. Provide an Image Name for the volume and select a Disk Format from the list.
  5. Click Upload.

To view the uploaded image, select Project > Compute > Images. The new image appears in the Images table. For information on how to use and configure images, see Creating and managing images.
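
You can also upload the volume from the CLI with the cinder client; the disk format shown is only an example:

    $ cinder upload-to-image --disk-format qcow2 <volume_name> <image_name>
    • Replace <volume_name> with the name or ID of the volume, and <image_name> with a name for the new image.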

4.6. Volumes that can be attached to multiple instances

You can create a multi-attach Block Storage volume that can be attached to multiple instances, and these instances can simultaneously read from and write to it. Multi-attach volumes require a multi-attach volume type.

Warning

You must use a multi-attach or cluster-aware file system to manage write operations from multiple instances. Failure to do so causes data corruption.

Limitations of multi-attach volumes

  • The Block Storage (cinder) back end must support multi-attach volumes. For information about supported back ends, contact Red Hat Support.
  • Your Block Storage (cinder) driver must support multi-attach volumes. The Ceph RBD driver is supported. Contact Red Hat support to verify that multi-attach is supported for your vendor plugin and for information about the certification of your vendor plugin.

  • Read-only multi-attach volumes are not supported.
  • Live migration of multi-attach volumes is not available.
  • Encryption of multi-attach volumes is not supported.
  • Multi-attach volumes are not supported by the Bare Metal Provisioning service (ironic) virt driver. Multi-attach volumes are supported only by the libvirt virt driver.
  • You cannot retype an attached volume from a multi-attach type to a non-multi-attach type, and you cannot retype a non-multi-attach type to a multi-attach type.
  • You cannot use multi-attach volumes that have multiple read write attachments as the source or destination volume during an attached volume migration.
  • You cannot attach multi-attach volumes to shelved offloaded instances.

4.6.1. Creating a multi-attach volume type

To attach a volume to multiple instances, set the multiattach flag to <is> True in the volume extra specs. When you create a multi-attach volume type, the volume inherits the flag and becomes a multi-attach volume.

Prerequisites

  • You must be a project administrator to create a volume type.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create a new volume type for multi-attach volumes:

    $ cinder type-create multiattach
  3. Enable the multiattach property for this multi-attach volume type:

    $ cinder type-key multiattach set multiattach="<is> True"
  4. Run the following command to specify the back end:

    $ cinder type-key multiattach set volume_backend_name=<backend_name>

4.6.2. Multi-attach volume retyping

You can retype a volume to be multi-attach capable or retype a multi-attach capable volume to make it incapable of attaching to multiple instances. However, you can retype a volume only when it is not in use and its status is available.

When you attach a multi-attach volume, some hypervisors require special considerations, such as when you disable caching. Currently, there is no way to safely update an attached volume while keeping it attached the entire time. Retyping fails if you attempt to retype a volume that is attached to multiple instances.

4.6.3. Creating a multi-attach volume

You can create a Block Storage volume that can be attached to multiple instances, and these instances can simultaneously read from and write to it.

Note

This procedure creates a volume on any back end that supports multiattach. Therefore, if there are two back ends that support multiattach, the scheduler decides which back end to use. For more information, see Volume allocation on multiple back ends.

Prerequisites

  • A multi-attach volume type is available in your project.

Procedure

  1. Source your credentials file.
  2. Run the following command to create a multi-attach volume:

    $ cinder create <volume_size> --name <volume_name> --volume-type multiattach
  3. Run the following command to verify that a volume is multi-attach capable. If the volume is multi-attach capable, the multiattach field equals True.

    $ cinder show <vol_id> | grep multiattach
    
    | multiattach | True |

4.7. Moving volumes between back ends

There are many reasons to move volumes from one storage back end to another, such as:

  • To retire storage systems that are no longer supported.
  • To change the storage class or tier of a volume.
  • To change the availability zone of a volume.

With the Block Storage service (cinder), you can move volumes between back ends in two ways: by retyping the volume or by migrating the volume.

Restrictions

Red Hat supports moving volumes between back ends within and across availability zones (AZs), but with the following restrictions:

  • Volumes must have either available or in-use status to move.
  • Support for in-use volumes is driver dependent.
  • Volumes cannot have snapshots.
  • Volumes cannot belong to a group or consistency group.

4.7.1. Moving available volumes

You can move available volumes between all back ends, but performance depends on the back ends that you use. Many back ends support assisted migration. For more information about back-end support for assisted migration, contact the vendor.

Assisted migration works with both volume retype and volume migration. With assisted migration, the back end optimizes the movement of the data from the source back end to the destination back end, but both back ends must be from the same vendor.

Note

Red Hat supports back-end assisted migrations only with multi-pool back ends or when you use the cinder migrate operation for single-pool back ends, such as RBD.

When assisted migrations between back ends are not possible, the Block Storage service performs a generic volume migration.

Generic volume migration requires volumes on both back ends to be connected before the Block Storage (cinder) service moves the data from the source volume to the Controller node and from the Controller node to the destination volume. The Block Storage service performs the process seamlessly, regardless of the type of storage of the source and destination back ends.

Important

Ensure that you have adequate bandwidth before you perform a generic volume migration. The duration of a generic volume migration is directly proportional to the size of the volume, which makes the operation slower than assisted migration.

4.7.2. Moving in-use volumes

There is no optimized or assisted option for moving in-use volumes. When you move in-use volumes, the Compute service (nova) must use the hypervisor to transfer data from a volume in the source back end to a volume in the destination back end. This requires coordination with the hypervisor that runs the instance where the volume is in use.

The Block Storage service (cinder) and the Compute service work together to perform this operation. The Compute service manages most of the work, because the data is copied from one volume to another through the Compute node.

Important

Ensure that you have adequate bandwidth before you move in-use volumes. The duration of this operation is directly proportional to the size of the volume, which makes the operation slower than assisted migration.

Restrictions

  • In-use multi-attach volumes cannot be moved while they are attached to more than one nova instance.
  • Non-block devices are not supported, which limits the storage protocols on the target back end to iSCSI, Fibre Channel (FC), and RBD.

4.8. Block Storage volume retyping

When you retype a volume, you apply a volume type and its settings to an existing volume. For more information about volume types, see Group volume configuration with volume types.

Note

Only volume owners and administrators can retype volumes.

You can retype a volume provided that the extra specs of the new volume type can be applied to the existing volume. You can retype a volume to apply predefined settings or storage attributes to an existing volume, such as:

  • To move the volume to a different back end.
  • To change the storage class or tier of a volume.
  • To enable or disable features such as replication.

Volume retyping is the standard way to move volumes from one back end to another. Retyping a volume does not necessarily mean that you must move it from one back end to another, but there are circumstances in which you must move a volume to complete a retype:

  • The new volume type has a different volume_backend_name defined.
  • The volume_backend_name of the current volume type is undefined, and the volume is stored in a different back end than the one specified by the volume_backend_name of the new volume type.

Moving a volume from one back end to another can require extensive time and resources. Therefore, when a retype requires moving data, the Block Storage service does not move data by default. The operation fails unless it is explicitly allowed by specifying a migration policy as part of the retype request. For more information, see Retyping a volume from the Dashboard or Retyping a volume from the CLI.

Restrictions

  • You cannot retype all volumes. For more information about moving volumes between back ends, see Moving volumes between back ends.
  • Unencrypted volumes cannot be retyped to encrypted volume types, but encrypted volumes can be retyped to unencrypted ones.
  • Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes.
  • Users with no administrative privileges can only retype volumes that they own.

4.8.1. Retyping a volume from the Dashboard

Retype a volume to apply a volume type and its settings to an existing volume.

Important

Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes.

Prerequisites

Procedure

  1. Log into the dashboard as an admin user or volume owner.
  2. Select Project > Compute > Volumes.
  3. In the Actions column of the volume you want to migrate, select Change Volume Type.
  4. In the Change Volume Type dialog, select the target volume type and define the new back end from the Type list.
  5. If you are migrating the volume to another back end, select On Demand from the Migration Policy list. For more information, see Moving volumes between back ends.
  6. Click Change Volume Type to start the migration.

4.8.2. Retyping a volume from the CLI

Retype a volume to apply a volume type and its settings to an existing volume.

Important

Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes.

Prerequisites

  • Only volume owners and administrators can retype volumes.

Procedure

  1. Source your credentials file.
  2. Enter the following command to retype a volume:

    $ cinder retype <volume_name_or_id> <new_volume_type>
  3. If the retype operation requires moving the volume from one back end to another, the Block Storage service requires a specific flag:

    $ cinder retype --migration-policy on-demand <volume_name_or_id> <new_volume_type>

4.9. Migrating a volume between back ends with the Dashboard

With the Block Storage service (cinder) you can migrate volumes between back ends within and across availability zones (AZs). This is the least common way to move volumes from one back end to another.

In highly customized deployments or in situations in which you must retire a storage system, an administrator can migrate volumes. In both use cases, multiple storage systems share the same volume_backend_name, or it is undefined.

Restrictions

  • The volume cannot be replicated.
  • The destination back end must be different from the current back end of the volume.
  • The existing volume type must be valid for the new back end, which means the following must be true:

    • The volume type must not have volume_backend_name defined in its extra specs, or both Block Storage back ends must be configured with the same volume_backend_name.
    • Both back ends must support the same features configured in the volume type, such as support for thin provisioning, support for thick provisioning, or other feature configurations.
Note

Moving volumes from one back end to another might require extensive time and resources. For more information, see Moving volumes between back ends.

Prerequisites

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes.
  3. In the Actions column of the volume you want to migrate, select Migrate Volume.
  4. In the Migrate Volume dialog, select the target host from the Destination Host drop-down list.

    Note

    To bypass any driver optimizations for the host migration, select the Force Host Copy check box.

  5. Click Migrate to start the migration.

4.10. Migrating a volume between back ends with the CLI

With the Block Storage service (cinder) you can migrate volumes between back ends within and across availability zones (AZs). This is the least common way to move volumes from one back end to another.

In highly customized deployments or in situations in which you must retire a storage system, an administrator can migrate volumes. In both use cases, multiple storage systems share the same volume_backend_name, or it is undefined.

Restrictions

  • The volume cannot be replicated.
  • The destination back end must be different from the current back end of the volume.
  • The existing volume type must be valid for the new back end, which means the following must be true:

    • The volume type must not have volume_backend_name defined in its extra specs, or both Block Storage back ends must be configured with the same volume_backend_name.
    • Both back ends must support the same features configured in the volume type, such as support for thin provisioning, support for thick provisioning, or other feature configurations.
Note

Moving volumes from one back end to another might require extensive time and resources. For more information, see Moving volumes between back ends.

Prerequisites

  • You must be a project administrator to migrate volumes.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Enter the following command to retrieve the name of the destination back end:

    $ cinder get-pools --detail
    +---------------------+-------------------------------------+
    | Property            | Value                               |
    +---------------------+-------------------------------------+
    | ...                 | ...                                 |
    | name                | localdomain@lvmdriver-1#lvmdriver-1 |
    | pool_name           | lvmdriver-1                         |
    | ...                 | ...                                 |
    | volume_backend_name | lvmdriver-1                         |
    | ...                 | ...                                 |
    +---------------------+-------------------------------------+

    +---------------------+-------------------------------------+
    | Property            | Value                               |
    +---------------------+-------------------------------------+
    | ...                 | ...                                 |
    | name                | localdomain@lvmdriver-2#lvmdriver-1 |
    | pool_name           | lvmdriver-1                         |
    | ...                 | ...                                 |
    | volume_backend_name | lvmdriver-1                         |
    | ...                 | ...                                 |
    +---------------------+-------------------------------------+

    The destination back-end names use this syntax: host@volume_backend_name#pool.

    In the example output, there are two LVM back ends exposed in the Block Storage service: localdomain@lvmdriver-1#lvmdriver-1 and localdomain@lvmdriver-2#lvmdriver-1. Notice that both back ends share the same volume_backend_name, lvmdriver-1.

  3. Enter the following command to migrate a volume from one back end to another:

    $ cinder migrate <volume_name_or_id> <destination_host>
    • Replace <volume_name_or_id> with the name or ID of the volume, and <destination_host> with the destination back end, in the host@volume_backend_name#pool format.

4.11. Manage and unmanage volumes and their snapshots

You can add volumes to and remove volumes from the Block Storage volume service (cinder-volume) by using the cinder manage and cinder unmanage commands. Typically, the Block Storage volume service manages the volumes that it creates, so that it can, for example, list, attach, and delete these volumes. You can use the cinder unmanage command to remove a volume from the Block Storage volume service, so that the service no longer lists, attaches, or deletes this volume. Similarly, you can use the cinder manage command to add an existing volume to the Block Storage volume service, so that the service can list, attach, and delete this volume.

Note

You cannot unmanage a volume if it has snapshots. In this case, you must unmanage all of the snapshots before you unmanage a volume, by using the cinder snapshot-unmanage command. Similarly, when you manage a volume that has snapshots, you must manage the volume first and then manage the snapshots, by using the cinder snapshot-manage command.

You can use these Block Storage commands when upgrading your Red Hat OpenStack Platform (RHOSP) deployment in parallel, by keeping your existing RHOSP version running while you deploy the new version of RHOSP. In this scenario, you unmanage all of the snapshots and then the volume to remove the volume from your existing RHOSP deployment, and then you manage the volume and all of its snapshots to add them to the new version of RHOSP. In this way, you can move your volumes and their snapshots to your new RHOSP version while running your existing cloud.

Another possible scenario is that a bare metal machine uses a volume in one of your storage arrays, and you decide to move the software running on this machine into the cloud while continuing to use this volume. In this scenario, you use the cinder manage command to add the volume to the Block Storage volume service.

You can use the cinder manageable-list command to determine whether there are volumes in the storage arrays of the Block Storage volume service that are not being managed. The volumes in this list are typically volumes that users have unmanaged or which have been created manually on a storage array without using the Block Storage volume service. Similarly, the cinder snapshot-manageable-list command lists all the manageable snapshots.

The syntax of the cinder manage command is back-end specific, because the properties that are required to identify the volume are back-end specific. Most back ends support either or both of the source-name and source-id properties; others require additional properties to be set. Some back ends can list which volumes are manageable and which parameters must be passed; for back ends that do not, see the vendor documentation. The syntax of the cinder unmanage command is not back-end specific: you specify the volume name or volume ID.

Similarly, the syntax of the cinder snapshot-manage command is back-end specific, because the properties that are required to identify the snapshot are back-end specific. The syntax of the cinder snapshot-unmanage command is not back-end specific: you specify the snapshot name or snapshot ID.
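
The following sketch shows a typical manage and unmanage sequence for a back end that identifies volumes by source-name; the host, back end, pool, and identifier values are placeholders:

    $ cinder manageable-list <host>@<backend>#<pool>
    $ cinder manage --id-type source-name --name <new_volume_name> <host>@<backend>#<pool> <volume_identifier>
    $ cinder snapshot-manage <managed_volume> <snapshot_identifier>
    $ cinder unmanage <volume_name_or_id>
    • The first command lists the unmanaged volumes on the specified back end, the second brings one of them under Block Storage management, the third manages one of its snapshots after the volume is managed, and the fourth releases a volume from management without deleting its data on the back end.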

4.12. Encrypting unencrypted volumes

You can encrypt an unencrypted volume.

If the cinder-backup service is available, then back up the unencrypted volume and then restore it to a new encrypted volume.

If the cinder-backup service is unavailable, then create a glance image from the unencrypted volume and create a new encrypted volume from this image.

Prerequisites

  • You must be a project administrator to create encrypted volumes.
  • An unencrypted volume that you want to encrypt.

Procedure

The cinder-backup service is available:

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Back up the current unencrypted volume:

    $ cinder backup-create <unencrypted_volume>
    • Replace <unencrypted_volume> with the name or ID of the unencrypted volume.
  3. Create a new encrypted volume:

    $ cinder create <encrypted_volume_size> --volume-type <encrypted_volume_type>
    • Replace <encrypted_volume_size> with the size of the new volume in GB. This value must be at least 1GB larger than the size of the unencrypted volume, to accommodate the encryption metadata.
    • Replace <encrypted_volume_type> with the encryption type that you require.
  4. Restore the backup of the unencrypted volume to the new encrypted volume:

    $ cinder backup-restore <backup> --volume <encrypted_volume>
    • Replace <backup> with the name or ID of the unencrypted volume backup.
    • Replace <encrypted_volume> with the ID of the new encrypted volume.

The cinder-backup service is unavailable:

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create a glance image of the unencrypted volume:

    $ cinder upload-to-image <unencrypted_volume> <new_image>
    • Replace <unencrypted_volume> with the name or ID of the unencrypted volume.
    • Replace <new_image> with a name for the new image.
  3. Create a new volume from the image that is 1GB larger than the image:

    $ cinder create <size> --volume-type luks --image <new_image> --name <encrypted_volume_name>
    • Replace <size> with the size of the new volume. This value must be 1GB larger than the size of the old unencrypted volume.
    • Replace <new_image> with the name of the image that you created from the unencrypted volume.
    • Replace <encrypted_volume_name> with a name for the new encrypted volume.

4.13. Protected and unprotected snapshots in a Red Hat Ceph Storage back end

When you use Red Hat Ceph Storage (RHCS) as a back end for your Red Hat OpenStack Platform (RHOSP) deployment, you can set a snapshot to protected in the back end. Do not attempt to delete protected snapshots through the RHOSP dashboard or with the cinder snapshot-delete command, because the deletion fails.

When this occurs, set the snapshot to unprotected in the RHCS back end first. You can then delete the snapshot through RHOSP as normal.

For more information about protecting snapshots, see Protecting a block device snapshot and Unprotecting a block device snapshot in the Red Hat Ceph Storage Block Device Guide.
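
On the Ceph cluster, unprotecting a snapshot is a single rbd command; the pool, image, and snapshot names in this sketch are placeholders:

    $ rbd snap unprotect <pool_name>/<image_name>@<snapshot_name>

After the snapshot is unprotected, you can delete it through RHOSP as normal.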

© 2024 Red Hat, Inc.